This allowed the device to count beats, and since most modern EDM is in 4/4, that means you can trigger effects every time something "changes" in the music after syncing once.
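(For the curious, the arithmetic is simple once you've synced to one downbeat; a minimal sketch of the idea, with all names my own and hypothetical:)

```cpp
// Sketch of the bar-sync idea: given a tempo and one downbeat timestamp,
// every later beat/bar boundary is just arithmetic, so effects can fire
// on "the one" of each 4/4 bar.
struct BeatClock {
    double bpm;        // tempo, e.g. 128 for typical EDM
    double downbeatMs; // timestamp of a beat 1, captured when syncing
};

// Beat index (0..3) within the current 4/4 bar.
int beatInBar(const BeatClock &c, double nowMs) {
    double beatLenMs = 60000.0 / c.bpm;
    long beats = static_cast<long>((nowMs - c.downbeatMs) / beatLenMs);
    return static_cast<int>(beats % 4);
}

// True exactly once per bar, when we cross into beat 0.
bool isNewBar(const BeatClock &c, double nowMs, int &lastBeat) {
    int b = beatInBar(c, nowMs);
    bool newBar = (b == 0 && lastBeat != 0);
    lastBeat = b;
    return newBar;
}
```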
The classic "Color Organ" from the '70s.
- The more filters I added, the worse it got; a simple EMA (exponential moving average) gave the best results (see the sketch after this list). Your pipeline looks way better than what I came up with, though!
- I ended up using the Teensy 4.0, which let me do real-time FFT and post-processing in under 10ms (I want to say it was ~1ms, but I can't recall; it's been a while). If anyone goes down this path I'd heavily recommend checking out the Teensy. It removes the need for a Raspberry Pi or computer. Plus, Paul is an absolute genius and his work is beyond amazing [1].
- I started out with non-addressable LEDs too. I attempted to switch to WS2812s as well, but couldn't find a decent algorithm to make them look good. Yours came out really well! Kudos.
- Putting the LEDs inside an LED-strip diffuser channel made the biggest difference. I spent so long trying to smooth the output in software when a simple diffuser was all I needed (I love the paper diffuser you made).
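A minimal sketch of that EMA approach, in case it helps anyone; the `alpha` value is just my guess at a starting point, not a tuned number:

```cpp
// Single exponential moving average: one coefficient, no filter stack.
// alpha closer to 1.0 reacts faster but flickers; closer to 0.0 is
// smoother but laggier. 0.2 is only an illustrative starting point.
struct Ema {
    float alpha = 0.2f; // smoothing factor in (0, 1]
    float state = 0.f;  // last smoothed value

    float update(float x) {
        state += alpha * (x - state); // standard EMA update
        return state;
    }
};

// Usage: feed it whatever per-frame level you derive (peak, RMS, FFT bin):
//   Ema brightness;
//   float led = brightness.update(rawAudioLevel);
```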
RE: What's Still Missing: I came to a similar conclusion. Manually programmed animation sequences are unparalleled. I worked as a stagehand in college and saw what went into those shows; it was insane. I think the only way to get that same WOW factor is via pre-processing. I worked on this before AI was feasible, but if I were to take another stab at it I would attempt it with something like TinyML. I don't think real time is possible with that approach, although maybe you could buffer the audio with a slight delay (sketched below)? I know what I'll be doing this weekend... lol.
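To make the buffering idea concrete, here's a toy sketch (my own illustration, all names hypothetical): delay the audio on its way to the speakers while the analyzer reads the freshest samples, so the lights can react to audio the listener hasn't heard yet.

```cpp
#include <cstddef>
#include <vector>

// Toy delay line: write incoming samples now, play them back
// `delaySamples` later. The analyzer reads newest() and therefore sees
// the audio slightly before the listener does, hiding detector latency.
class DelayLine {
    std::vector<float> buf;   // capacity must exceed delaySamples
    std::size_t writePos = 0;
    std::size_t delaySamples;
public:
    DelayLine(std::size_t delay, std::size_t capacity)
        : buf(capacity, 0.f), delaySamples(delay) {}

    // Push one incoming sample; returns the delayed sample for playback.
    float process(float in) {
        buf[writePos] = in;
        std::size_t readPos =
            (writePos + buf.size() - delaySamples) % buf.size();
        writePos = (writePos + 1) % buf.size();
        return buf[readPos];
    }

    // Most recent (undelayed) sample, for the beat/feature analyzer.
    float newest() const {
        return buf[(writePos + buf.size() - 1) % buf.size()];
    }
};
```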
Again, great work. To those who also go down this rabbit hole: good luck.
To solve this I tried pre-processing the audio, which obviously only works with recordings. I extracted the beats and the chords (using Chordify), made a basic animation that pulsed the lights to the beat, and mapped the chords to different color palettes.
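Roughly, the pre-processed show boils down to a list of timestamped events; this sketch is my after-the-fact reconstruction of the shape (chord names and palette colors are made up, not from the actual project):

```cpp
#include <cmath>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct Rgb { std::uint8_t r, g, b; };
struct ChordSpan { double startSec; std::string chord; };

// Hypothetical palette: one base color per chord label.
const std::map<std::string, Rgb> kPalette = {
    {"Am", {80, 0, 160}}, {"F", {0, 120, 120}},
    {"C", {200, 80, 0}},  {"G", {0, 160, 60}},
};

// Color at time t: the current chord picks the palette, the most recent
// beat drives an exponentially decaying brightness pulse.
Rgb frame(double tSec, const std::vector<double> &beats,
          const std::vector<ChordSpan> &chords) {
    Rgb base{255, 255, 255};
    for (const auto &c : chords)          // last span starting before t
        if (c.startSec <= tSec && kPalette.count(c.chord))
            base = kPalette.at(c.chord);

    double sinceBeat = 1e9;               // huge => dark before first beat
    for (double b : beats)
        if (b <= tSec) sinceBeat = tSec - b;

    double gain = std::exp(-4.0 * sinceBeat); // ~250 ms decay per pulse
    return {std::uint8_t(base.r * gain), std::uint8_t(base.g * gain),
            std::uint8_t(base.b * gain)};
}
```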
Some friends and I rushed to put it together as a Burning Man art project, and it wasn't perfect, but by the time we launched it felt a lot closer to what I'd imagined. Here's a grainy video of it working at Burning Man: https://www.youtube.com/watch?v=sXVZhv_Xi0I
It works pretty well with most songs you pick. Just saying there's another way to go, somewhere between (1) fully reactive to live audio and (2) hand-designed animations.
I don't think there's an easy bridge to make it work with live audio, though, unfortunately.
I think it's more likely going to come from a direct integration with existing synthesis methods, but .. I'm kind of biased when it comes to audio and light synthesizers, having made a few of each…
We have addressed this expert-tuning issue with the MagicShifter, a product not quite competing with the OP's work, but very much aligned with it [1]:
.. which is a very fun little light synthesizer capable of POV rendering, in-air text effects, light-sequencer programming, MIDI, and so on .. plus, it has a 6DOF sensor providing magnetometer readings, accelerometer readings, touch sensing, and so on .. so you can use it for a lot of great things. We have a "BEAT" mode: place it on a speaker and you get reactive LED strips of a sort (quite functional) pretty much micro-mechanically, as in: the beat travels through the case to the sensor, so there's no ADC and no audio processing, just the mechanics between the sensor and the audio source. So, not quite the same, but functionally equivalent in the long run (plus the MagicShifter is battery powered and pocketable, and you can paint your own POV images and so on, but .. whatever..)
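(To illustrate how simple that micro-mechanical path can be, here's a guessed sketch of a vibration beat detector; this is not the MagicShifter firmware, just the general idea:)

```cpp
#include <cmath>

// Guessed vibration beat detector: watch the accelerometer magnitude and
// fire on a rising edge well above its slow-moving baseline.
struct VibeBeat {
    float avg = 0.f;        // slow EMA baseline of vibration level
    float threshold = 1.5f; // fire when level > threshold * baseline
    bool wasAbove = false;

    // ax/ay/az: raw accelerometer axes. Returns true once per beat.
    bool process(float ax, float ay, float az) {
        float mag = std::sqrt(ax * ax + ay * ay + az * az);
        avg += 0.01f * (mag - avg);        // update baseline slowly
        bool above = mag > threshold * avg;
        bool beat = above && !wasAbove;    // rising edge only
        wasAbove = above;
        return beat;
    }
};
```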
The thing is, the limits: yes, there are limits, but like all instruments you need to tune to/from/with those limits. It's not so much that achieving perfect audio-reactive LEDs is diabolically hard; rather, making aesthetically and functionally relevant decisions about when to accept those limits requires a bit of gumption.
Humans can be very forgiving with LED/light-based interfaces if you stack things right. The aesthetics of the thing can go a long way toward providing a great user experience .. and in fact are essential to it.
[1] - (okay, you can power a few meters of LED strips with a single MagicShifter, so maybe it is ‘competition’, but whatever..)
I get a cert mismatch on that site, and when clicking the shop link I end up at https://hackerspaceshop.com/, which is advertising an online fax service.
I haven't seen that done yet. I think it's one of those Dryland myths.
There are plenty of visual experiments with pianists doing this "Rock Band"/"Guitar Hero"-style visualization of notes.
That can be done with analog electronics, but even half an analog vocoder needs a lot of parts; it's going to be cheaper and more reliable to simulate it in software. This uses entirely IIR filters, which are computationally cheap and calculated one sample at a time, so they have the minimum possible latency. I'd be curious whether any LLM actually recognizes that an audio visualizer is half a vocoder instead of jumping straight to the obvious (and higher-latency) FFT approach.
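For concreteness, a minimal sketch of that "half a vocoder": a bank of IIR band-pass filters (RBJ-cookbook biquads) feeding rectify-and-smooth envelope followers, all updated one sample at a time. Band frequencies and Q below are illustrative guesses, not values from the article.

```cpp
#include <cmath>

// RBJ-cookbook constant-peak-gain band-pass biquad, transposed direct
// form II: processes one sample at a time, so no block latency.
struct Biquad {
    float b0, b1, b2, a1, a2;
    float z1 = 0.f, z2 = 0.f;

    static Biquad bandpass(float fs, float f0, float q) {
        const float kPi = 3.14159265f;
        float w0 = 2.f * kPi * f0 / fs;
        float alpha = std::sin(w0) / (2.f * q);
        float a0 = 1.f + alpha;
        return {alpha / a0, 0.f, -alpha / a0,
                -2.f * std::cos(w0) / a0, (1.f - alpha) / a0};
    }
    float process(float x) {
        float y = b0 * x + z1;
        z1 = b1 * x - a1 * y + z2;
        z2 = b2 * x - a2 * y;
        return y;
    }
};

// Envelope follower: full-wave rectify, then a one-pole low-pass.
struct Envelope {
    float coeff = 0.995f; // ~ exp(-1/(tau*fs)); higher = smoother
    float state = 0.f;
    float process(float x) {
        state = coeff * state + (1.f - coeff) * std::fabs(x);
        return state;
    }
};

// One analysis channel per LED group = the "analysis half" of a vocoder.
struct Channel {
    Biquad bp;
    Envelope env;
    float process(float sample) { return env.process(bp.process(sample)); }
};

// Example: Channel bass{Biquad::bandpass(44100.f, 80.f, 1.0f), {}};
// then, per audio sample: float level = bass.process(sample);
```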