
Posted by surprisetalk 1 day ago

Audio Reactive LED Strips Are Diabolically Hard (scottlawsonbc.com)
168 points | 53 comments
panki27 8 hours ago|
Had a similar setup based on an Arduino, 3 hardware filters (highs/mids/lows) for audio and a serial connection. Serial was used to read the MIDI clock from a DJ software.

This allowed the device to count the beats, and since most modern EDM music is in 4/4, you can trigger effects every time something "changes" in the music after syncing once.
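The beat counting described above can be sketched roughly like this, assuming the standard MIDI clock rate of 24 pulses per quarter note (the class and method names are made up for illustration):

```python
# Minimal beat counter driven by MIDI clock ticks (hypothetical sketch).
# MIDI clock sends 24 pulses per quarter note; in 4/4, a bar is 4 beats.
PPQN = 24  # MIDI clock pulses per quarter note

class BeatCounter:
    def __init__(self):
        self.ticks = 0

    def on_clock_tick(self):
        """Call once per MIDI clock byte (0xF8).

        Returns (beat_in_bar, is_beat_start) so effects can fire on
        beats or only on the downbeat of each 4/4 bar.
        """
        beat = self.ticks // PPQN          # 0..3 within the bar
        is_beat_start = self.ticks % PPQN == 0
        self.ticks = (self.ticks + 1) % (4 * PPQN)
        return beat, is_beat_start

    def resync(self):
        """Realign to the downbeat, e.g. on a MIDI Start (0xFA) message."""
        self.ticks = 0
```

After a single resync on a downbeat, the counter stays locked to the bar as long as the clock keeps ticking, which is what makes "trigger an effect every 4 bars" possible without any audio analysis.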

JKCalhoun 7 hours ago|
"3 hardware filters…"

The classic "Color Organ" from the '70s.

copypaper 5 hours ago||
This is awesome! I did a similar project in college for one of my classes and ran into the same exact walls as you.

- The more filters I added the worse it got. A simple EMA with smoothing gave the best results. Although, your pipeline looks way better than what I came up with!

- I ended up using the Teensy 4.0, which let me do real-time FFT and post-processing in less than 10ms (I want to say it was ~1ms, but I can't recall; it's been a while). If anyone goes down this path, I'd heavily recommend checking out the Teensy. It removes the need for a raspi or computer. Plus, Paul is an absolute genius and his work is beyond amazing [1].

- I started out with non-addressable LEDs also. I attempted to switch to WS2812's as well, but couldn't find a decent algorithm to make it look good. Yours came out really well! Kudos.

- Putting the LEDs inside an LED strip diffuser channel made the biggest difference. I spent so long trying to smooth things out in software when a simple diffuser was all I needed (I love the paper diffuser you made).
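The "simple EMA with smoothing" mentioned above could look something like this; the asymmetric attack/release variant (fast rise, slow fall, as in a VU meter) is my own assumption about what tends to look good, and the coefficient values are made up:

```python
# Hypothetical sketch: exponential moving average with separate attack
# and release coefficients, for smoothing a band's energy before mapping
# it to LED brightness.
def make_ema(attack=0.5, release=0.05):
    """Return a stateful smoother: rises quickly on transients, decays slowly."""
    y = 0.0
    def step(x):
        nonlocal y
        a = attack if x > y else release  # pick coefficient by direction
        y += a * (x - y)                  # standard EMA update
        return y
    return step
```

One filter per frequency band, fed once per analysis frame, is usually enough; stacking more filters mostly adds lag, which matches the observation that more filters made it worse.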

RE: What's Still Missing: I came to a similar conclusion as well. Manually programmed animation sequences are unparalleled. I worked as a stagehand in college and saw what went into their shows. It was insane. I think the only way to have that same WOW factor is via pre-processing. I worked on this before AI was feasible, but if I were to take another stab at it I would attempt to do it with something like TinyML. I don't think real time is possible with this approach. Although, maybe you could buffer the audio with a slight delay? I know what I'll be doing this weekend... lol.

Again, great work. To those who also go down this rabbit hole: good luck.

[1]: https://www.pjrc.com/

nsedlet 3 hours ago||
I also attempted to do real-time audio visualizations with LED strips. What was unsatisfying is that the net effect always seemed to be: the thing would light up with heavy beats and general volume. But otherwise the visual didn't FEEL like the music. This is the same issue I always had with the Winamp visualizations back in the day.

To solve this I tried pre-processing the audio, which obviously only works with recordings. I extracted the beats and the chords (using Chordify), made a basic animation, pulsed the lights to the beat, and mapped the chords to different color palettes.
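The pre-processed approach described here could be sketched as below. The chord labels and beat times are assumed to come from offline analysis (e.g. a tool like Chordify); the palette choices and function names are made up for illustration:

```python
# Hypothetical sketch: the current chord selects a color palette, and
# proximity to the last beat sets the brightness pulse.
PALETTES = {
    "C":  (255, 80, 0),   # warm (assumed mapping)
    "Am": (0, 60, 255),   # cool (assumed mapping)
}

def frame_color(t, beat_times, chords, default=(128, 128, 128)):
    """Color at time t. chords: list of (start_time, label), sorted by time."""
    # Find the chord active at time t
    label = None
    for start, name in chords:
        if start <= t:
            label = name
    base = PALETTES.get(label, default)
    # Pulse: brightness decays after each beat, floored at 20%
    since_beat = min((t - bt for bt in beat_times if bt <= t), default=1.0)
    gain = max(0.2, 1.0 - 2.0 * since_beat)
    return tuple(int(c * gain) for c in base)
```

Because everything is known ahead of time, the renderer only has to look up state per frame, which is what makes the result feel designed rather than merely reactive.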

Some friends and I rushed to put it together as a Burning Man art project, and it wasn't perfect, but by the time we launched it felt a lot closer to what I'd imagined. Here's a grainy video of it working at Burning Man: https://www.youtube.com/watch?v=sXVZhv_Xi0I

It works pretty well with most songs that you pick. Just saying there's another way to go somewhere between (1) fully reactive to live audio, and (2) hand designed animations.

I don't think there's an easy bridge to make it work with live audio though unfortunately.

MomsAVoxell 2 hours ago||
> I think the future of audio visualization on LED strips will involve a mixture of experts tuned for different genres, likely using neural networks.

I think it's more likely going to come from a direct integration with existing synthesis methods, but .. I'm kind of biased when it comes to audio and light synthesizers, having made a few of each…

We have addressed this expert tuning issue with the MagicShifter, which is a product not quite competing with the OP’s work, but very much aligned with it[1]:

https://magicshifter.net/

.. which is a very fun little light synthesizer capable of POV rendering, in-air text effects, light sequencer programming, MIDI, and so on .. plus, it has a 6DOF sensor (magnetometer, accelerometer, touch-sensing and so on), so you can use it for a lot of great things. We have a mode, "BEAT", where you place it on a speaker and get reactive LED strips of a form (quite functional) pretty much micro-mechanically: vibrations travel through the case to the sensor, so there's no ADC and no audio processing, just the mechanical coupling between the sensor and the audio source. So - not quite the same, but functionally equivalent in the long run (plus the MagicShifter is battery powered and pocketable, and you can paint your own POV images and so on, but .. whatever..)

The thing is, the limits: yes, there are limits - but like all instruments, you need to tune to/from/with those limits. It's not so much that achieving perfectly audio-reactive LEDs is diabolically hard, but rather that making aesthetically/functionally relevant decisions about when to accept those limits requires a bit of gumption.

Humans can be very forgiving with LED/light-based interfaces, if you stack things right. The aesthetics of the thing can go a long way towards providing a great user experience .. and in fact, is important to giving it.

[1] - (okay, you can power a few meters of LED strips with a single MagicShifter, so maybe it is ‘competition’, but whatever..)

itintheory 1 hour ago|
> https://magicshifter.net/

I get a cert mismatch on that site, and when clicking the shop link I end up at https://hackerspaceshop.com/ which is advertising an online fax service.

wolvoleo 6 hours ago||
Thanks for this! This is exactly the thing I'm struggling with now: making a decent visualisation for music based on an ESP32-S3.
serf 3 hours ago||
The hard part is dousing a room in pulsing, bright, colorful LEDs tastefully.

I haven't seen that done yet. I think it's one of those Dryland myths.

blobbers 2 hours ago||
Am I the only one surprised that the obvious answer is to map frequencies to notes and basically turn your LED strip into a piano visualization? Then just normalize to the strip size?

There’s plenty of visual experiments of pianists doing this “rock band” “guitar hero” style visualization of notes.
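The note-mapping idea above can be sketched with the standard MIDI note/frequency relation; the function names and the choice of piano range are my own for illustration:

```python
# Hypothetical sketch: map a detected frequency to the nearest piano note,
# then normalize that note's position onto an LED strip.
import math

A4 = 440.0  # reference pitch, MIDI note 69

def freq_to_midi(f):
    """Convert a frequency in Hz to the nearest MIDI note number."""
    return round(69 + 12 * math.log2(f / A4))

def note_to_led(midi_note, n_leds, lo=21, hi=108):
    """Map a MIDI note (piano range A0=21 .. C8=108) onto n_leds pixels."""
    midi_note = max(lo, min(hi, midi_note))
    return (midi_note - lo) * (n_leds - 1) // (hi - lo)
```

With this mapping, each spectral peak lands on a fixed pixel per note, which is exactly the "piano roll" look, at the cost of needing enough FFT resolution to separate adjacent semitones in the bass.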

8cvor6j844qw_d6 8 hours ago||
Are these available commercially for consumers?
leptons 3 hours ago|
There are plenty of LED strips with audio controllers that work pretty well. I've used them in a few projects. Just go look at Amazon, you can get them for pretty cheap.
p0w3n3d 9 hours ago||
IANAE, but I would go for an analog circuit rather than software driving the LEDs. I think that nowadays, with LLM support, it can be easier and better to optimize it for latency.
mrob 8 hours ago||
If you want minimum latency, you want the input side of a traditional vocoder, not an FFT. This is the part that splits the modulator signal into frequency bands and puts each one through an envelope follower. Instead of using the outputs of the envelope followers to modulate the equivalent frequency bands of a carrier signal, you can use them to drive the visualizer circuit.

That can be done with analog electronics, but even half an analog vocoder needs a lot of parts. It's going to be cheaper and more reliable to simulate it in software. This approach uses only IIR filters, which are computationally cheap and calculated one sample at a time, so they have the minimum possible latency. I'd be curious whether any LLM actually recognizes that an audio visualizer is half a vocoder instead of jumping straight to the obvious (and higher-latency) FFT approach.
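The filter-bank-plus-envelope-follower idea can be sketched as below, one sample at a time. The band-pass coefficients use the well-known RBJ biquad formulas (constant 0 dB peak gain variant); the band centers, Q, and release constant are assumed values:

```python
# Sketch of the vocoder-style analyzer described above: a bank of IIR
# band-pass biquads, each followed by a rectify-and-decay envelope
# follower, processed sample by sample with no block buffering.
import math

class BandpassEnvelope:
    def __init__(self, fc, fs, q=2.0, release=0.995):
        w0 = 2 * math.pi * fc / fs
        alpha = math.sin(w0) / (2 * q)
        a0 = 1 + alpha
        # RBJ band-pass biquad coefficients (constant 0 dB peak gain)
        self.b0, self.b2 = alpha / a0, -alpha / a0
        self.a1, self.a2 = -2 * math.cos(w0) / a0, (1 - alpha) / a0
        self.x1 = self.x2 = self.y1 = self.y2 = 0.0
        self.env = 0.0
        self.release = release

    def process(self, x):
        """One sample in, current band envelope out (no added latency)."""
        y = (self.b0 * x + self.b2 * self.x2
             - self.a1 * self.y1 - self.a2 * self.y2)
        self.x2, self.x1 = self.x1, x
        self.y2, self.y1 = self.y1, y
        # Envelope follower: full-wave rectify, instant attack, slow decay
        self.env = max(abs(y), self.env * self.release)
        return self.env

# Assumed band layout: one filter per octave-ish band
bank = [BandpassEnvelope(fc, 44100) for fc in (100, 400, 1600, 6400)]
```

Each call to `process` costs a handful of multiplies, so even a modest microcontroller can run a dozen bands at audio rate and drive the LEDs directly from the envelopes.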

avisser 7 hours ago||
For recorded music, you could always buffer however many milliseconds of audio to account for the processing.
IshKebab 5 hours ago|
It's not that hard. I did a real-time version of the Beatroot algorithm decades ago that worked pretty well for being such a simple algorithm.