
Posted by surprisetalk 1 day ago

Audio Reactive LED Strips Are Diabolically Hard (scottlawsonbc.com)
154 points | 50 comments
doctorhandshake 5 hours ago|
I like this writeup but I feel like the title doesn't really tell you what it's about ... to me it's about creativity within constraints.

The author finds, as many do, that naive or first-approximation approaches fail within certain constraints and that more complex methods are necessary to achieve simplicity. He finds, as I have, that perceptual and spectral domains are a better space to work in than the raw data for things that are themselves perceptual and spectral.

What I don't see him get to (might be the next blog post, IDK) is constraints in the use of color - everything is in 'rainbow town' as we say, and it's there that things get chewy.

I'm personally not a fan of emissive green LED light in social spaces. I think it looks terrible and makes people look terrible. Just a personal thing, but putting it into practice with these sorts of systems is challenging as it results in spectral discontinuities and immediately requires the use of more sophisticated color systems.

I'm also about maximum restraint in these systems - if they have flashy tricks, I feel they should do them very very rarely and instead have durational and/or stochastic behavior that keeps a lot in reserve and rewards closer inspection.

I put all this stuff into practice in a permanent audio-reactive LED installation at a food hall/ nightclub in Boulder: https://hardwork.party/rosetta-hall-2019/

scottlawson 4 hours ago||
I didn't go into much detail about it but there's a whole rabbit hole of color theory and color models. For example, the spectrum effect assigns different colors to different frequency bins, but also adjusts the assignment over time to avoid a static looking effect. It does this by rotating a "color angle" kind of like the HSL model.
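A minimal sketch of that rotating "color angle" idea, using Python's stdlib `colorsys` (function name and parameters are hypothetical, not the article's actual code):

```python
import colorsys

def spectrum_colors(n_bins, t, rotation_speed=0.02):
    """Assign a hue to each frequency bin, then rotate the whole
    palette over time so the mapping never looks static.
    `t` is a frame counter; `rotation_speed` is hue turns per frame.
    Returns a list of (r, g, b) tuples in 0..255."""
    colors = []
    for i in range(n_bins):
        # Base hue spreads the bins across the color wheel; the
        # time-dependent offset rotates the whole wheel.
        hue = (i / n_bins + t * rotation_speed) % 1.0
        r, g, b = colorsys.hls_to_rgb(hue, 0.5, 1.0)
        colors.append((int(r * 255), int(g * 255), int(b * 255)))
    return colors
```

With `t` incremented each frame, every bin keeps its relative position on the wheel while the absolute assignment slowly drifts.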

I really like your LED installation in Rosetta Hall, it looks beautiful!

doctorhandshake 4 hours ago||
Thanks! Great article - would like to read one about the color rabbit hole pls ;)
PaulHoule 4 hours ago||
Yeah, "diabolical" overstates it. It isn't a wicked problem:

https://en.wikipedia.org/wiki/Wicked_problem

Kinda funny, but I am a fan of green LED light to supplement natural light on hot summer days. I can feel the radiant heat from LED lights on my bare skin, and since the human eye is most sensitive to green light, I feel the most comfortable with my LED strip set to (0,255,0).

scottlawson 4 hours ago||
I'd actually argue it has some wicked problem characteristics. The input space is enormous (all possible audio), perception is subjective and nonlinear, and there's no objective function to optimize against, only "does this feel right?". Every solution you try reframes what "good" means. It's not as hard as social planning but is way harder than it sounds, no pun intended.
PaulHoule 3 hours ago|||
Ever seen https://www.youtube.com/watch?v=oNyXYPhnUIs ? There are a lot of things people might think feels right.

(Note both the scanner in front of KITT and the visual FX on his dashboard when he speaks, which changes from season to season.)

fragmede 3 hours ago|||
fta: The biggest unsolved problem is making it work well on all kinds of music.

The wickedness comes from wanting something that works just as well for John Summit as for the Grateful Dead, Mozart, and Bad Bunny.

But it seems like you could cheat for installations where the type of music is known and go from there. The other cheat is to have a "tap" button, and to pull that data and go from there.
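The tap-button cheat is easy to sketch: average the intervals between taps to get a BPM, then phase-lock the lights to the predicted beat grid (helper names are hypothetical):

```python
def bpm_from_taps(tap_times, max_gap=2.0):
    """Estimate BPM from a list of tap timestamps (seconds).
    Ignores gaps longer than `max_gap` so a pause between
    tapping sessions doesn't skew the average."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])
                 if b - a <= max_gap]
    if not intervals:
        return None
    return 60.0 / (sum(intervals) / len(intervals))

def next_beat(last_tap, bpm, now):
    """Phase-lock: time of the next predicted beat after `now`."""
    period = 60.0 / bpm
    elapsed = now - last_tap
    return last_tap + (int(elapsed / period) + 1) * period
```

Taps at 0, 0.5, 1.0, 1.5 s give 120 BPM, and the lights can then fire on the predicted grid instead of chasing the audio.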

Mental note: the thought "it can't be that hard" (when obviously it is) sent me down a rabbit hole for a couple of hours.

WarmWash 4 hours ago||
The real killer is that humans don't hear frequencies, they hear instruments, which are stacks of frequencies that only roughly correlate with any single frequency range.

I wonder if transformer tech is close to achieving real-time audio decoding, where you can split a track into its component instruments and drive a light show off of that. Think those fancy Christmastime front-yard light shows, as opposed to random colors kind of blinking with what maybe is the beat.

adzm 3 hours ago|
Real-time audio stem separation is already possible; some models can even get to around 20 ms latency (HS-TasNet): https://github.com/lucidrains/HS-TasNet

There was a nice overview paper last year too (https://arxiv.org/html/2511.13146v1) that introduced RT-STT, which is still being tweaked and built upon in the MSS scene.

The high-quality ones like MDX-Net and Demucs usually have at least several seconds of latency, but for something like driving visuals top quality isn't really needed, and the real-time approaches should be fine.

omneity 1 hour ago||
I'm pretty sure it should be possible to distill HS-TasNet into a version approximate and fast enough for the purpose of animating LEDs.

In the end it's "just" chunking streamed audio into windows and predicting which LEDs a window should activate. One can build a complex non-realtime pipeline, generate high-quality training data with it, and then train a much smaller model (maybe even an MLP) to predict just this task.
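The shape of that distilled model is tiny. A sketch of the student network with numpy (untrained random weights here, just to show the window-to-LEDs plumbing; all sizes are assumptions except the 144-LED strip from the article):

```python
import numpy as np

N_FFT = 512       # window size in samples (assumption)
N_LEDS = 144      # strip length from the article

def features(window):
    """Log-magnitude spectrum of one audio window -> feature vector."""
    mag = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    return np.log1p(mag)

class TinyMLP:
    """One-hidden-layer MLP mapping a spectrum to per-LED brightness.
    In the distillation idea, the training targets would come from
    the slow, high-quality stem-separation pipeline run offline."""
    def __init__(self, n_in, n_hidden, n_out, rng):
        self.w1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.w2 = rng.normal(0, 0.1, (n_hidden, n_out))

    def __call__(self, x):
        h = np.maximum(0.0, x @ self.w1)             # ReLU hidden layer
        return 1.0 / (1.0 + np.exp(-(h @ self.w2)))  # 0..1 brightness

rng = np.random.default_rng(0)
mlp = TinyMLP(N_FFT // 2 + 1, 64, N_LEDS, rng)
window = rng.normal(0, 0.1, N_FFT)    # stand-in for one streamed chunk
leds = mlp(features(window))          # one brightness value per LED
```

A forward pass this small is microseconds on a Pi, which is the whole point of distilling.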

iamjackg 5 hours ago||
Scott's work is amazing.

Another related project that builds on a similar foundation: https://github.com/ledfx/ledfx

aleksiy123 2 hours ago||
Fun! I actually did a similar project during my time at UVic 10 years ago, but it was a hoodie.

https://youtu.be/-LMZxSWGLSQ

I remember thinking really hard about what to do with color. Except, like you say, mine is pretty much a naive FFT.

https://github.com/aleksiy325/PiSpectrumHoodie?tab=readme-ov...

Thanks for reminding me.

mdrzn 7 hours ago||
Always been very interested in audio-reactive LED strips and bulbs. I've been using a Windows app to control my LIFX lights for years, but lately it hasn't been maintained and it won't connect to my lights anymore.

I tried recreating the app (and I can connect via BT to the lights), but writing the audio-reactive code was the hardest part (and I still haven't managed to figure out a good rule of thumb). I mainly use it when listening to EDM or club music, so it's always a classic 4/4 at 110-130 BPM, yet it's hard to get the lights to react on beat.
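For steady 4/4 club music, a cheap energy-based onset detector on the bass band gets surprisingly far: flag a beat whenever the kick-band energy spikes above its recent average. A sketch with numpy (helper names and thresholds are assumptions, not from any particular app):

```python
import numpy as np

def bass_energy(window, sample_rate=44100, cutoff_hz=150.0):
    """Energy below `cutoff_hz` for one audio window (kick-drum band)."""
    spec = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), 1.0 / sample_rate)
    return float(np.sum(spec[freqs < cutoff_hz] ** 2))

def detect_beats(windows, threshold=1.5, history=43):
    """Flag a window as a beat when its bass energy exceeds
    `threshold` times the recent average (~1 s of history at
    ~43 windows/s). A classic, cheap onset detector."""
    energies = [bass_energy(w) for w in windows]
    beats = []
    for i, e in enumerate(energies):
        recent = energies[max(0, i - history):i]
        avg = sum(recent) / len(recent) if recent else e
        beats.append(e > threshold * avg)
    return beats
```

Combining this with a tempo estimate (so you can predict the next beat instead of reacting late) is usually what makes it finally feel on-beat.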

menno-dot-ai 5 hours ago||
Woow, this was my first hardware project right around the time it was released! I remember stapling a bunch of LED strips around our common room and creating a case for the Pi + power supply by drilling a bunch of ventilation and cable holes in a wooden box.

And of course, by the time I got it to work perfectly I never looked at it again. As is tradition.

scottlawson 4 hours ago|
That's awesome to hear! Sometimes the journey is the destination. It's a great project to get started with electronics.
rustyhancock 7 hours ago||
Twenty years ago or more I made a small LED display that used a series of LM567s (frequency detection ICs) and LM3914s (bar chart drivers) to make a simple histogram for music.

It was fiddly, and probably too inaccurate for a modern audience, but I can't claim it was diabolically hard. Tuning was a faff, but we were more willing to sit and tweak resistor and capacitor values back then.

cwillu 32 minutes ago|
That would be “The Naive FFT”:

“Most people who attempt audio reactive LED strips end up somewhere around here, with a naive FFT method. It works well enough on a screen, where you have millions of pixels and can display a full spectrogram with plenty of room for detail. But on 144 LEDs, the limitations are brutal. On an LED strip, you can't afford to "waste" any pixels and the features you display need to be more perceptually meaningful.”
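For reference, the naive approach that passage describes amounts to something like this (a sketch, not the article's actual code):

```python
import numpy as np

N_LEDS = 144

def naive_fft_frame(window, n_leds=N_LEDS):
    """The 'naive FFT' approach: take one FFT, lump the bins into
    n_leds linear groups, and map group magnitude to brightness.
    The linear grouping is exactly what makes it look bad on a
    strip: most musical detail lands in the first few LEDs."""
    mag = np.abs(np.fft.rfft(window))
    groups = np.array_split(mag, n_leds)
    levels = np.array([g.mean() for g in groups])
    peak = levels.max()
    return levels / peak if peak > 0 else levels  # 0..1 per LED
```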

JKCalhoun 6 hours ago||
I made a decent audio visualizer using the MSGEQ7 [1]. It reports levels for seven audio frequency bands, which an Arduino can poll on every loop. It looks like the MSGEQ7 is not a standard part any longer, unfortunately.

(And it looks like the 7 frequencies are not distributed linearly—perhaps closer to the mel scale.)
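A quick check of the datasheet band centers shows the spacing is geometric (a constant ~2.5x ratio between bands), i.e. logarithmic rather than linear; mel is roughly log above ~1 kHz, so "closer to mel than linear" holds up. A sketch of the check:

```python
import math

# MSGEQ7 band center frequencies from the SparkFun datasheet (Hz)
centers = [63, 160, 400, 1000, 2500, 6250, 16000]

# Ratio between adjacent bands: a constant ~2.5x step,
# i.e. geometric (log) spacing rather than linear.
ratios = [b / a for a, b in zip(centers, centers[1:])]

def hz_to_mel(f):
    """Standard mel-scale conversion (O'Shaughnessy formula)."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

mels = [hz_to_mel(f) for f in centers]
```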

I tried using one of the FFT libraries on the Arduino directly but had no luck. The MSGEQ7 chip is nice.

[1] https://cdn.sparkfun.com/assets/d/4/6/0/c/MSGEQ7.pdf

empyrrhicist 5 hours ago|
Have you ever seen anything like a MSGEQ14 or equivalent? It would be cool to go beyond 7 in such a simple-to-use chip, but I haven't seen one.
milleramp 4 hours ago||
This guy has been making music controlled LED items, boxes and wrist bands. https://www.kickstarter.com/projects/markusloeffler/lumiband...
londons_explore 6 hours ago|
The mel spectrum is the first part of a speech recognition pipeline...

But perhaps you'd get better results if more of a ML speech/audio recognition pipeline were included?

Eg. the pipeline could separate out drum beats from piano notes, and present them differently in the visualization?

An autoencoder network trained to minimize perceptual reconstruction loss would probably have the most 'interesting' information at the bottleneck, so that's the layer I'd feed into my LED strip.
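A linear stand-in for that idea: PCA is the optimal linear autoencoder, so projecting spectrogram frames onto the top-k principal directions gives a crude "bottleneck" to drive the LEDs from. A sketch with numpy (not a trained nonlinear model; all names and sizes are assumptions):

```python
import numpy as np

def spectrogram(audio, n_fft=512, hop=256):
    """Magnitude spectrogram: one log-spectrum row per frame."""
    frames = [audio[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(audio) - n_fft, hop)]
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

def linear_autoencoder(X, k=8):
    """PCA as a linear autoencoder: the top-k principal directions
    form the bottleneck. A trained nonlinear autoencoder would
    replace this, but the idea is the same: drive the LEDs from
    the k bottleneck activations instead of raw FFT bins."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    def encode(frame):
        return (frame - mean) @ vt[:k].T   # frame -> k activations
    return encode

rng = np.random.default_rng(0)
audio = rng.normal(0, 0.1, 44100)      # stand-in for 1 s of audio
X = spectrogram(audio)
encode = linear_autoencoder(X, k=8)
bottleneck = encode(X[0])              # 8 values to map onto LEDs
```

The perceptual-reconstruction-loss version would swap the SVD for a trained network, but the plumbing to the strip is the same.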

akhudek 1 hour ago||
I've done this in my own solution in this space (https://thundergroove.com). I use a realtime beat detection neural network combined with similar frequency spectrum analyses to provide a set of signals that effects can use.

Effects themselves are written in embedded Javascript and can be layered a bit like photoshop. Currently it only supports driving nanoleaf and wled fixtures, though wled gives you a huge range of options. The effect language is fully exposed so you can easily write your own effects against the real-time audio signals.

It isn't open source though, and still needs better onboarding and tutorials. Currently it's completely free; I haven't really decided if I want to bother trying to monetize any of it. If I were to, it would probably just be for DMX and maybe MIDI support. Or maybe just for an ecosystem of portable hardware.

calibas 5 hours ago||
I was playing around with this recently, but the problem I encountered is that most AI analysis techniques like stem separation aren't built to work in real-time.