The coding aspect is novel, I'll admit, and something an audience may find interesting, but I've yet to hear any examples of live-coded music (or even coded music) that I'd actually want to listen to. They almost always take the form of some bog-standard house music or techno, which I don't find that enjoyable.
Additionally, the technique is fun for demonstrating how sound synthesis works (like in the OP article), but anything more complex or nuanced is never explored or attempted. Sequencing a nuanced instrumental part (or several) requires a lot of moment-to-moment detail, dynamics, and variation, all of which is tedious to sequence and simply doesn't play to this format's strengths.
So again, I want to integrate this skill into my music production tool set, but aside from the novelty of coding live, it doesn't appear well-suited to making interesting music in real time. And for offline sequencing there are better, more sophisticated tools, like DAWs or trackers.
Consider this: there are teenagers today, out there somewhere, learning to code music. Remember when synthesisers were young and cool and there was an explosion of different engines and implementations?
This is happening for the kids, again.
Try to use this new technology to replicate the modern, and then the old sound, and then discover new sounds. Like we synth nerds have been doing for decades.
Pro developers who really care about the sound variously write in C/C++ or use cross-compilers for Pd or Max. High-quality oscillators, filters, reverbs, etc. are hard work, although you can certainly get very good results with basic ones given today's fast processors.
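For a sense of scale, here's a minimal C++ sketch of the "basic ones" end of that spectrum: a phase-accumulator sine oscillator feeding a one-pole low-pass. Nothing here is from any real library, and production-quality versions add anti-aliasing, proper coefficient design, denormal handling, and so on.

```cpp
#include <cmath>
#include <cstdio>

const double kTwoPi = 6.283185307179586;

// Phase-accumulator sine oscillator.
struct SineOsc {
    double phase;       // normalized phase in [0, 1)
    double freq;        // frequency in Hz
    double sampleRate;  // sample rate in Hz

    double next() {
        double out = std::sin(kTwoPi * phase);
        phase += freq / sampleRate;
        if (phase >= 1.0) phase -= 1.0;  // wrap to preserve precision
        return out;
    }
};

// One-pole low-pass filter: y[n] = y[n-1] + a * (x[n] - y[n-1]).
struct OnePoleLP {
    double state;  // previous output
    double a;      // smoothing coefficient in (0, 1]

    double process(double x) {
        state += a * (x - state);
        return state;
    }
};

int main() {
    SineOsc osc{0.0, 440.0, 48000.0};
    OnePoleLP lp{0.0, 0.1};
    for (int i = 0; i < 8; ++i)
        std::printf("%f\n", lp.process(osc.next()));
}
```

Even something this naive can sound fine on today's processors; the hard part is everything this sketch leaves out.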
Live coding is better for conditionals like 'every time [note] is played increment [counter], when [counter] > 15 reset [counter] to 0 and trigger [something else]'. But people who are focused on the result rather than the live-coding performance tend to either make their own custom tooling (Autechre) or use programmable Eurorack modules that integrate into a larger setup, e.g. https://www.perfectcircuit.com/signal/the-programmable-euror...
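For concreteness, that kind of counter logic is trivial to express in ordinary code too; here's a sketch in plain C++, with placeholder names standing in for [note] and [something else]:

```cpp
#include <cstdio>

// Placeholder for whatever [something else] is: a fill, another pattern, etc.
void triggerSomethingElse() { std::puts("triggered!"); }

int counter = 0;

// Called on every [note] event.
void onNote() {
    ++counter;                   // every time [note] is played, increment [counter]
    if (counter > 15) {          // when [counter] > 15...
        counter = 0;             // ...reset [counter] to 0...
        triggerSomethingElse();  // ...and trigger [something else]
    }
}

int main() {
    for (int i = 0; i < 40; ++i) onNote();  // simulate 40 note events
}
```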
It's not that you can't get great musical results via coding; of course you can. But coding as performance is a celebration of the REPL, not of the music per se.
Years ago I went to a sci-fi convention for the first time, because I had moved to a new town and didn't know anyone, and I like sci-fi. I realized when I was there that despite me growing up reading Hugo and Nebula award winners, despite watching pretty much every sci-fi show on TV, despite being a full-time computer nerd, the folks who go to sci-fi conventions are a whole nother subculture again. They have their own heroes, their own in-jokes, their own jargon... and even their own form of music! It's made by people in the community for the community and it misses the point to judge it by "objective" standards from the outside, because it's not about trying to make "interesting music" or write the best song of all time. The music made in that context is not being made as an end in itself, or even as the focus of the event, it's just a mechanism to enable a quirky subculture to hang out and bond in a way that's fun for them. I see this kind of live coded music as fulfilling a similar role in a different subculture. Maybe it's not for you, but that's fine.
I think this format of composition is going to encourage a highly repetitive structure to your music. Good programming languages constrain and prevent the construction of bad programs. Applying that to music is effectively going to give you quantization of every dimension of composition.
I'm sure it's possible to break out of that, but you are fighting an uphill battle.
It's also notable for being probably the only Haskell library used almost exclusively by people with no prior knowledge of Haskell, which is an insane feat in itself.
I think I must not be expressing myself well. These tools seem to be optimized for parametric pattern manipulation. You essentially declare patterns, apply transformations to them, and then play them back in loops. The whole paradigm is going to encourage a very specific style of composition where repeating structures and their variations are the primary organizational principle.
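To make that concrete, here's a toy sketch of the paradigm in C++ (deliberately not TidalCycles' actual API): declare a pattern, apply a transformation each cycle, and play it back in a loop.

```cpp
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

using Pattern = std::vector<std::string>;

// One possible transformation: rotate the pattern left by one step.
Pattern rotate(Pattern p) {
    if (!p.empty()) std::rotate(p.begin(), p.begin() + 1, p.end());
    return p;
}

int main() {
    Pattern drums = {"bd", "sn", "bd", "hh"};  // declare a pattern
    for (int cycle = 0; cycle < 4; ++cycle) {
        for (const auto& step : drums)         // "play" one cycle
            std::printf("%s ", step.c_str());
        std::printf("\n");
        drums = rotate(drums);                 // transform for the next cycle
    }
}
```

Everything interesting lives in which transformations you stack up, which is exactly why repetition-plus-variation becomes the organizing principle.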
Again, I'm not trying to critique the styles of music that lend themselves well to these tools.
> And yet it also constrains and prevents the construction of bad programs in a very strict manner via its type system and compiler.
Looking at the examples in their documentation, all I see are examples like:
d1 $ sound "[[bd [bd bd bd bd]] bd sn:5] [bd sn:3]"
So it definitely isn't leveraging GHC's typechecker for your compositions. Is the TidalCycles runtime doing some kind of runtime typechecking on whatever it parses from these strings?

> It's also notable for being probably the only Haskell library used almost exclusively by people with no prior knowledge of Haskell, which is an insane feat in itself.
I think Pandoc or ShellCheck would win on this metric.
"Computer games don't affect kids. If Pac Man affected us as kids, we'd all be running around in darkened rooms, munching pills and listening to repetitive music." -- Marcus Brigstocke (probably?)
Also, related but not quite - YouTube's algorithm gave me this the other day, showing how to reconstruct the beat of Blue Monday by New Order:
My sister likes to work with [checks notes carefully to avoid the wrong words] old textiles. This of course constrains the kind of art she can make. That's the whole point.
I see live coding the same way as the harp or a loop sampler: an instrument, one of an enormous variety of tools which you might find suits you or not. As performance I actually enjoy live coding far more than most ways to make music; although I thought Amon Tobin's ISAM Live was amazing, that's because of the visuals.
Aside from the novelty factor (due to very different UI/UX) and the idea that you can use generative code to make music (which became an even more interesting factor in the age of LLMs), I agree.
And even the generative-code part I mentioned is a novelty factor as well, and isn't really practical for someone whose end goal is actually making music (as opposed to someone who is just experimenting with the tech, or with how far one can get with music-as-code UI/UX).
Of course, often creativity comes from limitations. I would agree that it's usually not desirable to go full procedural generation, especially when you want to wrangle something into the structure of a song. I think the best approach is a hybrid one, where procedural generation is used to generate certain ideas and sounds, and then those are brought into a more traditional DAW-like environment.
Sure it might be cool to use cellular automata to generate rhythms, or pick notes from a diatonic scale, or modulate signals, but without a rhyme or reason or _very_ tight constraints the music - more often than not - ends up feeling unfocused and meandering.
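To illustrate, here's a toy C++ sketch of the cellular-automaton idea: an elementary CA (Rule 90) driving a 16-step grid. The mapping from cells to drum hits is entirely made up, which is exactly the problem: the output has structure, but no musical intent behind it.

```cpp
#include <array>
#include <cstdio>

int main() {
    std::array<bool, 16> cells{};  // one cell per 16th-note step
    cells[8] = true;               // single seed cell
    for (int bar = 0; bar < 4; ++bar) {
        for (bool c : cells) std::printf(c ? "x" : ".");
        std::printf("\n");         // print this bar's hits
        std::array<bool, 16> next{};
        for (int i = 0; i < 16; ++i) {
            bool l = cells[(i + 15) % 16];  // left neighbour (wrapping)
            bool r = cells[(i + 1) % 16];   // right neighbour (wrapping)
            next[i] = l != r;               // Rule 90: new cell = left XOR right
        }
        cells = next;
    }
}
```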
These methods may be able to generate a bar or two of compelling material, but it's hard to write long musical "sentences" or "paragraphs" that have an arc and intention to them. Or where the individual voices are complementing and supporting one another as they drive towards a common effect.
A great deal of compelling music comes from riding the tightrope between repetition and surprising deviations from that scheme. This quality is (for now) very hard to formalize with rules or algorithms. It's a largely intuitive process and is a big part of being a compelling writer.
I think the most effective music comes from the composer having a clear idea of where they are going musically and then using the tools to supplement that vision, not letting the tools generate and steer for you.
-----
As an aside, I watch a lot of YouTube tutorials in which electronic music producers create elaborate modulation sources or Max patches that generate rhythms and melodies for them. A recurring theme in many of these videos is an approach of "let's throw everything at the wall, generate a lot of unfocused material, and then winnow it down and edit it into something cool!" This feels fundamentally backwards to me. I understand why it's exciting and cool when you're starting out, but I think the best music still comes from having a strong grasp of the musical fundamentals, a big imagination, and the technical ability to render it with your tools and instruments.
-----
To your final point, I think the best example of this hybrid generative approach you're describing are Autechre. They're really out on the cutting edge and carving their own path. Their music is probably quite alienating because it largely forsakes melody and harmony. Instead it's all rhythm and timbre. I think they're a positive example of what generative music could be. They're controlling parameters on the macro level. They're not dictating every note. Instead they appear to be wrangling and modulating probabilities in a very active way. It's exciting stuff.
There's a learning curve for sure, but it's not too bad once you learn the basics of how audio and MIDI are handled + general JUCE application structure.
Two tips:
Don't bother with the Projucer; use the CMake example to get going, especially if you don't use Xcode or Visual Studio.
If you're on a Mac, you might need to self-sign the VST. I don't remember the exact process, but it's something I had to do once I got an M4 Mac.
LLMs have absolutely killed any interest I used to have in the Max/Pd/Reaktor wiring-up-boxes UI.
I have gone even further, though, and thought: why do I even care about VST or a DAW or anything like this? Why not break completely free of everything?
I take inspiration from Trevor Wishart and the Composers Desktop Project for this. Wishart's music could only really be made with his own tools.
It is easy to sound original when using a tool no one else has.
If you see it as yet another instrument you have to master, then you can go pretty far. I'm finding myself exploring rhythms and sounds faster than I ever could in a DAW, but at the same time I do find a lot of factors limiting, especially sequencing.
So far I haven't gotten beyond a good-sounding loop, hence the name "loopmaster", and maybe that's the limit, which is why I made a two-deck "dual" mode in the editor, so that it can be played as a DJ set where you don't really need that much progression.
That said, it's quite fun to play with it and experiment with sounds, and whenever you make something you enjoy, you can export a certain length and use it as a track in your mix.
My goal is certainly to be able to create full length tracks with nuances and variations as you say, just not entirely sure how to integrate this into the format right now.
Feedback[0] is appreciated!
[0] https://news.ycombinator.com/item?id=46052478
[1] Nice example: https://m.youtube.com/watch?v=GWXCCBsOMSg
https://en.wikipedia.org/wiki/Musikalisches_W%C3%BCrfelspiel
I must say the narrated trance piece by switch angel blew my socks right off; to me it feels like this should be a genre in itself.
The tools/frameworks have become more plentiful, approachable, and mature over the past 10-15 years, to the point where you can just go to strudel.cc and start coding music right from your browser.
I'll shamelessly plug my weirdo version in a Forth variant, also a house loop running in the browser: https://audiomasher.org/patch/WRZXQH
Well, maybe it's closer to trance than house. It's also considerably more esoteric and less commented! Win-win?
A fun experiment to get you tinkerers started: skip to the bottom and play The Complete Loop - https://loopmaster.xyz/tutorials/how-to-synthesize-a-house-l...
Then, on line 21, find `pat('[~ e3a3c4]*4',(trig,velocity,pitches)->`.
Change *4 to *2 and back to *4 to change the interval at which the "Chords" play. If you do it real fast with your backspace + 2 or backspace + 4 keys, you can change the chords in realtime and kinda vibe with the beat a little bit.
Definitely recommend wearing headphones to hear the entire audio spectrum (aka bass).
Change line 12 from 8000 to 800.
For now you can see how it's done here[0] on line 139. I pretty much use it on every other track I've made as well.
[0]: https://loopmaster.xyz/loop/6221a807-9658-4ea0-bfec-8925ccf8...
Also, there is an AI DJ mode[0] where you set the mood/genre and the AI generates and plays music for you infinitely.
I don't imagine making a full song out of this, but it would be a great instrument to have.
I'll put $50 down right now.
[0]: https://loopmaster.xyz/loop/75a00008-2788-44a5-8f82-ae854e87...
The janky way to do this would be to run it locally and set up a watch job that reloads the audio file into a VST plugin every time the file changes.
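A sketch of that watch job in C++17, polling the file's modification time; the filename and the "reload" step are placeholders, not a real VST API:

```cpp
#include <chrono>
#include <cstdio>
#include <filesystem>
#include <thread>

namespace fs = std::filesystem;

int main() {
    const fs::path audio = "render.wav";  // placeholder for the rendered file
    fs::file_time_type last{};
    while (true) {
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
        if (!fs::exists(audio)) continue;  // nothing rendered yet
        auto now = fs::last_write_time(audio);
        if (now != last) {
            last = now;
            // In the real setup, this is where you'd poke the plugin to
            // re-read the file (hypothetical hook, not an actual VST call).
            std::puts("file changed: reload sample in the plugin");
        }
    }
}
```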
The license at: https://github.com/juce-framework/JUCE/blob/master/LICENSE.m...
indicates you can just license any module under the AGPL and avoid the JUCE 8 license (which, to be fair, I'm not bothering to read).
And sure, you can license under the AGPL. It should be obvious that's undesirable.
I'm not going to test it, but couldn't you just load a JSON file with all the params?
Various instructions, etc.
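Something like this, say, using the (real) nlohmann/json library; the parameter names are made up for illustration:

```cpp
#include <cstdio>
#include <fstream>
#include <string>
#include <nlohmann/json.hpp>  // third-party, header-only JSON library

// Hypothetical parameter set; field names are illustrative only.
struct Params { double cutoff; double resonance; int bpm; };

Params loadParams(const std::string& path) {
    std::ifstream in(path);
    nlohmann::json j = nlohmann::json::parse(in);
    return { j.at("cutoff").get<double>(),
             j.at("resonance").get<double>(),
             j.at("bpm").get<int>() };
}

int main() {
    // e.g. params.json: {"cutoff": 800, "resonance": 0.3, "bpm": 120}
    Params p = loadParams("params.json");
    std::printf("cutoff=%.1f res=%.2f bpm=%d\n", p.cutoff, p.resonance, p.bpm);
}
```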
I can't believe it's not code!
I like how music recognition flags it as the original Jarre piece.
I first did stuff like this when I was a teen using a 6502 machine and a synth card, using white noise to make tshhh snares etc. All coded in 6502 assembly. The bible was Hal Chamberlin's Musical Applications of Microprocessors.
Then of course we had games abusing the SID etc. to make fantastic tunes, and then came very procedural music in size-coded PC and Amiga demos, which under the hood were doing tiny synth work and sequencing very much like dittytoy etc.
Shadertoy even has procedural audio but it doesn't get used enough.
Fantastic to experience all of this!
Not like a fringe unknown one, but one with over 20 years of history, now owned by Beatport.