Another was a set of songs that helped me emotionally regulate on the drive home after couples therapy. The lyrics contained grounding exercises and mindfulness practices that helped me stay aware and present.
Both did their job, but they were music for utility, not necessarily for artistic enjoyment, so it's not entirely an apples-to-apples comparison.
By the time I finish writing this comment - yours is 10 minutes old - someone will have vibe coded one, probably.
Also feels like an easy feature for someone like Suno to add, to help subscription retention.
But something like NotebookLM emphasizing subtle mnemonic devices set to music...
Also, probably someone will game an algorithm to get revenue from a bajillion tracks of lofi slop.
Slop is starting to dominate uploads to some music services, so I think it will only get worse from here.
I find that people who rush to negative judgement of LLM-generated art are not going far enough in the creative process to properly judge just how much juice there is to be squeezed out of those 50-billion-dimensional spaces.
Before the purchase, the quality of generations had been going down for a while (IMO; subjective and anecdotal). I tested multiple iterations of their chat interface and was never thrilled with its ability to actually understand or adhere to prompts. I had liked their previous (Suno/Udio-like) iteration (Riffusion).
Curious to hear how it performs for people now and whether anything has improved.
The workflow feels wrong. It should be closer to a DAW with chat, where the model outputs stems, samples, and arrangement parts instead of one finished track. Then you could target a specific sound, section, or idea and actually develop it.
They could attempt messy stem splitting like all of the other tools have done for a few years now, but those aren't really usable in a production setting beyond small samples you were already going to chop/distort.
I especially love the glitchy UI sounds, though I suspect they're not intentional.
"solo banjo instrumental, strictly no other instruments" ... ten seconds later: drums, a fiddle, and a guitar join in.
I could understand if this was an API that people built products around, but it seems to be geared directly at consumers.
How many iterations (arrangements and recordings) do you think a typical Billboard pop song goes through before it's ready for a final mix and mastering?
Go find a YouTube video of someone doing this work; it is kind of mind-blowing. Given how expensive studio time is, you realize why it costs so much for a popular artist to produce a polished album.
Odds are that for every 200 AI songs you generate, 2 or 3 are decent.
Anyway. UMG will probably force you to sign over training rights in future record deals.
The models still can't rap. It sounds like you asked someone who didn't know what rap was to read a script.