Posted by gmays 6 days ago
Even in this case, they're choosing the easy path (plucked, pizzicato), but the human/instrument interface is still audibly oversimplified while the resonant body has an unnecessary amount of "realism". The sound of pizzicato has a distinct character because the player's finger/skin slides a bit on the string as they're plucking, among other factors, which sounds like it's missing here. This can be tricky to implement because it's not necessarily a one-way impulse. The string is already vibrating and affects the finger, hence "interface".
This applies 10x more with bowed strings.
If your model doesn't sound like someone's strangling a cat then it's probably not realistic.
the real sound comes when it's played with other instruments in concert. it doesn't need years of practice; it needs patience, the right setting, and an extra joint on ur pinky :p
It absolutely requires years of practice to play the violin at an expert level. This is well documented.
> the real sound comes when its played with other instruments in concert
So in the violin repertoire, the Bach sonatas and partitas for unaccompanied violin, the Ysaÿe sonatas and the Paganini caprices are what, not real?
So I don't know if your criticism makes much sense.
you mention a few details; there are so many more if you think about it. the human-instrument interaction has all sorts of imperfections.
tension in the shoulders can make you bend the neck a bit. too much tension in the fingers might pull a note out of tune. pushing not 100% straight along the bow might shift it sideways a bit, changing how it crosses the strings. then of course there's the position of the bow on the strings (closer/further from the bridge).
humans are not perfect machines, but in those imperfections lies the beauty. A perfectly played instrument is played by a human and has this 'humanization' across all the areas where human, instrument, and the music itself interact, imho.
if you produce music digitally this shows instantly, because all your instruments will sound flat and boring if you don't humanize them.
I suppose this is the innovative part. They're not simulating just the string, but also the fluid it's immersed in, which is a computationally hard problem.
I made a vibrating string simulator in college for our Numerical Methods course and for quite a while I couldn't understand why it sounded so bad.
Turns out rounding errors in floating point operations can propagate to a point where they produce this distinct, "metallic" sound.
They're incredibly small, but if your system of differential equations is large enough, they'll become noticeable. Switching to an algorithm with better numerical stability would probably mitigate this issue, but I didn't get that far with my project.
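To make the stability point concrete, here's a minimal sketch (assumed setup, not the original coursework) of an explicit finite-difference scheme for the ideal 1D wave equation. The textbook result is that this update stays bounded only when the Courant number is at most 1; push it slightly past that and the highest modes amplify on every step, which is one way a string model ends up sounding wrong.

```python
import numpy as np

def pluck_and_run(courant, n_nodes=64, n_steps=400):
    """Explicit finite-difference update for the ideal 1D wave equation
    with fixed ends. Stable only for Courant number C <= 1."""
    # triangular "pluck" shape, string initially at rest
    pluck = n_nodes // 4
    u = np.concatenate([np.linspace(0.0, 1.0, pluck),
                        np.linspace(1.0, 0.0, n_nodes - pluck)])
    u[0] = u[-1] = 0.0
    u_prev = u.copy()
    c2 = courant ** 2
    for _ in range(n_steps):
        u_next = np.zeros_like(u)
        # u[t+1,i] = 2 u[t,i] - u[t-1,i] + C^2 (u[t,i+1] - 2 u[t,i] + u[t,i-1])
        u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                        + c2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return float(np.max(np.abs(u)))

print(pluck_and_run(0.9))   # stays bounded (stable)
print(pluck_and_run(1.05))  # grows without bound (unstable)
```

This only shows the blow-up failure mode; the "metallic" coloration from accumulated rounding error is subtler, but switching schemes or precision attacks the same class of problem.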
Reminds me of a Karplus-Strong synthesis implementation that produced a gorgeous guitar/mandolin sound, but only for delay durations that weren't simple ratios with the given sample rate. The simple-ratio durations would end up sounding like crude, attenuated periods of noise-- metallic sounds like you'd expect from a pitch produced in a KSS demo. Everything else had some kind of subtle interpolation error that ended up shaping the noise just enough to make it sound like a million bucks.
The problem with most KSS is that the filter used will typically saturate the timbre. So rather than hearing a guitar string, you're hearing a guitar-adjacent interpolation scheme whose prominence makes you wonder just how un-guitarlike the original unfiltered sound must have been.
Julius Smith wrote a pretty comprehensive textbook on building physical models of musical instruments, available online. Here, for example, is a chapter on modeling bowed-string sounds: https://ccrma.stanford.edu/~jos/pasp/Bowed_Strings.html
From the article:
> As a demonstration, the researchers applied the computational violin to play two short excerpts: one from “Bach’s Fugue in G Minor,” and another from “Daisy Bell” — a nod to the first song that was ever produced by a computer-synthesized voice.
And in consumer products for 20+ years. Pianoteq [0], which is awesome, was first released in 2006.
Also Audio Modeling has been in the business of creating physically modeled virtual instruments, including the violin (under the SWAM series), for a while now as well. You can do pretty fun things like map a USB breath controller to bow pressure, etc.
It's much more difficult to use, though - you have to control lots of aspects of the simulation (using automation in DAW or MIDI controllers) to make it sound actually realistic.
OK I guess it seems like this is more of a tool for luthiers than for composers or music producers.
I currently use a raspberry pi with Pianoteq as sound output for my digital piano. It got a reluctant stamp of approval from my pianist son, although of course he prefers the physical response of even a poor acoustic piano.
The combination of pianoteq and a sample based piano is pretty nice too, though tough to do on a Pi.
Good speakers improve the experience because you get your room resonance etc.
The coolest thing - you can change temperament. So if you are playing music from before equal temperament, you can hear what different keys used to sound like! Very interesting especially with Bach.
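To make that concrete with the standard arithmetic (nothing specific to Pianoteq's implementation): the equal-tempered major third is audibly sharp of the pure 5:4 third that older temperaments favored in their "good" keys, which is a big part of why keys had distinct characters.

```python
import math

# Equal temperament divides the octave into 12 equal semitones, so every
# key sounds identical. Just intonation uses small whole-number ratios,
# which older temperaments approximated in some keys at the cost of others.
et_major_third = 2 ** (4 / 12)    # four equal semitones, ~1.2599
just_major_third = 5 / 4          # pure major third, 1.25

# difference in cents (100 cents = one equal-tempered semitone)
cents = 1200 * math.log2(et_major_third / just_major_third)
print(f"ET major third is {cents:.1f} cents sharp of just")  # ~13.7 cents
```

Roughly 14 cents is well above the usual just-noticeable difference for sustained tones, which is why the effect is so striking when a modeled piano lets you flip temperaments.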
I agree with your son, there is nothing like a real piano. There are interesting attempts at combining the digital and mechanical with soundboard transducers from Kawai and Yamaha, I haven't used them but I would like to.
90s physical modelling was a very simplified modular kind of modelling. Instead of analogue oscillators and filters you had "string" models, "pipe" models, various resonators, and so on.
The models were interesting, but still quite crude and basic.
This project is the most physical kind of physical modelling. It's an unsimplified brute-force model of the entire instrument body and string system, in full.
It doesn't try to "model a resonator", it models blocks of wood with various holes, and calculates how they distort and radiate as sound passes through them.
It's ridiculously expensive computationally, but it's also the only way to get all of the nuances of the sound.
I expect they're already working on a stick-slip model for bowing.
Theoretically you could use the same technique to model a piano or guitar, and you would get something indistinguishable from a real instrument.
You'd likely need a supercomputer to run the model in anything approaching real time.
But the advantage is that once you've got it you can do insane things like replace the strings with wood instead of metal, or use different metals, or "build" nonphysical pianos that are fifty feet long and have linear overtones all the way down to the bass.
I can tell the difference between Pianoteq and a real piano, but I can't, in general, tell the difference between Pianoteq and a recording of a piano. Maybe there's some insane level of hi-fi gear which would let me, idk? But when it's good enough for Steinway, Petrof, and my conservatory-student son to give their stamp of approval, I think it's good enough for me as well :) Quite a few of the insane things you mention you can already do with Pianoteq's physical model (e.g. emulating a 20m grand), and I suspect they keep a few knobs to themselves to sell virtual instruments.
That's a great way to put it. There's no way to fully reproduce that live sound, but compared to anything played through speakers, Pianoteq is indistinguishable from a real piano.
Out of the box it sounds a little too perfect, but just setting the Condition to the midway point (1.0) fixes that.
“If there’s anything that’s sounding mechanical to it, it’s because we’re using the exact same time function, or standard way of plucking, for each note,” says Makris, who is himself a lute player. “A musician will adapt the way they’re plucking, to put a little more feeling on certain notes than others. But there could be subtleties which we could incorporate and refine.”
Ouch: this is completely inaccurate. Physical modeling has its roots in the 80s, and Stefan Bilbao has been doing FDM-based methods for over 20 years. I think he discusses FEM in Numerical Sound Synthesis.
Also, aschkually, a violin is on the "easier" end of making it sound realistic. It's one of the "tutorial" models you go through when you start learning about this (resonators + reverb get you 80% there). It's much harder to do any plucking sound (guitar, piano), and much, much harder to model percussion accurately (cymbals, drums) in such a way that the sound doesn't come out dry and very evidently synthetic.
Source: I was very invested into this in the 2000s, although as a hobby, not professionally.
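For context on the "resonators + reverb" recipe: each body resonance is typically a damped two-pole filter, and the "body" is just a bank of them excited by the string signal. A minimal sketch (the mode list below is made up for illustration, not any real instrument's measured modes):

```python
import numpy as np

def resonator_bank(excitation, modes, sr=44100):
    """Sum of two-pole resonators, one per body mode.
    `modes` is a list of (frequency_hz, t60_seconds, amplitude)."""
    out = np.zeros(len(excitation))
    for freq, t60, amp in modes:
        r = 10.0 ** (-3.0 / (t60 * sr))        # pole radius giving a T60 decay
        w = 2.0 * np.pi * freq / sr
        a1, a2 = -2.0 * r * np.cos(w), r * r   # resonator denominator coeffs
        y1 = y2 = 0.0
        for i, x in enumerate(excitation):
            y = amp * x - a1 * y1 - a2 * y2    # direct-form recursion
            out[i] += y
            y2, y1 = y1, y
    return out

# impulse "tap" on the body through three invented wood-like modes
tap = np.zeros(4000)
tap[0] = 1.0
body = resonator_bank(tap, [(275.0, 0.05, 1.0),
                            (450.0, 0.04, 0.6),
                            (700.0, 0.03, 0.4)], sr=8000)
```

Feed a plucked-string signal through a bank like this and some reverb and you get the "80% there" violin-ish result; the last 20% (bow friction, finger contact, radiation) is where it gets hard.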
Conical and cylindrical bores definitely differ but I don't see why they'd be different specifically with respect to the lip interaction, can you say more about that part?
My father is a luthier, and while he definitely needs to wait until the instrument is finished to hear the full sound, he also uses multiple techniques on parts of an unfinished violin to hear *some* sound. For example, he knocks on the top or back plate and listens to the sound it makes.
I don’t know how much of it is just voodoo, but he’s been doing it for 50 years, so I’m sure he noticed some correlation to the final sound by now. :) I'll have to ask him.
https://github.com/Qzping/ELGAR
It's just fun to see solutions to problems you didn't even know to exist.
Looking it up just now, it turns out that, "Modern physics research shows that the f-shape allows the instrument to push much more air than a traditional round hole, resulting in greater acoustic power and projection."
Just wanted to share in case someone else had that same bit of false knowledge in their head.