Posted by vasco_ 4 days ago
Most pALS have difficulty speaking: their voice is weak, and sometimes there are pronunciation problems. Many pALS use simple assistive devices such as a personal voice amplifier or a text-to-speech device.
Sometimes the problem is that "normal" people think pALS are deaf, so they speak loudly, even though the patient can hear perfectly well. Or doctors and others ignore the patient and insist on speaking to the carer, as if the patient cannot comprehend.
https://alsnewstoday.com/forums/forums/topic/artificial-voic...
I would be very curious to hear about interviews with the patients (conducted through their current means of communication, e.g. eye-gaze interfaces). Are they finding that the speech generated by the system accurately reflects their intentions?
EDIT: the EEG peripheral they are using has 4 channels at a 250 Hz sample rate. I freely admit I have little knowledge of neuroscience and happily defer to the experts, but that really doesn't seem like a lot of data from which to infer speech intentions.
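For scale, a quick back-of-envelope calculation of the raw data rate those specs imply (the 24-bit sample depth is my assumption, typical for consumer EEG ADCs; it isn't stated anywhere in the thread):

```python
# Back-of-envelope data rate for the headset described above:
# 4 EEG channels sampled at 250 Hz.
# The 24-bit sample depth is an assumption, not from the article.
channels = 4
sample_rate_hz = 250
bits_per_sample = 24

samples_per_second = channels * sample_rate_hz              # 1000 samples/s
bytes_per_second = samples_per_second * bits_per_sample // 8

print(samples_per_second)   # 1000
print(bytes_per_second)     # 3000 -- about 3 kB/s of raw signal
```

About 3 kB/s of raw signal, before any noise is accounted for, is indeed a very thin channel compared with what invasive BCIs record.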
I’m just saying that EEG data is so unreliable, and requires so much calibration/training per person, that reliably isolating speech intentions in paralyzed patients would be a significant development.
Patients with locked-in syndrome (one of the use cases mentioned in the article, also called a pseudo-coma), or with other disorders of consciousness, are unable to protest, or to confirm the accuracy of the generated message being attributed to them. Communicating on your own terms and in your own words is fundamental to human dignity.
Meanwhile, this coincides with the lukewarm reception of generative AI among consumers; perhaps it is the lack of autonomy of locked-in patients that makes them an interesting segment for this new generation of ventures scrambling for an ROI on the enormous over-investment in the sector.
The conference venues look lush tho.
[0] https://en.wikipedia.org/wiki/Electroencephalography#Artifac...
That said, I was just a high schooler and so my method of collecting training data was to run the script and "think really hard about moving left". Probably could have been a good deal more sophisticated too.
It's an extremely powerful tool for diagnosing a limited range of conditions, but it is not magic. Electrical signals are heavily attenuated when they don't originate near the outer surface of the brain. Even so, a headband like this is susceptible to noise from movement and other factors. You either need to correct for this with AI, which introduces a second source of error, or you need a very still user. I'm not convinced by the claimed ability to "read minds" with this technology; I would need the man in the video to answer some specific questions to be convinced.
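To make the noise point concrete, here's a minimal sketch of the standard first step in cleaning an EEG trace: band-pass filtering out slow drift (e.g. from electrode movement) and high-frequency noise. The 1–40 Hz band and the synthetic signal are illustrative assumptions on my part, not details from the article.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                      # sample rate of the headband, per the thread
t = np.arange(0, 10, 1 / fs)    # 10 s of synthetic single-channel "EEG"

alpha = np.sin(2 * np.pi * 10 * t)          # 10 Hz alpha-band activity
drift = 100 * np.sin(2 * np.pi * 0.2 * t)   # huge slow drift, e.g. movement
raw = alpha + drift

# 4th-order Butterworth band-pass, 1-40 Hz (illustrative band edges)
b, a = butter(4, [1.0, 40.0], btype="bandpass", fs=fs)
cleaned = filtfilt(b, a, raw)   # zero-phase filtering

# The drift dominated the raw trace but is nearly gone after filtering
interior = cleaned[250:-250]    # discard filter edge effects
print(np.abs(raw).max())        # > 50: drift swamps the signal
print(np.abs(interior).max())   # ~ 1: mostly the alpha component survives
```

This is the easy part; muscle and eye-movement artifacts overlap the band you care about, which is exactly where the AI-based correction (and its second source of error) comes in.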
Is this better than not being able to communicate at all? Yes.
If they don't find that it aligns at all, then honestly that is worse than nothing. Imagine being locked in while your family communicates with an LLM pretending to be you - all while you have to watch and can't do anything about it.
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."