Posted by minimalthinker 7 hours ago
For a period of time it was popular for the industrial designers I knew to try to launch their own Kickstarters. Their belief was that engineering was a commodity that they could hire out to the lowest bidder after they got the money. The product design and marketing (their specialty) was the real value. All of their projects either failed or cost them more money than they brought in because engineering was harder than they thought.
I think we’re in for another round of this now that LLMs give the impression that the software and firmware parts are basically free. All of those project ideas people had previously that were shelved because software is hard are getting another look from people who think they’re just going to prompt Claude until the product looks like it works.
LLMs are just surfacing the fact that assessing and managing risk is an acquired, difficult-to-learn skill. Most people don't know what they don't know and fail to think about what might happen if they do something (correctly or otherwise) before they do it, let alone what they'd do if something goes wrong.
The operator is still a factor.
The LLM got it to “working” state, but the people operating it didn’t understand what it was doing. They just prompt until it looks like it works and then ship it.
The parents are saying they'd rather vibe code themselves than trust an unproven engineering firm that does(n't) vibe code.
You could cut the statement short here, and it would still be a reasonable position to take these days.
LLMs are still complex, sharp tools - despite their simple appearance and the protestations of their biggest fans and haters alike, the dominating factor in the effectiveness of an LLM tool on a problem is still whether or not you're holding it wrong.
It gave up, removed the code it had written directly accessing the correct property, and replaced it with a new function that did a BFS to walk through every single field in the API response object while applying a regex "looksLikeHttpsUrl" and hoping the first valid URL that had https:// would be the correct key to use.
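For flavor, the pattern it landed on looks roughly like this (a hypothetical Python reconstruction, not the actual code; every name here is invented):

    import re
    from collections import deque

    looks_like_https_url = re.compile(r"^https://\S+$")

    def find_first_https_url(response):
        # BFS over every field of the API response, hoping the first
        # https:// match happens to be the key we wanted all along.
        queue = deque([response])
        while queue:
            node = queue.popleft()
            if isinstance(node, dict):
                queue.extend(node.values())
            elif isinstance(node, (list, tuple)):
                queue.extend(node)
            elif isinstance(node, str) and looks_like_https_url.match(node):
                return node
        return None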
On the contrary, the shift from pretraining driving most gains to RL driving most gains is pressuring these models to resort to new hacks and shortcuts that are increasingly novel and disturbing!
So, will they? Probably. Can you trust the kind of LLM you'd use to do a better job than the cheapest firm? Absolutely.
You know, now that I'm thinking about it, I'm beginning to wonder if poor data privacy could have some negative effects.
Very, but there are already tons of them at lots of different price, quality, and openness levels. A lot of manufacturers have their own protocols; there are also quasi-standards like Lab Streaming Layer for connecting to a hodgepodge of devices.
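Reading from an LSL-capable device is pleasantly boring. A minimal sketch with pylsl, assuming some device on the network is already publishing an EEG stream:

    from pylsl import StreamInlet, resolve_stream

    streams = resolve_stream('type', 'EEG')  # find any EEG stream on the LAN
    inlet = StreamInlet(streams[0])
    while True:
        sample, timestamp = inlet.pull_sample()  # one multichannel sample
        print(timestamp, sample)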
This particular data?
Probably not so useful. While it’s easy to get something out of an EEG set, it takes some work to get good quality data that’s not riddled with noise (mains hum, muscle artifacts, blinks, etc.). Plus, brain waves on their own aren’t particularly interesting; it’s seeing how they change in response to some external or internal event that tells us about the brain.
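Even the first pass of cleanup is real work. Just the mains-hum piece, as a minimal Python/scipy sketch (the sampling rate and 60 Hz line frequency are made-up examples, and this does nothing about blinks or muscle artifacts):

    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 250.0  # Hz, a typical consumer-EEG sampling rate (assumed)
    b, a = iirnotch(w0=60.0, Q=30.0, fs=fs)  # notch out 60 Hz mains hum

    raw = np.random.randn(int(fs) * 10)  # stand-in for one channel of EEG
    cleaned = filtfilt(b, a, raw)        # zero-phase filtering, no time shift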
Google for a list of all the exceptions to HIPAA. There are a lot of things that _seem_ like they should be covered by HIPAA but are not...
Baby's gotta get some cash somewhere.
Not everybody gets it.
I believe there was some good that came from last month's decision to be more open about what apps and data can say without going through huge regulatory processes (though because we apply auditory stimulation, this doesn't apply to us). However, there should at least be regulatory requirements for data security.
We've developed all of our algorithms and processing to happen on device, which is required anyway given the latency a Bluetooth connection would introduce, but even the data sent to the server is all encrypted. I'd think that would be the basics. How do you trust a company with monitoring, and apparently providing stimulation, if they don't take these simple steps?
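"The basics" here really is about one call in most client libraries. A sketch with paho-mqtt (2.x API; the broker hostname and topic are made-up examples):

    import paho.mqtt.client as mqtt

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2,
                         client_id="mask-1234")
    client.tls_set()  # TLS via the system CA store; broker cert is verified
    client.connect("broker.example.com", 8883)  # 8883 = MQTT over TLS
    client.publish("devices/mask-1234/eeg", b"<frame>")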
Like, don't actually do it, but I feel like there's inspiration for a sci-fi novel or short story there.
I have deployed open MQTT to the world for quick prototypes on non-personal (and non-healthcare) data. Once, my cloud provider told me to stop because they didn't like that it could be used for relay DDoS attacks.
I would not trust the sleep mask company even if they somehow manage to have some authentication and authorisation on their MQTT.
(Also, "We're not happy until you're not happy.")
The K, of course, stands for Ka-ching!
It’s wasteful not to save and learn from those.
The difference is when it's a sleep mask, someone reads your brainwaves. When it's a cloud credential, someone reads your customer database. Per-device or per-environment credential provisioning isn't even hard anymore. AWS has IAM roles, IoT has device certificates, MQTT has client certs and topic ACLs. The tooling exists. Companies skip it because key management adds a step to the assembly line and nobody budgets time for security architecture on v1.
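Concretely, for the MQTT case: each device gets its own certificate, and the broker pins it to its own topic subtree. A minimal paho-mqtt sketch (paths, hostname, and topic layout are all assumptions; the ACL line in the comment is Mosquitto syntax):

    import paho.mqtt.client as mqtt

    # Broker-side (Mosquitto acl_file), one line covers every device:
    #   pattern write devices/%c/#
    # so a client may only publish under its own client id.
    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2,
                         client_id="mask-1234")
    client.tls_set(
        ca_certs="ca.pem",         # CA that signed the broker's cert
        certfile="mask-1234.crt",  # this device's own certificate
        keyfile="mask-1234.key",   # and its private key (mutual TLS)
    )
    client.connect("broker.example.com", 8883)
    client.publish("devices/mask-1234/eeg", b"<frame>")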
It’s quite literally why the internet is so insecure: at many points all along the way, “hey, should we design and architect for security?” is/was met with “no, we have people to impress and careers to advance with parlor tricks to secure more funding; besides, security is hard and we don’t actually know what we are doing, so toe the line or you’ll be removed.”
Almost out of a Philip K. Dick novel
Lowering the skills bar needed to reverse engineer at this level could have its own AI-related implications.