Posted by akrylov 22 hours ago
And if it were, and the result were what Elon and Scam Altman say, it would destroy the economy. Not sure any country wants to lead a race to self-destruction.
The winner here will be whoever can move atoms with AI, not whoever can take notes at the daily standup.
i.e., think Boston Dynamics vs. Unitree.
They're both doing well, but I'd lean toward China winning on atoms, given the huge manufacturing base it can AI-ify.
You can tell we're on the cusp when level 5 self-driving cars are common and you have multiple companies deploying them on the street. Google is doing great work, but they poured TONS of effort into it and the thing still needs intense stacks of perception and processing. Much more than I've seen any humanoid company pour into it.
L5 SDVs are much easier to achieve than humanoids, and they have tangible economic benefits. My thesis is that those will come first.
This doesn't really argue against your point, because the standards are what they are, and like I said, I have no idea how one would go about changing them if one even decided they wanted to. And given what they are, it has taken, as you point out, enormous amounts of effort to reach those standards in a practical way.
That all being said, while I agree that SDV's are in many respects easier than other robotics tasks, they are also somewhat uniquely dangerous. Other categories of task, while potentially more complicated, won't have to worry nearly so much about safety, and so may be operating under a different constraint regime. I think this means that we may see adoption happen at a much more accelerated rate than we have seen in the automotive space.
So far, they are not.
I haven't seen good stats on Tesla (they are less transparent than Waymo), but it would shock me if they weren't also at least slightly safer than the average human driver. Human drivers are really bad at driving.
But even if Tesla isn't safer, taken as a whole, the self driving industry as it currently exists still probably is, purely because it's mostly Waymo, and Waymo is dramatically safer.
If free, cheap energy were unlocked today, I reckon it would still take a good 30 years for that to ripple through properly.
It solves lots of problems (water!), but it doesn't make the heavy machinery needed to consume it appear instantly.
Why would an American company outsource manufacturing to China if the labor cost is the same in both places? The entire reason the Chinese manufacturing base exists is to exploit cheap labor.
What would be the point of shipping products across the ocean?
And, if you need changes, you can go talk to them the same day you see a problem.
>Frontier cyber models may push states and defense firms toward the opposite logic: security by obscurity, with closed software, closed tooling, closed firmware, and closed chips. If a model cannot train on the code and architecture of a target stack, it will usually have less context and less speed. That does not make systems safe, but it does raise the value of proprietary stacks all the way down to hardware.
Is this really true? Are there any experts who can weigh in on this?
Should we interpret this to mean that, in the new world, Windows is more resistant to attacks than, say, Linux?
I think the “security through obscurity is no security” concept was aimed at getting people not to rely on obscurity alone as a security mechanism. And that message largely succeeded. But now we are in a rapid acceleration of capabilities (on both sides) where any advantage to one side will yield outsized gains, at least in the short term.
And basically all the security bugs I've read about were found by looking at the source code.
But that doesn't mean Windows is more secure. Just imagine a scenario where someone steals the Windows source code and sells it to a rogue actor: that would make it even less secure, because no one (except Microsoft) would have had the chance to search the source for bugs.
LLMs can read assembly better than most, so probably not. But reality has never stopped people from trying to obfuscate.
I feel like the author (and perhaps many here on HN) is on a different planet than almost everyone I interact with.
Opening up comments to see top comments are 90% "NO U" without any substantial discussion - you disappoint me, HN.
Most businesses are adding limitations on using open models.
My business's integration literally has a dropdown for which model you want to use. I think that's pretty standard.
Is it just that the subject line alone is a springboard for casual discussion? If so, maybe that's fine, but then, it feels like we'd be better off cultivating these discussions as "ask HN" posts instead of boosting this kind of web content.
I think this has been the case on many sites, for decades. Many people just want to read and write comments without engaging with the OP.
Have a look at this Reddit thread [0] about this Ars Technica article [1] - both are 15 years old.
I suppose in the 2010s this was an amusing detail of online discussion. In the 2020s it makes me feel a little uneasy - it suggests that the entire concept of people jumping from site to site, clicking links and understanding what they are writing about was flawed from the start. No wonder the internet became centralized and slopified.
And no, I didn’t read the OP, I found your comment to be more interesting to discuss. These days with AI articles flooding the internet it seems foolish to actually read articles before the comments.
Edit: although we have to contend with AI-generated comments as well. I wonder how many of the comments on this page actually have original insights into the political economics of AI.
[0] https://old.reddit.com/r/WTF/comments/gz9k7/the_internet_is_...
[1] https://arstechnica.com/science/2011/04/guns-in-the-home-lot...
Even if one of the US corporations did eventually end up in a scenario where its revenue is at least as high as its inference cost, what harm would that do to the other contenders? It's not as if there is any kind of network effect here that would exclude them from market participation.