> All three sets of worries—misalignment, concentration of power in a private company, and normal concerns like job loss—motivate the government to tighten its control.
A private company becoming "too powerful" is a non-issue for governments, unless a drone army is somewhere in that timeline. Fun fact: the former head of the NSA sits on the board of OpenAI.
Job loss is a non-issue: if there are corresponding economic gains, they can be redistributed.
"Alignment" is too far into the fiction side of sci-fi. Anthropomorphizing today's AI is tantamount to mental illness.
"But really, what if AGI?" We either get the final say or we don't. If we're dumb enough to hand over all responsibility to an unproven agent and we get burned, then serves us right for being lazy. But if we forge ahead anyway and AGI becomes something beyond review, we still have the final say on the power switch.
> the AIs can do everything taught by a CS degree
No, they fucking can't. Not at all. Not even close. I feel like I'm taking crazy pills. Does anyone really think this?
Why have I not seen -any- complete software created via vibe coding yet?
If this article were an AI model, it would be catastrophically overfit.
I wonder which jobs would not be automated. Therapy? HR?
But I don't view LLMs as a path to AGI on their own. I think they're really great at being text engines and at human interfacing, but there will need to be other models for the actual thinking. Instead of having just one model (the LLM) doing everything, I think there will be a hive of more specific-purpose models, and the LLM will be how they communicate with us. That solves so many of the problems we currently have from using LLMs for things they were never meant to do.
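To make that concrete, here's a minimal sketch of the "hive of specialists, LLM as interface" idea. All the names here (`Specialist`, `route`, `llm_paraphrase`, the toy solvers) are hypothetical illustrations, not any real framework's API: purpose-built components do the actual work, and the LLM's only job is translating their raw output back into prose for the human.

```python
# Toy sketch: specialist models do the "thinking", an LLM layer does the talking.
# Everything here is illustrative; llm_paraphrase() stands in for a real model call.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Specialist:
    name: str
    handles: Callable[[str], bool]  # crude intent check for routing
    run: Callable[[str], str]       # the purpose-built model / solver


def solve_arithmetic(query: str) -> str:
    # Stand-in for a symbolic math engine rather than an LLM guessing at numbers.
    expr = query.lower().replace("what is", "").strip(" ?")
    return str(eval(expr, {"__builtins__": {}}))  # toy example only, not safe for real input


def lookup_fact(query: str) -> str:
    # Stand-in for a retrieval model over a curated knowledge base.
    kb = {"capital of france": "Paris"}
    key = query.lower().strip(" ?").removeprefix("what is the ")
    return kb.get(key, "not found")


def llm_paraphrase(raw_answer: str, query: str) -> str:
    # The LLM's only job here: turn a specialist's raw output into prose.
    # In practice this would be a call to a language model.
    return f"To answer '{query}': {raw_answer}."


SPECIALISTS = [
    Specialist("math", lambda q: any(ch.isdigit() for ch in q), solve_arithmetic),
    Specialist("facts", lambda q: "capital" in q.lower(), lookup_fact),
]


def route(query: str) -> str:
    for s in SPECIALISTS:
        if s.handles(query):
            return llm_paraphrase(s.run(query), query)
    return llm_paraphrase("no specialist matched; falling back to the LLM alone", query)


if __name__ == "__main__":
    print(route("What is 17 * 23?"))
    print(route("What is the capital of France?"))
```

The design point is the division of labor: routing and answering are handled by components you can test and verify independently, and the language model only ever touches the presentation layer.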