Posted by reasonableklout 8 hours ago
I don't think it's super clear what we'll find out.
We've all built the moat of our careers out of our expertise.
It is also very possible that expertise will be rendered significantly less valuable as the models improve.
Nobody ever cared what the code looked like. They only ever cared if it solved their problem and it was bug free. Maybe everything falls apart, or maybe AI agents ship code that's good enough.
Given the state of the industry we're clearly going to find out one way or the other, hah!
I think some companies will find out that their senior engineers were providing more value and software stability than they gave them credit for!
Corporate feedback loops are very slow though, partly because management don't like to admit mistakes, and partly because of false success reporting up the chain. I'd not be surprised if it takes 5 years or more before there is any recognition of harm being done by AI, and quiet reversion to practices that worked better.
If you're not doing AI there's an incredibly limited pool of people who will give you $$$ ... and you're competing with EVERY OTHER NON-AI COMPANY for their attention.
You should not release a product into the market unless you have a good enough product that can keep you and your client compliant, safe and secure - including not leaking their customer info all over the place.
Prompt injection risk, etc. are massive for agentic AI without deterministic guardrails that actually work in practice.
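By "deterministic guardrails" I mean plain code that checks an agent's proposed action before it runs, no model in the loop. A minimal sketch (all names here are illustrative, not from any real framework):

```python
import re

# Hypothetical allowlist of tools the agent may invoke.
ALLOWED_TOOLS = {"search_docs", "get_weather"}

# Reject arguments containing common injection phrases or control characters.
# (Illustrative patterns only; a real deployment needs a much broader policy.)
SUSPICIOUS = re.compile(
    r"(ignore (all )?previous instructions"
    r"|system prompt"
    r"|[\x00-\x08\x0b\x0c\x0e-\x1f])",
    re.IGNORECASE,
)

def validate_tool_call(tool: str, argument: str) -> bool:
    """Return True only if the proposed call passes every deterministic check."""
    if tool not in ALLOWED_TOOLS:
        return False  # unknown tool: deny by default
    if len(argument) > 500:
        return False  # cap argument size
    if SUSPICIOUS.search(argument):
        return False  # likely prompt-injection payload
    return True
```

The point is that the deny-by-default checks are ordinary, testable code, so they behave the same way every time regardless of what the model emits.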
Stop testing in production if you're shipping in a regulated industry. Ridic!
If you're not technical, you can bring in someone who is after you see signs of product-market fit and demos, but BEFORE deployment. This is common sense and best practice, but startup bros dgaf because they're just good at sales and marketing & short-term greedy.
Comical.