Also, AI art is fine. It looks better than anything I could make in Paint. That said, there are plenty of FOSS and public-domain art pieces you can leverage if all you really need is placeholders, and that is much cheaper.
Since AI tools make it extremely easy to get started, it's really easy to begin half a dozen different projects, feel like you're being productive, but actually accomplish nothing.
This accurately describes how I used to use AI – my ChatGPT history is filled with all sorts of grandiose project plans. But lately I've been narrower and narrower with what I actually prompt.
This leads me to think that a chatbox is not the best UI for using AI, as it's too open-ended and too prone to give you long, broad answers, rather than hyper-specific ones.
- good for me in the short term (e.g., I can fulfill what my company asks from me)
- good for the company in the short term (see above)
- bad for me in the long term. E.g., I'm becoming more and more replaceable at my job; I don't have the same depth of understanding of the systems we're building as I used to; my peers and I collaborate way less now (instead of talking to each other, we just ask Claude directly); and there's not much to be proud of in my day-to-day work (we're not building CRUDs, but we're not building Netflix either, it's something in between). The compounding effect worries me too: every shortcut I take today is a piece of context I'm not internalizing, a debugging instinct I'm not sharpening, a tradeoff I'm not learning to weigh. The skills that used to differentiate me are slowly atrophying. We're all individually more "productive" on paper, but collectively I think we're gonna end up with a codebase nobody fully understands and a team that barely knows each other.
- good for the company in the long term: they can fire me easily; they don't need 80% of us anymore. They can just pay Anthropic for the agents instead. They don't need people to maintain or read the codebase either: agents do that now. And executives never really cared about us in the first place, so that part hasn't changed, I guess. The math is simple from their side: headcount is the biggest line item, and agents don't ask for raises, don't burn out, don't go on leave, and don't push back when leadership makes a dumb call. We're the worst part of the business on a spreadsheet, and the tools to replace us are finally cheap enough that someone is gonna pull the trigger.
I'm not a superstar engineer. I know that. I'm probably in the 80% bag of engineers out there. Some of you may be in the top 20%, and you're probably gonna keep your job somehow (or not, who knows). But for the rest of us, I think we simply cannot compete anymore.
I regret every single time I've used AI so far. Nothing good has come from it for me; the feeling is so different from any other technology I've used in the past (frameworks, languages, libraries, whatever): it used to be fun, it improved my career prospects, it expanded my knowledge. AI/LLMs are precisely the opposite: it's not fun, it's making my career worse, and it's not expanding my knowledge.
I CANNOT UNDERSTAND HOW MOST OF US, ENGINEERS, ARE OUT HERE VOUCHING FOR AI. WE ARE LITERALLY CHEERING ON THE THING THAT IS COMING FOR OUR JOBS, AND WE'RE DOING IT FOR FREE, POSTING BENCHMARKS AND EVANGELIZING IT TO OUR MANAGERS LIKE WE'RE GETTING A COMMISSION. WE ARE NOT. THE LABS AND THE EXECS GET PAID. WE'RE HANDING THEM THE ROPE.

I would add these points to the negative long-term personal effects:
- potential for cognitive impairment / deficit from long-term AI use.
- lack of diversity / creativity / heterogeneity / outside the box thinking of any sort in work going forward.
My questions are: will the AI get to be above our level at creating grokkable source code before the code becomes unmanageable? And even if not: will the models' ability to understand and modify slop outpace their ability to create it?
For the sake of our jobs, I hope the answer to both is no. But we'll see. Even in the best case we'll have a lot of cleaning up to do.