Posted by youngbrioche 4 days ago

When everyone has AI and the company still learns nothing(www.robert-glaser.de)
387 points | 273 comments | page 2
ch_ase 4 days ago|
It’s been helpful for me to look at the promise of AI by comparing with the dotcom boom. Lots of similarities.

But the internet was a simpler concept for businesses: basically, you can now sell to people from their computers. AI's promise is what? That it can approximate reasoning about things? That's a much more challenging implementation puzzle to truly solve.

I don’t know that I’ve seen anything of real substance outside coding tasks yet.

dominictorresmo 4 days ago||
It's just ass to work in this area now. At the company where I work, the bosses let everyone use it, even non-developers. I really want to quit and work in another field, but unfortunately where I live an entry-level salary can't pay the rent, and I'm getting old
ncr100 4 days ago||
At many midsize companies where I've worked as an engineer, I've been asked to develop a growth plan and share it with my manager. Then at the end of the quarter I go over the growth plan, and the manager agrees or disagrees with my actual growth and gives me a performance bonus befitting my success.

I propose that employees create self-training byproducts from any AI interaction, and then work with their manager to make sure these self-training byproducts are part of their growth plan. This can guarantee growth without losing the opportunity to interact with the intelligent AI system (on topics relevant to the company's short-, mid-, and long-term strategic advantage).

Cthulhu_ 4 days ago||
On the first part of the article: I believe it describes how individual productivity gains do not seem to translate into business / larger-scale productivity. I think this is expected; individual developer productivity, code volume, LOC/day was never a valuable metric at the company scale. Number of delivered features might be one, but ultimately revenue, customer growth, etc. are.

While I do believe higher developer productivity can lead to reacting faster to market forces or more A/B testing, that won't necessarily lead to a successful business, because ultimately it's rarely the software that's the issue there.

zidoo 4 days ago||
Once people try to increase quality instead of speed, they will see how powerful LLMs are. Everything else is just a sales pitch by Nvidia and friends.
wongarsu 4 days ago||
Even if LLMs write more buggy code, they can still raise software quality in the short to medium term by letting you clear out a lot of the backlog of bugs and UI issues that are known but never had enough priority to be fixed.

Debugging and developing first fixes is also one of the areas where current LLMs are the biggest force multipliers, especially if you have reproduction cases the LLM can test on its own.
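
To make that concrete, here's a rough sketch of what "a reproduction case the LLM can test on its own" might look like in a pytest setup; parse_price and its bug are made up for illustration, not from any real codebase:

    # Hypothetical failing reproduction case. The point is that an agent can
    # run this test in a loop: see it fail, propose a patch, re-run until green,
    # with no human needed to verify each attempt.
    import pytest

    def parse_price(raw: str) -> float:
        """Toy function with a known bug: it chokes on thousands separators."""
        return float(raw.replace("$", ""))

    def test_parse_price_handles_thousands_separator():
        # Reproduction: "$1,299.00" raised ValueError in production logs.
        assert parse_price("$1,299.00") == 1299.0

    if __name__ == "__main__":
        raise SystemExit(pytest.main([__file__, "-q"]))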

But long-term it might look very different as more and more of the code becomes LLM-written.

MyHonestOpinon 4 days ago||
Makes sense to me. I can see how LLMs can help you build better systems. I don't have a crystal ball, but I can see how focusing on speed (or more precisely, volume) can have a lot of unintended consequences.
throw_m239339 4 days ago||
I love these posts because they miss the elephant in the room. The company learns nothing and develops no knowledge; however, OpenAI, Anthropic, and co.'s services are absolutely learning from the people who use these tools. You are training your own replacement.
jt654 4 days ago||
This is a great article. It helps you realize that the feedback loop is the goal, but it won't just happen, and traditional methodologies don't really support it. Has anyone here found a good way to get teams in a company to focus on the loop instead of productivity hacks?
raffael_de 4 days ago||
My biggest gripe with language models is that technical and conceptual discussions which used to unfold organically (with people having to think about what others wrote and decide for themselves what to reply) have now turned into AI-slop avalanches, with participants just copy-pasting obviously generated text into the discussion. And those texts are always very long and super weird to respond to, because they are usually correct enough overall that you can't just reject them, but they are flawed all over the place: missing the point, lacking depth where it matters, skipping over important steps. This is a huge waste of time. Funnily, many people have no idea how obvious it is that their texts are generated, and rubbish at that.
i_think_so 4 days ago||
> one team uses Copilot as autocomplete and calls it a day. Another team runs Claude Code in tight loops, with tests, reviews, and constant steering. A product owner suddenly prototypes real software instead of mocking screens in Figma. A senior engineer delegates a root-cause analysis to an agent and comes back to the valid solution in under an hour; this would’ve taken him two weeks without AI. A junior person produces polished code but has no idea which architectural assumptions got smuggled into the system. A support team quietly turns recurring tickets into workflow automation, because they know exactly where the work hurts and nobody in the Center of Excellence ever asked the right question.

This is just sales copy for various AI companies, laundered through an "influencer". It might as well be the CIA sending their article to be published in Daily Post Nigeria, so that the NYT can quote it as "sources".

The title is just clickbait. The rest of the content is fluffy bunnies and rainbows. It's all summed up as "continue to consume product, but remember to also do X". Sales copy + HBR MBA bait.

The closest thing to an honest, less-than-rosy example is the "junior person" who has no idea about the code they committed.

What about the "senior person" who has no idea about the code they committed? What about the CISO who doesn't understand that pasting proprietary documents willy nilly into the LLM's gaping maw might have legal/security/common sense implications, and that it is his job to set policy on such behavior? What about the middle manager who doesn't even try to retain the most experienced dev in the company because "we don't need the headcount anymore, now that Claude is so fast"? What about the company eating its own seed corn because every single junior position has been eliminated and there are no plans for the future anymore? What about the filesystem developer who fell in love with his chatbot girlfriend and is crashing out on Discord?

Oh wait, scratch that last one. He left the company and is crashing out on his own.

Carry on, then.

Supermancho 4 days ago||
Indeed. Any developer who has used Copilot knows you can't rely on it 100%. The post's head image immediately bothered me. Copilot's strength isn't in patching the SDLC but in speeding up the catching of typos and minor oversights. If you're using it as an integral part of the SDLC, it causes problems immediately. So why posit the strawman? Marketing.
lbrito 4 days ago||
>What about the filesystem developer who fell in love with his chatbot girlfriend

Fear not: he has a place to feel welcome and included!

https://www.newsweek.com/inside-world-first-ai-dating-cafe-1...

lbrito 4 days ago|
>This cannot become employee surveillance...If people believe the organization is measuring whether they used enough AI, they will game the signals.

It already has; ship has sailed.

https://blog.pragmaticengineer.com/the-pulse-tokenmaxxing-as...
