Posted by martinald 5 days ago
I am learning software development without having the AI generate code for me, preferring to have it explain each thing line by line. But it's not only for learning development: I can also query it for historical information and have it point me to the source of that information (so I can read the primary sources as much as possible).
It lets me customize what I want to learn at my own pace, while also letting me diverge from the learning material for a moment. I have found it invaluable, and so far Gemini has been pretty good at this (probably owing to the integration of Google Search into Gemini).
It lets me cut through the SEO crap that has plagued search engines in recent years.
Note that this includes pointing you to the bottom of a cliff.
If you have found a model that accurately predicts the stock market, you don't write a blog post about how brilliant you are, you keep it quiet and hope no one finds out while you rake in profits.
I still can't quite figure out what motivates these "AI evangelist" types (unlike crypto evangelists, who clearly create value for themselves by building credibility), but if you really have a dramatically better way to solve problems, you don't need to waste your breath trying to convince people. The validity of your method will be obvious over time.
I was just interviewing with a company building a foundation model for supposedly world-changing coding assistants... but they still can't ship their product or find enough devs willing to relocate to SF. You would think that if you actually had a game-changing coding assistant, your number-one advantage would be that you don't need to spend anything on devs and can ship 10x as fast as your competition.
> First, you have the "power users", who are all in on adopting new AI technology - Claude Code, MCPs, skills, etc. Surprisingly, these people are often not very technical.
It's not surprising to me at all that these people aren't very technical. For technical people, code has never been the bottleneck. AI does reduce the time I spend writing code, but as a senior dev, writing code is a very small part of the problems I'm solving.
I've never had to argue with anyone that a calculator is superior to doing simple computational math by hand, or that a stand mixer is more efficient than a wooden spoon. If there were a competing bakery arguing that the wooden spoon was better, I wouldn't waste my time arguing about the stand mixer; I would just sell more pastries than them and worry about counting my money.
I'd hazard a guess and say "money"
It baffled me 10 years ago why a company would be willing to pay SF salaries for people who can work from anywhere, and it still baffles me today.
Unless your engineer needs to literally be next to the hardware AND "the hardware" isn't something that can be shipped to/run at their home, why TF would you want to pay Silicon Valley salaries for engineers?
I know a guy who does electrical engineering work from home. He makes medical devices! When he orders PCBs, they get shipped to his house. He works on a team where other people do the same thing (the PCB testing person also gets the boards at home, but that guy's a consultant). For like $1000 (one time) you can set up a "home lab" that's plenty sufficient for electronics work. Why would you want to pay a ~$100,000/year premium to hire someone local for the same thing?
Perhaps the wildest thing to me is how you'll have senior leaders in a company talking about innovation, but their middle managers actively undermine change out of fear of liability. So many enterprise IT employees are really just trying to avoid punishment that their organization cannot try new things without substantial top-down efforts to accept risk.
This is like saying prison bars are harmful. It depends which side you are on.
Seems like Nadella is having his Ballmer moment
Still with a small market share. They've only figured out how to extort the maximum amount of money from a smaller user base, from app developers, and really from anyone they can.
Android is by far the leader.
>half of the tablet market (leader)
Half does not make someone a "leader"
>a tenth of the global pc market (2nd place)
2nd place?? They're last place, by a wide margin.
>6th of the usa/europe market (2nd place)
Also last place.
I guess the reality distortion field is still alive and well.
If half doesn’t make you the leader, what does? Maybe you should elaborate on your definition of "leader"? For me it’s “has the highest market share”, and by that definition, half necessarily qualifies.
It’s funny that for PCs you went by manufacturer (Apple is 4th) but for mobile you went by OS (Apple is 2nd). On mobile devices, Apple is 1st, with double the market share of 2nd place (Samsung).
The need to paint Apple as purely a marketing company has always fascinated me. Marketing is a big part of who they are, though.
[1] https://en.wikipedia.org/wiki/Market_share_of_personal_compu...
A leader would have significantly more than half, which Apple definitely does not. Co-leader? Maybe. But Apple will likely be losing market share in mobile because inflation is rampant, made worse by AI demand eating up all the RAM and chip supply, and Apple's products are already too expensive and will only get more expensive and further out of reach of most consumers. Apple is a "luxury brand", and most average people can't justify luxury purchases anymore.
>On mobile devices, Apple is 1st, having double market share compared to 2nd place (samsung).
>It’s funny that for PC’s you went for manufacturers
I never mentioned specific hardware manufacturers - only you did, to move the goalposts. So don't lie and suggest I did, because I did not. Manufacturers are irrelevant, since Apple won't let anyone run its OSes on any other hardware. You're moving the goalposts to support your fanboyism.
Android crushes iOS. Windows crushes MacOS. Those are facts.
>The need to paint Apple as purely a marketing company always fascinated me.
I also never mentioned marketing. Are you a hallucinating AI?
I think the results would be pretty shocking, mostly because the integrations to source services are abject messes.
"With 45 percent of enterprise employees now using generative AI tools, 77 percent of these AI users have been copying and pasting data into their chatbot queries, the LayerX study says. A bit more than a fifth (22 percent) of these copy and paste operations include PII/PCI."
Slightly overstated. Tiny teams aren't outcompeting because of AI, they're outcompeting because they aren't bogged down by decades of technical debt and bureaucracy. At Amazon, it will take you months of design, approvals, and implementation to ship a small feature. A one-man startup can just ship it. There is still a real question that has to be answered: how do you safely let your company ship AI-generated code at scale without causing catastrophic failures? Nobody has solved this yet.
Ultimately, it's the same way you ship human-generated code at scale without causing catastrophic failure: by only entrusting critical systems to people who are trustworthy and have skin in the game.
There are two possibilities right now: either AI continues to get better, to the point where AI tools become so capable that completely non-technical stakeholders can trust them with truly business-critical decision making, or the industry develops a full understanding of their capabilities and is able to dial in a correct amount of responsibility to engineers (accounting for whatever additional capability AI can provide). Personally, I think (hope?) we're going to land in the latter situation, where individual engineers can comfortably ship and maintain about as much as an entire team could in years past.
As you said, part of the difficulty is years of technical debt and bureaucracy. At larger companies, there is a *lot* of knowledge about how and why things work that doesn't get explicitly encoded anywhere. There could be a service processing batch jobs against a database whose URL is only reachable via service discovery; the service's runtime config lives in a database somewhere; the only person who really understood it left the company five years ago; their former manager knows it exists but has since transferred to a different team, and the new manager barely knows about it; yet if it falls over, it's going to cause a high-severity issue affecting seven teams. This is a contrived example, but it goes to what you're saying: just being able to write code faster doesn't solve these kinds of problems.
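To give a flavor of that kind of hidden coupling, here's a minimal, purely hypothetical sketch in Python; the names (discover_service, load_runtime_config, the "batch-db" registry entry) are invented for illustration and don't refer to any real system:

    # Hypothetical sketch of the hidden coupling described above.
    # Everything here is made up for illustration.

    import sqlite3

    def discover_service(name: str) -> str:
        """Stand-in for a service-discovery lookup; in a real system this
        might be Consul, DNS SRV records, or an internal registry."""
        registry = {"batch-db": "batch_jobs.sqlite3"}  # mapping nobody documented
        return registry[name]

    def load_runtime_config(db_path: str) -> dict:
        """The batch processor's own config lives in a table inside the very
        database it processes -- the detail only the departed engineer knew."""
        with sqlite3.connect(db_path) as conn:
            conn.execute("CREATE TABLE IF NOT EXISTS config (key TEXT, value TEXT)")
            rows = conn.execute("SELECT key, value FROM config").fetchall()
        return dict(rows)

    def run_batch() -> None:
        db_path = discover_service("batch-db")   # URL only reachable via discovery
        config = load_runtime_config(db_path)    # config hidden inside the DB
        batch_size = int(config.get("batch_size", "100"))
        print(f"processing in batches of {batch_size}")

    if __name__ == "__main__":
        run_batch()

None of this is hard to write; the hard part is that nothing in the code tells you the registry entry and the config table exist, or which seven teams go down if they disappear.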
It's very simple. You treat the AI as a junior dev and review its code.
But that awesomely complex method has one disadvantage: having to do so means you can't brag about the 300% performance improvement your team got from just committing AI code to the master branch without looking.
I think there's a parallel here: some people find great success with coding agents while others swear they're shit, but when prodded, it turns out that some are working on good codebases while others work on shit codebases. It's probably the same with large corpos. Depending on the culture, you might get such convoluted processes and so much "assumed" internal knowledge that agents simply won't work ootb.
I have a parallel observation: many people use code editors that have weak introspection and refactoring abilities compared to IDEs like JetBrains'. This includes VSCode, Zed, Emacs, etc. I suspect there is a big overlap between this group and Group 1. It is wild to me that people are generating AI code while skipping in-IDE error checking, deterministic autocomplete, and refactoring.
I'd argue that the people using AI most effectively are in the mostly-chatters group that the author defines, and specifically they are using the AI to understand the domain on a deeper level. The "power users" are heading for a dead end; they will arrive there as soon as AI is capable of figuring out what is actually valuable to people in a given domain, which is generally not a difficult problem to solve. These power users will eventually be outclassed by AIs that can self-navigate. But I would argue that a human who has a rich understanding of the domain will still beat a self-navigating AI for a long time to come.