Posted by martinald 5 days ago

Two kinds of AI users are emerging(martinalderson.com)
354 points | 339 comments | page 2
deafpolygon 4 days ago|
There’s also an emerging group of users (such as myself) who use it primarily as an “on-demand” teacher rather than a productivity tool.

I am learning software development without having it generate code for me, preferring to have it explain each thing line-by-line. But it’s not only for learning development: I can also query it for historical information and have it point me to the source of the information (so I can read the primary sources as much as possible).

It allows me to customize the things I want to learn at my own pace, while also allowing me to diverge for a moment from the learning material. I have found it invaluable… and so far, Gemini has been pretty good at this (probably owing to the integration of Google search into Gemini).

It lets me cut through the SEO crap that has plagued search engines in recent years.

jsattler 5 days ago||
Some years ago, I attended a very interesting talk at a conference. I don't remember its title, but what stuck with me was: "It's no longer the big beating the small, but the fast beating the slow". This talk was before all the AI hype. Working at a big company myself, I think this has never been more true. The question is how to stay fast.
josters 5 days ago||
And, to add to that, how to know when to slow down. Also, having worked at a big company myself, I think the question shifts towards "how to get fast" without compromising security, compliance etc.
chrisjj 4 days ago|||
> the fast beating the slow

This includes to the bottom of a cliff, note.

swyx 5 days ago||
this is generic startup advice (doesn't mean it's not true). you level up a bit when you find instances where slow beat fast (see: Teams vs Slack)
crystal_revenge 5 days ago||
One of the most reliable BS detectors I've found is when you have to try to convince other people of your edge.

If you have found a model that accurately predicts the stock market, you don't write a blog post about how brilliant you are, you keep it quiet and hope no one finds out while you rake in profits.

I still can't figure out quite what motivates these "AI evangelist" types (unlike crypto evangelists who clearly create value for themselves when they create credibility), but if you really have a dramatically better way to solve problems, you don't need to waste your breath trying to convince people. The validity of your method will be obvious over time.

I was just interviewing with a company building a foundation model for supposedly world changing coding assistants... but they still can't ship their product and find enough devs willing to relocate to SF. You would think if you actually had a game changing coding assistant, your number one advantage would be that you don't need to spend anything on devs and can ship 10x as fast as your competition.

> First, you have the "power users", who are all in on adopting new AI technology - Claude Code, MCPs, skills, etc. Surprisingly, these people are often not very technical.

It's not surprising to me at all that these people aren't very technical. For technical people code has never been the bottleneck. AI does reduce my time writing code but as a senior dev, writing code is a very small part of the problems I'm solving.

I've never had to argue with anyone that using a calculator is a superior method of solving simple computational math problems than doing it by hand, or that using a stand mixer is more efficient than using a wooden spoon. If there were a competing bakery arguing that the wooden spoon was better, I wouldn't waste my time arguing about the stand mixer; I would just sell more pastry than them and worry about counting my money.

Mikhail_K 4 days ago||
> I still can't figure out quite what motivates these "AI evangelist" types

I'd hazard a guess and say "money"

daliusd 4 days ago|||
I guess I am kind of an "AI evangelist" in my circles (team, ecosystem, etc.). I personally see benefits in "AI" both for side projects and main work. However, according to my latest measurements the improvement is not dramatic: it is huge (about 30%), but not dramatic. I share my insights purely to have less on my shoulders (if my team members can do more, there is less for me to do).
swordsith 4 days ago|||
Agreed. Even though the term is stupid, I think calling them "cognitive improvement tools" makes sense. The models will get better, but most people will never learn how to effectively prompt or plan with an agentic model.
riskable 4 days ago||
> devs willing to relocate to SF

It baffled me 10 years ago why a company would be willing to pay SF salaries for people who can work from anywhere, and it still baffles me to this day.

Unless your engineer needs to literally be next to the hardware AND "the hardware" isn't something that can be shipped to/run at their home, why TF would you want to pay Silicon Valley salaries for engineers?

I know a guy who does electrical engineering work from home. He makes medical devices! When he orders PCBs they get shipped to his house. He works on a team with other people doing the same thing (the PCB testing person also gets the boards at home, but that guy's a consultant). For like $1000 (one time) you can set up a "home lab" for doing (plenty sufficient) electronics work. Why would you pay a ~$100,000/year premium to hire someone local for the same thing?

t0mk 4 days ago||
I really like that this article explains its title thesis in the first 4 paragraphs. At that point you can decide whether you agree or want to read on. No unnecessary fluff baiting the reader into reading the whole thing just to find out what the two kinds of users are. More writing like this.
wjholden 4 days ago||
I remember a colleague jumping through hoops trying to get Python installed on an enterprise computer. We never did get to a yes and resorted to using PowerShell instead. The policy constraints at enterprises that this author describes are very real and very harmful.

Perhaps the wildest thing to me is how you'll have senior leaders in a company talking about innovation, but their middle managers actively undermine change out of fear of liability. So many enterprise IT employees are really just trying to avoid punishment that their organization cannot try new things without substantial top-down efforts to accept risk.

chrisjj 4 days ago|
> The policy constraints at enterprises that this author describes are very real and very harmful.

This is like saying prison bars are harmful. It depends which side you are on.

wjholden 4 days ago||
That is an insightful response, I may quote you on this.
ed_mercer 5 days ago||
> Microsoft itself is rolling out Claude Code to internal teams

Seems like Nadella is having his Baller moment

sebastiennight 4 days ago||
I think you meant *Ballmer, but the typo is hilarious and works just as well
ed_mercer 3 days ago||
Haha yeah I noticed too late :P
running101 5 days ago|||
Code red moment
NookDavoos 4 days ago|||
Even Copilot in Excel is actually "Claude Code for Excel" in disguise.
fdsf2 5 days ago||
Nothing but ego, frankly. Apple had no problem settling for a small market share back in the day... look where they are now. It didn't come from make-believe and fantasy scenarios of the future based on an unpredictable technology.
leptons 5 days ago||
>look where they are now.

Still with a small market share. They only figured out how to extort the maximum amount of money from a smaller user base, and app developers, really anyone they can.

Mentlo 4 days ago||
I guess a quarter of the smartphone market (leader), half of the tablet market (leader) and a tenth of the global pc market (2nd place) / 6th of the usa/europe market (2nd place) being a small market share is a take.
leptons 4 days ago||
>a quarter of the smartphone market (leader)

Android is by far the leader.

>half of the tablet market (leader)

Half does not make someone a "leader"

>a tenth of the global pc market (2nd place)

2nd place?? They're last place, by a wide margin.

>6th of the usa/europe market (2nd place)

Also last place.

I guess the reality distortion field is still alive and well.

Mentlo 4 days ago||
OS X has a 10% market share, which is 2nd after Windows, but I agree that on that one I conflated terms. I couldn't quickly find device-manufacturer stats. If wiki is to be trusted, Apple is 4th, with a share not far behind Dell [1].

If half doesn't make you the leader, what does? Maybe you should elaborate your definition of leader? For me it's "has the highest market share", and by that definition half necessarily qualifies.

It's funny that for PCs you went by manufacturer (Apple is 4th) but for mobile you went by OS (Apple is 2nd). On mobile devices, Apple is 1st, with double the market share of 2nd place (Samsung).

The need to paint Apple as purely a marketing company has always fascinated me. Marketing is a big part of who they are, though.

[1] https://en.wikipedia.org/wiki/Market_share_of_personal_compu...

leptons 3 days ago||
>If half doesn’t make you leader what does?

A leader would be significantly more than half, which Apple definitely is not. Co-leader? Maybe. But Apple will likely be losing market share in mobile because inflation is rampant and made worse by AI eating up all the RAM and chip suppliers, and Apple's products are already too expensive and will only get more expensive and out of reach of most consumers. Apple is a "luxury brand", and most average people can't justify luxury purchases anymore.

>On mobile devices, Apple is 1st, having double market share compared to 2nd place (samsung).

>It’s funny that for PC’s you went for manufacturers

I never mentioned specific hardware manufacturers - only you did to move the goalpost. So don't lie and suggest I did that, because I did not. Manufacturers are irrelevant, since Apple won't let anyone run their OSs on any other hardware. You're trying to move goalposts to support your fanboyism.

Android crushes iOS. Windows crushes MacOS. Those are facts.

>The need to paint Apple as purely a marketing company always fascinated me.

I also never mentioned marketing. Are you a hallucinating AI?

nnevatie 5 days ago||
I'd be very interested in seeing some statistics on what could be considered confidential material pasted on ChatGPT's chat interface.

I think the results would be pretty shocking, mostly because the integrations to source services are abject messes.

Antibabelic 5 days ago|
https://www.theregister.com/2025/10/07/gen_ai_shadow_it_secr...

"With 45 percent of enterprise employees now using generative AI tools, 77 percent of these AI users have been copying and pasting data into their chatbot queries, the LayerX study says. A bit more than a fifth (22 percent) of these copy and paste operations include PII/PCI."
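Taken at face value, the quoted LayerX percentages compound. A quick back-of-envelope sketch (simplistically assuming the per-operation PII rate can stand in for a per-user rate, which the study does not claim):

```python
# Rough estimate: share of ALL enterprise employees whose AI usage
# involves pasting PII/PCI, per the LayerX figures quoted above.
using_genai = 0.45      # employees using generative AI tools
who_paste = 0.77        # of those, share copy/pasting data into chatbots
pastes_with_pii = 0.22  # of paste operations that include PII/PCI
                        # (treated per-user here -- an assumption)

share = using_genai * who_paste * pastes_with_pii
print(f"{share:.1%}")  # roughly 7.6% of all employees
```

So even under conservative reading, something on the order of one in thirteen employees would be leaking sensitive data this way.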

chrisjj 4 days ago||
No worse than MS Office on web, then?
with 5 days ago||
> The bifurcation is real and seems to be, if anything, speeding up dramatically. I don't think there's ever been a time in history where a tiny team can outcompete a company one thousand times its size so easily.

Slightly overstated. Tiny teams aren't outcompeting because of AI, they're outcompeting because they aren't bogged down by decades of technical debt and bureaucracy. At Amazon, it will take you months of design, approvals, and implementation to ship a small feature. A one-man startup can just ship it. There is still a real question that has to be answered: how do you safely let your company ship AI-generated code at scale without causing catastrophic failures? Nobody has solved this yet.

mhink 5 days ago||
> how do you safely let your company ship AI-generated code at scale without causing catastrophic failures? Nobody has solved this yet.

Ultimately, it's the same way you ship human-generated code at scale without causing catastrophic failure: by entrusting critical systems only to people who are trustworthy and have skin in the game.

There are two possibilities right now: either AI continues to get better, to the point where AI tools become so capable that completely non-technical stakeholders can trust them with truly business-critical decision making, or the industry develops a full understanding of their capabilities and is able to dial in a correct amount of responsibility to engineers (accounting for whatever additional capability AI can provide). Personally, I think (hope?) we're going to land in the latter situation, where individual engineers can comfortably ship and maintain about as much as an entire team could in years past.

As you said, part of the difficulty is years of technical debt and bureaucracy. At larger companies, there is a *lot* of knowledge about how and why things work that doesn't get explicitly encoded anywhere. There could be a service processing batch jobs against a database whose URL is only accessible via service discovery, and the service's runtime config lives in a database somewhere, and the only person who knows about it left the company five years ago, and their former manager knows about it but transferred to a different team in the meantime, but if it falls over, it's going to cause a high-severity issue affecting seven teams, and the new manager barely knows it exists. This is a contrived example, but it goes to what you're saying: just being able to write code faster doesn't solve these kinds of problems.

PunchyHamster 5 days ago|||
> There is still a real question that has to be answered: how do you safely let your company ship AI-generated code at scale without causing catastrophic failures? Nobody has solved this yet.

It's very simple. You treat AI as junior and review its code.

But that awesomely complex method has one disadvantage: having to do so means you can't brag about the 300% performance improvement your team got from just committing AI code to the master branch without looking.

Gigachad 5 days ago||
I swear in a month at a startup I used to build what takes a year at my current large corp job. AI agents don't seem to have sped up the corporate process at all.
NitpickLawyer 5 days ago||
> AI agents don't seem to have sped up the corporate process at all.

I think there's a parallel here between people finding great success with coding agents vs. people swearing it's shit. But when prodded it turns out that some are working on good code bases while others work on shit code bases. It's probably the same with large corpos. Depending on the culture, you might get such convoluted processes and so much "assumed" internal knowledge that agents simply won't work ootb.

the__alchemist 4 days ago||
I've made a similar observation. I'm clearly in camp #2: No agents, and use ChatGPT or Gemini to ask specific questions, and feed it only the context I want it to have.

I have a parallel observation: Many people use code editors that have weak introspection and refactoring ability compared to IDEs like JetBrains'. This includes VSCode, Zed, Emacs etc. I have a suspicion there is a big overlap between this and Group 1. It is wild to me that people are generating AI code while skipping in-IDE error checking, deterministic autocomplete, and refactoring.

doginasuit 3 days ago|
Here's how I'd break down the two types of users: People who are using AI to teach themselves how to work in the domain they are interested in, and people who are relying on AI to do all or most of the heavy lifting.

I'd argue that the people using AI most effectively are in the mostly-chatters group that the author defines, and specifically they are using the AI to understand the domain on a deeper level. The "power users" are heading for a dead end, they will arrive as soon as AI is capable of figuring out what is actually valuable to people in the given domain, not generally a difficult problem to solve. These power users will eventually be outclassed by AIs that can self-navigate. But I would argue that a human that has a rich understanding of the domain will still beat self-navigating AI for a long time to come.
