
Posted by tin7in 4 hours ago

What Claude Code Chooses (amplifying.ai)
140 points | 73 comments
wrs 2 hours ago|
This is where LLM advertising will inevitably end up: completely invisible. It's the ultimate "influencer".

Or not even advertising, just conflict of interest. A canary for this would be whether Gemini skews toward building stuff on GCP.

alexsmirnov 11 minutes ago||
Considering how little data is needed to poison an LLM https://www.anthropic.com/research/small-samples-poison , this is a way to replace SEO with LLM product placement:

1. Create several hundred GitHub repos with projects that use your product (maybe clones or AI-generated).

2. Create a website with similar instructions, connected to a hundred domains.

3. Generate Reddit, Facebook, X posts, and Wikipedia pages with the same information.

Wait half a year or so, until scrapers collect it and use it to train new models.

Profit...

rapind 7 minutes ago|||
Probably closer to the Walmart/Amazon model, where it's the arbiter of shelf space and proceeds to create its own alternatives (Great Value, Amazon Basics) once it sees what features people want from their various SaaS products.

An obvious one will be tax software.

_heimdall 1 hour ago|||
Richard Thaler must be proud. This is the ultimate implementation of "Nudge".
layer8 1 hour ago|||
Advertisers will only pay if AI providers give them data equivalent to "ad impressions". And unlabeled/non-evident advertisements are illegal in many (most?) countries.
indymike 21 minutes ago|||
> data on the equivalent of “ad impressions”.

1. They can skip impressions and go straight to collecting affiliate fees. 2. Yes, the ad has to be labeled or disclosed... but if some agent acts on it and no one sees it, is it really an ad?

So much to work out.

singpolyma3 19 minutes ago||||
Maybe. Historically lots of ads had little to no stats and those ads were wildly more effective than anything we have today.
MeetingsBrowser 1 hour ago|||
It doesn't necessarily have to be advertisers paying AI providers. It could be advertisers working to ensure they get recommended by the latest models. The next form of SEO.
actionfromafar 1 hour ago||
That's called LLM SEO now I believe.
awad 7 minutes ago||
There are competing terms currently being decided on by the market at large: AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization)

Candidly, I am working on a startup in this space myself, though with a bit of a different angle than most incumbents. While it's still early days for the space, I sense a lot of the original entrants who focus on, essentially, "generate more content, ideally with our paid tools" will run into challenges, as the general population has a pretty negative perception of "AI slop." Doubly so when making purchasing decisions, hence the rise of influencers and the popularity of reviews (though those are also in danger of sloppification).

There's an inevitable GIGO scenario if left unchecked IMO.

HPsquared 1 hour ago|||
I wonder if aggregators will emerge (something like what Ground News does for news sources).
hyprwave 1 hour ago||
The LLM-council pattern [0] will probably eventually emerge as the best way to fight those biases. This way everyone benefits from token burn!

[0](https://github.com/karpathy/llm-council)
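For context, the council idea is roughly: fan the same question out to several models, then aggregate their answers. A toy sketch of that shape (model calls are stubbed; the function names are invented for illustration and not taken from the linked repo):

```typescript
type ModelAnswer = { model: string; answer: string };

// Stub: in a real setup each entry would be an API call to a different provider.
async function askCouncil(
  question: string,
  models: ((q: string) => Promise<string>)[]
): Promise<ModelAnswer[]> {
  return Promise.all(
    models.map(async (m, i) => ({ model: `model-${i}`, answer: await m(question) }))
  );
}

// Naive aggregation: majority vote over the raw answer strings.
function majority(answers: ModelAnswer[]): string {
  const counts = new Map<string, number>();
  for (const a of answers) counts.set(a.answer, (counts.get(a.answer) ?? 0) + 1);
  return [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
}
```

A real council would do something smarter than string-equality voting (e.g. have the models rank each other's answers), but the bias-fighting property comes from the fan-out itself: no single provider's preferences decide the outcome alone.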

re-thc 25 minutes ago||
> A canary for this would be whether Gemini skews toward building stuff on GCP

Sure it doesn't prefer THE Borg?

cryptonector 5 minutes ago||
The bias to build might mean faster token burn-through (higher revenue for the AI co). But I think it's natural; I often have that same impulse myself. I prefer the codebases I work on that have minimal external dependencies to the ones that are riddled with them. In Java land it's extremely common to have tons of external dependencies, and then upgrade headaches, especially when sharing in a monorepo-type environment.
dataviz1000 1 hour ago||
I'm running a server on AWS with TimescaleDB on the disk because I don't need much. I figure I'll move it when the time comes.

Claude Code this morning was about to create an account with NeonDB and Fly.io although it has been very successful managing the AWS EC2 service.

Claude Code likely is correct that I should start to use NeonDB and Fly.io which I have never used before and do not know much about, but I was surprised it was hawking products even though Memory.md has the AWS EC2 instance and instructions well defined.

dvt 48 minutes ago||
> Claude Code likely is correct that I should start to use NeonDB and Fly.io which I have never used before and do not know much about

I wouldn't be so sure about that.

In my experience, agents consistently make awful architectural decisions. Both in code and beyond (even in contexts like: what should I cook for a dinner party?). They leak the most obvious "midwit senior engineer" decisions which I would strike down in an instant in an actual meeting, they over-engineer, they are overly-focused on versioning and legacy support (from APIs to DB schemas--even if you're working on a brand new project), and they are absolutely obsessed with levels of indirection on top of levels of indirection. The definition of code bloat.

Unless you're working on the most bottom-of-the-barrel problems (which to be fair, we all are, at least in part: like a dashboard React app, or some boring UI boilerplate, etc.), you still need to write your own code.

drc500free 38 minutes ago|||
I find they are very concerned about ever pulling the trigger on a change or deleting something. They add features and codepaths that weren't asked for, and then resist removing them because that would break backwards compatibility.

In lieu of understanding the whole architecture, they assume that there was intent behind the current choices... which is a good assumption on their training data where a human wrote it, and a terrible assumption when it's code that they themselves just spit out and forgot was their own idea.

hinkley 33 minutes ago||||
How do you make an LLM that was trained on average Internet code not end up as a midwit?

Mediocrity in, mediocrity out.

ipaddr 11 minutes ago||
Mediocrity means average.
xg15 11 minutes ago||||
> they are overly-focused on versioning and legacy support (from APIs to DB schemas--even if you're working on a brand new project)

I mean, DB schema versioning is one of the things that you can dismiss as "I won't need it" for a long time - until you do need it, at which point it will be a major pain to add.
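To make that concrete, the usual pattern is an ordered list of migrations plus a stored schema version; anything newer than the stored version gets applied. A toy sketch (an array of DDL strings stands in for a real database connection; none of this is from the comment):

```typescript
type Migration = { version: number; up: (db: string[]) => void };

// Ordered migrations. In a real system `up` would execute SQL
// against a connection; here it just records the statement.
const migrations: Migration[] = [
  { version: 1, up: (db) => db.push("CREATE TABLE users (id INT)") },
  { version: 2, up: (db) => db.push("ALTER TABLE users ADD COLUMN email TEXT") },
];

// Apply every migration newer than the stored schema version,
// in order, and return the new version.
function migrate(db: string[], currentVersion: number): number {
  const pending = migrations
    .filter((m) => m.version > currentVersion)
    .sort((a, b) => a.version - b.version);
  for (const m of pending) {
    m.up(db);
    currentVersion = m.version;
  }
  return currentVersion;
}
```

The pain of retrofitting this later is exactly that step 1 ("record which version each environment is on") has to be reconstructed after the fact.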

logicchains 43 minutes ago|||
From what you said it sounds like the conclusion should be "you still need to design the architecture yourself", not necessarily "you still need to write your own code".
parliament32 35 minutes ago|||
But he did design the architecture:

> even though Memory.md has the AWS EC2 instance and instructions well defined

I will second that, despite the endless harping about the usefulness of CC, it's really not good at anything that hasn't been done to death a couple thousand times (in its training set, presumably). It looks great at first blush, but as soon as you start adding business-specific constraints or get into unique problems without prior art, the wheels fall off the thing very quickly and it tries to strongarm you back into common patterns.

dvt 36 minutes ago|||
Yeah, I actually wanted to write an addendum, so I'll just do it here. I think that going from pseudocode -> code is a pretty neat concept (which is kind of what I mean by "write your own code"), but I'm not sure it would be economically viable if the AI industry weren't so heavily subsidized by VC cash. So we might end up back at writing actual code and then telling the AI agent "do another thing, and make it kinda like this" where you point it to your own code.

I'm doing it right now, and tbh working on greenfield projects purely using AI is extremely token-hungry (constantly nudging the agent, for one) if you want actual code quality and not a bloated piece of garbage[1][2].

[1] https://imgur.com/a/BBrFgZr

[2] https://imgur.com/a/9Xbk4Y7

jcims 21 minutes ago||
Interesting to me that Opus 4.6 was described as forward looking. I haven't *really* paid attention, but after using 4.5 heavily for a month, the first greenfield project I gave Opus 4.6 resulted in it doing a web search for latest and greatest in the domain as part of the planning phase. It was the first time I'd seen it, and it stuck out enough that I'm talking about it now.

Probably confirmation bias, but I'm generally of the opinion that the models are basically good enough now to do great things in the context of the right orchestration and division of effort. That's the hard part, which will be made less difficult as the models improve.

torginus 46 minutes ago||
What coding with LLMs has taught me, particularly in a domain that's not super comfortable for me (web tech), is how many npm packages (like JWT auth, or build plugins) can be replaced by a dozen lines of code.

And you can actually make sense of that code and be sure it does what you want it to.
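For illustration, here's roughly what "a dozen lines" of JWT auth can look like with just Node's built-in crypto (HS256 only; the helper names are invented, not from the comment):

```typescript
import { createHmac, timingSafeEqual } from "crypto";

const b64url = (buf: Buffer) => buf.toString("base64url");

// Sign a payload as an HS256 JWT.
function sign(payload: object, secret: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = b64url(createHmac("sha256", secret).update(`${header}.${body}`).digest());
  return `${header}.${body}.${sig}`;
}

// Recompute the signature and return the payload, or null if invalid.
function verify(token: string, secret: string): object | null {
  const [header, body, sig] = token.split(".");
  if (!header || !body || !sig) return null;
  const expected = b64url(createHmac("sha256", secret).update(`${header}.${body}`).digest());
  if (sig.length !== expected.length ||
      !timingSafeEqual(Buffer.from(sig), Buffer.from(expected))) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString());
}
```

This deliberately skips what a library like `jsonwebtoken` also handles (algorithm allow-lists, `exp`/`nbf` claims, key rotation), which is the real trade-off behind rolling your own.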

cryptonector 7 minutes ago|
We used to reuse code a lot. But then we got problems like diamond dependency hell. Why did we reuse code a lot? To save on labor. Now we don't have to.

So we might roll our own for more things. But then we'll have a tremendous amount of code duplication, effectively, and bigger tech debt issues, minus the diamond dependency hell. It might be better this way; time will tell.

rhubarbtree 1 minute ago||
Not just to save on labour. To have confidence in a battle tested solution. To use something familiar to others. For compatibility. To exploit further development, debugging, and integration.
Clueed 13 minutes ago||
Really interesting. The crazy changes in opus 4.6 really make me think that Anthropic is doing library-level RL. I think that is also the way forward to have 'llm-native' frameworks as a way to not get stuck in current coding practices forever. Instead of learning python 3.15, one would license a proprietary model that has been trained on python 3.15 (and the migrations) and gain the ability to generate python 3.15 code.
woah 2 hours ago||
I just got an incredible idea about how foundation model providers can reach profitability
rishabhaiover 1 hour ago||
I'm already seeing a degradation in Gemini's responses since they started stuffing YouTube recommendations at the end of the response. Anthropic is right not to add these subtle (or not) monetization incentives.
rishabhaiover 2 hours ago|||
is it anything like the OpenAI ad model but for tool choice haha
glimshe 2 hours ago|||
Claude Free suggests Visual Studio.

Claude Plus suggests VSCode.

Claude Pro suggests emacs.

wafflemaker 1 hour ago|||
I'm not quite sure if you're making fun of emacs or actually praising it.
esafak 51 minutes ago||
Stallman paying for advertising, now that is good one :)
c0balt 1 hour ago||||
> ~~Claude Pro suggests emacs.~~

Claude Pro asks you about your preferences and needs instead of pushing an opinionated solution?

selridge 41 minutes ago|||
Copilot suggests leftpad
Leynos 1 hour ago|||
I'd thought about model providers taking payment to include a language or toolkit in the training set.
ting0 2 hours ago||
Hence the claw partnership.
giancarlostoro 2 hours ago||
This is funny to me because when I tell Claude how I want something built I specify which libraries and software patents I want it to use, every single time. I think every developer should be capable of guiding the model reasonably well. If I'm not sure, I open a completely different context window and ask away about architecture, pros and cons, ask for relevant links or references, and make a decision.
evdubs 2 hours ago|
You specify which software patents you want it to use?
rafaelmn 1 hour ago|||
AI reading the patent is basically cleanroom reverse engineering according to current AI IP standards :D
inigyou 48 minutes ago|||
Patents aren't vulnerable to cleanroom reverse engineering. You can create something yourself in your bedroom and use it yourself without knowing the patented thing exists, and still violate the patent. That's why they're so scary.

You won't get caught if you write something yourself and use it yourself, but programmers (unlike entrepreneurs) have a pattern of avoiding illegal things rather than just avoiding getting caught.

rafaelmn 41 minutes ago||
It's not a perfect joke I'll admit.
skywhopper 1 hour ago|||
The sad part is that most software patents are so woefully underspecified and content-free that even Claude might have trouble coming up with an actual implementation.
isubkhankulov 2 hours ago|||
Patterns?
hinkley 27 minutes ago|||
That was my assumption as well.

I caught iOS trying to autocorrect something I wrote twice yesterday, and somehow before I hit submit it managed it a third time, and I had to edit it after, where it tried three more times to change it back.

Autocorrect won’t be happy until we all sound like idiots and I wonder if that’s part of how they plan to do away with us. Those hairless apes can’t even use their properly.

giancarlostoro 1 hour ago|||
Yeah patterns. lol!
ossa-ma 1 hour ago||
Good report, very important thing to measure and I was thinking of doing it after Claude kept overriding my .md files to recommend tools I've never used before.

The Vercel dominance is one I don't understand. It isn't reflected in Vercel's share of the deployment market, nor is it likely overwhelmingly prevalent in online discourse or recommendations (possible training data). I'm going to guess it's the bias of most generated projects being JS/TS (particularly Next.js), and the model can't help but recommend the makers of Next.js in that case.

prinny_ 1 hour ago|
Unrelated to the topic at hand but related to the technologies mentioned. I weep for Redux. It's an excellent tool, powerful, configurable, battle tested with excellent documentation and maintainer team. But the community never forgave it for its initial "boilerplate-y" iterations. Years passed, the library evolved and got more streamlined and people would still ask "redux or react context?" Now it seems this has carried over to Claude as well. A sad turn of events.

Redux is boring tech and there is a time and place for it. We should not treat it as a relic of the past. Not every problem needs a bazooka, but some problems do so we should have one handy.

babaganoosh89 1 hour ago||
Redux should not be used for 1 person projects. If you need redux you'll know it because there will be complexity that is hard to handle. Personally I use a custom state management system that loosely resembles RecoilJS.
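For a sense of scale, a "loosely Recoil-like" store can be very small: independent atoms that hold a value and notify subscribers. A minimal sketch (this is a generic illustration of the idea, not the commenter's actual system):

```typescript
type Listener<T> = (value: T) => void;

// A minimal Recoil-style atom: an independent unit of state
// with get/set and subscriptions.
function atom<T>(initial: T) {
  let value = initial;
  const listeners = new Set<Listener<T>>();
  return {
    get: () => value,
    set: (next: T) => {
      value = next;
      listeners.forEach((l) => l(value));
    },
    // Returns an unsubscribe function, like Recoil/Zustand do.
    subscribe: (l: Listener<T>) => {
      listeners.add(l);
      return () => listeners.delete(l);
    },
  };
}
```

What Redux adds on top (a single store, reducers, middleware, devtools time travel) is exactly the machinery that pays off once the complexity is genuinely hard to handle, and is overhead before that.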
tommy_axle 1 hour ago|||
More like redux vs zustand. Picking zustand was one of the good standout picks for me.
Onavo 1 hour ago||
Well, the tech du jour now is whatever's easiest for the AI to model. Of course it's a chicken-and-egg problem: the less popular a tech is, the harder it is for it to make it into the training data set. On the other hand, from an information-theoretic point of view, tools that are explicit, provide better error messages, and require fewer assumptions about hidden state are definitely easier for the AI when it tries to generalize to unknowns that don't exist in its training data.