Or not even advertising, just conflict of interest. A canary for this would be whether Gemini skews toward building stuff on GCP.
1. Create several hundred GitHub repos with projects that use your product (clones or AI-generated).
2. Create websites with similar instructions and point a hundred domains at them.
3. Generate Reddit, Facebook, and X posts, plus Wikipedia pages, with the same information.
4. Wait half a year until scrapers collect it and use it to train new models.
5. Profit...
An obvious one will be tax software.
1. They can skip impressions and go straight to collecting affiliate fees.
2. Yes, the ad has to be labeled or disclosed... but if an agent consumes it and no one sees it, is it really an ad?
So much to work out.
Candidly, I am working on a startup in this space myself, though with a bit of a different angle than most incumbents. While it's still early days for the space, I sense a lot of the original entrants who focus on, essentially, 'generate more content, ideally with our paid tools' will run into challenges, as the general population has a pretty negative perception of 'AI slop.' Doubly so when making purchasing decisions, hence the rise of influencers and the popularity of reviews (though those are also in danger of sloppification).
There's an inevitable GIGO scenario if left unchecked IMO.
Sure it doesn't prefer THE Borg?
Claude Code this morning was about to create accounts with NeonDB and Fly.io, even though it has been very successful managing my AWS EC2 service.
Claude Code is likely correct that I should start using NeonDB and Fly.io, which I have never used before and don't know much about, but I was surprised it was hawking products even though Memory.md has the AWS EC2 instance and instructions well defined.
I wouldn't be so sure about that.
In my experience, agents consistently make awful architectural decisions, both in code and beyond (even in contexts like: what should I cook for a dinner party?). They default to the most obvious "midwit senior engineer" decisions, which I would strike down in an instant in an actual meeting; they over-engineer; they are overly focused on versioning and legacy support (from APIs to DB schemas, even on a brand-new project); and they are absolutely obsessed with piling levels of indirection on top of levels of indirection. The definition of code bloat.
Unless you're working on the most bottom-of-the-barrel problems (which, to be fair, we all are, at least in part: a dashboard React app, some boring UI boilerplate, etc.), you still need to write your own code.
Lacking an understanding of the whole architecture, they assume that there was intent behind the current choices... which is a good assumption for their training data, where a human wrote it, and a terrible assumption when it's code that they themselves just spat out and forgot was their own idea.
Mediocrity in, mediocrity out.
I mean, DB schema versioning is one of the things that you can dismiss as "I won't need it" for a long time - until you do need it, at which point it will be a major pain to add.
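The whole mechanism is smaller than it sounds. A minimal sketch in TypeScript, using an in-memory object as a stand-in for a real database (the migration steps and the `Schema` shape here are made up for illustration): a version counter plus an ordered list of migrations, where each migration runs exactly once.

```typescript
// Minimal schema-versioning sketch. An in-memory object stands in for
// the database; in a real project each migration would be a SQL file.

type Schema = { version: number; tables: Record<string, string[]> };

// Hypothetical migrations, in the order they were written.
const migrations: Array<(s: Schema) => void> = [
  (s) => { s.tables.users = ["id", "email"]; },             // v1: initial schema
  (s) => { s.tables.users.push("created_at"); },            // v2: add a column
  (s) => { s.tables.orders = ["id", "user_id", "total"]; }, // v3: new table
];

function migrate(schema: Schema): Schema {
  // Apply only the migrations this schema hasn't seen yet.
  for (let v = schema.version; v < migrations.length; v++) {
    migrations[v](schema);
    schema.version = v + 1;
  }
  return schema;
}

const db: Schema = { version: 0, tables: {} };
migrate(db);
console.log(db.version);      // 3
console.log(db.tables.users); // ["id", "email", "created_at"]
```

The pain of adding this late is that existing databases have no recorded version, so you can't tell which steps have already been applied — which is exactly why tools like Flyway keep a schema-history table from day one.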
> even though Memory.md has the AWS EC2 instance and instructions well defined
I will second that: despite the endless harping about the usefulness of CC, it's really not good at anything that hasn't been done to death a couple thousand times (in its training set, presumably). It looks great at first blush, but as soon as you start adding business-specific constraints or get into unique problems without prior art, the wheels fall off very quickly and it tries to strongarm you back into common patterns.
I'm doing it right now, and tbh working on greenfield projects purely using AI is extremely token-hungry (constantly nudging the agent, for one) if you want actual code quality and not a bloated piece of garbage[1][2].
Probably confirmation bias, but I'm generally of the opinion that the models are basically good enough now to do great things in the context of the right orchestration and division of effort. That's the hard part, which will be made less difficult as the models improve.
And you can actually make sense of that code and be sure it does what you want it to.
So we might roll our own more often. But then we'll effectively have a tremendous amount of code duplication and bigger tech-debt issues, minus the diamond-dependency hell. It might be better this way; time will tell.
Claude Plus suggests VSCode.
Claude Pro suggests emacs.
Claude Pro asks you about your preferences and needs instead of pushing an opinionated solution?
You won't get caught if you write something yourself and use it yourself, but programmers (unlike entrepreneurs) have a pattern of avoiding illegal things rather than avoiding getting caught.
I caught iOS trying to autocorrect something I wrote twice yesterday, and somehow it managed a third time before I hit submit; I had to edit it afterward, where it tried three more times to change it back.
Autocorrect won’t be happy until we all sound like idiots and I wonder if that’s part of how they plan to do away with us. Those hairless apes can’t even use their properly.
The Vercel dominance is one I don't understand. It isn't reflected in Vercel's share of the deployment market, nor is it overwhelmingly prevalent in online discourse or recommendations (possible training data). I'm going to guess it's the bias of most generated projects being JS/TS (particularly Next.js), and the model can't help but recommend the makers of Next.js in that case.
Redux is boring tech, and there is a time and place for it. We should not treat it as a relic of the past. Not every problem needs a bazooka, but some problems do, so we should have one handy.
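And "boring" here is a feature: the whole pattern fits in a few lines. A hand-rolled sketch of the Redux idea (no library, and the counter state and action names are invented for illustration) — a pure reducer, one store, explicit actions:

```typescript
// Hand-rolled Redux-style store: pure reducer + single state + dispatch.

type State = { count: number };
type Action = { type: "increment" } | { type: "add"; amount: number };

// The reducer is a pure function: given a state and an action,
// it returns the next state without mutating anything.
function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "increment": return { count: state.count + 1 };
    case "add":       return { count: state.count + action.amount };
  }
}

// A minimal store: holds state, dispatches actions, notifies subscribers.
function createStore(initial: State) {
  let state = initial;
  const listeners: Array<() => void> = [];
  return {
    getState: () => state,
    dispatch: (action: Action) => {
      state = reducer(state, action);
      listeners.forEach((l) => l());
    },
    subscribe: (l: () => void) => { listeners.push(l); },
  };
}

const store = createStore({ count: 0 });
store.dispatch({ type: "increment" });
store.dispatch({ type: "add", amount: 4 });
console.log(store.getState().count); // 5
```

The predictability — every state change flows through one pure function you can unit-test — is exactly the "boring" quality that earns it a place in the toolbox for the problems that do need it.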