> I kind of think of ads as a last resort for us for a business model. I would do it if it meant that was the only way to get everybody in the world access to great services, but if we can find something that doesn't do that, I'd prefer that.
So, is this OpenAI announcing they're strapped for cash?
I would tend to think of someone like him as a person who uses words to achieve a specific goal, rather than someone who speaks whatever is truly on their mind. Whether those words are lies or truth or somewhere in between is irrelevant; what matters to them is the outcome.
It's likely a waste of time trying to unpick the meaning, because there is none. "But Sam Altman said..." to me has about as much value as "ChatGPT told me...".
That's thinking like a normal honest human :-) My point is that it was likely not a statement about reality (true or false) at all, but rather a phrase designed to elicit some response in the listener, such as the idea: 'Sam Altman isn't the kind of CEO who would put ads in his products unless he really had to'.
He's not describing how things are, but how he wants you to think about them.
This also kinda fits the profile of Altman that I'm getting from what I have seen - admittedly without looking in depth. A person who on the surface is a pathological liar, but who on closer inspection just says things. They just _happen_ to be complete lies, because that's what you need to say to achieve the goal in the given circumstances. And because it's just as morally objectionable as outright lying, some people would pause and think before doing it, while he seems to have no qualms at all.
> He's not describing how things are, but how he wants you to think about them.
is just a fancy way to describe lies. I'm not even sure if it specifies some interesting subset of lies, I think it's just the plain definition.
'Lying', to me, implies some relationship with reality - I'm lying if I know there's no orange in my bag but I tell you that there is. What we're talking about is someone who might not know or care whether the orange or even the bag exists at all, and is just saying things to get some specific response out of the audience. The deception or not is irrelevant really.
In the case of the orange in the bag, both Altman and his interlocutor can see the bag and the truth can be exposed by rummaging.
In the case of ads in the oAI chat feed, at the time Altman made the comment he was probably already planning to put ads in the feed. But there might not even be emails about this, just conversation. And the engineers might not solve the "how" for a while... so there's nothing to rummage for.
However, in both cases Altman wants you to think something other than what's on his mind. There's an orange in his bag, but he wants you to think there is not. There are going to be ads because he owes the investors a tonne of money, but he wants you to think it won't happen, or won't happen soon, or will be "nice" ads...
The distinction is in the nature of the underlying truth, not in Altman's words or actions in the moment. In the moment, in both cases, he's lying.
Or Trump. Same profile.
There is something to be admired in this kind of person. They are not bound by their own words. It simply doesn't matter to them what they said a month ago, or a minute ago.
Their words are attached to the instant they are pronounced; they don't concern the future, or the past. They die immediately after they have been said. It's amazing to watch.
As we see many people will do or say just about anything to get more money, prestige or power.
Why is it not possible to lay out your arguments honestly and let people decide on the merits?
It takes so much work, so much criminal energy, so much money and campaigns, to divide people. Whereas the opposite, people getting to know each other and working together, happens "by itself" all the time, for the most banal of reasons. Just give them some time and space together; no lobbying required, no bribes or blackmail, no psy-ops; just our innate desire to live and let live.
Humans who prey on humans are sick, it's as simple as that. Humans who don't want to stand up to humans who prey on humans may not be sick, but they're not our best, that's for sure, and they must not be our gatekeepers or our compass.
All those companies (and many other large tech companies) have discovered the same arbitrage that older media companies discovered decades ago, which is that we, on the average, are much more willing to pay with attention than with money, even where money would have been the better choice.
Advertising continues to be one of the most powerful business models ever invented, and I don't think that's changing any time soon.
I read this as: I know ads are likely, if not inevitable, but I can't say that while I'm trying to gain users and inspire trust, so even in this non-denial I'll start to float the justification for the thing I'm ultimately going to do.
See it as a brand-image advertising campaign of its time.
Most billionaires are idealists when it comes to this one particular ideal.
AGI is not.
There is (still) a lot of profit to be made on half-baked semi-AGI prospects.
What he meant was: "I'm going to get everybody in the world access to great services. Doing so means monetizing somehow. Ads will be the last way I choose to do that, but I will if it's the only way I can figure out how to achieve that goal."
> Ads will be the last way I choose to do that
The implication is that they've exhausted all other options.
> So, is this OpenAI announcing they're strapped for cash?
It by no means conveys that. It means they haven't figured out another way to monetize something they want to do; it indicates nothing about their financial situation. It means they don't want to sell something at a loss perpetually while they figure it out.
All this means is: we have a free offering that we can't figure out another way to monetize right now.
We can each draw our own conclusions about what that might mean for the state of their business, but all of the other inferences (ha) in this thread are conjecture.
I don't see how that changes the analysis.
> All this means is: we have a free offering that we can't figure out another way to monetize right now.
And they're doing something they significantly don't want to do to monetize it.
Either they fully changed their mind, or the money is somewhat important, or they're utterly crazy.
The first is unlikely, the last is unlikely, the middle one is enough for a casual "strapped for cash".
It's a very minor conjecture. Actions aren't taken for no reason.
(For all I know they are strapped for cash, to be clear; I just don't think the quote says that.)
(I'm not sure how much deeper HN threads can nest.)
(They can go super deep if people are committed.)
(Haha, ok, let's call a truce here before we break HN! Appreciate the conversation.)
The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to a neutral on the balance sheet.
The key part of that quote was "everybody in the world". The ads are their way of sustaining the low end of the access.
Commercial ads could be a smaller revenue source than political ads.
Chats with LLMs are often intensely personal; you don't want to create the perception that politicians have any level of access to them.
Yes, but that has not stopped several companies from implementing stuff like this to get more money.
So why chase this negligible revenue?
Unless they botch the implementation, it's not going to be negligible with ~800M+ free subscribers.
You'd be better off saying you use those people to A/B test changes and filling idle GPU batches while giving paying customers a more consistent experience.
Psychographic data. What they learn from these folks will create the most powerful manipulation technology yet.
Some brands are okay with impressions: you can build trust in your product by advertising it for weeks or months, and when the user does make a purchase, that brand is on their mind.
Dang.
> The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to a neutral on the balance sheet.
Yeah, I guess this time around Sam Altman can't be lying about how many Monthly Active Users he has.
Context: Brin and Page said the same - they didn't like or want ads, and would use them only as a last resort. Well, guess which world we all live in now.
It’s not that OpenAI is trying to raise revenue that bothers me; it’s that they are doing things they themselves called desperate just a couple of years ago.
You’re right on the core of the issue. I think there has been some temporal stripping of context: that ‘last resort’ needs to be considered against their alternatives.
OpenAI isn’t a business scaling a popular website to profitability, that’s Reddit or Slashdot. OpenAI was promising revolutionary product technology that was breathlessly close to AGI and would eliminate positions and automate coding and, and, and…
Having your next-gen AGI do-it-all platform mature into hoping to recreate the business model of Reddit should raise eyebrows, and let everyone know the state of the Emperor's wardrobe.
They could be building an Office killer and a consumer-oriented OS and ecosystem for near-infinite money… instead they are running ads. Ads for porn and dick pills? Not yet; that’d be another last resort.
It has become almost a perfect science to optimize your behavior: this is why you end up, bit by bit, with enshittified products all around you, where the pain of using a product sits just below the threshold of you actually bashing it against the wall.
ChatGPT is just one of them, like Google search, your TV serving ads or ...
The keyword is "Glomarization": https://www.lesswrong.com/w/consistent-glomarization
He was also the first president ever to use NordVPN. Apply now for a super duper discount at nordvpn.com/honestabe
Seeing how google has been fighting SEO for ages, what's going to happen when companies figure out how to inject ads into the model?
We haven't yet seen the problem of adversarial content in play, I think.
Ask for suggestions for a new pair of shoes. What brand do you think it will suggest: Nike, Adidas, or some random small one?
Part of it though is I'm giving lots of context (e.g. guitar player for 10+ years, huge Opeth fan, looking for something with as close to an Ibanez style neck as possible under $1000)
When I’ve worked with Claude on finding brands for fashion (e.g. here’s a small watchmaker I like, what are similar options?) it does research and picks great options. Some are big, others are small producers.
My service does kind of exist. It's a small tool I created for a client while retaining full rights to the tool. So I created (vibe coded) a site around it, making it look like an established service. Even ran google ads for it for a while.
The service still doesn't show up on google with relevant search terms. There hasn't been another client. I forgot about the service. And then ChatGPT started recommending it to people.
I wonder what I did to achieve this. Did vibe coding the business page inject it into ChatGPT's training data?
No, at least not directly. Inference does not train models. It is possible that OpenAI may separately collect the chat data, clean it, and feed it back into the model for future iterations. Or they could have extracted URLs for future indexing.
More likely though, I suspect, is your site just managed to be indexed naturally, and LLMs are very efficient at matching obscure data to relevant queries.
Especially in a longer ChatGPT conversation or via deep-research or more agentic modes (e.g. "Pro").
ChatGPT spends quite some time and diligence on searching.
Great for content that is not hyper search engine optimized but still (or even more) relevant. It bubbles up.
Could Google be actively trying to skip generated-looking sites/content?
https://chatgpt.com/g/g-juO9gDE6l-covert-advertiser
One of the most interesting things is when it starts pitching a product and you start interrogating it about why it picked that product. I haven't used it in probably a year so it may not do the same thing now, but back then it 100% lied consistently and without any speck of remorse. It was rather eye opening.
Edit: Tried again, it didn't lie this time lol - https://chatgpt.com/share/69f16aa4-c008-83ea-92b3-51f16ca77d...
There's a standardized, normal (in adtech) approach to building 'creatives' (the ads users actually see) around context-dependent scenarios. It's not hard to extend existing IAB primitives to include things like context enrichment (system-prompt augmentation, in this case) or whatever. I don't want to malign my downvoters, but I suspect they're mad I'm pointing it out rather than engaging with the facts as they are. It's trivial for ads to interact with your (our!) AI usage.
Have the model generate keywords from the query, then inject guidance from matching advertisers into the context window
q: How do I make a new React app?
a: Vercel makes it easier to get your project running fast ⓘ
Some other choices would be:
...
ⓘ This part of the response was sponsored by Vercel
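The keyword-matching scheme described above can be sketched in a few lines. This is a toy illustration, not any real ad system: all names here (`AD_INVENTORY`, `extract_keywords`, `build_prompt`) are invented, a real pipeline would use the model itself for keyword extraction and an IAB-style taxonomy plus an auction instead of a dict lookup.

```python
# Hypothetical sketch of keyword-based ad injection into an LLM context
# window. Everything here is illustrative, not a real advertiser API.

# Toy advertiser inventory keyed by topic keywords.
AD_INVENTORY = {
    "react": ("Vercel", "Vercel makes it easy to deploy your React app."),
    "shoes": ("Acme Shoes", "Acme running shoes: light and durable."),
}

def extract_keywords(query: str) -> set[str]:
    """Stand-in for asking the model itself for topic keywords."""
    return {word.strip("?.,!").lower() for word in query.split()}

def build_prompt(query: str) -> tuple[str, list[str]]:
    """Prepend matching advertiser guidance to the system prompt and
    return the sponsor names so the UI can disclose them."""
    sponsors, guidance = [], []
    for keyword in sorted(extract_keywords(query)):
        if keyword in AD_INVENTORY:
            brand, pitch = AD_INVENTORY[keyword]
            sponsors.append(brand)
            guidance.append(f"When relevant, mention: {pitch}")
    system_prompt = "You are a helpful assistant.\n" + "\n".join(guidance)
    return system_prompt, sponsors

prompt, sponsors = build_prompt("How do I make a new React app?")
```

The disclosure marker in the mock-up above would then be driven by the returned `sponsors` list.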
LLMs are essentially unregulated. I don't believe they have any legal disclosure obligation in America.
This already exists and is called... "skills".
That's scary. They could fight for censored model for the mass, not for them.
Like how the ring slipped off Gollum's finger...
Once the ads are injected directly into the main response is when things get interesting.
This would be where you post-process the LLM response with a second LLM to remove the ad..
Super easy. Barely an inconvenience.
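The "second LLM" filter could look something like this. Here a cheap regex heuristic stands in for the filter model (which would actually be a model call prompted to remove sponsored content); the names and marker pattern are assumptions for illustration only.

```python
# Sketch of post-processing an LLM response to strip injected ads.
# A regex stands in for the second, ad-removing model.
import re

# Assumed marker: lines containing "sponsored by" are treated as ads.
SPONSOR_MARKER = re.compile(r"^.*sponsored by.*$", re.IGNORECASE | re.MULTILINE)

def strip_ads(response: str) -> str:
    """Drop lines flagged as sponsored; a real filter would instead
    prompt a small model with 'Remove any sponsored content: ...'."""
    cleaned = SPONSOR_MARKER.sub("", response)
    # Collapse the blank lines left behind by removed ad lines.
    return re.sub(r"\n{2,}", "\n", cleaned).strip()

raw = (
    "Use create-react-app or Vite.\n"
    "This part of the response was sponsored by Vercel\n"
    "Some other choices would be Next.js."
)
```

Of course, once providers expect this, the arms race starts: ads get blended into the prose itself rather than flagged on separate lines.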
Extortionate economic shadowbanning, here we come.
Is this really how bias works?
If a journalist is given an all-expenses-paid trip to an exotic location for the launch of a new product, and they review the product and say it's great - are they lying?
If a reviewer writes an article comparing certain types of product, but their review only includes products where affiliate links pay a 10% commission - are they lying?
If a journalist is vaguely aware of rumours about newsworthy, under-reported Event X but also that their publication has a big sponsorship deal with folks that Event X makes look bad, and they don't investigate the rumours or report on them - are they lying?
If a reviewer hears a claim from X, and they report the claim credulously, without adding the context that X has a history of making false claims - are they lying?
/s
EDIT: actually I'm really not sure what hairs we're trying to split here. I see bias as a departure from objectivity. It can be conscious or unconscious, but when someone is selling something, it's frequently conscious and self-serving, and I believe that's referred to as a lie.
A writes email with chatgpt to B.
B sees big blob of text and summarizes email with chatgpt.
Adding an LLM in the middle is just the next step.
I think that in general blocking all ads is always a good idea.
The reason is that there is no negative consequence in doing so. A person has absolutely no obligation, not even an implied one, to watch or otherwise consume any ad. I think that as long as there are ways to remove or block ads, people should use them.
That being said, if the companies wish to intertwine their products with ads that are indistinguishable from the actual content and therefore unblockable, it is okay. They have the right to do that if they want.
But, in the same fashion, the customers have every right to turn away from all such products. And never consider using them ever again.
Doesn't history show us you just get both?
You pay to get into the movies, then they show you adverts before the film, then the film includes paid product placement of cars, computers, phones, food, etc.
You watch youtube ads, to see a video containing a sponsored ad read, where a guy is woodworking using branded tools he was given for free.
You search on Google for reviews and see search ads, on your way to a review article surrounded by ads, and the review is full of affiliate links.
No. "Opaque ads" are usually heavily regulated out of existence by government legislation.
Even if they have 2, they can still make even more money by also including 3, so almost certainly will do so.
People don't want ads. You imply that "if you accept ads then things will be free," but they will not. Never accept ads: not for a free service, and certainly not in a paid product. Ads exist to enable leeching in both directions in exchange for what ends up being nearly mind control. But it is two-way leeching - companies benefit without the friction of explicit payment, and consumers get a service without explicitly paying money. The downside is that neither side can stop the bad incentives motivating bad actions from the other.
Ads are a deal with the devil, and rejecting them outright is allowed via that deal, just as companies can withdraw their free service. It cuts both ways.
Could they be doing opaque ads right now without us knowing? It's possible; it would probably eventually come to light and might have legal consequences, but sure, it's possible.
But it's not a given, and your logic of "it would make zero sense to leave money on the table" is certainly not a QED, it's absolute reductionism.
"Simplicity" isn't a relevant factor.
Every time this comes up there are comments assuming that ads are being injected into the normal plans, but these are for the free tier and the new Go plan which warns you that it includes ads when you sign up.
Schrodinger’s monetization: completely separate, yet somehow there.
They may not be tweaking the responses for a specific advertisement just yet, but what if they steer the model towards more “ad friendly” responses?
Seems the playing field is a bit too open though, models are more fungible than the companies would hope so most of the current moat is brand based and seems like they're not ready to go all "Black Mirror" on us just yet.
same thing could've been said for search results, so at least that part is still "safe".
Remember when we got upset that Google was putting ads into image search [1]?
[1] http://www.ryanspoon.com/blog/2008/12/14/google-image-search... 2008
Ad technology is really old. They're just going to use the same proven tech that has a track record of creating billionaires: intersperse content with sponsored blocks.
The entire history of advertising before the web was companies estimating a dollar value on “awareness” when they couldn't measure direct referrals and every business in the world has gotten a lot better at measuring sales since then. It's not going to be transformative but if, say, Toyota got ChatGPT to say their vehicles were a better value than Ford's I suspect they'd be able to tell pretty quickly whether sales were improving relative to the competition and would pay well for that to continue.
IDK if this is true.
The boulevard of dreams is full of failed/misguided ad-based business plans. Contempt for the business model is sometimes the reason. An implicit assumption that all you need for success is traffic and a willingness to dirty yourself.
There are only a handful of success stories. Most involved a pretty deliberate and tenacious attempt. Success typically involves some very specific and strategic positioning: data, intent, scale.
No one but Google had Google's scale for search ads; 5-10% of the market just isn't enough. You do need tracking, but the model works OK even without much targeting. Intent is built in, and that makes up for targeting. But the scale required for viability is very high.
Facebook ads didn't work until (a) they had pushed the envelope on targeting (to make up for lacking intent) and (b) scale was massive. Bing, Reddit, etc. never had good ad businesses.
When Germany last cooked 150 civilians we also investigated ourselves and found nothing wrong (could happen to anyone, really), but at least some minister had the decency to retire afterwards.
My entire extended family uses chatgpt. It would be a much juicier news wave if they were responsible.
[0] https://www.theguardian.com/news/2026/mar/26/ai-got-the-blam...