Posted by elffjs 9 hours ago
Some people call this sort of thing a "circular deal", but perhaps a better way to think of it is as a very large-scale version of vendor financing? The simple version of vendor financing is when a vendor gives a retailer time to pay for goods they purchased for resale. This is effectively a loan that's backed by the retailer's ability to resell the goods. There's a possibility that the retailer goes broke and doesn't pay, but the vendor has insight into how well the retailer is doing, so they know if they're a good risk.
Similarly, Google likely knows quite a lot about Anthropic because Anthropic buys computing services from Google for resale. They're making an equity investment rather than a loan, but the money will be coming back to Google, assuming Anthropic's sales continue to rise as fast as they have been.
Also, if you own Google stock, some small part of that is an investment in Anthropic?
[1] https://www.anthropic.com/news/google-broadcom-partnership-c...
Arguably, too much of this kind of hedging is anti-competitive. But that doesn’t seem to be much of a problem yet?
Vendors may be positioned to know how a customer is doing, but they're also incentivized to overestimate how well a customer is going to perform.
GE Capital (edit: and GMAC) is a great example of how seemingly reasonable vendor financing can cause the lender serious problems.
To the extent that Google and Anthropic are competing for AI business, Google is somewhat hedged against Anthropic winning market share. They still get data center revenue and they own equity, so that’s a consolation prize.
On the other hand, it’s increasing Google’s investment in AI, in general.
So far both of these companies have shown they suck at support, so we know that's not it. It could be that leveraging Gemini helps Anthropic in its competition with OpenAI, and Google will take compute commitments.
Anecdata: I'm finding a lot of my "type random question in URL/search bar" has decent top Gemini answers where I don't scroll to results unless I need to dive deeper.
Maybe a little bit of both.
Obviously it's not a perfect comparison, but you have to wonder how much of NVIDIA's income (for instance) is ultimately funded by its own money.
That kind of insane growth & demand is unprecedented at that scale.
https://www.anthropic.com/news/google-broadcom-partnership-c...
- Development velocity is very noticeably much higher across the board. Quality is not obviously worse, but it's LLM assisted, not vibe coding (except for experiments and internal tools).
- Things that would have been tactically built with TypeScript are now Rust apps.
- Things that would have been small Python scripts are full web apps and dashboards.
- Vibe coding (with Claude Desktop, nobody is using Replit or any of the others) is the new Excel for non-tech people.
- Every time someone has any idea it's accompanied by a multi page "Clauded" memo explaining why it's a great idea and what exactly should be done (about 20% of which is useful).
- 80% of what were web searches now go to Claude instead (for at least a significant minority of people, could easily be over 50%).
- Nobody talks about ChatGPT any more. It's Claude or (sometimes) Gemini.
- My main job isn't writing code but I try to keep Claude Code (both my personal and corpo accounts) and OpenCode (also almost always Claude, via Copilot) busy and churning away on something as close to 100% of the time as I can without getting in the way of my other priorities.
We (~20 people) are probably using 2 orders of magnitude more inference than we were at the start of the year, and it's consolidated from a mix of Cursor, ChatGPT and Claude to almost all Claude (plus a little Gemini, as that's part of our Google Whateverspace plan and some people like it, mostly for non-engineering tasks).
No idea if any of this will make things better, exactly, but I think we'd be at a severe competitive disadvantage if we dropped it all and went back how things were.
I’m making a team version of my buildermark.dev open source project and trying to learn about how teams would like to use it.
Backends handling tens to hundreds of thousands of messages per second, with extremely high correctness and resilience requirements, necessarily take a different approach than less critical services that power various ancillary sites/pages, or than front-end web apps.
That said, there's a lot of very open discussion around tooling, "skills", MCP, harnesses, and other approaches, and plenty of sharing and cross-pollination of techniques.
It would be great to find ways to better quantify the actual value add from LLMs and from the various ways of using them, but our experience so far is that the landscape in terms of both model capability and tooling is shifting so fast that that's quite hard to do.
It's all romantic, but a bunch of devs are getting canned left and right, a slice of the population whose disposable income the economy depends on.
It's too late to be a contrarian pundit, but what's been done besides uncovering some 0-days? The correction will be brutal, worse than the Industrial Revolution. Just look at the recent news about the Meta cuts, Salesforce, Snap, Block; the list is long.
Have you shipped anything commercially viable because of AI or are you/we just keeping up?
Has it occurred to you that there might not be a correction, and that the outcome would still be brutal, at least on par with the Industrial Revolution?
And that's without accounting for the various wars (and resultant economic impacts) that are already in progress. A large part of what drove the meat grinder of WWI was (very approximately) the various actors repeatedly misjudging the overall situation and being overly enthusiastic to try out their shiny new weapons systems. If one or more superpowers decide to have a showdown the only thing that might minimize loss of life this time around is (ironically enough) the rise of autonomous weapons systems. Even in that case as we know from WWII the logical outcome is a decimated economy and manufacturing sector regardless of anything else that might happen.
Bubbles like the AI bubble are a game theoretic outcome of a revolution. Many players invest heavily to avoid losing, but as a whole the market over invests. This leads to a bubble.
It's an absolute tornado of PRs these days. Everyone making the most of these tools is effectively an engineering team lead.
It hardly seems worth it to try to iterate on design when they can just build a completely functional prototype themselves in a few hours. We're building APIs for internal users in preference to UIs, because they can build the UIs themselves and get exactly what they need for their specific use cases and then share it with whoever wants it.
We replaced an expensive, proprietary vendor product in a couple of weeks.
I have no delusions about the scale or complexity limits of these projects. They can help with large, complex systems but mostly at the margins: help with impact analysis, production support, test cases, code review. We generate a lot of code too but we're not vibe coding a new system of record and review standards have actually increased because refactoring is so much cheaper.
The fact is that ordinary businesses have a LOT of unmet demand for low stakes custom software. The ones that lean into this will not develop superpowers but I do think they will out-compete slow adopters and those companies will be forced to catch up in the next few years.
I develop presentations now by dumping a bunch of context in a folder with a template and telling Claude Cowork what I want (it does much better than the web version because of its Python and shell tools, and it can iterate: render, review, repeat until it's excellent). The copy is quite good, I rewrite less than a third of it, and the style and graphics are much better than I could do myself in many hours.
No one likes reading a bunch of vibe-coded slop, and cultural norms about this are still evolving; but on balance it's worth it by far.
We are definitely reaching the point where you need an LLM to deal with the onslaught of LLM-generated content, even if the humans are being judicious about editing everything. We're all just cranking on an inhumanly massive amount of output and it's frankly scary.
I presume I'm not the only one.
Barely an hour goes by without a new 4-page document that everyone is apparently meant to read, digest, and respond to, despite its 'author' having done none of those steps. It's starting to feel actively adversarial.
With good management you will get great work faster.
The distinguishing feature between organisations competing in the AI era is process. AI can automate a lot of the work but the human side owns process. If it’s no good everything collapses. Functional companies become hyper functional while dysfunctional companies will collapse.
Bad ideas used to be warded off by workers who, through some form of malicious compliance, would slow down and redirect the work while advocating for better solutions.
That can’t happen as much anymore as your manager or CEO can vibe code stuff and throw it down the pipeline for the workers to fix.
If you have bad processes your company will die, or shrivel or stagnate at best. Companies with good process will beat you.
I'd been fighting to make this for two years and kept getting told no. I got claude to make a PoC in a day, then got management support to continue for a couple weeks. It's super beneficial, and targets so many of our pain points that really bog us down.
edit: LOL called it, a bunch of useless garbage that no one really cares about but used to justify corporate jobs programs.
This was possible before but someone would maybe notice the insane spaghetti. Now it's just "we'll fix it with another layer of noodles".
I run a team and am spending my time/tokens on serious pain points.
This is in a real-time stateful system, not a system where I'd necessarily expect the exact same thing to happen every time. I just wanted to understand why it behaved differently because there wasn't any obvious reason, to me, why it would.
The explanation it came back with was pretty wild. It essentially boiled down to a module not being adequately initialized before it was used the first time and then it maintained its state from then on out. The narrative touched a lot of code, and the source references it provided did an excellent job of walking me through the narrative. I independently validated the explanation using some telemetry data that the LLM didn't have access to. It was correct. This would have taken me a very long time to work out by hand.
Edit: I have done this multiple times and have been blown away each time.
This is the difference between intentional and incidental friction: if your CI/CD pipeline is bad, it should be improved, not sidestepped. The first step in large projects is paving over the lower layer so that all that incidental friction, the kind AI can help with, is removed. If you are constantly going outside that paved area, sure, AI will help, but not with the success of the project, which is more contingent on the fact that you've failed to lay the groundwork correctly.
it's crazy that the experiences are still so wildly varying that we get people that use this strategy as a 'valid' gotcha.
AI works for the vast majority of nowhere-near-the-edge CS work -- you know, all the stuff the majority of people have to do every day.
I don't touch any kind of SQL manually anymore. I don't touch iptables or UFW. I don't touch polkit, dbus, or any other human-hostile IPC anymore. I don't write cron jobs or systemd unit files. I query for documentation rather than slogging through a stupid web wiki or equivalent. A decent LLM does it all with fairly easy 5-10 word prompts.
ever do real work with a mic and speech-to-text? It's 50x'd by LLM support. Gone are the days of saying "H T T P COLON FORWARD SLASH FORWARD SLASH W W W".
this isn't some untested frontier land anymore. People that embrace it find it really empowering except on the edges, and even those state-of-the-art edge people are using it to do the crap work.
This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.
It's the same model as Uber, and I can't afford Uber most of the time anymore. It's become cost prohibitive just to take a short ride, but it used to cost like $7.
It's all fun and games until someone has to pay the bill, and these companies are losing many billions of dollars with no end in sight for the losses.
I doubt the tech and costs for the tech will improve fast enough to stop the flood of money going out, and I doubt people are going to want to pay what it really costs. That $200/month plan might not look so good when it's $2000/month, or more.
You can use "API-style" pricing on these providers, which is more transparent about costs. It's very likely to end up at more than $200 a month, but the question is: are you going to see more than that in value?
For me, the answer is yes.
> This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.
People will stop asking for the proof when the dust-eating commences.
And what’s worse is that when someone does build a decent tool, you can’t help but be skeptical because of all the absolute slop that has come out. And everyone thinks their slop doesn’t stink, so you can’t take them at their word when they say it doesn’t. Even in this thread, how are you to know who is talking about building something useful vs something they think is useful?
A lot of people that have always wanted to be developers but didn’t have the skills are now empowered to go and build… things. But AI hasn’t equipped them with the skill of understanding if it actually makes sense to build a thing, or how to maintain it, or how to evolve it, or how to integrate it with other tools. And then they get upset when you tell them their tool isn’t the best thing since sliced bread. It’s exhausting, and I think we’ve yet to see the true consequences of the slop firehose.
well, isn't that what AI can be used effectively for - to generate [auto]response to the AI generated content.
I guess you gotta look busy. But the stick will come when the shareholders look at the income statement and ask... So I see an increase in operating expenses. Let me go calculate the ROIC. Hm, it's lower, what to do? Oh I know, let's fire the people who caused this (it won't be the C-Suite or management who takes the fall) lmao.
You could argue that all the spending is wasted (doubtless some is), but insisting that the decision is being made in complete ignorance of financial concerns reeks of that “everyone’s dumb but me” energy.
Are they peeking over the shoulder of each team and individual? Of course not.
It can be the case that the spend is absolutely wasteful. Numbers don’t lie.
Oh, they were involved all right. They ran their analyses and realized that the increase in Acme Corp's share price from becoming "AI-enabled" will pay for the tokens several times over. For today. They plan to be retired before tomorrow.
Most firms are not a Google or a Microsoft; a firm's cash balance can become a strategic weapon in the right environment. So wasting money is not a great idea. Lest we forget dividends.
Moreover, if you have a budget set for spend on tokens, you have rationing. Therefore the firm should be trying to get the most out of token spend. If you are wasting tokens on stuff that doesn't create a financial benefit for the firm, then indeed it is not in line with proper corporate financial theory.
Round-tripping used to be regulated. SPVs used to be regulated. If you needed a loan you used to have to go to something called a bank; now it comes from ???? who knows: drug cartels, child traffickers, Blackstone, Russian & Chinese oligarchs. Even assuming it doesn't collapse tomorrow, why should they make double-digit returns on AI datacenters built on the backs of Americans?
> “Im convinced none of these people have any training in corporate finance. For if they did they'd realise they were wasting money.”
This isn’t meaningful criticism. This is a vacuous “those guys are so dumb”.
[waits for chickens to come home to roost]
"We are writing down X billions over 4 years, and have canceled several ambitious programs related to our AI experiments. We were following standard practice in the industry, so [shareholders] can't blame us for these chickens coming home to roost. If everyone is guilty, is anyone really guilty?"
I wonder what I’m doing differently.
I did spend quite a bit of time, mostly manually, improving development processes such that the agent could effectively check its work. This made the difference between the agent mostly not working and mostly working. Maybe if I had instead spent gobs of money it would have worked without tooling improvements?
This would previously have been too ambitious to ever scope, but we've been able to build essentially all of it in just two months. Since it sits on top of our other systems and acts as more of a window/pass-through control plane, the fact that it's vibe coded poses little risk, since we still have all the existing infrastructure under it if something goes awry.
It's now trivial to fix these problems while still doing our day jobs -- shipping a product.
it's trivial to reimplement a better solution.
Also, I am not sure it is trivial to implement. The code is injected into many scenarios and workflows, so replacement will be painful and risky if the new solution breaks some edge case.
It's better than the "here's my code, it a giant pile of spaghetti but only luddites care about code quality and maintainability anyway" method, at least.
My hypothesis is that companies dont want to offer cheaper nor better services. Only want to cut costs and keep the revenue for investors.
In other news, TQQQ is pretty high!
But yeah, it's not gonna make Facebook 20% better tomorrow; it's just that you need 5 people instead of 40 to build the next Facebook.
And also because the Plan agent generates a huge plan, asks me a couple yes/no questions with an obvious answer, and then regenerates the entire plan again. Then the Build agent gets confused anyway and does something else, and I have to round-trip about 5 times with that full context each time.
Another project I'm seeing in the same realm is taking an approved protocol and some study results and checking that the records of what was done match what they said they could do in the approved protocol. It can also make sure that surgical records have all the things they should have. This can help meet one of the requirements from the national accreditation organization to do "post approval monitoring".
Another way I've used it is to have it collate and compare a particular kind of policy across many institutions who transparently put their policies online. Seeing the commonality between the policies and where some excel helped me rewrite our policy.
This is work that just wasn't happening before or, more accurately, it was being spread over lots of people, and any improvement in efficiency or consistency is hard to measure.
And yet.. building shit is no longer the sole domain of the software engineer.
That's the sea change.
I've literally had finance and GTM stand things up for themselves in the last few weeks. A few tweaks (obviously around security and access), and they are good to go.
They've gone from wrangling spreadsheets to smooth automated workflows that allow them to work at a higher level in a matter of months.
That's what all this AI is doing. The shit we could never get the time to get around to doing.
The only thing that matters is the impact on the financials. The shareholders (the people who employ you) dont care about any of this if it does not enhance value.
That "more expensive" is someone's revenue. Maybe AI is the kind of technology that makes it possible to grow revenue by making things more expensive and worse, rather than by making them better and cheaper.
Given the fact that both Altman and Amodei are pathological liars, there's absolutely no reason to believe that Anthropic has $30B ARR.
Can you explain how that’d work? What would the $30B figure be based on if they only have $100 in revenue?
(Run Rate = Revenue in Period / # of Days in Period x 365)
It's a forecast.
(That said, their numbers are much realer than that.)
That said, most people would use a monthly or quarterly period to estimate ARR. I'm not sure what Anthropic used. Probably monthly.
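The parenthetical formula above can be made concrete. A minimal sketch, with purely illustrative dollar figures (Anthropic's actual inputs are not public):

```python
# Annualized run rate from a short revenue window. Figures are hypothetical.

def run_rate(revenue_in_period: float, days_in_period: int) -> float:
    """Extrapolate one period's revenue to a full year."""
    return revenue_in_period / days_in_period * 365

# A single strong month annualizes aggressively: $2.5B over a 30-day
# month extrapolates to roughly $30B "ARR", even if no later month
# matches it. This is why the period chosen matters so much.
print(f"${run_rate(2.5e9, 30) / 1e9:.1f}B")  # -> $30.4B
```

A quarterly window smooths this out, which is why a monthly-derived figure is the more flattering (and more fragile) one.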
(I would then argue that he was re-hired specifically because others involved with OpenAI understood that it is literally his job to lie and that OpenAI would not be where it is today as a corporate behemoth rather than a research non-profit without a world-class liar marketing it, but that is merely conjecture.)
I agree about the core motivation behind these deals, however I'm skeptical as to how "suddenly" we'll see substantial improvements. Despite their size, I'd be surprised if Google or Amazon had uncommitted chunks of Anthropic-scale, top-tier AI compute sitting around waiting to be activated.
They're already over-subscribed and waiting for new data centers (and power plants) to come online. I suspect Anthropic will get a modest amount of new capacity right away with more added over coming quarters. These two deals don't change the total amount of AI compute available on planet Earth over the next 18 months. Anthropic parting with high-value equity has now made them the new highest bidder for an already over-bid resource. I suspect the net impact will be Amazon & Google pushing prices even higher on everyone else as they reallocate compute to their new top whale.
I doubt it was idle capacity. But for a chunk of equity in Anthropic I imagine they are willing to deprioritize other, possibly internal, uses. Certainly anything that's not contractually obligated could be on the chopping block.
AI is in such desperate need to adopt software-hardware co-development practices, it's infuriating watching the industry drag its feet about it. We are wasting so much electricity and absolutely wrecking the "free" market just because these companies are incentivized to work at an unsustainable breakneck speed in getting shit to market.
Is that not down to this? https://www.anthropic.com/engineering/april-23-postmortem
But all progress points to a commodification of foundation models--Google first named it as "we have no moat, neither does anyone else." So there must be some secondary play driving this, right? Hardware sales? Hedging for search ad revenue?
Still feels mispriced. I think asset inflation leaves too much money desperate for the Next Big Thing.
No doubt as of currently Google has a better business. But the same argument could have been said about Instagram or Whatsapp before Facebook (now Meta) acquired them.
Although I doubt this will stop them if they think it’s advantageous…
US law here is nuanced. Good quick primer https://www.ftc.gov/advice-guidance/competition-guidance/gui...
As long as it furthers American interests globally, monopoly is fine. Other countries need to take notice and start picking winners nationally in order to compete with the large American big tech firms.
ed: @er2d, can't reply to your comment for some reason, so doing it here: I don't agree. In theory a monopoly decreases the necessity for R&D. Of course this becomes more complex if the R&D is funded or steered by the state. But look at the current state of LLMs. There is fierce competition between 3 US companies. But geopolitically it's the same as if there would be one monopoly. The US being the clear technological leader in an industry is not dependent on that industry being a domestic monopoly.
And for the Europe comment: I also don't agree. Look at Boeing & Airbus. Both are cases where the US & EU have decided that they need to ensure the existence of a domestic airplane manufacturer. So in these cases they support these companies (often in violation of international trade law). But it has nothing to do with monopolies. If a state decides to support a company to ensure its existence, a monopoly is the logical consequence, not the aim. Because if that industry were profitable, it wouldn't need to be supported in the first place.
But all these tech companies are not in industries that would move off-shore or stop existing because they're not profitable enough, so it's an entirely different setting.
The US understands that and allows it to happen as the former yields a compounding effect of power.
European states certainly don't get this.
Airbus ?
lol
Now, that’s a name I haven’t heard in a long time.
couldn't this just be framed/spun as using search data for training? I don't see it being bundled enough to run afoul of anti-trust.
Running at a loss long enough to kill the competition is basically the name of the game these days.
When Uber started, they were basically setting VC money on fire by selling rides at a loss to destroy the taxi market.
Buwahahahahahahahhahah
They drop a little cash on some shitcoin the president controls and those problems go away.
This is why SpaceX could be a dark horse in this race. Putting compute in space is expensive but so is building a data center in the US.
You know what's also really hard in a vacuum? Dissipating heat.
Correct. The economics of space-based DCs come down to permitting delays versus radiator mass.
At ISS-era radiator specific mass (12 to 15 kg/kW), you need almost decade-long delays on the ground (or 10+ percent interest rates) to make lifting worthwhile. Get down to the current state of the art in the 5 to 10 kg/kW range, however, and you only need permitting delays of 2 to 3 years.
If there is a game-changing start-up waiting to be built, it's in someone commercialising a better vacuum-rated radiator.
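The break-even intuition above can be sketched as a toy model. Every number here (launch price, datacenter power, annual margin) is an assumption chosen for illustration, not a sourced figure:

```python
# Toy break-even model: lifting radiator mass to orbit vs. waiting out
# ground permitting delays. All inputs are illustrative assumptions.

LAUNCH_COST_PER_KG = 1500.0  # $/kg to orbit, assumed near-future price
POWER_MW = 100.0             # datacenter heat to reject, assumed
ANNUAL_MARGIN = 200e6        # $/yr earned once operating, assumed

def radiator_launch_cost(specific_mass_kg_per_kw: float) -> float:
    """Cost to lift radiators sized for POWER_MW of heat rejection."""
    mass_kg = specific_mass_kg_per_kw * POWER_MW * 1000  # MW -> kW -> kg
    return mass_kg * LAUNCH_COST_PER_KG

def breakeven_delay_years(specific_mass_kg_per_kw: float) -> float:
    """Ground delay (years) at which orbit pays for itself."""
    return radiator_launch_cost(specific_mass_kg_per_kw) / ANNUAL_MARGIN

for sm in (15.0, 7.5):  # ISS-era vs. state-of-the-art specific mass
    print(f"{sm} kg/kW -> break-even delay ~{breakeven_delay_years(sm):.1f} yr")
```

With these assumed inputs the ISS-era figure needs a delay over a decade to justify launch, while halving the specific mass roughly halves the required delay, which is the shape of the argument above.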
Saudi will host the biggest data centers in the world
I really couldn't have been more obscure, could I? :P
In 1932, "the first oil field in the Persian Gulf outside of Iran" was discovered in Bahrain [1]. (The same year Saudi Arabia announced unification [2].)
In the end, Saudi Arabia had larger reserves and wound up geopolitically dominating its first-moving rival. In commodities, the game tends to be scale in part through land grabbing. Less who got where first.
To close the analogy: if AI does wind up commoditised, the layers at which that commodity is held are probably between power and compute [3]. So if AI commoditises (commodifies?), Google selling compute (and indirectly power) to Anthropic and OpenAI is the smarter play than trying to advantage Gemini. (If AI doesn't commoditise, the opposite may be true: Google is supercharging a competitor.)
[1] https://en.wikipedia.org/wiki/Bahrain_Petroleum_Company
[2] https://en.wikipedia.org/wiki/Proclamation_of_the_Kingdom_of...
[3] The alternate hypothesis is it's at distribution.
> In September 2025, Google is in talks with several "neoclouds," including Crusoe and CoreWeave, about deploying TPU in their datacenter. In November 2025, Meta is in talks with Google to deploy TPUs in its AI datacenters.
Also, those personalities, quirks and choices accumulate. A lot of people talk about using Claude Code and Codex for different things. This is 100% my experience. Some models are better than others, but among the top 3 there are often differences that are fixed only by switching between them. If I feel the need to switch between them, then there are significant enough differences, and those differences will accumulate.
The amount of new revenue that I am personally able to create for my clients, using Claude models for dev, and Claude models inside the insanely agile products delivered, is astounding.
If I was not currently experiencing this myself, and someone told me that this was possible, I would be calling them names.
If we get to an end-state of monopoly/duopoly at this game, then we are truly screwed.
I was just stating my current use and revenue path. Anthropic has insane velocity, in April of 2026.
I think Deepseek is already there.
100% agree. I have been trying to tell everyone to build their ideas, and exploit this environment where 100B of VC money into OpenAI/Anthropic = some percentage of money invested into your idea. This is the golden era of building! The music is gonna stop soon. Build now ffs!
It's like insane hype marketing speak. "insanely agile products delivered" like huh?
I believe that I am more of an AI realist. The agentic dev tools are really helping me out, but if I could wave a magic wand to make AI go away for a hundred years, I would do it.
I really hope that we can all laugh at how wrong I was.
However, I believe that the horrors will likely outweigh the benefits. Our global society/political systems are not ready for Stasi as a Service, mass unemployment, or any of this impending crap storm.
Who could call me a starry-eyed idealist? I have invested in bunkers.
I hate money.
You know what I hate even more? Being the supposed "smart one," and having to borrow money from my entire family to get through my health issues.
I will never do that again, hopefully.
Like ex-developer turned PM who is now vibe coding everything they can and thinks it's the greatest thing ever.
To the GP: I'd like some details of these "insanely agile products". Is this insane agility reflected by your customers saying that they have a better, faster, more reliable product? How are you measuring this?
I get that it's tedious to sit on tech forums listening to an endless stream of people insisting that suchandsuch technology is world-changing. Many people and probably most people who say that are wrong. But sometimes the world really does change.
It's tedious because the insistence doesn't seem to be matched by much observable change.
So from that point of view you can indeed look at it as the entire value of the economy should be invested into AI companies.
The question is when will we get there.
If the answer is tomorrow, money means nothing and none of these investments matter. If the answer is 30 years, well lots of money to be made up until the inflection point of machines being able to design, build, and repair themselves.
What are you counting in this category?
My neighbors just gave Ford $60k. It'll be a while until my neighbor gives Anthropic $60k.
AI company revenues aren't driven by consumer subscriptions.
The people doing $20 or even $200 per month plans for their side projects aren't driving the demand. It's going to be business customers spending $1000/month or more per developer and all of the companies feeding their business processes through the API like call centers, document processing, and everything else.
If you're thinking of AI companies as consumer plays you're only seeing the tip of the iceberg. We get cheap access to Claude because they want us playing with it so when it comes time for our employers to choose something we can all lobby for Anthropic.
They should stop messing with us then. Stealth model changes, threatening to take code away on the $20 plan, the list goes on.
How much of that 60K does Ford actually keep? And how much will it be once BYD is allowed in the US? The forecast for Ford is pretty much only downwards, the possible upside on AI is huge.
If every company in the F500 starts spending $2000+ on AI credits per employee, then every consumer product will indirectly be funding AI companies. I think it's already the case that companies small enough to avoid/skip getting O365 or Google Suite subscriptions will pay for AI first.
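The scale of that indirect funding is worth a back-of-envelope check. Both the headcount and per-seat spend below are rough assumptions, not reported figures:

```python
# Back-of-envelope: enterprise AI seat spend across the Fortune 500.
# Both inputs are rough assumptions for illustration only.

F500_EMPLOYEES = 30_000_000          # rough combined F500 headcount
MONTHLY_SPEND_PER_EMPLOYEE = 2_000   # the $2000/employee figure above

annual_spend = F500_EMPLOYEES * MONTHLY_SPEND_PER_EMPLOYEE * 12
print(f"${annual_spend / 1e9:.0f}B/yr")  # -> $720B/yr
```

Even if only a tenth of employees get a seat, that's tens of billions a year, which dwarfs anything consumer subscriptions could plausibly contribute.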
How many businesses are paying Ford $10 million per annum?
Now count the Amazon deliveries in a year on said same street. And next year, and the year after, and.. however long one keeps a Ford these days..
It's quite a scary thought exercise.
Compute costs keep collapsing. Image and audio generation has turned out to be less compute intensive than text (lol).
First company to launch 24/7 customized streaming AI slop wins!
$1k for a lot of developers here is totally worth it.
Anthropic is the anchor external customer for TPUs, and Nvidia is worth more than all of Google. If TPUs actually break out as a viable alternative over the next few years for multiple clients, the business could easily be worth as much as Search, maybe more.
Microsoft is in the same boat with Azure.
Why haven't they broken out yet, I wonder, if they're more efficient for inference and LLM costs are now weighted towards inference over training?
If only Apple could pass the favor forward. But no, they can't be bothered to invest even a single million in Asahi Linux to benefit their own hardware.
The tech is great but valuations are out of control. It's cheaper to keep valuations high through these circular financing deals, rather than to allow for any deflation.
Example: running an A/B test that removed Claude CLI from the $20 Pro plan ... they've rolled it back now. Or rate limits where they publicly double your quota at off-peak times but quietly lower it during peak. These are tacky and signs of panic.
Any one such issue could be explained as experimentation. But when you see back-to-back issues, it looks odd.
What's the explanation behind this? I am sure they use AI in their ad network (matching web sites with ad offerings, maybe generating ads automatically), but is there more to it?
If you’re using it for personal work, why is $100 worth it?
It took about two weeks of really running it through its paces, and constantly slamming against the limit on it, to convince me I had to upgrade to at least the $100/month sub, and at this point I wouldn't blink at bumping that to the $200/month if necessary.
I 100% believe we’re in a bubble, and that this level of compute isn’t sustainable at this price point, but for as long as I have it, I’m going to run it at the redline.
I’m a solo dev working on a project that I’ve just gone full-time on, after about 1.5 years of part time work. It’s a codebase that I laid the groundwork in, and has very well established systems, standards, and constraints.
The work I’m using Claude to do is the exact work I would be doing myself, but it does it at somewhere in the neighborhood of 5-10x the pace I could have. I don’t know that I could get the same rate of production if I managed a team of 2-3 programmers. Right now, it’s literally almost perfect at taking my iterative suggestions, and implementing them at that accelerated pace.
Honestly the hardest part is dealing with the fact that at the end of the day, I have to understand this codebase perfectly (solo dev and all that), so I have to take in changes to it that are also 5-10x the rate my normal intuition would. But, again, the plus side is that it’s implementing them essentially exactly as I would have, as it has ~20k lines of code that I wrote to use as an example.
If I were to hire even one other programmer, I'd be paying well north of $5k/month, and I'd not only be managing a super computer programmer tool, but an actual human being as well. $100/month might as well be free comparatively.
I pay for my own AI provider subscriptions because keeping work and personal strictly separated is important for me. I do know some people who secretly pay $200/month for Claude and use it at their job even though it's not approved. I do not recommend doing that, but it shows that some people value this for their work.
For developers earning more than $10K per month, spending less than 1% of salary on tooling to make the job easier is easy to justify.
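The arithmetic behind that "less than 1%" claim, using the $100/month subscription from the comments above as the assumed tool cost:

```python
# Sanity check: tool spend as a share of salary.
monthly_salary = 10_000  # developer earning $10K/month (from the comment)
tool_cost = 100          # assumed $100/month AI subscription

share = tool_cost / monthly_salary
print(f"{share:.1%}")  # → 1.0%
```

At exactly $10K/month the subscription is 1.0% of salary; anyone earning more than that is under 1%, and even the $200/month tier stays at 2% or less.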
You'll notice that all the really big deals have fallen through, because they're based on promises and meeting objectives that can't be met. So it's likely that there will be really big writeoffs but not a huge implosion like 2001/2008. The real losers will be the retail investors who put all their money in a handful of stocks at ridiculous valuations.
"Disney cancels $1B deal with OpenAI after video platform Sora is shut down: 'The future is human'" https://finance.yahoo.com/sectors/technology/articles/disney...
And if I recall correctly, the AI datacenter deal isn't doing Oracle stock any favours.
And it may very well be bad news for OpenAI.
I have a feeling that Dario is not the type of man who would want to be acquired and then have Google's CEO telling him what to do.
The drama on HN alone would last for days. Twitter would implode in on itself.
OpenAI crashing would be good news and bad news for Anthropic investors.