
Posted by jakelsaunders94 9 hours ago

Is anybody else bored of talking about AI? (blog.jakesaunders.dev)
598 points | 410 comments | page 4
bilsbie 9 hours ago|
I’m confused why the hype and the investment got so high. And why everyone treats it like a race. Why can’t we gradually develop it like dna sequencing.
olivia-banks 9 hours ago||
To be fair, DNA sequencing was very hyped up (although not nearly as much as AI). The HGP finished two years ahead of schedule, which is sort of unheard of for something in its domain, and was mainly a result of massive public interest in personalized medicine and the like. I will admit that a ton of foundational DNA sequencing work evolved over decades, but the massive leap forward in the early 2000s is comparable to the LLM hype now.
dylan604 9 hours ago||
I assumed it was obvious. Being first is all that matters. Investors don't want to invest in second place. Obviously, first means achieving AGI, not some GPT bot. That's why so many people keep saying AGI is _____ weeks away, with some even preposterously claiming AGI might have already happened. They need to keep attracting investors. Same as Musk constantly saying FSD is ____ weeks away.
mr_bob_sacamano 9 hours ago||
I wish there were a filter on Hacker News to hide all AI related posts.
erikerikson 9 hours ago|
This is Hacker News. Somebody made that filter and uses it, so they can't see this post to tell you it exists.
mr_bob_sacamano 9 hours ago||
a Kafkaesque loop
erikerikson 8 hours ago||
https://news.ycombinator.com/item?id=35654401
abcde666777 6 hours ago||
The topic of AI triggers people in various ways - anxiety and uncertainty about the future, frustration with excessive hype, and the debate between people on each side of the fence.

It will calm down once the dust starts to settle and there's some kind of consensus on how the chips have fallen.

Also there is an irony that talking about being sick of talking about AI is still talking about AI.

qsera 2 hours ago|
>The topic of AI triggers people in various ways

The only thing that triggers me about it is people's inability to understand how a scam works, even after falling for such scams for the n-th time.

Hyperloop, uBeam, blockchain, Elon Musk taking us all to Mars....

In this line of scams, LLMs are a wet dream...

matusp 9 hours ago||
Very much so. I wouldn't mind some interesting projects or results. But it's very basic opinions or parables, over and over again.
kkrish83 2 hours ago||
It's not just tech; I think a lot of the internet is just about one topic. It's a very fascinating topic, but it's taken over the zeitgeist, and the world is becoming a pretty boring place.
sbinnee 6 hours ago||
I like the woodwork-and-hammer analogy. It fits perfectly with what happens when people don't pay enough attention to the end result. They are not showing the actual product because it is not as shiny as their new agentic hammer.
brandensilva 4 hours ago|
I think people are having a hard time figuring out use cases, so yeah, the AI itself ends up being the most exciting part.
keithnz 9 hours ago||
No, well, I still enjoy the articles. The thing that always surprises me is the negativity in comment threads. I'm genuinely quite excited about AI based development. Yesterday I was playing around with developing a marketing plan for a market gap where we could leverage our product and finding what features in our product would need changing/adding to improve our offering. Quite interesting results!
jakelsaunders94 8 hours ago|
I think in most places on the internet the negative comments are the ones that will win out. Same for AI I suppose. I tried not to bemoan the whole concept here, just the amount of 'airtime' it gets. Sort of like when something happens in the news (lately it's been the Epstein files for me), and you wish you could see a more balanced picture of world events.
fragmede 6 hours ago||
Surround yourself with positive people. Reddit's take for an event I was at made it sound like it went terribly, but I was there and had fun.
1a527dd5 9 hours ago||
I don't think I'm quite bored. I'm exhausted/fatigued with the pace.
AStrangeMorrow 8 hours ago||
Yes, it feels like a full-time job just to try to keep up. And I've been in AI for close to 10 years, so I feel like I have to keep up at least a minimum.

Another thing for me is that it has gotten a lot harder for small teams with few resources, let alone one person, to release anything that can really compete with what the big players put out.

Quite a few years back I was working on word2vec models / embeddings. With enough time and limited resources, I was able to, through careful data collection and preparation, produce models that outperformed existing embeddings for our fairly generic data retrieval tasks. You could download models from Facebook (fastText) or other models available through gensim and other tools, and they were often larger embeddings (e.g. 1000 dimensions vs 300 for mine), but they would really underperform. And when evaluating on the general benchmarks that existed back then, we were basically equivalent to the best models in English and French, if not a little better at times. Similarly, later on, some colleagues built a new architecture inspired by BERT after it came out that again outperformed any existing models we could find.

But these days I feel like there is not much I can do in NLP. Even to fine-tune or distill the larger models, you need a very beefy setup.
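The retrieval evaluation described above ultimately reduces to ranking documents by cosine similarity between embeddings. A toy sketch in plain Python (the document names, vectors, and 3-dimensional "embeddings" are made up for illustration; real word2vec or fastText vectors would be 300-1000 dimensions, as the commenter notes):

```python
import math

def cosine(a, b):
    # Cosine similarity: the standard score for comparing embeddings.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" (real word2vec/fastText vectors would be 300-1000 dims).
docs = {
    "doc_cats": [0.9, 0.1, 0.0],
    "doc_cars": [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]  # embedding of a query about cats

# Rank documents by similarity to the query. This ranking quality is where
# a smaller, carefully trained embedding can beat a larger generic one.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)  # ['doc_cats', 'doc_cars']
```

The point is that embedding dimensionality alone does not determine ranking quality on a given task, which is why a 300-dimensional domain-tuned model can beat a 1000-dimensional generic one.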

cmrdporcupine 9 hours ago||
This.

I don't know how I can be burnt out from making this thing do the work for me. But I am.

xantronix 2 hours ago|||
Yeah, I don't think humans were meant to create things faster than they can understand them, but here we are.
computerdork 8 hours ago|||
AI Fatigue seems real: https://www.businessinsider.com/ai-fatigue-burnout-software-...
vrganj 9 hours ago||
AI is fine. The hype is annoying. What's even worse though are the incredible amounts of money and energy that are being thrown at it, with no regard for the consequences, in times of record inequality and looming climate apocalypse.

AI is the red herring that'll waste all our attention until it's too late.

lpcvoid 9 hours ago||
AI is one of the reasons climate change is accelerating, which is another in a long list of reasons to hate it.
tonmoy 9 hours ago|||
I'm not sure I follow. AI barely consumes energy compared to other industries, and instead of focusing on the heavy hitters first, spending time on the climate impact of AI doesn't seem useful.
elbasti 9 hours ago|||
This is wrong. AI uses ~4% of the US grid, and projections are that it will grow to 10%+ in the next 6 years.

And most of that new capacity will be natural gas. That increase would basically wipe out the reduction in CO2 emissions the USA has had since 2018.

thethirdone 8 hours ago||
Compare that to ~30% of all energy use for transportation. Since only ~40% of US energy use goes through the grid, that's approximately 40% * 4% = 1.6% vs 30%. I find your correction to be more wrong than the initial statement.

> And most of that new capacity will be natural gas. That increase would basically wipe out the reduction in CO2 emissions the USA has had since 2018.

Emissions in 2018 were ~5250M metric tons and in 2024 ~4750M. That is a reduction of about 10% of total emissions. Without going into calculations of green electricity and such, it's still safe to say that AI using 10% of the grid would not completely wipe out that reduction.

[0]: https://www.statista.com/statistics/183943/us-carbon-dioxide...
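The back-of-envelope figures traded in this subthread are easy to sanity-check. A quick sketch using the commenters' own round numbers (grid share, grid fraction of total energy, and the emissions figures quoted from Statista; none independently verified here):

```python
# Sanity check of the back-of-envelope figures quoted in this thread.
# All inputs are the commenters' round numbers, not authoritative data.

grid_fraction = 0.40              # ~40% of US energy use is electricity
ai_share_of_grid = 0.04           # claimed current AI share of the grid
transport_share_of_energy = 0.30  # ~30% of all US energy use

ai_share_of_energy = grid_fraction * ai_share_of_grid
print(f"AI share of total energy: {ai_share_of_energy:.1%}")  # 1.6%

# Emissions reduction since 2018 (metric tons, as quoted from Statista):
emissions_2018 = 5250e6
emissions_2024 = 4750e6
reduction = (emissions_2018 - emissions_2024) / emissions_2018
print(f"US emissions reduction 2018-2024: {reduction:.1%}")   # 9.5%
```

On the commenters' own numbers, both the ~1.6% energy share and the roughly 10% emissions reduction check out.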

graypegg 8 hours ago|||
> Compare that to ~30% of all energy use for transportation

Transportation, especially ALL transportation, does a LOT. You're looking for ROI, not the absolute values. I think it's undeniable that the positive economic effect of every car, truck, train, and plane is unfathomably huge. That's trains moving minerals, planes moving people, trucks transporting goods, and hundreds of combinations thereof, all interconnected. Literally no economic activity would happen without transportation, including the transition to green energy sources, which would itself improve the emissions from transportation.

I think it might be more emissions-efficient at generating value than AI by a factor exceeding the 7.5x energy use. Moving rocks from (place with rocks) to (place that needs rocks) continues to be just an insanely good thing for humanity.

Also, I'm not sure about your math. 4% would be 4% of the whole like in a pie chart, not 4% of the remainder after removing one slice. 4% AI, 30% transportation, 66% other. I don't know where that 40% is from.

thethirdone 7 hours ago||
> Also, I'm not sure about your math. 4% would be 4% of the whole like in a pie chart, not 4% of the remainder after removing one slice. 4% AI, 30% transportation, 66% other. I don't know where that 40% is from.

40% is for energy use in the US in the form of electricity. It was a rough number that I pulled from my memory. It is roughly right though. Check https://www.eia.gov/energyexplained/us-energy-facts/

AI is not currently 4% of the energy market of the US. Only the grid. I should have been more clear about the ALL ENERGY vs GRID distinction.

> I think it might be more emissions-efficient at generating value than AI by a factor exceeding the 7.5x energy use. Moving rocks from (place with rocks) to (place that needs rocks) continues to be just an insanely good thing for humanity.

I really made no statement on the value of doing things. Transportation is obviously very valuable. I just wanted a more fact based conversation.

elbasti 7 hours ago|||
> Compare that to ~30% of all energy use for transportation. So approximately 40%*4% = 1.6% vs 30%. I find your correction to be more wrong than the initial statement.

I don't follow. The comparison is 30% of energy use for transportation vs 4% for AI, and soon 30% for transportation vs 10% for AI.

thethirdone 7 hours ago||
The grid is not all energy use. To get the numbers on an even playing field, you need to account for the fact that only ~40% of energy goes through the grid.

And that leaves a 6:1 ratio assuming projections run true. It very well might be possible to get efficiency wins from the transportation sector that outweigh growth in AI.

Insanity 9 hours ago||||
Pretty large amounts of energy go towards training large language models. Running them is also a non-negligible energy cost at scale.

But yeah, there's way worse industries out there when it comes to climate change impact.

datsci_est_2015 9 hours ago||||
? Am I misunderstanding the push for nuclear energy and record energy prices in locales with new “data centers”?
hirako2000 9 hours ago||||
Before large models, things were starting to move to micro-VMs, lean hardware, and Firecracker-style cloud platforms running thin containers.

Then the AI buzz hit, and now we are building giga-factories, where "giga" stands for gigawatt usage, no lesser target.

surgical_fire 8 hours ago|||
Which is why talk about AI datacenters typically involve energy supply constraints, and possibly the need to build power plants along with it.

It is, of course, because it barely uses any energy.

amelius 8 hours ago||||
> AI is one of the reasons climate change is accelerating, which is another in a long list of reasons to hate it.

If you want to point at causes of climate change, look no further than adtech. It's the driving force behind our overconsumption.

And it has perhaps an even longer list of reasons to hate it.

bluefirebrand 7 hours ago||
AI and Adtech are the same damn industry
proc0 9 hours ago||||
People sure don't care about it anymore, and that shift coincided with the rise of AI. There's barely any mention of climate change compared to 5+ years ago. I really think this is all about keeping the capitalist system from imploding under so much debt (the next big thing needs to happen to keep the growth going).
sharemywin 9 hours ago||
climate change was an important issue when they were trying to peddle EVs and solar.
lpcvoid 8 hours ago||
They == the lizard people, I assume?
xvector 9 hours ago||||
Seeing this kind of populist misinformation/bikeshedding on HN is particularly disappointing.
lpcvoid 8 hours ago||
So then explain to me where I wrote misinformation?
mostertoaster 8 hours ago|||
The EPA repealed its 2009 conclusion that greenhouse gases warm the Earth and endanger human health and well-being.

So this is not a good reason to oppose AI. Now the sheer energy it requires does mean we might want to go nuclear though.

Natural gas is nice though because it does pollute the air far less than coal.

You might argue the EPA only repealed that because of political agendas, but the same argument could be made for why it was passed.

A lot of people got very rich off the fear mongering from climate alarmists.

computerdork 8 hours ago||
Hmm, it seems pretty clear that the climate is getting hotter, so it seems natural for some people to be worried about what will happen to the planet in a few decades (me, for one).

And, you may be right, it may not be that big a deal and that we're being alarmists, but it seems like we currently have the tools to slow it down greatly. Why not be on the safe side and use them?

... but to be honest, guessing my opinion won't sway you in any way, still thought I'd try. thanks!

mostertoaster 33 minutes ago||
It’s really about the cost/benefit analysis.

The value of plowing ahead and using more energy is worth far more than making sure Florida doesn’t lose some coastline.

The presumptions from the alarmists that annoy me are that they completely negate human agency and ingenuity, and that they ignore the economic cost of many of the proposed plans.

Natural gas is far better than coal and should be encouraged rather than condemned. Nuclear power is best of all, is the cleanest and safest energy, and yet is hardly ever the first choice of the alarmists.

I’d rather spend double the energy unlocking breakthroughs in science with the help of AI, and address the problems when they come. I don’t go out of my way to lower my “carbon footprint”, but I also don’t just do things that are wasteful and deliberately harmful to the environment.

AI making us forget how to think for ourselves is a far bigger risk to mankind than climate change. Thanks.

Sohcahtoa82 9 hours ago|||
> AI is fine. The hype is annoying.

I'm finding the detractors worse than the hype, because it seems like a certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?) and then never updated their opinions since then. They'll say things like "why would I want to consume X amount of energy and Y amount of water just to get a wrong answer?"

In other words, the people who think generative AI is an absolutely worthless and useless product are more annoying than the ones that think it's going to solve all the world's problems. They have no idea how much AI has improved since it reached center stage 3 years ago. Hallucinations are exceptionally rare now, since they now rely on searching for answers rather than what was in its training data.

We got Claude Desktop at work and it's been a godsend. It works so much better to find information from Confluence and present it to me in a digestible format than having to search by hand and combing through a dozen irrelevant results to find the one bit of information I need.

[0] For the purpose of this comment, this subset is meant to be detraction based on the quality of the product, not the other criticisms like copyright/content theft concerns, water/energy usage, whether or not Sam Altman is a good person, etc.

onemoresoop 8 hours ago|||
Follow closely what the detractors say. Most of them are using AI themselves and are just pushing back on the hype or other ludicrous claims, and that's a good thing. Is the current crop of gen AI anything near AGI? Is it worth the current valuation? Can a company fire most of its staff and run on gen AI? We may see the economy completely crash, not because AI takes over but because of bad investments, hype, and greed.
xvector 2 hours ago||
The same detractors I know today that use AI, said that LLMs were useless slop generators that would never amount to anything just a year or two ago.

Detractors, doomers, and techno-pessimists have got to be the most consistently wrong group in history. https://pessimistsarchive.org/

beej71 9 hours ago||||
I don't think it's worthless. It can greatly speed up coding. And learning foreign languages. And many other things.

But I do think humanity is worse off because of it. So I'm a detractor in that way. :)

ben_w 6 hours ago||||
> Hallucinations are exceptionally rare now, since they now rely on searching for answers rather than what was in its training data.

Well, I wouldn't go that far, but the hallucinations have moved up to being about more complicated things than they used to be.

Also, I've seen a few recent ones that "think" (for lack of a better word) that they know enough about politics to "know" they don't need to search for current events to, for example, answer a question about the consequences of the White House threatening military action to take Greenland. (The AI replied with something like "It is completely inconceivable that the US would ever do this").

heavyset_go 8 hours ago||||
> certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?)

I mean, you can get mad at people you made up in your head, that's a thing people do, but this caricature falls in the same comforting bucket as "anyone who doesn't like <thing I like> is just ignorant/stupid" and "if you don't like me you're just jealous".

Maybe non-straw people have criticisms that aren't all butterflies and rainbows for good reasons, but you won't get to engage with them honestly and critically if you're telling yourself they're just ignorant from the start.

For example, I will bet that non-straw people will take issue with this, and for good reasons:

> Hallucinations are exceptionally rare now

doug_durham 8 hours ago||||
On Reddit there are two subreddits that are mirrors of each other, /accelerate and /betteroffline. People go to these subs for dopamine hits: one for how AI is going to transform their lives and lead to a work-free future, the other for how AI is worthless and how everyone (except them) is being fooled. They are the same people with opposite views, and the people in either sub don't recognize this.
jarjoura 8 hours ago||||
You do realize though that using Claude Desktop to "search" through confluence is like paying a world class architect on the hour just to give you some tips on how to layout your small loft to maximize sunlight.

This is such a perfect example of the mania behind this rollout.

There's no way you can make the financials work here compared to Atlassian taking the same millions spent on AI infrastructure and building better search in Confluence instead. Confluence search SUCKS, but that's just a lack of focus (or resources) on building a more complex, more robust solution. It's a wiki.

Either way, making a more robust search is a one-time cost that benefits everyone. Instead, you're running a piece of software that pays directly into Anthropic's bank account, and into the data centers and hyperscalers. Every single query must be re-run from scratch, costing your company a fortune that, if not managed properly, will come out of money that could be spent elsewhere.

xboxnolifes 8 hours ago|||
> You do realize though that using Claude Desktop to "search" through confluence is like paying a world class architect on the hour just to give you some tips on how to layout your small loft to maximize sunlight.

If I could pay a world class architect $1.50 to give me tips on how to maximize sunlight in my loft I would.

Would it be nice if confluence just had a robust search that had a one time cost and then benefited everyone thereafter? Sure, but that's not the current reality, and I do not have control over their actions. I can only control mine.

lukevp 8 hours ago|||
And what is using Confluence in the first place? Your MacBook Pro is faster than a supercomputer from 20 years ago. As we make compute cheaper, we find ways to use it that are less efficient in an absolute sense but more efficient for the end user. A graphical docs portal like Confluence is a hell of a lot easier to use than Emacs and SSH to edit plain text files on an 80-character terminal. But it uses thousands of times more compute.

It seems ridiculous right now because we don’t have hardware to accelerate the LLMs, but in 5 years this will be trivial to run.

jarjoura 6 hours ago||
I'm confused by your analogy. A wiki server is extremely efficient to run and can be hosted from a tiny little Raspberry Pi. A search engine can be optimized to provide results in near O(1). You can even pull up and read results on a very old computer. All of the concerns around cost and resource efficiency can be addressed; all of this is a solved problem.

Even with an LLM agent getting cheaper to run in the future, it's still fundamentally non-deterministic so the ongoing cost for a single exploration query run can never get anywhere near as cheap as running a wiki with a proper search engine.

arcxi 8 hours ago||||
This very comment is measurably more harmful than any AI criticism that annoys you - someone will read this and assume it's appropriate to accept whatever bullshit Claude generates at face value, with terrible consequences.

In contrast, what harm do those detractors cause? They don't generate as much code per hour?

xvector 8 hours ago||
By that logic we should all live in air-filtered bubbles. Anyone denying this is causing harm. After all, people might die if you let them out of their air-filtered bubble!

The "harm" (if you can call it that) is clear: detractors slow the pace of progress with meaningless and incorrect hand-wringing. A lack of progress harms everyone (as evidenced by our amazing QoL today, viewed through any historical lens).

dijit 8 hours ago|||
that’s a stretch and taking a measured approach to change is valid
arcxi 8 hours ago||||
> detractors slow the pace of progress

Considering our climate, political and economic situation, I'd say not only is slowing the pace of progress not harmful, it's actually imperative for our long-term survival.

slopinthebag 8 hours ago|||
That's a pretty poor straw man - the issue is the amount of harm caused, not that there is a potential for some minuscule amount.

Also we need detractors because if we race into any technological advance too quickly we may cause unnecessary harm. Not all progress is without harms, and we need to be responsible about implementing it as risk-free as possible.

jackie293746 9 hours ago||||
Claude Opus 4.6 regularly makes up shit and hallucinates. I'm not a detractor by any means but "exceptionally rare" is fantasyland.
thrawa8387336 8 hours ago|||
Can vouch for this, plus, when it does work, stuff can take forever. Then, if I let it unsupervised, higher risk of doing the wrong thing. If I supervise it, then I become agent nanny.
surgical_fire 8 hours ago|||
I have been experiencing it too.

I honestly am finding Codex considerably better, as much as I despise OpenAI.

lovasoa 8 hours ago||||
I use the latest Codex with GPT-5.4 and Claude Opus every day; they hallucinate every day. If you think they don't, you are probably being gaslit by the models.
bigstrat2003 8 hours ago||||
> a certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?) and then never updated their opinions since then.

On the contrary. I update my opinion all the time, but every time I try the latest LLM it still sucks just as much. That is why it sounds like my opinion hasn't changed.

Forgeties79 8 hours ago||||
This is going to sound flippant, but truly, I imagine most people find the group that disagrees with their take annoying as well.
SpicyLemonZest 8 hours ago||||
I personally believe that LLMs have advanced immeasurably since ChatGPT came out, which was itself a world-historical event. I use AI daily in ways that enhance my productivity.

I say all of that to establish that I'm not a reflexive critic when I tell you, hallucinations are absolutely not exceptionally rare now. On multiple occasions this week (and it's only Tuesday!) I've had to disprove a LLM hallucination at work. They're just not as fun to talk about anymore, both because they're no longer new and because straightforward guardrails are effective at blocking the funny ones.

surgical_fire 8 hours ago||||
The detractors are a lot less numerous and certainly a lot less preachy than the ones on the hype train.

AI is alright. It's moderately useful, in certain contexts it speeds me up a lot, in other contexts not so much.

I also think that the economics of it make no sense and that it is, generally, a destructive technology. But it's not up to me to fix anything, I just try to keep on top of best practices while I need to pay bills.

The economics bit is not my problem though. If all AI companies go bust and AI services disappear I can 100% manage without it.

heavyset_go 8 hours ago|||
> The economics bit is not my problem though. If all AI companies go bust and AI services disappear I can 100% manage without it.

We're in "too big to fail" territory. If we had handled the recession we were heading towards (or were already in) years ago, instead of letting AI hype distract us and redirect massive amounts of investment, attention, and labor from elsewhere, we might be in a better position.

jarjoura 8 hours ago|||
On the flip side, if all this slop is floating around, and AI services do become untenable, think of all the immediate jobs that will open up to fix and maintain all the slop that's being thrown around right now. The millions of dollars of contracts spent to use these LLMs will be redirected back to hiring.

Though, my cynical take is that the investor class seemed dead-set on forcing us all to weave LLMs deep into our corporate infrastructures in a way that I'm not too sure it will ever "disappear" now. It'll cost just as much to detangle it as it was to adopt it.

teaearlgraycold 8 hours ago|||
> Hallucinations are exceptionally rare now

The way we talk about "hallucinations" is extremely unproductive. Everything an LLM outputs is a hallucination. Just like how human perception is hallucination. These days I pretty much only hear this word come up among people that are ignorant of how LLMs work or what they're used for.

I've been asked why LLMs hallucinate. As if omniscient computer programs are some achievable goal and we just need to hammer out a few kinks to make our current crop of english-speaking computers perfect.

01100011 9 hours ago|||
It's a Hail Mary dash towards AGI. If we get computers to think for us, we can solve a lot of our most pressing issues. If not, well, we've accelerated a lot of our worst problems (global warming, big tech, wealth inequality, the surveillance state, post-truth culture, etc.).
gastonf 8 hours ago|||
> If we get computers to think for us, we can solve a lot of our most pressing issues

If AGI is born from these efforts, it will likely be controlled by people who stand to lose the most from solving those issues. If an OpenAI-built AGI told Sam Altman that reducing wealth inequality requires taxing his own wealth, would he actually accept that? Would systems like that get even close to being in charge?

JoshTriplett 8 hours ago||||
> It's a hail mary dash towards AGI. If we get computers to think for us, we can solve a lot of our most pressing issues.

All but one of them simultaneously, in fact. The one being left out: wanting to keep existing.

xvector 8 hours ago||
What are you talking about? AGI is practically a prerequisite for transhumanism, and, well, not dying.

If you want to "keep existing" AGI happening is probably your only hope.

JoshTriplett 8 hours ago|||
Aligned AGI, yes. Unaligned AGI is a fast way to die.

If you want to keep existing, slow down, make sure AGI is aligned first, and go into cryo if necessary.

If you don't want to keep existing, that doesn't mean you get to risk the rest of us.

slopinthebag 8 hours ago|||
I highly doubt OP was talking about immortality
yladiz 8 hours ago||||
This sounds just like the idea that quantum computing will solve a lot of computational issues, which we know isn’t true. Why would AGI be any different?
idle_zealot 8 hours ago|||
> If we get computers to think for us, we can solve a lot of our most pressing issues

How, exactly, does more and better tech help with the fundamentally sociological issues of power distribution, wealth inequality, surveillance, etc? Are you operating on the assumption that a machine superintelligence will ignore the selfish orders of whoever makes it and immediately work to establish post-scarcity luxury space communism?

dvt 9 hours ago|||
> incredible amounts of ... energy

So tired of seeing this trope. Data center energy expenditure is like less than 1% of worldwide energy expenditure[1]. Have you heard of mining? Or agriculture? Or cars/airplanes/ships? It's just factually wrong and alarmist to spread the fake news that AI has any measurable effect on climate change.

[1] https://www.iea.org/reports/energy-and-ai/energy-supply-for-...

vrganj 9 hours ago|||
It's not just the absolute expenditure. It's the type of expenditure.

https://www.selc.org/news/resistance-against-elon-musks-xai-...

dvt 9 hours ago||
[flagged]
triceratops 8 hours ago|||
Those links are about air pollution, not carbon emissions. You're engaging in some political posturing of your own.
dvt 7 hours ago||
Why are you lying? From literally the first paragraph of the CFR article:

> China is the world’s largest source of carbon emissions, and the air quality of many of its major cities fails to meet international health standards.

triceratops 6 hours ago|||
But the focus of the article is on China's air quality.

As for carbon emissions: https://news.ycombinator.com/item?id=45108292

And even though China emits more carbon annually than the US today, the US and Europe are still ahead in cumulative emissions: https://ourworldindata.org/grapher/cumulative-co2-emissions-.... Cumulative emissions are the carbon that's already in our atmosphere and causing heating today. If you want to apportion "blame" for climate change, then the US is 25% responsible, Europe is 30% responsible, and China is 14% responsible as of 2023. And India is only 3.6% responsible.

China's high emissions today power a manufacturing industry that has made cheap decarbonization via solar and batteries a realistic prospect. That's a much better use of their current emissions compared to what the developed countries do with theirs.

defrost 6 hours ago|||
China has a large population and does the dirty work of manufacturing for much of the rest of the world.

China has done more for renewable energy solutions than any other country, and its per capita personal consumption patterns are lower than those of many G20 countries.

In a fair representation of the data, China's high total carbon dioxide output should be assigned to its source: the people across the globe with high personal consumption who have offshored their industry to China.

runarberg 8 hours ago|||
Interesting that you accuse your parent of political posturing at the end of your post, which indeed contains plenty political posturing.
runarberg 9 hours ago||||
1% of worldwide energy expenditure is massive, incredible amounts of energy in fact.
wrqvrwvq 6 hours ago|||
climate change is a hoax, but it's also disingenuous to pretend like ai delivers even an infinitesimal amount of the value of either agriculture or mining. Global population approaches zero without either of those things and if you deleted ai, no one would ever notice.
nisten 8 hours ago|||
In 2-3 decades 30% of the world population will be over 60 years old (~3 BILLION seniors). We don't have an economic model for it, nor does gen-z want to all be Personal Support Workers while paying rent. Nvidia only makes 6 million data center GPUs a year. Huawei makes 900k. We need 10 to 100x more to automate enough just to hold civilization together. Amazon built datacenters with near-zero water use, but that used 35% more electricity overall. So the problem can be solved, but we need to change out of the whole scarcity mentality if we're going to actually make the planet nice.
sarchertech 8 hours ago||
> 30%

Thats not accurate. The estimate is about 2 billion in 25 years.

https://www.who.int/news-room/fact-sheets/detail/ageing-and-...

We also have models for how that works at a country level because we have countries that have far exceeded that.

And the vast majority of 60 year olds are still self sufficient and economically productive.

Average global retirement age is around 65 and in most countries it’s creeping towards 70. And percent of world population over 70 looks much more manageable over the time span we can realistically model.

oulipo2 9 hours ago|||
[flagged]
emp17344 9 hours ago|||
No, it’s… fine. Useful in a limited capacity. Not the machine god, but not machine Satan either. The reality is kind of boring.
vablings 9 hours ago|||
This summarizes mostly how I feel about it. It's a tool like any other tool we have advanced since the beginning of human civilization

Machine tools replaced blacksmiths

CNC machines replaced manual machines.

Robots replaced CNC machine tenders

CAD replaced draftsmen (and also pushed that job onto engineers (grr))

P&P robots replaced human production lines.

The steam train replaced the horse and cart

This is a tale as old as time itself

datsci_est_2015 8 hours ago|||
What do LLMs replace, pray tell? It's more like moving from a screwdriver to a drill than replacing the carpenter altogether.

Also note that there are inventions that may “replace” some part of a process, but actually induce a greater demand for labor in that process. Take the cotton gin, for example, which exploded the number of slaves required to pick cotton.

cindyllm 8 hours ago||
[dead]
kerblang 9 hours ago|||
Those were deterministic rather than stochastic
bigstrat2003 7 hours ago|||
Exactly. People love to compare LLMs to power tools for carpenters and smiths. But if my miter saw had a 20% chance to produce cuts at a 45 degree angle when I have it set for 90, I would throw it out so fast I would leave Looney Tunes style tracks. A tool which only sometimes does its job is worse than no tool at all.
bitwize 8 hours ago|||
This isn't even our first AI hype cycle. That happened in the late '70s and '80s, when every lab and agency needed Lisp machines to teach computers how to identify Russian missiles and other targets. The "GOFAI" techniques did not live up to expectations, but they settled into niches where they were tremendously useful, and life went on. The same will happen with today's matmul-as-a-service AI.
steve_adams_86 9 hours ago||||
I don't see the threat from AI as capitalist at all, but more so feudalist. I mean, if things go in the direction of the worst-case scenario. It seems like the power potential transcends the problems of capitalism entirely.

But for now it's strictly hypothetical. Nothing I'm doing with AI matters enough to really make any statements about a broader scale in my field, let alone in entire economies.

heavyset_go 9 hours ago|||
Capitalism is just feudalism that works for the merchant class
plagiarist 8 hours ago|||
Capitalism is feudalism but with raw generational wealth instead of generational wealth with divine right characteristics.
steve_adams_86 8 hours ago||
I see some overlap, but I think it's more complex than that. If we conflate the two so easily they lose meaning. Certainly, some people have that experience under capitalism. I think there are systemic failures which lead to life experiences that are probably not all that different from some peoples' experiences in feudal society, both at the top and bottom of the hierarchy.

The more I think about it though, I'm not sure feudalism is the right analogy. Serfs had a purpose and were depended upon. In a society where AGI is in the hands of a few, it seems reasonable to believe that there wouldn't be a need for serfs at all. Labour would become utterly irrelevant. You'd have no lord to be bound to. You'd be unnecessary.

I imagine the transition there would be some brutal form of capitalism, but the destination would not be feudalism. I don't think we have a historical analog for that hypothetical destination.

vrganj 9 hours ago||||
If we want a full-on Marxist analysis: it is an attempt by the capitalist class to finally rid itself of its dependence on labor, with labor's pesky demands like sick leave and fair wages.

Through that analysis, one can also explain why the managerial caste is so obsessed with it: it is nothing less than an ideological device. One can also see this in the actual deification happening in some VC circles and their belief in AGI as a kind of capitalist savior figure.

I see the point and don't disagree with it, but I find that framing is not the most compelling to the audience here...

mattgreenrocks 8 hours ago|||
Yeah. Oftentimes get crickets here when I talk along those lines. Can't tell if apathy, learned helplessness, or obliviousness. Regardless, devs seem like an extremely docile labor group based on how they react to this and other economic pressures.
plagiarist 8 hours ago||
We will all be shocked at the rug pull after it has finished training on all our high-quality feedback for code it has written.
Human-Cabbage 8 hours ago|||
This is correct at the firm level and breaks down at the aggregate level, which is where it gets interesting.

At the firm level, automating away labor costs is obviously rational. But capital in aggregate can't actually rid itself of labor, since labor is where surplus value comes from. A fully automated economy would be insanely productive and generate basically no profit. So the capitalist class pursuing this logic collectively is, without knowing it, pursuing the dissolution of the system that makes them the capitalist class.

You don't have to buy any of that to notice the more immediate mechanism though: AI doesn't need to actually replace workers to discipline them. The credible threat of replacement is enough to suppress wages, justify restructuring, and extract more from whoever's left. That's already happening and requires no AGI.

brookst 9 hours ago|||
AI is more likely to destroy capitalism than it is to increase inequality.

Ten years ago, what would it have cost you to build a Jira clone / competitor? Today one person can do it in a week, at least for the core tech.

In a year, only the very largest companies will pay for that kind of infrastructure tooling.

We’ve just started seeing the democratization of software and the capitalists are terrified.

plagiarist 8 hours ago||
I just don't know how to explain that you won't be destroying capitalism with AI. You have a subscription.
paulsutter 9 hours ago||
How did HN become this kind of website?
JoshTriplett 8 hours ago|||
Because AI is attacking, plagiarizing, competing with, and destroying the most common industry of people here on HN, so suddenly it mattered more to people who were previously unaffected.

Some people have been concerned with this kind of politics all along. Some people are realizing they should be now, because of AI. And that's okay; both groups can still work together.

emp17344 8 hours ago||||
The parent comment is a pretty measured take. What’s your problem with it?
iwontberude 9 hours ago|||
I went to a conference and people were suggesting nationalizing AI companies so it's basically everywhere.
tartoran 8 hours ago||
Same way we turned internet into a public utility? Wait, did we do that?
agentictrustkit 4 hours ago|
Initially I really had a bad taste in my mouth. It forced me to close a business (video editing). Recently it's gone in a different direction, so I would say the "interest" part got a resurgence for me. I'm seeing all of these tools, people, and systems promise "can do this..." and "can do that...", but because I have a background in trust law and trust creation, I've looked at things differently.

I think the "can do" part gets boring but now I'm paralleling this to trust relationships and fiduciary responsiblities. What I mean is that we can not only instruct but then put a framework around an agent much like we do a trustee where they are compelled to act in the best interests of the beneficiaries (the human that created them in this case).

Anyway it's got me thinking in a different way.

remich 4 hours ago|
Fiduciary duty but for AI, interesting. I think there's some potential there, though of course you'll end up confronting the classic sci-fi trope of "what if the system judges what's best for the user in a way that is unexpected or harmful?" But solve that with strong guardrails and/or scoping and you might have something.
More comments...