Posted by mooreds 6 days ago

AI might yet follow the path of previous technological revolutions (www.economist.com)
182 points | 295 comments
djoldman 6 days ago|
https://archive.ph/NOg8I
wvbdmp 6 days ago||
Okay, so AI isn’t exceptional, but I’m also not exceptional. I run on the same tech base as any old chimpanzee, but at one point our differences in degree turned into one of us remaining “normal” and the other burning the entire planet.

Whether the particular current AI tech is it or not, I have yet to be convinced that the singularity is practically impossible, and as long as things develop in the opposite direction, I get increasingly unnerved.

ehnto 5 days ago||
I don't think LLMs are building towards an AI singularity at least.

I also wonder if we can even power an AI singularity. I guess it depends on what the technology is. But producing and running frontier LLMs is already taking more energy than is really reasonable (in my opinion). LLMs are this really weird blend of stunningly powerful, yet with a very clear inadequacy in terms of sentient behaviour.

I think the easiest way to demonstrate that is this: it did not take us consuming the entirety of human textual knowledge to form a much stronger world model.

michaelhoney 5 days ago|||
True, but our "training" has been a billion years of evolution and multimodal input every waking moment of our lives. We come heavily optimised for reality.
ACCount37 5 days ago|||
I see no reason why not.

There was a lot of "LLMs are fundamentally incapable of X" going around - where "X" is something that LLMs are promptly demonstrated to be at least somewhat capable of, after a few tweaks or some specialized training.

This pattern has repeated enough times to make me highly skeptical of any such claims.

It's true that LLMs have this jagged capability profile - less so than any AI before them, but much more so than humans. But that just sets up a capability overhang. Because if AI gets to "as good as humans" at its low points, the advantage at its high points is going to be crushing.

measurablefunc 5 days ago|||
If you use non-constructive reasoning¹ then you can argue for basically any outcome & even convince yourself that it is inevitable. The basic example is as follows: there is no scientific or physical principle that can prevent the birth of someone much worse than Hitler, & therefore, if people keep having children, one of those children will inevitably be someone who will cause unimaginable death & destruction. My recommendation is to avoid non-constructive inevitability arguments that use our current ignorant state of understanding of physical laws as the main premise, b/c it's possible to reach any conclusion from that premise & convince yourself that the conclusion is inevitable.

¹https://gemini.google.com/share/d9b505fef250

wvbdmp 5 days ago|||
I agree that the mere theoretical possibility isn’t sufficient for the argument, but you’re missing the much less refutable component: that the inevitability is actively driven by universal incentives of competition.

But as I alluded to earlier, we’re working towards plenty of other collapse scenarios, so who knows which we’ll realize first…

measurablefunc 5 days ago|||
My current guess is ecological collapse & increasing frequency of system shocks & disasters. Basically Blade Runner 2049 + Children of Men type of outcome.
marcus_holmes 5 days ago|||
None of them.

Humans have always believed that we are headed for imminent total disaster. In my youth it was WW3 and the impending nuclear armageddon that was inevitable. Or not, as it turned out. I hear the same language being used now about a whole bunch of other things. Including, of course, the evangelist Rapture that is going to happen any day now, but never does.

You can see the same thing at work in discussions about AI - there's passion in the voices of people predicting that AI will destroy humanity. Something in our makeup revels in the thought that we'll be the last generation of humans, that the future is gone and everything will come to a crashing stop.

This is human psychology at work.

dcanelhas 5 days ago|||
If you look at timescales large enough you will find that plenty of extinction-level events actually do happen (the Anthropocene is right here).

We are living in a historically exceptional time of geological, environmental, and ecological stability. I think that saying that nothing ever happens is like standing downrange of a stream of projectiles and counting all the near misses as evidence for your future safety. It's a bold call to inaction.

marcus_holmes 4 days ago||
Obviously this is all true. There was an event in the 6th century that meant we had no summer and all crops failed for 5 years; we all almost starved then. And that was only the most recent of these types of events.

It's not that it can't happen. It obviously can. I'm more talking about the human belief that it will happen, and in our lifetime. It probably won't.

potsandpans 5 days ago|||
"nothing ever happens."

The observation is: humans tend to think that annihilation is inevitable; it hasn't happened yet, so therefore it never will.

In fact, _anything_ could happen. Past performance does not guarantee future results.

If you need cognitive behavioral therapy, fine.

But to casually cite nuclear holocaust as something people irrationally believed in as a possibility is dishonest. That was (and still is) a real possible outcome.

What's somewhat funny here is: if you're wrong, it doesn't matter. But that isn't the same as being right.

> Something in our makeup revels in the thought that we'll be the last generation of humans, that the future is gone and everything will come to a crashing stop

And yet there _will_ (eventually) be one generation that is right.

chrisco255 5 days ago||
> And yet there _will_ (eventually) be one generation that is right.

Most likely outcome would be that humans evolve into something altogether different rather than go extinct.

toss1 5 days ago||
The Fermi Paradox might want to have a word here...

Particularly considering the law of large numbers in play, where incalculably many chances have so far shown only one sign of technologically-capable life (ours), and zero examples of a tech species evolving into something else or even passing the Great Filter.

chrisco255 5 days ago||
The Fermi Paradox overestimates the likelihood of intelligent life outside of earth. We haven't even found hard evidence of life anywhere outside of our planet. There's not even a verifiably hospitable planet for water-based lifeforms anywhere within dozens of lightyears from earth. Even if a hospitable planet exists within a range we can one day get to, unless it has the same volcanic properties and makeup as earth, it's most probable that life itself never even developed there.

Even where life may have developed, it's incredibly unlikely that sentient intelligence developed. There was never any guarantee that sentience would develop on Earth and about a million unlikely events had to converge in order for that to occur. It's not a natural consequence of evolution, it's an accident of Earth's unique history and several near-extinction level events and drastic climate changes had to occur to make it possible.

The "law of large numbers" is nothing when the odds of sentient intelligence developing are so close to zero. If such a thing occurred or occurs in the future at some location other than Earth, it's reasonably likely that it's outside of our own galaxy or so far from us that we will never meet them. The speed of light is a hell of a thing.

jackphilson 5 days ago||||
Irrelevant but I like this pattern of using Gemini (or AI outputs in general) as sources. Please continue to do so and I encourage any readers to also adopt this pattern. I will also try to implement this pattern.
measurablefunc 5 days ago||
The sources are in the report. Gemini provides actual references for all the claims made. You'd know that if you actually looked but lack of intellectual rigor is expected when people are afraid of actually scrutinizing their beliefs of non-constructive inevitability.
jrave 5 days ago||
Maybe you misread the post you're answering to here, or are you suspecting sarcasm? The poster commended your usage of the footnote with the Gemini convo, as far as I can tell.
measurablefunc 5 days ago||
Laid it on a little too thick to be sincere & more generally I don't comment on internet forums to be complimented on my response style. Address the substance of my arguments or just save yourself the keystrokes.
jackphilson 5 days ago|||
It was a compliment and I was hoping to nudge the behavior of other HN commenters.
andrepd 5 days ago|||
If you really can't see the irony of using AI to make up your thoughts on AI then perhaps there's keystrokes to be saved on your end as well.
measurablefunc 5 days ago||
I recommend you address the content & substance of the argument in any further responses to my posts or if you can't do that then figure out a more productive way to spend your time. I'm sure there is lots of work to be done in automated theorem proving.
jjk166 5 days ago||||
I'm pretty sure a lot of work has gone into making institutions resistant to a potential future super-Hitler. Whether those efforts will be effective or not, it is a very real concern, and it would be absurd to ignore it on the grounds of "there is probably some limit to tyranny we're not yet aware of which is not too far beyond what we've previously experienced." I would argue a lot more effort should have gone into preventing the original Hitler, whose rise to power was repeatedly met with the refrain "How much worse can it get?"
imtringued 5 days ago|||
This isn't just an AI thing. There are a lot of non-constructive ideologies, like communism, where simply getting rid of "oppressors" will magically unleash the promised utopia. When you give these people a constructive way to accomplish their goals, they will refuse, call you names and show their true colors. Their criticism is inherently abstract and can never have a concrete form, which also makes it untouchable by outside criticism.
tempodox 5 days ago|||
We’ll manage to make our own survival on this planet less probable, even without the help of “AI”.
chrisco255 5 days ago|||
I don't know what reality you're living in, but there are more people on this planet than ever in history and most of them are quite well fed.
jebarker 5 days ago||
And they have nuclear weapons and technology that may be destabilizing the ecosystem that supports their life.

It’s wrong to commit to either end of this argument, we don’t know how it’ll play out, but the potential for humans drastically reducing our own numbers is very much still real.

johnnienaked 5 days ago|||
The cult of efficiency will end in the only perfectly efficient world--one without us.
Bendy 5 days ago||
I’m fed up with hearing that nonsense; no, it won’t. Efficiency is a human-defined measure of observed outcomes versus desired outcomes. This is subject to change as much as we are. If we do optimize ourselves to death, it’ll be because it’s what we ultimately want to happen. That may be true for some people but certainly not everyone.
johnnienaked 5 days ago|||
The equilibrium of ecology, without human interference, could be considered perfect efficiency. It's only when we get in there with our theories about mass production and consumption that we muss it up. We seem to forget that our well-being isn't self-determined, but dependent on the environment. But, like George Carlin said, "the Earth isn't going anywhere...WE ARE!"

It's quite telling how much faith you put in humanity though, you sound fully bought in.

scotty79 5 days ago|||
I think the concern is that humans have very poor track record of defining efficiency let alone implementing solutions that serve it.
mallowdram 5 days ago|||
The singularity will involve quite a bit more complexity than binary counting, arbitrary words and images, and prediction. These are mirages that will end up wiping out both Wall Street and our ecology.
Mallowram 5 days ago|||
[dead]
aaron695 5 days ago||
[dead]
mmargenot 5 days ago||
At least within tech, there seem to have been explosive changes and development of new products. While many of these fail, things like agents and other approaches for handling foundation models are only expanding in use cases. Agents themselves are hardly a year old as part of common discourse on AI, though technologists have been building POCs for longer. I've been very impressed with the wave of tools along the lines of Claude Code and friends.

Maybe this will end up relegated to a single field, but from where I'm standing (from within ML / AI), the way in which greenfield projects develop now is fundamentally different as a result of these foundation models. Even if development on these models froze today, MLEs would still likely be prompted to start with feeding something to an LLM, just because it's lightning fast to stand up.

andy99 5 days ago||
It's probably cliche but I think it's both overhyped and underhyped, and for the same reason. The hype comes from "leadership" types that don't understand what LLMs actually do, and so imagine all sorts of nonsense (replacing vast swaths of jobs or autonomously writing code) but don't understand how valuable a productivity enhancer and automation tool it can be. Eventually hype and reality will converge, but unlike e.g. blockchain or even some of the less bullshit "big data" and similar trends, there's no doubt that access to an LLM is a clear productivity enhancer for many jobs.
mallowdram 5 days ago|||
AI was a colossal mistake. A lazy primate's total failure of imagination. It conflated the "conduit metaphor paradox" from animal behavior with "the illusion of prediction/error prediction/error minimization" from spatiotemporal dynamical neuroscience with complete ignorance of the "arbitrary/specific" dichotomy in signaling from coordination dynamics. AI is a short cut to nowhere. It's an abrogation of responsibility in progress of signaling that required we evolve our lax signals that instead doubles down on them. CS destroys society as a way of pretend efficiency to extract value from signals. It's deeply inferior thinking.
joquarky 4 days ago||
Let me use AI to translate this into plain English:

> AI was a huge mistake. It shows a lack of imagination and confuses ideas from different sciences. Instead of helping us improve how we communicate, it reinforces our weakest habits. Computer science pretends to make things more efficient, but really it just extracts value in shallow ways. This is poor, second-rate thinking.

mallowdram 4 days ago||
It lacks references, it's garbage, advertising, cliff-notes for apes uninterested, devolving, asleep, bored, and needing to be told what to think without knowing why or how. The inertia in CS, and the inertia and entropy CS unleashed on the gen public will take years to cleanse from the system before we get back to imaginative progress and invention.
kragen 5 days ago|||
What new non-AI products do you think wouldn't have existed without current AI? Because I don't see the "explosive changes and development of new products" you'd expect if things like Claude Code were a major advance.
spicyusername 5 days ago|||
At the moment, LLM products are like Microsoft Office, they primarily serve as a tool to help solve other problems more efficiently. They do not themselves solve problems directly.

Nobody would ask, "What new Office-based products have been created lately?", but that doesn't mean that Office products aren't a permanent, and critical, foundation of all white collar work. I suspect it will be the same with LLMs as they mature, they will become tightly integrated into certain categories of work and remain forever.

Whether the current pricing models or stock market valuations will survive the transition to boring technology is another question.

kragen 5 days ago||
Where are the other problems that are being solved more efficiently? If there's an "explosive change" in that, we should be able to see some shrapnel.

Let's take one component of Microsoft Office. Microsoft Word is seen as a tool for people to write nicely formatted documents, such as books. Reports produced with Microsoft Word are easy to find, and I've even read books written in it. Comparing reports written before the advent of WYSIWYG word processing software like Microsoft Word with reports written afterwards, the difference is easy to see; average typewriter formatting is really abysmal compared to average Microsoft Word formatting, even if the latter doesn't rise to the level of a properly typeset book or LaTeX. It's easy to point at things in our world that wouldn't exist without WYSIWYG word processors, and that's been the case since Bravo.

LLMs are seen as, among other things, a tool for people to write software with.

Where is the software that wouldn't exist without LLMs? If we can't point to it, maybe they don't actually work for that yet. The claim I'm questioning is that, "within tech, there seem to have been explosive changes and development of new products."

What new products?

I do see explosive changes and development of new spam, new YouTube videos, new memes (especially in Italian), but those aren't "within tech" as I understand the term.

mmargenot 5 days ago|||
I do agree that there's a lot of garbage and navel-gazing that is directly downstream from the creation of LLMs. Because it's easier to task and evaluate an LLM [or network of LLMs] with generation of code, most of these products end up directly related to the production of software. The professional production of software has definitely changed, but sticky impact outside of the tech sector is still brewing.

I think there is a lot of potential, outside of the direct generation of software but still maybe software-adjacent, for products that make use of AI agents. It's hard to "generate" real world impact or expertise in an AI system, but if you can encapsulate that into a function that an AI can use, there's a lot of room to run. It's hard to get the feedback loop to verify this and most of these early products will likely die out, but as I mentioned, agents are still new on the timeline.

As an example of something that I mean that is software-adjacent, have a look at Square AI, specifically the "ask anything" parts: https://squareup.com/us/en/ai

I worked on this and I think that it's genuinely a good product. An arbitrary seller on the Square platform _can_ do aggregation, dashboarding, and analytics for their business, but that takes time and energy, and if you're running a business it can be hard to find that time. Putting an agent system in the backend that has access to your data, can aggregate and build modular plotting widgets for you, and can execute whenever you ask it a question is something that objectively saves a seller's time. You could have made such a thing without modern LLMs, but it would be substantially more expensive in terms of engineering research, time, and effort to put together a POC and bring it to production, making it a non-starter before [let's say] two years ago.

AI here is fundamental to the product functioning, but the outcome is a human being saving time while making decisions about their business. It is a useful product that uses AI as a means to a productive end, which, to me, should be the goal of such technologies.

kragen 5 days ago||
Yes, but I'm asking about new non-AI products. I agree that lots of people are integrating AI into products, which makes products that wouldn't have existed otherwise. But if the answer to "where's the explosive changes and development of new products?" is 100% composed of integrating AI into their products, that means current AI isn't actually helping people write software, much. It's just giving them more software to write.

That doesn't entail that current AI is useless! Or even non-revolutionary! But it's a different kind of software development revolution than what I thought you were claiming. You seem to be saying that the relationship of AI to software development is similar to the relationship of the Japanese language, or raytracing, or early microcomputers to software development. And I thought you were saying that the relationship of AI to software development was similar to the relationship of compilers, or open source, or interactive development environments to software development.

It also doesn't entail that six months from now AI will still be only that revolutionary.

mmargenot 5 days ago||
For better or for worse, AI enables more, faster software development. A lot of that is garbage, but quantity has a quality all its own.

If you look at, e.g., this clearly vibe-coded app about vibe coding [https://www.viberank.app/], ~280 people generated 444.8B tokens within the block of time where people were paying attention to it. If 1000 tokens is 100 lines of code, that's ~44.5B lines of code that would not exist otherwise. Maybe those lines of code are new products, maybe they're not, maybe those people would have written a bunch of code otherwise, maybe not. I'd call that an explosion either way.
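
Spelled out, assuming the 100-lines-per-1,000-tokens ratio above (a quick Python check):

    >>> 444.8e9 * 100 / 1000   # tokens × assumed lines per token
    44480000000.0              # ≈ 44.5B lines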

dgfitz 4 days ago||
> For better or for worse, AI enables more, faster software development.

So, AI is to software what muscle cars were to air emissions quality?

A whole lot of useless, unabated toxic garbage?

spicyusername 5 days ago|||

    Where is the software that wouldn't exist without LLMs?
Where are the books that wouldn't exist without Microsoft Word?
kragen 5 days ago||
I've definitely read a lot of books that wouldn't exist without WYSIWYG word processors, although MacWrite would have done just as well. Heck, NaNoWriMo probably wouldn't.

I've been reading Darwen & Date lately, and they seem to have done the typesetting for the whole damn book in Word—which suggests they couldn't get anyone else to do it for them and didn't know how to do a good job of it. But they almost certainly couldn't have gotten a major publisher to publish it as a mimeographed typewriter manuscript.

Your turn.

spicyusername 5 days ago||
My point is that these are accelerating technologies.

    maybe they don't actually work for that yet.
So you're not going to see code that wouldn't exist without LLMs (or books that wouldn't exist without Word), you're going to see more code (or more books).

There is no direct way to track "written code" or "people who learned more about their hobbies" or "teachers who saved time lesson planning", etc.

kragen 5 days ago||
You must have failed to notice that you were replying to a comment of mine where I gave a specific example of a book that I think wouldn't exist without Word (or similar WYSIWYG word processors), because you're asserting that I'm never going to see what I am telling you I am currently seeing.

Generally, when there's a new tool that actually opens up explosive changes and development of new products, at least some of the people doing the exploding will tell you about it, even if there's no direct way to track it, such as Darwen & Date's substandard typography. It's easy to find musicians who enthuse about the new possibilities opened up by digital audio workstations, and who are eager to show you the things they created with them. Similarly for video editors who enthused about the Video Toaster, for programmers who enthused about the 80386, and electrical engineers who enthused about FPGAs. There was an entire demo scene around the Amiga and another entire demo scene around the 80386.

Do people writing code with AI today have anything comparable? Something they can point to and say, "Look! I wrote this software because AI made it possible!"?

It's easy to answer that question for, for example, visual art made with AI.

I'm not sure what you mean about "accelerating technologies". WYSIWYG word processors today are about the same as Bravo in 01979. HTML is similar but both better and worse. AI may have a hard takeoff any day that leaves us without a planet, who knows, but I don't think that's something it has in common with Microsoft Word.

spicyusername 4 days ago||
I noticed.

Books written with WYSIWYG could have been written by hand just fine, it would have just been more painful and taken longer. What WYSIWYG unlocks is more books, not new kinds of books. And sure, you might argue that more books is new books, which is fair.

So it is with LLMs. We're going to get more code, more lesson plans, etc. Accelerating.

    Do people writing code with AI today have anything comparable? 
Like every fourth post on here is someone talking about their workflow with LLMs, so... I think they do?
apwell23 5 days ago|||
> What new non-AI products do you think wouldn't have existed without current AI?

AI slop is a product

kragen 5 days ago||
You mean, like, SEO? It's a product in the same sense that perchloroethylene-contaminated groundwater is a product of dry-cleaning plants.
boringg 5 days ago|||
I think the payment model is still not there, which is making everything blurry. Until we figure out how much people have to pay to use it and all the services built on its back, it will remain challenging to figure out the full value prop. That, and a lot of companies are going to go belly up when they have to start paying the real cost instead of growth-acquisition-phase prices.
jebarker 5 days ago||
I don’t think a payment model can be figured out until the utility of the technology justifies the true cost of training and running the models. As you say, right now it’s all subsidized based on the belief it will become drastically more useful. If that happens I think the payment model becomes simple.
mmargenot 5 days ago||
There's enough solid FOSS tooling out there between vLLM and Qwen3 Apache 2.0 models that you can get a pretty good assistant system running locally. That's still in the software creation domain rather than worldwide impact, but that's valuable and useful right now.
mallowdram 5 days ago||
The immaterial units are arbitrary, so 'agents' are themselves arbitrary, ie illusory. They will not arrive except as being wet nursed infinitely. The developers neglected to notice the fatal flaw, there are specific targets but automating the arbitrary never reaches them, never. It's an egregious monumental fly in the ointment.
ranger207 6 days ago||
AI being normal technology would be the expected outcome, and it would be nice if it just hurried up and happened so I could stop seeing so much spam about AI actually being something much greater than normal technology.
jcranmer 6 days ago||
Well, for starters, it would make The Economist's recent article on "What if AI made the world's economic growth explode?" [1] look like the product of overly credulous suckers for AI hype.

[1] https://www.economist.com/briefing/2025/07/24/what-if-ai-mad...

jaredklewis 6 days ago||
This comment reminds me of the forever present HN comments that take a form like "HN is so hypocritical. In this thread commenters are saying they love X, when just last week in a thread about Y, commenters were saying that they hated X."
kamikazeturtles 5 days ago|||
All articles published by the Economist are reviewed by its editorial team.

Also, the Economist publishes all articles anonymously so the individual author isn't known. As far as I know, they do this so we take all articles and opinions as the perspective of the Economist publication itself.

janalsncm 5 days ago|||
Even if articles are reviewed by their editors (which I assume is true of all serious publications) they are probably reviewing for some level of quality and relevance rather than cross-article consistency. If there are interesting arguments for and against a thing it’s worth hearing both imo.
m_fayer 5 days ago||||
I’m pretty sure the “what if” in that article was meant in earnest. That article was playing out a scenario, in a nod to the ai maximalists. I don’t think it was making any sort of prediction or actually agreeing with those maximalists.
jcranmer 5 days ago|||
It was the central article of the issue, the one that dictated the headline and image on the cover for the week, and came with a small coterie of other articles discussing the repercussions of such an AI.

If it was disagreeing with AI maximalists, it was primarily in terms of the timeline, not in terms of the outcomes or inevitability of the scenario.

AnIrishDuck 5 days ago||
This doesn't seem right to me. From the article I believe you are referencing ("What if AI made the world’s economic growth explode?"):

> If investors thought all this was likely, asset prices would already be shifting accordingly. Yet, despite the sky-high valuations of tech firms, markets are very far from pricing in explosive growth. “Markets are not forecasting it with high probability,” says Basil Halperin of Stanford, one of Mr Chow’s co-authors. A draft paper released on July 15th by Isaiah Andrews and Maryam Farboodi of MIT finds that bond yields have on average declined around the release of new AI models by the likes of OpenAI and DeepSeek, rather than rising.

It absolutely (beyond being clearly titled "what if") presented real counterarguments to its core premise.

There are plenty of other scenarios that they have explored since then, including the totally contrary "What if the AI stock market blows up?" article.

This is pretty typical for them IME. They definitely have a bias, but they do try to explore multiple sides of the same idea in earnest.

naasking 5 days ago||
I think any improvements to productivity AI brings will also create uncertainty and disruption to employment, and maybe the latter is greater than the former, and investors see that.
tootie 5 days ago|||
And a tacit admission that absolutely nobody knows for sure what will happen so maybe let's just game out a few scenarios and be prepared.
shoo 5 days ago|||
re: Why are The Economist’s writers anonymous?, Frqy3 had a good take on this back in 2017:

> From an economic viewpoint, this also means that the brand value of the articles remains with the masthead rather than the individual authors. This commodifies the authors and makes them more fungible.

> Being The Economist, I am sure they are aware of this.

https://news.ycombinator.com/item?id=14016517

cgh 5 days ago||
Quite a cynical perspective. The Economist’s writers have been anonymous since the magazine’s founding in 1843. In the 19th century, anonymity was normal in publications like this. Signing one’s name to articles was seen as pretentious.
watwut 5 days ago|||
I will bite here. It is a completely valid comment. It points to the fact that the seeming consensus in this thread cannot be taken as a sign that there is actually a consensus.

People on HN do not engage in discussion with differing opinions on certain topics and prefer to avoid disagreement on those topics.

some_guy_nobel 5 days ago|||
Well, it's better that the same publication publishes views contradicting its past than that it never changes its views with new info.
Kurtz79 5 days ago|||
I don’t see anything inherently wrong in a news site reporting different views on the same topic.

I wish more would do that and let me make up my own mind, instead of pursuing a specific editorial line, cherry-picking which news to comment on and how to spin it, which seems to be the case for most (I’m talking in general terms).

gizajob 6 days ago|||
If you back every horse in a race, you win every time.
svara 6 days ago|||
I'm perfectly happy reading different, well-argued cases in a magazine even if they contradict each other.
Mallowram 5 days ago|||
[dead]
gyomu 6 days ago||
Why would you expect opinion pieces from different people to agree with one another?

I’m curious about exploring the topics “What if the war in Ukraine ends in the next 12 months” just as much as “What if the war in Ukraine keeps going for the next 10 years”, doesn’t mean I expect both to happen.

buu700 5 days ago||
To add to your point, both article titles are questions that start with "What if". The same person could have written both and there would be no contradiction.
1vuio0pswjnm7 5 days ago||
"So a paper published earlier this year by Arvind Narayanan and Sayash Kapoor, two computer scientists at Princeton University, is notable for the unfashionably sober manner in which it treats AI: as "normal technology"."

The paper:

https://thedocs.worldbank.org/en/doc/d6e33a074ac9269e4511e5d...

"Differences about the future of AI are often partly rooted in differing interpretations of evidence about the present. For example, we strongly disagree with the characterization of generative AI adoption as rapid (which reinforces our assumption about the similarity of AI diffusion to past technologies)."

redwood 6 days ago||
I think the "calculator for words" analogy is a good one. It's imperfect since words are inherently ambiguous but then again so is certain forms of digital numbers (floating point anyone?).

Through this lens it's way more normal

sfpotter 6 days ago||
Floating point numbers aren't ambiguous in the least. They behave by perfectly deterministic and reliable rules and follow a careful specification.
solid_fuel 5 days ago|||
I understand what you're saying, but at the same time floating point numbers can only represent a fixed amount of precision. You can't, for example, represent pi with a floating point number. Or 1/3. And certain operations on floating point numbers will always result in some precision being lost.

They are deterministic, and they follow clear rules, but they can't represent every number with full precision. I think that's a pretty good analogy for LLMs - they can't always represent or manipulate ideas with the same precision that a human can.
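
A minimal Python illustration of those precision limits:

    >>> 0.1 + 0.2              # neither 0.1 nor 0.2 has an exact binary representation
    0.30000000000000004
    >>> 0.1 + 0.2 == 0.3
    False
    >>> import math
    >>> math.pi                # only ~16 significant decimal digits survive
    3.141592653589793
    >>> 1 / 3                  # likewise truncated
    0.3333333333333333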

sfpotter 5 days ago||
It's no more or less a good analogy than any other numerical or computational algorithm.

They're a fixed precision format. That doesn't mean they're ambiguous. They can be used ambiguously, but it isn't inevitable. Tools like interval arithmetic can mitigate this to a considerable extent.

Representing a number like pi to arbitrary precision isn't the purpose of a fixed precision format like IEEE754. It can be used to represent, say, 16 digits of pi, which is used to great effect in something like a discrete Fourier transform or many other scientific computations.
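
A minimal sketch of the interval-arithmetic idea in Python (a toy illustration, not a real library: proper implementations use directed rounding, crudely approximated here by widening each bound one ulp outward with math.nextafter):

    import math

    def iadd(x, y):
        # Add two intervals, then round the bounds outward so the true
        # real-number result is guaranteed to remain enclosed.
        lo = math.nextafter(x[0] + y[0], -math.inf)
        hi = math.nextafter(x[1] + y[1], math.inf)
        return (lo, hi)

    tenth = (math.nextafter(0.1, 0.0), math.nextafter(0.1, 1.0))  # brackets the real number 0.1
    acc = (0.0, 0.0)
    for _ in range(10):
        acc = iadd(acc, tenth)
    print(acc)  # a tight enclosure of exactly 1.0, even though summing the
                # float 0.1 ten times yields 0.9999999999999999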

tintor 5 days ago||||
In theory, yes.

In practice, the outcome of a floating point computation depends on compiler optimizations, the order of operations, and the rounding mode used.
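
The order-of-operations point in one snippet (same values, different association; a standard Python example):

    >>> a, b, c = 1e16, -1e16, 1.0
    >>> (a + b) + c    # the huge terms cancel first, so the 1.0 survives
    1.0
    >>> a + (b + c)    # the 1.0 is absorbed into -1e16 before the cancellation
    0.0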

sfpotter 5 days ago||
None of this is contradictory.

1. Compiler optimizations can be disabled. If a compiler optimization violates IEEE754 and there is no way to disable it, this is a compiler bug and is understood as such.

2. This is as advertised and follows from IEEE754. Floating point operations aren't associative. You must be aware of the way they work in order to use them productively: this means understanding their limitations.

3. Again, as advertised. The rounding mode is part of the spec and can be controlled. Understand it, use it.
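
Pure Python doesn't expose the binary-float rounding mode directly (in C that's fesetround from <fenv.h>), but the standard decimal module makes the same controllable-rounding idea easy to see:

    >>> from decimal import Decimal, getcontext, ROUND_FLOOR, ROUND_CEILING
    >>> getcontext().prec = 4
    >>> getcontext().rounding = ROUND_FLOOR
    >>> Decimal(1) / Decimal(3)
    Decimal('0.3333')
    >>> getcontext().rounding = ROUND_CEILING
    >>> Decimal(1) / Decimal(3)
    Decimal('0.3334')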

GMoromisato 6 days ago|||
So are LLMs. Under the covers they are just deterministic matmul.
sfpotter 5 days ago|||
The purpose of floating point numbers is to provide a reliable, accurate, and precise implementation of fixed-precision arithmetic that is useful for scientific calculations and which has a large dynamic range, which is also capable of handling exceptional states (1/0, 0/0, overflow/underflow, etc) in a logical and predictable manner. In this sense, IEEE754 provides a careful and precise specification which has been implemented consistently on virtually every personal computer in use today.

LLMs are machine learning models used to encode and decode text or other text-like data such that it is possible to efficiently do statistical estimation of long sequences of tokens in response to queries or other input. It is obvious that the behavior of LLMs is neither consistent nor standardized (and it's unclear whether this is even desirable---in the case of floating-point arithmetic, it certainly is). Because of the statistical nature of machine learning in general, it's also unclear to what extent any sort of guarantee could be made on the likelihoods of certain responses. So I am not sure it is possible to standardize and specify them along the lines of IEEE754.

The fact that a forward pass on a neural network is "just deterministic matmul" is not really relevant.

Chinjut 5 days ago||||
Ordinary floating point calculations allow for tractable reasoning about their behavior, reliable hard predictions of their behavior. At the scale used in LLMs, this is not possible; a Pachinko machine may be deterministic in theory, but not in practice. Clearly in practice, it is very difficult to reliably predict or give hard guarantees about the behavioral properties of LLMs.
Workaccount2 5 days ago||||
Everything is either deterministic, random, or some combination.

We only have two states of causality, so calling something "just" deterministic doesn't mean much, especially when "just random" would be even worse.

For the record, LLMs in the normal state use both.

Mallowram 5 days ago||
[dead]
mhh__ 6 days ago|||
And at scale you even have a "sampling" of sorts (even if the distribution is very narrow unless you've done something truly unfortunate in your FP code) via scheduling and parallelism.
jamesjyu 5 days ago||
I think a better term is "word synthesizer"
heresie-dabord 5 days ago|||
What do you think of "plausibility hallucinator"? ^_^
lucideng 5 days ago||
This gave me a good chuckle.
redwood 5 days ago|||
That focuses more on the outputs than the inputs tho. Close but needs something
pessimizer 6 days ago||
I've come to the conclusion that it is a normal, extremely useful, dramatic improvement over web 1.0. It's going to

1) obsolete search engines powered by marketing and SEO, and give us paid search engines whose selling points are how comprehensive they are, how predictably their queries work (I miss the "grep for the web" they were back when they were useful), and how comprehensive their information sources are.

2) Eliminate the need to call somebody in the Philippines awake in the middle of the night, just for them to read you a script telling you how they can't help you fix the thing they sold you.

3) Allow people to carry local compressed copies of all written knowledge, with 90% fidelity, but with references and access to those paid search engines.

And my favorite part, which is just a footnote I guess, is that everybody can move to a Linux desktop now. The chatbots will tell you how to fix your shit when it breaks, and in a pedagogical way that will gradually give you more control and knowledge of your system than you ever thought you were capable of having. Or you can tell it that you don't care how it works, just fix it. Now's the time to switch.

That's your free business idea for today: LLM Linux support. Train it on everything you can find, tune it to be super-clippy. Charge people $5 a month. The AI that will free you from their AI.

Now we just need to annihilate web 2.0, replace it with peer-to-peer encrypted communications, and we can leave the web to the spammers and the spies.

fsloth 5 days ago|
"everybody can move to a Linux desktop now"

People use whatever UI comes with their computer. I don't think that's going to change.

tim333 5 days ago||
That theory was tried when Walmart sold Linux computers but it didn't work. People returned them because they couldn't run their usual software - Excel and the like.
cainxinth 5 days ago||
Here’s what amazes me about the reaction to LLMs: they were designed to solve NLP, stunningly did so, and then immediately everyone started asking why they can’t do math well or reason like a human being.
Peritract 5 days ago||
LLMs were pitched as 'genuinely intelligent' rather than 'solving NLP'.

We had countless breathless articles about free will at the time, and though this has now decreased, the discourse is still warped by claims of 'PhD-level intelligence'.

The backlash isn't against LLMs, it's against lies.

aNoob7000 5 days ago|||
Because the heads of tech companies jumped on TV and said that AGI was around the corner to basically prepare for job losses.

They just can't shut up about how AI is going to either save us all or kill us all.

joquarky 4 days ago|||
Well the job losses have certainly arrived.

Whether that is due to AI, WFH->offshoring, or end of ZIRP is anybody's guess.

All I know is any tech meetups I go to are full of people looking for work and the recruiters that normally stop by have vanished.

lukev 5 days ago|||
The VC economy depends on a hype cycle. If one doesn't exist, they'd manufacture one (see web 3.0), but LLMs were perfect.
nielsbot 5 days ago||
maybe a classic case of the sales team selling features you haven't built yet
Kapura 6 days ago|
Digital spreadsheets (excel, etc) have done much more to change the world than so-called "artificial intelligence," and on the current trajectory it's difficult to see that changing.
thepryz 6 days ago||
I don’t know if I would agree.

Spreadsheets don’t really have the ability to promote propaganda and manipulate people the way LLM-powered bots already have. Generative AI is also starting to change the way people think, or perhaps not think, as people begin to offload critical thinking and writing tasks to agentic ai.

Swizec 6 days ago|||
> Spreadsheets don’t really have the ability to promote propaganda and manipulate people

May I introduce you to the magic of "KPI" and "Bonus tied to performance"?

You'd be surprised how much good and bad in the world has come out of some spreadsheet showing a number to a group of promotion-chasing, type-A, otherwise completely normal people.

Tarsul 6 days ago|||
social media ruined our brains long before LLMs. Not sure if the LLM upgrade is all that newsworthy... Well, for AI fake videos maybe - but it could also be that soon no one believes any video they see online, which would have the opposite effect and could arguably even be considered good in our current times (difficult question!).
CuriouslyC 5 days ago|||
Agents are going to change everything. Once we've got a solid programmatic system-driving interface and people get better about exposing non-UI handles for agents to work with programs, agents will make apps obsolete. You're going to have a device that sits by your desk and listens to you, watches your movements and tracks your eyes, and dispatches agents to do everything you ask it to do, using all the information it's taking in along with a learned model of you and your communication patterns, so it can accurately predict what you intend for it to do.

If you need an interface for something (e.g. viewing data, some manual process that needs your input), the agent will essentially "vibe code" whatever interface you need for what you want to do in the moment.

jrm4 5 days ago|||
This isn't likely to happen for roughly the same reason Hypercard didn't become the universal way for novices to create apps.
CuriouslyC 5 days ago||
I probably spend 80% of my time in front of a computer driving agents, challenge accepted :)
lordhumphrey 5 days ago||
Marshall McLuhan called, he said to ask yourself, who's driving who?
CuriouslyC 5 days ago||
"We shape our tools, and therefore, our tools shape us."

Ironically the outro of a YouTube video I just watched. I'm just a few hundred ms of latency away from being a cyborg.

hn_acc1 5 days ago||||
So basically, the "ideal" state of a human is to be 100% active driving agents to vibe code whatever you need, based on every movement, every thought? Can our brains even handle having every thought being intentional and interpreted as such without collapsing (nervous breakdown)?

I guess I've always been more of a "work to live" type.

coke12 5 days ago||
Consider that a subset of us programmer types pride themselves on never moving their hands off the keyboard. They are already "wired in" so to speak.
alexpotato 5 days ago|||
The technology for this has been around for the past 10 years but it's still not a reality. What makes AI the kicker here?

e.g. Alexa for voice, REST for talking to APIs, Zapier for inter-app connectdness.

(not trying to be cynical, just pointing out that the technology to make it happen doesn't seem to be the blocker)

CuriouslyC 5 days ago||
Alexa is trash. If you have to basically hold an agent's hand through something, or it either fails or does something catastrophic, nobody's going to use or trust it.

REST is actually a huge enabler for agents for sure. I think agents are going to drive everyone to have at least an API, if not an MCP, because if I can't use your app via my agent and I have to manually screw around in your UI, while your competitor lets my agent do work so I can just delegate via voice commands, who do you think is getting my business?

AbstractH24 5 days ago|||
The likely outcome is LLMs being the next iteration of Excel.

From its ability to organize and structure data, to running large reporting calculations over and over quickly, to automating a repeatable set of steps quickly and simply.

I’m 36 and it’s hard for me to imagine what the world must have been like before spreadsheets. I can do thousands of calculations with a click

riku_iki 5 days ago|||
> I’m 36 and it’s hard for me to imagine what the world must have been like before spreadsheets. I can do thousands of calculations with a click

I imagine people would eventually switch to some simple programming language for this, and the world would be way more efficient compared to the spreadsheet mess.

watwut 5 days ago|||
> From its ability to organize and structure data, to running large reporting calculations over and over quickly, to automating a repeatable set of steps quickly and simply.

It does not do that tho. Like, reliably doing a repeatable set of steps is a thing it is not doing.

It does fuzzy tasks well.

tim333 5 days ago|||
Terminator 2 would have been a dull movie if the opposition had been a spreadsheet.
naasking 5 days ago|||
Artificial intelligence has solved protein folding. The downstream effects of that alone will be huge, and it's far from the only change coming.
player1234 5 days ago||
Hyperbole
naasking 5 days ago||
What's hyperbolic, that protein folding is solved, or that it's going to be significant?
micromacrofoot 5 days ago||
hah, just wait until everything you ever do online is moderated through an LLM and tell me that's not world changing