Posted by mooreds 6 days ago
Whether the particular current AI tech is it or not, I have yet to be convinced that the singularity is practically impossible, and as long as things keep developing in the direction of possibility, I get increasingly unnerved.
I also wonder if we could even power an AI singularity. I guess it depends on what the technology is. But it already takes more energy than seems reasonable (in my opinion) just to produce and run frontier LLMs. LLMs are this really weird blend of stunningly powerful, yet with a very clear inadequacy in terms of sentient behaviour.
I think the easiest way to demonstrate that is this: it did not take humans consuming the entirety of human textual knowledge to form a much stronger world model.
There was a lot of "LLMs are fundamentally incapable of X" going around - where "X" is something that LLMs are promptly demonstrated to be at least somewhat capable of, after a few tweaks or some specialized training.
This pattern has repeated enough times to make me highly skeptical of any such claims.
It's true that LLMs have this jagged capability profile - less so than any AI before them, but much more so than humans. But that just sets up a capability overhang. Because if AI gets to "as good as humans" at its low points, the advantage at its high points is going to be crushing.
But as I alluded to earlier, we’re working towards plenty of other collapse scenarios, so who knows which we’ll realize first…
Humans have always believed that we are headed for imminent total disaster. In my youth it was WW3 and the impending nuclear armageddon that was inevitable. Or not, as it turned out. I hear the same language being used now about a whole bunch of other things. Including, of course, the evangelical Rapture that is going to happen any day now, but never does.
You can see the same thing at work in discussions about AI - there's passion in the voices of people predicting that AI will destroy humanity. Something in our makeup revels in the thought that we'll be the last generation of humans, that the future is gone and everything will come to a crashing stop.
This is human psychology at work.
We are living in a historically exceptional time of geological, environmental, and ecological stability. I think that saying that nothing ever happens is like standing downrange of a stream of projectiles and counting all the near misses as evidence for your future safety. It's a bold call to inaction.
It's not that it can't happen. It obviously can. I'm more talking about the human belief that it will happen, and in our lifetime. It probably won't.
The observation is: humans tend to think that annihilation is inevitable; it hasn't happened yet, so therefore it never will.
In fact, _anything_ could happen. Past performance does not guarantee future results.
If you need cognitive behavioral therapy, fine.
But to casually cite nuclear holocaust as something people irrationally believed in as a possibility is dishonest. That was (and still is) a real possible outcome.
What's somewhat funny here is that if you're wrong, it doesn't matter. But that isn't the same as being right.
> Something in our makeup revels in the thought that we'll be the last generation of humans, that the future is gone and everything will come to a crashing stop
And yet there _will_ (eventually) be one generation that is right.
Most likely outcome would be that humans evolve into something altogether different rather than go extinct.
Particularly considering the law of large numbers in play, where incalculably many chances have so far shown only one sign of technologically capable life (ours), and zero signs of any other example of a tech species evolving into something else or even passing the Great Filter.
Even where life may have developed, it's incredibly unlikely that sentient intelligence developed. There was never any guarantee that sentience would develop on Earth and about a million unlikely events had to converge in order for that to occur. It's not a natural consequence of evolution, it's an accident of Earth's unique history and several near-extinction level events and drastic climate changes had to occur to make it possible.
The "law of large numbers" is nothing when the odds of sentient intelligence developing are so close to zero. If such a thing occurred or occurs in the future at some location other than Earth, it's reasonably likely that it's outside of our own galaxy or so far from us that we will never meet them. The speed of light is a hell of a thing.
It’s wrong to commit to either end of this argument; we don’t know how it’ll play out. But the potential for humans drastically reducing our own numbers is very much still real.
It's quite telling how much faith you put in humanity though, you sound fully bought in.
Maybe this will end up relegated to a single field, but from where I'm standing (within ML/AI), the way greenfield projects develop now is fundamentally different as a result of these foundation models. Even if development on these models froze today, MLEs would still likely be prompted to start by feeding something to an LLM, just because it's lightning fast to stand up.
> AI was a huge mistake. It shows a lack of imagination and confuses ideas from different sciences. Instead of helping us improve how we communicate, it reinforces our weakest habits. Computer science pretends to make things more efficient, but really it just extracts value in shallow ways. This is poor, second-rate thinking.
Nobody would ask, "What new Office-based products have been created lately?", but that doesn't mean that Office products aren't a permanent, and critical, foundation of all white collar work. I suspect it will be the same with LLMs as they mature, they will become tightly integrated into certain categories of work and remain forever.
Whether the current pricing models or stock market valuations will survive the transition to boring technology is another question.
Let's take one component of Microsoft Office. Microsoft Word is seen as a tool for people to write nicely formatted documents, such as books. Reports produced with Microsoft Word are easy to find, and I've even read books written in it. Comparing reports written before the advent of WYSIWYG word processing software like Microsoft Word with reports written afterwards, the difference is easy to see; average typewriter formatting is really abysmal compared to average Microsoft Word formatting, even if the latter doesn't rise to the level of a properly typeset book or LaTeX. It's easy to point at things in our world that wouldn't exist without WYSIWYG word processors, and that's been the case since Bravo.
LLMs are seen as, among other things, a tool for people to write software with.
Where is the software that wouldn't exist without LLMs? If we can't point to it, maybe they don't actually work for that yet. The claim I'm questioning is that, "within tech, there seem to have been explosive changes and development of new products."
What new products?
I do see explosive changes and development of new spam, new YouTube videos, new memes (especially in Italian), but those aren't "within tech" as I understand the term.
I think there is a lot of potential, outside of the direct generation of software but still maybe software-adjacent, for products that make use of AI agents. It's hard to "generate" real world impact or expertise in an AI system, but if you can encapsulate that into a function that an AI can use, there's a lot of room to run. It's hard to get the feedback loop to verify this and most of these early products will likely die out, but as I mentioned, agents are still new on the timeline.
As an example of something that I mean that is software-adjacent, have a look at Square AI, specifically the "ask anything" parts: https://squareup.com/us/en/ai
I worked on this and I think that it's genuinely a good product. An arbitrary seller on the Square platform _can_ do aggregation, dashboarding, and analytics for their business, but that takes time and energy, and if you're running a business it can be hard to find that time. Putting an agent system in the backend that has access to your data, can aggregate and build modular plotting widgets for you, and can execute whenever you ask it a question is something that objectively saves a seller's time. You could have made such a thing without modern LLMs, but it would have been substantially more expensive in terms of engineering research, time, and effort to put together a POC and bring it to production, making it a non-starter before [let's say] two years ago.
AI here is fundamental to the product functioning, but the outcome is a human being saving time while making decisions about their business. It is a useful product that uses AI as a means to a productive end, which, to me, should be the goal of such technologies.
That doesn't entail that current AI is useless! Or even non-revolutionary! But it's a different kind of software development revolution than what I thought you were claiming. You seem to be saying that the relationship of AI to software development is similar to the relationship of the Japanese language, or raytracing, or early microcomputers to software development. And I thought you were saying that the relationship of AI to software development was similar to the relationship of compilers, or open source, or interactive development environments to software development.
It also doesn't entail that six months from now AI will still be only that revolutionary.
If you look at, e.g., this clearly vibe-coded app about vibe coding [https://www.viberank.app/], ~280 people generated 444.8B tokens within the block of time where people were paying attention to it. If 1,000 tokens is 100 lines of code, that's roughly 44 billion lines of code that would not exist otherwise. Maybe those lines of code are new products, maybe they're not; maybe those people would have written a bunch of code otherwise, maybe not. I'd call that an explosion either way.
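For concreteness, here's that back-of-the-envelope as a runnable Python snippet (the 100-lines-per-1,000-tokens ratio is the comment's own assumption, and generated tokens are not the same as shipped code):

    # Back-of-the-envelope only: the ratio is an assumption, and
    # tokens generated != lines of code actually kept.
    tokens = 444.8e9                  # total tokens reported on viberank
    lines_per_token = 100 / 1000      # assumed: 100 lines per 1,000 tokens
    print(f"{tokens * lines_per_token:,.0f} lines")  # 44,480,000,000 -> ~44.5B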
So, AI is to software what muscle cars were to air emissions quality?
A whole lot of useless, unabated toxic garbage?
> Where is the software that wouldn't exist without LLMs?
Where are the books that wouldn't exist without Microsoft Word? I've been reading Darwen & Date lately, and they seem to have done the typesetting for the whole damn book in Word, which suggests they couldn't get anyone else to do it for them and didn't know how to do a good job of it. But they almost certainly couldn't have gotten a major publisher to publish it as a mimeographed typewriter manuscript.
Your turn.
> maybe they don't actually work for that yet.
So you're not going to see code that wouldn't exist without LLMs (or books that wouldn't exist without Word); you're going to see more code (or more books). There is no direct way to track "written code" or "people who learned more about their hobbies" or "teachers who saved time lesson planning", etc.
Generally, when there's a new tool that actually opens up explosive changes and development of new products, at least some of the people doing the exploding will tell you about it, even if there's no direct way to track it (witness Darwen & Date's substandard typography). It's easy to find musicians who enthuse about the new possibilities opened up by digital audio workstations, and who are eager to show you the things they created with them. Similarly for video editors who enthused about the Video Toaster, for programmers who enthused about the 80386, and for electrical engineers who enthused about FPGAs. There was an entire demo scene around the Amiga and another entire demo scene around the 80386.
Do people writing code with AI today have anything comparable? Something they can point to and say, "Look! I wrote this software because AI made it possible!"?
It's easy to answer that question for, for example, visual art made with AI.
I'm not sure what you mean about "accelerating technologies". WYSIWYG word processors today are about the same as Bravo in 01979. HTML is similar but both better and worse. AI may have a hard takeoff any day that leaves us without a planet, who knows, but I don't think that's something it has in common with Microsoft Word.
Books written with WYSIWYG could have been written by hand just fine, it would have just been more painful and taken longer. What WYSIWYG unlocks is more books, not new kinds of books. And sure, you might argue that more books is new books, which is fair.
So it is with LLMs. We're going to get more code, more lesson plans, etc. Accelerating.
> Do people writing code with AI today have anything comparable?
Like every fourth post on here is someone talking about their workflow with LLMs, so... I think they do?
AI slop is a product.
[1] https://www.economist.com/briefing/2025/07/24/what-if-ai-mad...
Also, the Economist publishes all articles anonymously so the individual author isn't known. As far as I know, they do this so we take all articles and opinions as the perspective of the Economist publication itself.
If it was disagreeing with AI maximalists, it was primarily in terms of the timeline, not in terms of the outcomes or inevitability of the scenario.
> If investors thought all this was likely, asset prices would already be shifting accordingly. Yet, despite the sky-high valuations of tech firms, markets are very far from pricing in explosive growth. “Markets are not forecasting it with high probability,” says Basil Halperin of Stanford, one of Mr Chow’s co-authors. A draft paper released on July 15th by Isaiah Andrews and Maryam Farboodi of MIT finds that bond yields have on average declined around the release of new AI models by the likes of OpenAI and DeepSeek, rather than rising.
It absolutely (beyond being clearly titled "what if") presented real counterarguments to its core premise.
There are plenty of other scenarios that they have explored since then, including the totally contrary "What if the AI stock market blows up?" article.
This is pretty typical for them IME. They definitely have a bias, but they do try to explore multiple sides of the same idea in earnest.
> From an economic viewpoint, this also means that the brand value of the articles remains with the masthead rather than the individual authors. This commodifies the authors and makes them more fungible.
> Being The Economist, I am sure they are aware of this.
People on HN do not engage with differing opinions on certain topics and prefer to avoid disagreement on those topics.
I wish more would do that and let me make up my own mind, instead of pursuing a specific editorial line, cherry-picking which news to comment on and how to spin them, which seems to be the case for most (I’m talking in general terms).
I’m curious about exploring the topic “What if the war in Ukraine ends in the next 12 months” just as much as “What if the war in Ukraine keeps going for the next 10 years”; that doesn’t mean I expect both to happen.
The paper:
https://thedocs.worldbank.org/en/doc/d6e33a074ac9269e4511e5d...
"Differences about the future of AI are often partly rooted in differing interpretations of evidence about the present. For example, we strongly disagree with the characterization of generative AI adoption as rapid (which reinforces our assumption about the similarity of AI diffusion to past technologies)."
Through this lens it's way more normal
Floating-point numbers are deterministic and follow clear rules, but they can't represent every number with full precision. I think that's a pretty good analogy for LLMs: they can't always represent or manipulate ideas with the same precision that a human can.
They're a fixed-precision format. That doesn't mean they're ambiguous. They can be used ambiguously, but that isn't inevitable. Tools like interval arithmetic can mitigate this to a considerable extent.
Representing a number like pi to arbitrary precision isn't the purpose of a fixed precision format like IEEE754. It can be used to represent, say, 16 digits of pi, which is used to great effect in something like a discrete Fourier transform or many other scientific computations.
In practice, the outcome of a floating-point computation depends on compiler optimizations, the order of operations, and the rounding mode used.
1. Compiler optimizations can be disabled. If a compiler optimization violates IEEE754 and there is no way to disable it, this is a compiler bug and is understood as such.
2. This is as advertised and follows from IEEE754: floating-point operations aren't associative (a quick demonstration follows this list). You must be aware of the way they work in order to use them productively; this means understanding their limitations.
3. Again, as advertised. The rounding mode is part of the spec and can be controlled. Understand it, use it.
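To make point 2 concrete, here's a minimal Python sketch; nothing in it is Python-specific, any IEEE754 double behaves identically:

    # Floating-point addition is not associative: grouping changes
    # which rounding errors accumulate.
    a, b, c = 0.1, 0.2, 0.3
    print((a + b) + c)   # 0.6000000000000001
    print(a + (b + c))   # 0.6

    # Order matters at scale too: a small term added to a huge one
    # is absorbed entirely.
    print(1e16 + 1.0 - 1e16)   # 0.0  (the 1.0 is lost)
    print(1e16 - 1e16 + 1.0)   # 1.0

    # All of this is deterministic: rerun it and you get the same
    # bits every time. Limited precision is not ambiguity.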
LLMs are machine learning models used to encode and decode text and similar data, such that it is possible to efficiently do statistical estimation of long sequences of tokens in response to queries or other input. It is obvious that the behavior of LLMs is neither consistent nor standardized (and it's unclear whether this is even desirable; in the case of floating-point arithmetic, it certainly is). Because of the statistical nature of machine learning in general, it's also unclear to what extent any sort of guarantee could be made on the likelihoods of certain responses. So I am not sure it is possible to standardize and specify them along the lines of IEEE754.
The fact that a forward pass on a neural network is "just deterministic matmul" is not really relevant.
We only have two states of causality, so calling something "just" deterministic doesn't mean much, especially when "just random" would be even worse.
For the record, LLMs in normal operation use both.
1) Obsolete search engines powered by marketing and SEO, and give us paid search engines whose selling points are how comprehensive they are, how predictably their queries work (I miss the "grep for the web" they were back when they were useful), and how comprehensive their information sources are.
2) Eliminate the need to call somebody in the Philippines awake in the middle of the night, just for them to read you a script telling you how they can't help you fix the thing they sold you.
3) Allow people to carry local compressed copies of all written knowledge, with 90% fidelity, but with references and access to those paid search engines.
And my favorite part, which is just a footnote I guess, is that everybody can move to a Linux desktop now. The chatbots will tell you how to fix your shit when it breaks, and in a pedagogical way that will gradually give you more control and knowledge of your system than you ever thought you were capable of having. Or you can tell it that you don't care how it works, just fix it. Now's the time to switch.
That's your free business idea for today: LLM Linux support. Train it on everything you can find, tune it to be super-clippy. Charge people $5 a month. The AI that will free you from their AI.
Now we just need to annihilate web 2.0, replace it with peer-to-peer encrypted communications, and we can leave the web to the spammers and the spies.
People use whatever UI comes with their computer. I don't think that's going to change.
We had countless breathless articles about free will at the time, and though this has now decreased, the discourse is still warped by claims of 'PhD-level intelligence'.
The backlash isn't against LLMs, it's against lies.
They just can't shut up about how AI is going to either save us all or kill us all.
Whether that is due to AI, WFH->offshoring, or end of ZIRP is anybody's guess.
All I know is any tech meetups I go to are full of people looking for work and the recruiters that normally stop by have vanished.
Spreadsheets don’t really have the ability to promote propaganda and manipulate people the way LLM-powered bots already have. Generative AI is also starting to change the way people think, or perhaps not think, as people begin to offload critical thinking and writing tasks to agentic AI.
May I introduce you to the magic of "KPI" and "Bonus tied to performance"?
You'd be surprised how much good and bad in the world has come out of some spreadsheet showing a number to a group of promotion-chasing, type-A, otherwise completely normal people.
If you need an interface for something (e.g. viewing data, some manual process that needs your input), the agent will essentially "vibe code" whatever interface you need for what you want to do in the moment.
Ironically the outro of a YouTube video I just watched. I'm just a few hundred ms of latency away from being a cyborg.
I guess I've always been more of a "work to live" type.
e.g. Alexa for voice, REST for talking to APIs, Zapier for inter-app connectedness.
(not trying to be cynical, just pointing out that the technology to make it happen doesn't seem to be the blocker)
REST is actually a huge enabler for agents, for sure. I think agents are going to drive everyone to have at least an API, if not an MCP server. Because if I can't use your app via my agent and have to manually screw around in your UI, while your competitor lets my agent do the work so I can just delegate via voice commands, who do you think is getting my business?
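As a minimal sketch of what "my agent uses your API" means in practice (the endpoint, URL, and response shape here are hypothetical, made up for illustration):

    import json
    import urllib.request

    # Hypothetical vendor REST endpoint wrapped as a "tool" an agent
    # framework can call; the URL and schema are invented.
    def list_open_orders(customer_id: str) -> list[dict]:
        """Return the customer's open orders from the vendor's API."""
        url = (f"https://api.example-vendor.com/v1/customers/"
               f"{customer_id}/orders?status=open")
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    # An agent that knows this function's name, docstring, and signature
    # can answer "what's outstanding on my account?" by calling it. A
    # vendor with no API gives the agent nothing to call.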
From its ability to organize and structure data, to running large reporting calculations over and over quickly, to automating a repeatable set of steps quickly and simply.
I’m 36 and it’s hard for me to imagine what the world must have been like before spreadsheets. I can do thousands of calculations with a click.
I imagine people would eventually switch to some simple programming language and/or notation for this, and the world would be way more efficient compared to the spreadsheet mess.
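For what it's worth, the "simple programming" version of a typical spreadsheet job is already short. A sketch, assuming a made-up sales.csv with region and revenue columns:

    import csv

    # The spreadsheet task "total and average revenue per region",
    # done over a hypothetical sales.csv instead of a workbook.
    totals: dict[str, float] = {}
    counts: dict[str, int] = {}
    with open("sales.csv", newline="") as f:
        for row in csv.DictReader(f):
            region = row["region"]
            totals[region] = totals.get(region, 0.0) + float(row["revenue"])
            counts[region] = counts.get(region, 0) + 1

    for region in sorted(totals):
        avg = totals[region] / counts[region]
        print(f"{region}: total={totals[region]:.2f} avg={avg:.2f}")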
It does not do that, though. Reliably doing a repeatable set of steps is precisely the thing it is not doing.
It does fuzzy tasks well.