Posted by zdw 4 days ago

AI is a horse (2024)(kconner.com)
446 points | 217 comments
6stringmerc 13 hours ago|
Horses have some semblance of self preservation and awareness of danger - see: jumping. LLMs do not have that at all so the analogy fails.

My term, "Automation Improved," is far more relevant and descriptive of current state-of-the-art deployments. Same phone/text logic trees, next-level macro-type agent work; none of it is free range. Horses can survive on their own. AI is a task helper, no more.

pixl97 12 hours ago|
>LLMs do not have that at all so the analogy fails.

I somewhat disagree with this. AI doesn't have to worry about any kind of physical danger to itself, so it's not going to have any evolutionary function around that. If the linked Reddit thread is to be believed, AI does have awareness of information hazards and attempts to rationalize around them.

https://old.reddit.com/r/singularity/comments/1qjx26b/gemini...

>Horses can survive on their own.

Eh, this is getting pretty close to a type of binary thinking that breaks down under scrutiny. If, for example, we take any kind of selectively bred animal that requires human care for its continued survival, does this somehow make said animal "improved automation"?

taneq 15 hours ago||
I've always said that driving a car with modern driver-assist features (lane centering / adaptive cruise / 'autopilot' style self-ish driving-ish) is like riding a horse. The early ones were like riding a short-sighted, narcoleptic horse. Newer ones are improving, but it's still like riding a horse, in that you give it high-level instructions about where to go, rather than directly energising its muscles.
echelon 16 hours ago||
This microblog meta is fascinating. I've seen small microblog content like this popping up on the HN home page almost daily now.

I have to start doing this for "top level"ish commentary. I've frequently wanted to nucleate discussions without being too orthogonal to thread topics.

zhoujing204 12 hours ago||
"It is not possible to do the work of science without using a language that is filled with metaphors. Virtually the entire body of modern science is an attempt to explain phenomena that cannot be experienced directly by human beings, by reference to forces and processes that we can experience directly...

But there is a price to be paid. Metaphors can become confused with the things they are meant to symbolize, so that we treat the metaphor as the reality. We forget that it is an analogy and take it literally." -- The Triple Helix: Gene, Organism, and Environment by Richard Lewontin.

Here is something I generated with Gemini:

1. Sentience and Agency

The Horse: A horse is a living, sentient being with a survival instinct, emotions (fear, trust), and a will of its own. When a horse refuses to cross a river, it is often due to self-preservation or fear.

The AI: AI is a mathematical function minimizing error. It has no biological drive, no concept of death, and no feelings. If an AI "hallucinates" or fails, it isn't "spooked"; it is simply executing a probabilistic calculation that resulted in a low-quality output. It has no agency or intent.

2. Scalability and Replication

The Horse: A horse is a distinct physical unit. If you have one horse, you can only do one horse’s worth of work. You cannot click "copy" and suddenly have 10,000 horses.

The AI: Software is infinitely reproducible at near-zero marginal cost. A single AI model can be deployed to millions of users simultaneously. It can "gallop" in a million directions at once, something a biological entity can never do.

3. The Velocity of Evolution

The Horse: A horse today is biologically almost identical to a horse from 2,000 years ago. Their capabilities are capped by biology.

The AI: AI capabilities evolve at an exponential rate (Moore's Law and algorithmic efficiency). An AI model from three years ago is functionally obsolete compared to modern ones. A foal does not grow up to run 1,000 times faster than its parents, but a new AI model might be 1,000 times more efficient than its predecessor.

4. Contextual Understanding

The Horse: A horse understands its environment. It knows what a fence is, it knows what grass is, and it knows gravity exists.

The AI: Large Language Models (LLMs) do not truly "know" anything; they predict the next plausible token in a sequence. An AI can describe a fence perfectly, but it has no phenomenological understanding of what a fence is. It mimics understanding without possessing it.

5. Responsibility

The Horse: If a horse kicks a stranger, there is a distinct understanding that the animal has a mind of its own, though the owner is liable.

The AI: The question of liability with AI is far more complex. Is it the fault of the prompter (rider), the developer (breeder), or the training data (the lineage)? The "black box" nature of deep learning makes it difficult to know why the "horse" went off-road, in a way that doesn't apply to animal psychology.

gyanchawdhary 15 hours ago||
this post is aging like milk
dana321 10 hours ago|
https://www.isclaudecodedumb.today/
gyanchawdhary 8 hours ago||
Wao
dangoodmanUT 12 hours ago||
Damn that’s clever
brador 16 hours ago||
When an AI aims at an imagined thing we call it a hallucination; when humans do it we call the delusion goal setting.

Either way it is an imagined end point that has no bearing in known reality.

deafpolygon 17 hours ago||
Or your typical American teenager.
MORPHOICES 10 hours ago|
[dead]