I used to tell my Intro-to-Programming-in-C students, 20 years ago, that they could in principle skip one or two of the homework assignments, and that some students even managed to outsmart us and submit copied work as homework - but they would simply never learn to program if they didn't do the homework themselves. "If you want to be able to write software, you have to practice writing code. It's just that simple and there's no getting around it."
Of course, not every discipline is the same. But I can also tell you that if you want to know, say, history - you have to memorize accounts, aspects, and highlights of historical periods and processes, recount them yourself, and check that you got things right. If "the AI" does this for you, then maybe it knows history, but you don't.
And that is the point of homework (if it's voluntary, of course).
To become upper management, just steal other people's work.
Either way, it is an imagined end point with no basis in known reality.
"But there is a price to be paid. Metaphors can become confused with the things they are meant to symbolize, so that we treat the metaphor as the reality. We forget that it is an analogy and take it literally." -- The Triple Helix: Gene, Organism, and Environment by Richard Lewontin.
Here is something I generated with Gemini:
1. Sentience and Agency
The Horse: A horse is a living, sentient being with a survival instinct, emotions (fear, trust), and a will of its own. When a horse refuses to cross a river, it is often due to self-preservation or fear.
The AI: AI is a mathematical function minimizing error. It has no biological drive, no concept of death, and no feelings. If an AI "hallucinates" or fails, it isn't "spooked"; it is simply executing a probabilistic calculation that resulted in a low-quality output. It has no agency or intent.
2. Scalability and Replication
The Horse: A horse is a distinct physical unit. If you have one horse, you can only do one horse's worth of work. You cannot click "copy" and suddenly have 10,000 horses.
The AI: Software is infinitely reproducible at near-zero marginal cost. A single AI model can be deployed to millions of users simultaneously. It can "gallop" in a million directions at once, something a biological entity can never do.
3. The Velocity of Evolution
The Horse: A horse today is biologically almost identical to a horse from 2,000 years ago. Their capabilities are capped by biology.
The AI: AI capabilities evolve at an exponential rate (Moore's Law and algorithmic efficiency). An AI model from three years ago is functionally obsolete compared to modern ones. A foal does not grow up to run 1,000 times faster than its parents, but a new AI model might be 1,000 times more efficient than its predecessor.
4. Contextual Understanding
The Horse: A horse understands its environment. It knows what a fence is, it knows what grass is, and it knows gravity exists.
The AI: Large Language Models (LLMs) do not truly "know" anything; they predict the next plausible token in a sequence (see the toy sketch after this list). An AI can describe a fence perfectly, but it has no phenomenological understanding of what a fence is. It mimics understanding without possessing it.
5. Responsibility
The Horse: If a horse kicks a stranger, there is a distinct understanding that the animal has a mind of its own, though the owner is liable.
The AI: The question of liability with AI is far more complex. Is it the fault of the prompter (rider), the developer (breeder), or the training data (the lineage)? The "black box" nature of deep learning makes it difficult to know why the "horse" went off-road, in a way that doesn't apply to animal psychology.
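To make point 4's "predict the next plausible token" claim concrete, here is a minimal, purely illustrative Python sketch. Everything in it is made up for the example: the probability table is hand-written rather than learned, and a real LLM produces its distribution with a neural network over a vocabulary of tens of thousands of tokens. What it shows is that generation is weighted sampling from a distribution, not consultation of any understanding of rivers or fences.

    # Toy sketch of next-token prediction (not a real LLM).
    # A real model computes this distribution with a neural network;
    # here the "model" is just a hand-written lookup table.
    import random

    # Hypothetical probabilities for the token following the context below.
    next_token_probs = {
        "river": 0.50,
        "fence": 0.30,
        "bridge": 0.15,
        "gradient": 0.05,
    }

    def predict_next(probs: dict[str, float]) -> str:
        """Sample one token, weighted by its probability."""
        tokens = list(probs)
        weights = list(probs.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    context = "the horse refused to cross the"
    print(context, predict_next(next_token_probs))
    # Usually prints "... river", occasionally "... gradient" - a plausible-looking
    # continuation either way, with no notion of what a river actually is.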