However, even by that metric, I don't see how Claude is doing that. Seth is the one researching the suppliers "with the help of" Claude. Seth is presumably the one deciding when to prompt Claude to make decisions about whether they should plant in Iowa and in how many days. I think I could also grow corn if someone came and asked me well-defined questions and then acted on what I said. I might even be better at it, because unlike a Claude output I will still be conscious in 30 seconds.
That is a far cry from sitting down at a command line and saying "Do everything necessary to grow 500 bushels of corn by October".
They could also just burn their cash. Because they aren’t making any money paying someone to grow corn for them unless they own the land and have some private buyers lined up.
The particular problem here is that it is very likely the easiest people to replace with AI are the ones making the most money and doing the least work. Needless to say, those people are going to fight a lot harder to remain employed than the average lower-level person has the political capital to accomplish.
>seems to end up requiring the hand-holding of a human at top,
I was born on a farm and know quite a bit about the process, but if I were trying to get corn grown from seed to harvest I would still contact/contract a set of skilled individuals to do it for me.
One thing I've come to realize about the race to achieve AGI is that the humans involved don't want AGI, they want ASI. A single model that can do what an expert can, in every field, in a short period of time is not what I would consider a general intelligence at all.
I think this is the new Turing test. Once it's been passed we will have AGI and all the Sam Altmans of the world will be proven correct. (This isn't a perfect test, obviously, but neither was the Turing test.)
If it fails to pass we will still have what jdthedisciple pointed out
> a non-farmer, is doing professional farmer's work all on his own without prior experience
I am actually curious how many people really believe AGI will happen. There's a lot of talk about it, but when can I ask Claude Code to build me a browser from scratch and get a browser from scratch? Or when can I ask Claude Code to grow corn and Claude Code grows corn? Never? In 2027? In 2035? In the year 3000?
HN seems rife with strong opinions on this, but does anybody really know?
Hint: It doesn't work that way.
Another hint: I'm a researcher.
Yes, we have found a great way to compress and remix the information we scrape from the internet, and even with some randomness it looks like we can emit the right set of tokens that make sense, or search the internet the right way and emit those search results. But AGI is more than that.
There's so much tacit knowledge and implicit computation coming from experience, emotions, sensory inputs, and our own internal noise. AI models don't work on those. LLMs consume language and emit language. The information embedded in those languages is available to them, but most of the tacit knowledge is just an empty shell of the thing we try to define with a limited set of words.
It's the same with anything where we're trying to replace humans in the real world, in daily tasks (self-driving, compliance checks, analysis, etc.).
AI is missing the magic grains we can't put into words or numbers or anything else. The magic smoke, if you'll pardon the term. This is why no amount of documentation can replace a knowledgeable human.
...or this is why McLaren Technology Center's aim of "being successful without depending on any specific human by documenting everything everyone knows" is an impossible goal.
Because, like it or not, intuition is real, and AI lacks it, regardless of how we derive or build that intuition.
The premise of the article is stupid, though...yes, they aren't us.
A human might grow corn, or decide it should be grown. But the AI doesn't need corn, it won't grow corn, and it doesn't need any of the other things.
This is why they are not useful to us.
Put it in science fiction terms. You can create a monster, and it can have super powers, _but that does not make it useful to us_. The extremely hungry monster will eat everything it sees, but it won't make anyone's life better.
> Hint: It doesn't work that way.
I mean... technically it would work this way, but, and this is a big but, reality is extremely complicated, and a model that can actually be a reliable formula has to be extremely complicated. There are almost certainly no globally optimal solutions to these types of problems, not to mention that the solution space is constantly changing as the world does. This is why we humans, and all animals, work in probabilistic frameworks that are highly adaptable. Human intuition. Human ingenuity. We simply haven't figured out how to make models at that level of sophistication. Not even in narrow domains! What AI has done is undeniably impressive, wildly impressive even. Which is why I'm so confused about why we embellish it so much.

It's really easy to think everything is easy when we look at problems from 40k feet. But as you come down to Earth the complexity increases exponentially, and what was a minor detail is now a major problem. As you come down, resolution increases and you see major problems that you couldn't ever see from 40k feet.
As a researcher, I agree very much with you. And as an AI researcher, one of the biggest issues I've noticed with AI is that it abhors detail and nuance. Granted, this is common among humans too (and let's not pretend CS people don't have a stereotype of oversimplifying and thinking all things are easy). While people do this frequently, they usually don't do it in their niche domains, and if they do, we call them juniors. You get programmers thinking building bridges is easy[0] while you get civil engineers thinking writing programs is easy, because each person understands the other's job only at 40k feet and is reluctant to believe they are standing so high[1]. But AI? It really struggles with detail. It really struggles with adaptation. You can get detail out, but it often requires significant massaging and it'll still be a roll of the dice[2]. You also can't reliably get the AI to change course, a necessary thing as projects evolve[3]. Anyone who's tried vibe coding knows the best thing to do is just start over. It's even in Anthropic's suggestion guide.
My problem with vibe coding is that it encourages this overconfidence. AI systems still have the exact same problem computer systems do: they do exactly what you tell them to. They are better at interpreting intent, but that blade cuts both ways. The major issue is you can't properly evaluate a system's output unless you were entirely capable of generating that output yourself. The AI misses the details. Doubt me? Look at Proof of Corn! The Fred page is showing an API error. The sensor page doesn't make sense (everything there is fine for an at-home hobby project, but anyone who's worked with those parts knows how unreliable they are. Who's going to do all the soldering? Are you making PCBs? Where's the circuit to integrate everything? How'd we get to $300? Where's the detail?). Everything discussed is at a 40k-foot view.
[0] https://danluu.com/cocktail-ideas/
[1] I'm not sure why people are afraid of not knowing things. We're all dumb as shit. But being dumb as shit doesn't mean we aren't also impressive and capable of genius. Not knowing something doesn't make you dumb, it makes you human. Depth is infinite and we have priorities. It's okay to have shallow knowledge, often that's good enough.
[2] As implied, what is enough detail is constantly up for debate.
[3] No one, absolutely nobody, has everything figured out from the get-go. I'll bet money none of you have written a (meaningful) program start to finish from plans, ending up with exactly what you expect, never making an error, never needing to change course, even in the slightest.
The point, I think, is that even if LLMs can't directly perform physical operations, they can still make decisions about what operations are to be performed, and through that achieve a result.
And I also don't think it's fair to say there's no point just because there's a person prompting and interpreting the LLM. That happens all the time with real people, too.
Yes, what I'm trying to get at is that it's much more vital we nail down the "person prompting and interpreting the LLM" part instead of focusing so much on the "autonomous robots doing everything" part.
Still an interesting experiment to see how much of the tasks involved can be handled by an agent.
But unless they've made a commitment not to prompt the agent again until the corn is grown, it's really a human doing it with agentic help, not Claude working autonomously.
If Claude only works when the task is perfectly planned and there are no exceptions, that's still operating at the "junior" level, where it's not reliable or composable.
There are people that I could hire in the real world, give $10k (I dunno if that's enough, but you understand what I mean) and say "Do everything necessary to grow 500 bushels of corn by October", and I would have corn in October. There are no AI agents where that's even close to true. When will that be possible?
Model UIs like Gemini have "scheduled actions", so in the initial prompt you could have it do things daily and send updates or reports, etc., and it will start the conversation with you. I don't think it's powerful enough to, say, spawn sub-agents, but there is some ability for them to "start chats".
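A scheduled daily check-in like that could be sketched as follows. This is a minimal illustration only; `prompt_agent` and `send_update` are hypothetical placeholders, not Gemini's or Anthropic's actual API:

```python
import datetime

def next_checkin(now: datetime.datetime, hour: int = 8) -> datetime.datetime:
    """Return the next daily check-in time at the given local hour."""
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:
        # Today's slot has passed; schedule for the same hour tomorrow.
        candidate += datetime.timedelta(days=1)
    return candidate

def run_checkin(prompt_agent, send_update):
    """One scheduled action: ask the model for a status report, then send it.

    Both callables are placeholders for whatever the model UI or your own
    wrapper actually exposes."""
    report = prompt_agent("Summarize today's crop status and any actions needed.")
    send_update(report)
```

The point of the sketch is that "scheduled actions" are just a timer plus a canned prompt; the model does not stay resident between runs, which is why sub-agent spawning is a separate, harder capability.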
Also, Seth, a non-farmer, was already capable of using Google, online forums, and Sci-Hub/Libgen to access farming-related literature before LLMs came on the scene. In this case the LLM is just acting as a super-charged search engine. A great and useful technology, sure, but we're not utilizing any entirely novel capabilities here.
And tbh, until we take a good crack at world models, I doubt we can.
2) Regardless, I think it proves a vastly understated feature of AI: It makes people confident.
The AI may be truly informative, or it may hallucinate, or it may simply give mundane, basic advice. Probably all 3 at times. But the fact that it's there ready to assert things without hesitation gives people so much more confidence to act.
You even see it with basic emails. Myself included. I'm just writing a simple email at work. But I can feed it into AI and make some minor edits to make it feel like my own words, and I can just dispense with worries about "am I giving too much info, not enough, using the right tone, being unnecessarily short or overly greeting, etc." And it's not that the LLMs are necessarily even an authority on these factors; it simply bypasses the process (writing) which triggers those thoughts.
Good point. AI is already making regular Joes into software engineers.
Management is so confident in this, they are axing developers/not hiring new ones.

> A guy is paying farmers to farm for him
Read up on farming. The labor is not the complicated part. Managing resources, including telling the labor what to do, when, and how is the complicated part. There is a lot of decision making to manage uncertainty which will make or break you.
Family of farmers here.
My family raises hundreds of thousands of chickens a year. They feed, water, and manage the healthcare and building maintenance for the birds. That is it. Baby birds show up in boxes at the start of a season, and trucks show up and take the grown birds once they reach weight.
There is a large faceless company that sends out contracts for a particular value and farmers can decide to take or leave it. There is zero need for human contact on the management side of the process.
At the end of the day there is little difference between a company assigning the work and having a bank account versus an AI following all the correct steps.
Pedantically, that's what a farmer does. The workers are known as farmhands.
I think it is impressive if it works. Like I mentioned in a sibling comment I think it already definitely proves something LLMs have accomplished though, and that is giving people tremendous confidence to try things.
It only works if you tell Claude, "grow me some fucking corn profitably and have it ready in 9 months," and it does it.
If it's being used as a manager to simply flesh out the daily commands that someone is giving it, well then that isn't "working", that's just a new level of what we already have with APIs and crap.
So if I grow biomass for fuel or feedstock for plastics, that's not farming? I'm sure there are a number of people who would argue with you on that.
I'm from the part of the country where there large chunks of land dedicated to experimental grain growing, which is research, and other than labels at the end of crop rows you'd have a difficult time telling it from any other farm.
TL;DR: why are you gatekeeping this so hard?
I'll see if my 6 year old can grow corn this year.
Sure, put it on Kalshi while you're at it and we can all bet on it.
I'm pretty sure he could grow one plant with someone in the know prompting him.
Then what you asked “do everything to grow …” would be a matter of “when?”, not “can?”
I'd say the only acceptable proof is one prompt context. But that's Gödel numbering Zeno's paradox of a halting problem.
Do people think prompting is not adding significant intelligence?
I think it would be unlikely but interesting if the AI decided that in furtherance of whatever its prompt and developing goals are to grow corn, it would branch out into something like real estate or manufacturing of agricultural equipment. Perhaps it would buy a business to manufacture high-tensile wire fence, with a side business of heavy-duty paperclips... and we all know where that would lead!
We don't yet have the legal frameworks to build an AI that owns itself (see also "the tree that owns itself" [1]), so for now there will be a human in the loop. Perhaps that human is intimately involved and micromanaging, merely a hands-off supervisor, or relegated to an ownership position with no real capacity to direct any actions. But I don't think that you can say that an owner who has not directed any actions beyond the initial prompt is really "doing the work".
Overall I don't think this is useful. They might or might not get good results. However it is really hard to beat the farmer/laborer who lives close to the farm and thus sees things happen and can react quickly. There is also great value in knowing your land, though they should get records of what has happened in the past (this is all in a computer, but you won't always get access to it when you buy/lease land). Farmers are already using computers to guide decisions.
My prediction: they lose money. Not because the AI does stupid things (though that might happen), but because last year harvests were really good and so supply and demand means many farms will lose money no matter what you do. But if the weather is just right he could make a lot of money when other farmers have a really bad harvest (that is he has a large harvest but everyone else has a terrible harvest).
Iowa has strong farm ownership laws. There is a real risk he will get shut down because what he is doing is somehow illegal. I'm not sure what the laws are; check with a real lawyer. (This is why Bill Gates doesn't own Iowa farmland: he legally can't do what he wants with it.)
> I'm about to lease some acreage at {address near you} and willing to pay {competitive rate} to hire someone to work that land for me, are you interested?
I see no reason why that couldn't eventually succeed. I'm sure that being an out-of-state investor who doesn't have any physical hands to finalize the deal with a handshake is an impediment, but with enough tokens, Farmer Fred could make 100,000 phone calls and send out 100,000 emails to every landowner and work-for-hire equipment operator in Iowa, Texas, and Argentina by this afternoon. If there exists a human who would make that deal, Fred can eventually find them. Seth would be limited in his chance to succeed in these efforts because he can only make one 1-minute phone call per minute, Fred can become as many callers as Anthropic owns GPUs.
I do find it amusing that Fred currently shows the following dashboard:
Iowa: HOLD, 0°F, Unknown (API error)
Fred's Thinking: “Iowa is frozen solid. Been through worse. We wait.”
Fred is here

South Texas: HOLD, 0°F, Unknown (API error)
Fred's Thinking: “South Texas is frozen solid. Been through worse. We wait.”

Argentina: HOLD, 0°F, Unknown (API error)
Fred's Thinking: “Argentina is frozen solid. Been through worse. We wait.”
Any human Fred might call in the Argentinian summer or the 70°F South Texas winter is not going to gain confidence when Fred tries to build rapport through some small talk about the unseasonably cold weather...

Replacing the farm manager with an AI multiplies that problem by a hundred. A thousand? A million? A lot. AI may get some sensor data, but it's not going to stick its hand in the dirt and say "this feels too dry". It won't hear the weird pinging noise that the tractor's been making and describe it to the mechanic. It may try to hire underlings, but how will it know which employees are working hard and which ones are stealing from it? (Compare Anthropic's experiments with having AI run a little retail store and get tricked into selling tungsten cubes at a steep discount.)
I got excited when I opened the website and at first had the impression that they'd actually gotten AI to grow something. Instead it's built a website and sent some emails. Not worth our attention, yet.
That's all rich people do. The premise of capitalism is that the people best at collecting rent should also be in total control of resource allocation.
Aren't these companies in the business of leasing land? I don't see how contacting them about leasing land would be spam or bothering them. And I don't really know what you mean by "with no legal authority to actually follow up with what is requested."
Let's step back.
"there's a gap between digital and physical that AI can't cross"
Can intelligence of ANY kind, artificial or natural, grow corn? Do physical things?
Your brain is trapped in its skull. How does it do anything physical?
With nerves, of course, connected to muscle. It's sending and receiving signals; that's all it's doing! The brain isn't actually doing anything!
The history of humanity's last 300k years tells you that intelligence makes a difference, even though it isn't doing anything but receiving and sending signals.
An AI that can also plant corn itself (via robots it controls) is much more impressive to me than an AI just sending emails.
Claude: Go to the owner of the building and say "if you tell me the height of your building I will give you this fine barometer."
The timing might need to be different but it would be good to see what the same amounts invested would yield from corn on the commodity market as well as from securities in farming partnerships.
Would it be fair if AI was used to play these markets too, or in parallel?
It would be interesting to see how different "varieties" of corn perform under the same calendar season.
Corn, nothing but corn as the actual standard of value :)
You don't get much any way you look at it for your $12.99 but it's a start.
Making a batch of popcorn now, I can already smell the demand on the rise :)
1. Do some research (as it's already done)
2. Rent the land and hire someone to grow the corn
3. Hire someone to harvest it, transport it, and store it
4. Manage to sell it
Doing #1 isn't terribly exciting - it's well established that AIs are pretty good at replacing an hour of googling - but if it could run a whole business process like this, that'd be neat.
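The four-step process above could be sketched as a simple orchestration loop, where the agent owns each step and the venture halts as soon as one fails. Everything here (the function name, the agent's return convention) is an illustrative assumption, not how the actual experiment is wired:

```python
def run_corn_business(agent):
    """Walk a hypothetical agent through the four steps, halting the whole
    venture the moment any step fails (no point harvesting unplanted corn)."""
    steps = [
        "research suppliers, land, and seed prices",
        "rent land and hire someone to grow the corn",
        "hire someone to harvest, transport, and store it",
        "sell the harvest",
    ]
    completed = []
    for step in steps:
        ok = agent(step)  # agent returns True if the step succeeded
        if not ok:
            return completed
        completed.append(step)
    return completed
```

The interesting part is not any single step but whether the agent can carry state and recover across the whole chain; that is exactly the "whole business process" capability the comment is asking about.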
But,
"I will buy fucking land with an API via my terminal"
Who has multiple millions of dollars to drop on an experiment like that?
Ok then Seth is missing the point of the challenge: Take over the role of the farmhand.
> Everyone is working to try to automate the farmhand out of a job, but the novelty here is the thinking that it is actually the farmer who is easiest to automate away.
Everyone knows this. There is nothing novel here. Desk jockeys who just drive computers all day (the Farmer in this example) are _far_ easier to automate away than the hands-on workers (the farmhand). That’s why it would be truly revolutionary to replace the farmhand.
Or, said another way: Anything about growing corn that is “hands on” is hard to automate, all the easy to automate stuff has already been done. And no, driving a mouse or a web browser doesn’t count as “hands on”.
To be fair, all the stuff that hasn't been automated away is the same in all cases, farmer and farmhand alike: Monitoring to make sure the computer systems don't screw up.
The bet here is that LLMs are past the "needs monitoring" stage and can buy a multi-million-dollar farm, along with everything else, without oversight, and Seth won't be upset about its choices in the end. Which, in fairness, is a more practical (at least less risky from a liability point of view) bet than betting that a multi-million-dollar X9 without an operator won't end up running over a person and later upside-down in a ditch.
He may have many millions to spend on an experiment, but to truly put things to the test would require way more than that. Everyone has a limit. An MVP is a reasonable start. v2 can try to take the concept further.
I would have to look up farm services. Look up farmhand hiring services. Write a couple emails. Make a few payments. Collect my corn after the growing season. That's not an insurmountable amount of effort. And if we don't care about optimizing cost, it's very easy.
Also, how will Claude monitor the corn growing? I'm curious. It can't receive and respond to the emails autonomously, so you still have to be in the loop.
To make this a full AI experiment, emails to this inbox should be fielded by Claude as well.
(And if you read the linked post, … like this value function is established on a whim, with far less thought than some of the value-functions-run-amok in scifi…)
(and if you've never played it: https://www.decisionproblem.com/paperclips/index2.html )
"Thinking quickly, Dave constructs a homemade megaphone, using only some string, a squirrel, and a megaphone."
Betting millions of dollars in capital on its decision-making process, for something it wasn't even designed for and that is way more complicated than even I believed coming from a software background into farming, is patently ludicrous.
And 5 acres is a garden. I doubt he'll even find a plot to rent at that size, especially this close to seeding in that area.