1) Context: a lack of sensors and sensor processing. Webcams in the field might solve part of this, but manual labor is still required for soil testing and the like.
2) Time bias: LLM orchestration still has a massive recency bias and a heavy underweighting of established ground truth, causing it to weave and pivot on recent actions in a wobbly, overcorrecting style.
3) Vagueness: by and large, most models still rely on non-committal vagueness to hide a lack of detailed, granular expertise. When pressed for that granularity, they tend to hallucinate more or miss context and get things wrong.
I’m curious how they plan to overcome this. It’s the right type of experiment, but I think the scale is too ambitious.
Seriously, what does this prove? The AI isn't actually doing anything; it's basically just online shopping. You're just going to end up paying grocery store prices for agricultural quantities of corn.
This, of course, will never happen, so instead those in power will continue trying to shoehorn AI into making slaves, which is what they want but not the ideal use for AI.