Wrong. While table formatting can confuse an LLM in some cases, a natural language output in pure text is almost always better than JSON for small amounts of data. After all, LLMs have more natural language training data than JSON training data.
The fallacy that LLMs need machine readable outputs just because they're machines is pervasive and it's a huge misconception about the way these models work.
On the other hand, I agree that large amounts of data should be outputted in a machine readable way so that the LLM can run scripts over it for more advanced parsing.
Check out this paper - https://arxiv.org/abs/2506.13405
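For small payloads, the difference is just how you serialize before the text hits the prompt. A quick sketch of the two renderings (the records and field names here are invented for illustration):

```python
import json

# The same small record set, rendered two ways: JSON vs. plain prose.
records = [
    {"name": "Ada", "role": "admin", "logins": 42},
    {"name": "Grace", "role": "viewer", "logins": 7},
]

as_json = json.dumps(records, indent=2)

as_prose = "\n".join(
    f"{r['name']} ({r['role']}) has logged in {r['logins']} times."
    for r in records
)

print(as_prose)
# Ada (admin) has logged in 42 times.
# Grace (viewer) has logged in 7 times.
```

Same information either way; the claim above is that for a handful of records like this, the prose form tends to sit closer to the model's training distribution.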
> After all, LLMs have more natural language training data than JSON training data.
While that is true, data also doesn't usually look like natural language (e.g. a collection of financial records). And when it does (e.g. a collection of chat messages), I wonder if it's more confusing if it's unstructured, even if small.
I expect most frontier models to handle these cases just fine either way, so it may largely depend on context: specifically, how much there is, and where the attention shakes out. Ultimately, a claim one way or the other, for something this context-dependent, would have to be backed up by a lot of testing, and would probably conclude only that "in most cases, you should do this."
The Unix philosophy of small, composable tools is still valid in the era of stochastic machines!
I think `--yes` or `--yes-do-the-dangerous-thing` is leagues better.
(Oh, and there's no shorthand, like `-c`. It's `--commit` or bust.)
The naked "-Force" has always been a mistake on even minimally complex tools.
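A minimal argparse sketch of that convention (the tool and flag names are invented): dry-run is the default, applying changes needs `--commit`, and the destructive path needs a long, explicit flag with deliberately no short alias.

```python
import argparse

parser = argparse.ArgumentParser(prog="mytool")
parser.add_argument("--commit", action="store_true",
                    help="actually apply changes (dry-run by default)")
# Deliberately verbose and unambiguous; no -f / -y shorthand to fat-finger
# or for an agent to guess its way into.
parser.add_argument("--yes-delete-everything", action="store_true",
                    help="confirm the destructive operation")

args = parser.parse_args(["--commit"])
print(args.commit, args.yes_delete_everything)
```

The point of the long flag is that neither a human nor an agent can trigger the dangerous path without typing out exactly what it does.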
Too many tools stray so wildly from UNIX principles. If we design for agents first we will likely see more and more of this.
Let the Agent use the CLI and if it guesses the wrong option, you make that the RIGHT option.
Every time it doesn't guess something right, you change it.
Now you've wasted context on, what? Learning how to use the tool. And it will waste context on it every single time. (You can write skills to mitigate this a bit, but still).
The alternative is to make the tool work as the user (an LLM in this case) expects it to work, without having to resort to the manual.
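One cheap way to "make the wrong guess the right option" is to accept the alias spellings agents commonly try, all mapped to one canonical flag. A sketch with invented flag names:

```python
import argparse

parser = argparse.ArgumentParser(prog="mytool")
# Canonical flag first, then the spellings an agent tends to guess;
# argparse accepts multiple option strings for one destination.
parser.add_argument("--output-format", "--format", "--fmt",
                    dest="output_format", default="text",
                    choices=["text", "json"],
                    help="output format")

args = parser.parse_args(["--fmt", "json"])
print(args.output_format)  # json
```

No context is spent correcting the agent, and human users get the aliases for free.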
This sounds backwards, and it presumes that LLMs, being statistics machines, are getting it right when they "average out" to the wrong command. No: fix the agent's behavior, don't change the CLI to accommodate it.
I find it so much more successful to have an agent interact with a CLI than an API or MCP. I can just ask: query my dev DB for an ideal URL to test a new page. It'll find the right users, resources, etc and create an excellent test URL to quickly validate the behavior of my changes. I can have it get the latest spec from Confluence, or find the latest PR build for a workitem.
If you have an API, you should really look at providing a CLI for it too.
Plugging my tools/examples:
- https://github.com/pseudosavant/confluence-fetch
Why not just do the "mycli skill-path" idea from the article, and skip the rest? Basically:
1. Add regular, for-humans-or-programs flags and modes to your CLI as single-purpose, composable features (otherwise known as "how we've always added lots of features to a CLI without legislating a particular use-case"). Doing this in a messy way makes a messy CLI, same as it ever was. Don't do it in a messy way.
2. When requested, have the CLI itself, or its manual/website, puke out a skill file which directs agents in how to compose those things for likely LLM uses of the CLI. Talking hardcoded, static text here. Nothing crazy.
In other words, a "manpage for LLMs" or "manpage-as-skill" option. That's a lot more flexible and easier to change and update than an entire made-for-LLMs behavior layer. So you'd have "man mytool" and "skill mytool" available as separate documents, emphasizing separate capabilities of the same underlying CLI. "skill mytool" would be for use by LLMs or for piping "skill mytool > SKILL.md" or whatever.
This is a little bit analogous to Git's notion of "porcelain" and "plumbing" (not that Git's a particularly sterling example of composable, friendly UX). The composable or special-case-only APIs still exist for direct use, are dogfooded internally for the human-user-intended paths, and a pre-baked document exists directing LLMs/users in how to use those lower-level details effectively.
Sure, LLMs can read your manpage/helpdoc, or website, or source code, and figure things out, but that's slow and costs tokens and command-approval loops. This is a marginal efficiency proposal at best, but hopefully one that discourages people from writing bimodal, tortured CLIs just for the sake of LLM-friendliness.
Is that nuts?
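The "skill mytool" idea above could be as simple as a subcommand that prints a hardcoded document. A sketch (all names and subcommands invented):

```python
import argparse

# Static, pre-baked text: a "manpage for LLMs". Cheap to emit,
# easy to pipe into a file ("mytool skill > SKILL.md").
SKILL_MD = """\
# mytool skill
- List items as JSON: `mytool list --json`
- Fetch one item: `mytool get <id>`
- Compose with standard tools: `mytool list --json | jq '.[].id'`
"""

parser = argparse.ArgumentParser(prog="mytool")
sub = parser.add_subparsers(dest="command")
sub.add_parser("list", help="list items")
sub.add_parser("skill", help="print an agent-facing usage document")

args = parser.parse_args(["skill"])
if args.command == "skill":
    print(SKILL_MD)
```

No behavior layer, no second mode: the skill output is just documentation emphasizing how to compose the flags the CLI already has.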
I was really surprised today. We're at adaptive [1], an access management platform for psql, mysql, VMs, k8s, etc. When you use `adaptive connect <db-name>`, it creates a just-in-time tunnel and connects the user to the database. You cannot do traditional psql operations through it; that design is by choice.
Today I was trying to invoke it via Claude, and, god damn, it found a way to connect. It created a pseudo shell in Python, passed the queries through, and treated our CLI like a tool. This would not have been humanly possible. Partly because a human would think about the risks and about good practice vs. bad practice, and would be scared to write and execute code like that; it just did it and achieved the goal.
expect(1) is 36 years old.
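The pseudo-shell trick described above is essentially what expect(1) automates: spawn the interactive tool and drive its stdin/stdout as pipes. A stdlib sketch against a toy line-oriented "CLI" (a stand-in; the real adaptive tool is not shown here):

```python
import subprocess
import sys

# Toy stand-in for an interactive CLI session: reads queries from
# stdin, prints one result line per query.
TOY_CLI = r"""
import sys
for line in sys.stdin:
    print("result:", line.strip().upper())
    sys.stdout.flush()
"""

proc = subprocess.Popen(
    [sys.executable, "-u", "-c", TOY_CLI],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

# Drive the "interactive" session programmatically, expect(1)-style.
proc.stdin.write("select 1\n")
proc.stdin.flush()
reply = proc.stdout.readline().strip()
print(reply)  # result: SELECT 1

proc.stdin.close()
proc.wait()
```

Any tool that talks over stdin/stdout can be wrapped this way, which is exactly why "you can't script this" assumptions tend not to survive contact with an agent.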
https://github.com/mvanhorn/cli-printing-press
He made a whole bunch of agent-friendly CLIs: https://printingpress.dev/
https://github.com/mvanhorn/printing-press-library/tree/main...
We spent a lot of time on when to run something as a shell command vs send it to an LLM. The hard lesson: false positives are much worse than false negatives. "git push --force" accidentally going to an LLM instead of executing is the kind of thing that kills user trust instantly. Our heuristic ended up very conservative.
The bigger surprise was the real-time visual indicator. We added a small color signal showing "this goes to shell" vs "this goes to agent" as you type, and it changed how people wrote more than anything else. Before it, people hedged natural language queries with shell-like syntax just in case. After it, they wrote normally.
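A deliberately conservative sketch of that routing (the heuristics here are invented for illustration): anything that even might be a shell command stays a shell command, and only clearly conversational input goes to the agent.

```python
import shutil

def route(line: str) -> str:
    """Return 'shell' or 'agent'. Biased toward 'shell' so a command
    like `git push --force` can never leak to the LLM by accident."""
    tokens = line.split()
    first = tokens[0] if tokens else ""
    if shutil.which(first):
        # First token is an executable on PATH: treat as a command.
        return "shell"
    if any(ch in line for ch in "|><;&$"):
        # Shell-ish syntax anywhere: still treat as a command.
        return "shell"
    return "agent"

print(route("ls -la"))                        # shell
print(route("summarize my recent commits"))   # agent
```

The asymmetry is the point: a false "shell" just produces a command-not-found error, while a false "agent" silently swallows a command the user meant to run.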
On the isatty point — right for automation. But there's a third mode worth thinking about: "orchestrated interactive," where a human is watching the agent use your CLI and needs to step in. Pure non-interactive breaks that entirely.
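A sketch of the three modes: TTY detection for plain humans, non-TTY for automation, plus an explicit override (`--interactive` is an invented flag name) so an orchestrator can keep prompts alive even when stdin/stdout are pipes.

```python
import argparse
import sys

parser = argparse.ArgumentParser(prog="mytool")
# Explicit override for the "human watching the agent" case, where
# isatty() is false but a person may still need to step in.
parser.add_argument("--interactive", action="store_true",
                    help="force prompts even when not attached to a TTY")
args = parser.parse_args([])

if args.interactive:
    mode = "orchestrated-interactive"  # agent drives, human can intervene
elif sys.stdin.isatty() and sys.stdout.isatty():
    mode = "interactive"               # plain human at a terminal
else:
    mode = "non-interactive"           # scripts / agents, never prompt
print(mode)
```

Pure isatty()-based detection collapses the first and third modes together, which is exactly the breakage described above.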
A lack of structured output has never been a blocker for agents; that's a traditional coding problem.
“Write good help text and error messages” is just good design, which is self-evident.