Posted by simonw 3 days ago
You can absolutely have a skill that tells the coding agent how to use Python with your preferred virtual environment mechanism.
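For example, a minimal SKILL.md along these lines would do it (the name, wording and commands here are just illustrative):

    ---
    name: python-with-uv
    description: How to run Python in this repository. Use whenever running, testing or installing Python code.
    ---

    Always run Python through uv:

    - Use `uv run python ...` instead of calling `python` or `python3` directly.
    - Run the tests with `uv run pytest`.
    - Add dependencies with `uv add <package>`, not with pip.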
I ended up solving that in a slightly different way - I have a Claude hook that spots attempts to run "python" or "python3" and returns an error saying "use uv run instead".
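Roughly this shape - a simplified sketch rather than my exact hook, registered as a PreToolUse hook against the Bash tool in .claude/settings.json:

    #!/usr/bin/env python3
    # Sketch of a Claude Code PreToolUse hook: reject bare python/python3
    # invocations and tell the agent to use "uv run" instead.
    import json
    import re
    import sys

    payload = json.load(sys.stdin)  # Claude Code sends the hook input as JSON on stdin
    command = payload.get("tool_input", {}).get("command", "")

    # Block "python"/"python3" unless it is already being run via uv
    if re.search(r"\bpython3?\b", command) and "uv run" not in command:
        print("Don't call python or python3 directly - use 'uv run python ...' instead.",
              file=sys.stderr)
        sys.exit(2)  # exit code 2 blocks the tool call and feeds stderr back to Claude

    sys.exit(0)  # anything else goes through untouched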
I hope such things will be standardized across vendors. Now that they've founded the Agentic AI Foundation (AAIF) and also contributed AGENTS.md, I would hope that skills become a logical extension of that.
https://www.linuxfoundation.org/press/linux-foundation-annou...
I also have an issue that's been open for months, which someone wrote a PR for (thanks!) a few weeks ago.
Are you still committed to that project?
Honestly the main problem has been that LLM's unique selling point back in 2024 was that it was the only tool taking CLI access to LLMs seriously. In 2025 Claude Code and Codex CLI etc all came along and suddenly there's not much unique about having a CLI tool for LLMs any more!
There's also a major redesign needed to the database storage and model abstraction layer in order to handle reasoning traces and more complex tool call patterns. I opened an issue about that here - it's something I'm stewing on but will take quite some work to get right: https://github.com/simonw/llm/issues/1314
I've been spending more of my time focusing on other projects which make use of LLM, in particular Datasette plugins that use LLM's async Python API: https://llm.datasette.io/en/stable/python-api.html#async-mod...
I expect those to drive some core improvements pretty soon.
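The async side looks like this - a simplified sketch (see the linked docs for the full details; the model ID is just an example):

    import asyncio
    import llm  # https://llm.datasette.io/

    async def main():
        # get_async_model() returns the async variant of a model
        model = llm.get_async_model("gpt-4o-mini")
        # prompts are awaitable, and so is reading the response text
        response = await model.prompt("Describe agent skills in one sentence")
        print(await response.text())

    asyncio.run(main())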
Has anyone tested how well this works with code generation in Codex CLI specifically? The latency on skill registration could matter in a typical dev workflow.
I wrote about this too, but I'm certain that eventually commands, MCPs etc. will fade out once skills are understood and picked up by everyone.
Here's the Google Maps article: https://laurenleek.substack.com/p/how-google-maps-quietly-al... - note that the Hacker News title left that word out: https://news.ycombinator.com/item?id=46203343
It's possible I was subconsciously influenced by that article (I saw it linked from a few places yesterday I think), but in this case I really did want to emphasize that OpenAI have started doing this without making any announcements about it at all, which I think is noteworthy in its own right.
(I'm also quite enjoying that this may be the second time I've leaked the existence of skills from a major provider - I wrote about Anthropic's skills implementation a week before they formally announced it: https://simonwillison.net/2025/Oct/10/claude-skills/)
That being said, I’m quite sure that it’s being used more frequently recently. For example, I read a shortish 2000-word article yesterday that uses the word “quietly” four times. And ChatGPT 5.1 used it in most of its responses. Also, I’d expect that the frequency illusion wears off quite quickly, whereas I’ve noticed “quietly” for some time and the feeling doesn’t seem to be wearing off. Maybe you’ll start to notice it now too!