Posted by simonw 12/12/2025
You can absolutely have a skill that tells the coding agent how to use Python with your preferred virtual environment mechanism.
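A minimal sketch of what such a skill could look like, assuming Anthropic's published SKILL.md convention (YAML frontmatter with a name and description, followed by instructions); the folder name, wording, and the choice of uv here are illustrative, not an official recipe:

```markdown
---
name: python-environments
description: How to run Python in this repository. Use whenever executing Python code or installing dependencies.
---

# Running Python

- Never invoke `python` or `python3` directly.
- Run scripts with `uv run script.py` so the project's
  virtual environment and lockfile are respected.
- Add dependencies with `uv add <package>`, not `pip install`.
```

The description matters most: it's what the agent reads when deciding whether to pull the skill into context.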
I ended up solving that in a slightly different way: I have a Claude hook that spots attempts to run "python" or "python3" and returns an error saying "use uv run instead".
I hope such things will be standardized across vendors. Now that the Agentic AI Foundation (AAIF) has been founded and AGENTS.md contributed to it, I would hope that skills become a logical extension of that.
https://www.linuxfoundation.org/press/linux-foundation-annou...
https://github.blog/changelog/2025-12-18-github-copilot-now-...
Back then they gave it folders with instructions and executable files, IIRC.
Here's the prompt within Codex CLI that does that: https://github.com/openai/codex/blob/ad7b9d63c326d5c92049abd...
I extracted that into a Gist to make it easier to read: https://gist.github.com/simonw/25f2c3a9e350274bc2b76a79bc8ae...
I know they didn't dynamically scan for new skill folders, but they did mention the existing folders (slides, docs, …) in the system prompt.
I also have an issue that has been open for months, which someone wrote a PR for (thanks!) a few weeks ago.
Are you still committed to that project?
Honestly, the main problem has been that LLM's unique selling point back in 2024 was that it was the only tool taking CLI access to LLMs seriously. In 2025, Claude Code, Codex CLI, etc. all came along, and suddenly there's not much unique about having a CLI tool for LLMs any more!
There's also a major redesign needed to the database storage and model abstraction layer in order to handle reasoning traces and more complex tool call patterns. I opened an issue about that here - it's something I'm stewing on but will take quite some work to get right: https://github.com/simonw/llm/issues/1314
I've been spending more of my time focusing on other projects that make use of LLM, in particular Datasette plugins that use its async Python API: https://llm.datasette.io/en/stable/python-api.html#async-mod...
I expect those to drive some core improvements pretty soon.
Has anyone tested how well this works with code generation in Codex CLI specifically? The latency on skill registration could matter in a typical dev workflow.