Posted by simonw 3 days ago
I have been running Claude Code with simple prompts (e.g. [1]) to orchestrate opencode when I do large refactors. I have also tried generating orchestration scripts instead: generate a high-level list of tasks, then have a script go task by task, create a small task-level prompt (using a good model), and pass the task to an agent (running a cheaper model). Keeping context small and focused has many benefits, and you can use cheaper models for simple, small, well-scoped tasks.
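The task-by-task loop described above can be sketched roughly like this. This is a hypothetical illustration, not the commenter's actual script: the function names are invented, and the exact non-interactive invocation depends on the agent CLI (check its `--help`):

```python
# Hypothetical sketch of the orchestration script described above.
# build_task_prompt / run_task are invented names, not a real API.
import subprocess


def build_task_prompt(task: str, context: str) -> str:
    """Wrap one high-level task in a small, focused prompt for a cheaper agent."""
    return (
        f"Context:\n{context}\n\n"
        f"Task:\n{task}\n\n"
        "Work only on this task. Keep changes minimal and well-scoped."
    )


def run_task(task: str, context: str) -> None:
    """Hand one task to a coding agent in non-interactive mode.

    The exact subcommand and flags vary by agent; consult its --help output.
    """
    prompt = build_task_prompt(task, context)
    subprocess.run(["opencode", "run", prompt], check=True)


if __name__ == "__main__":
    # A "good" model would generate this task list; hard-coded for illustration.
    tasks = [
        "Rename the User model to Account across the codebase",
        "Update all imports and fix the test suite",
    ]
    for task in tasks:
        run_task(task, context="Refactor tracked in GitHub issue #9")
```

The point of the split is that the expensive model only writes the plan and the per-task prompts; the cheap model never sees more context than one task needs.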
This brings me to skills. In my product, nocodo, I am building a heavier agent that keeps track of a project, past prompts, and the skills needed, and uses the right agents for the job. Agents are basically a mix of a system prompt and tools, all selected on the fly. The user does not even have to generate or maintain skills docs: I can have them generated and maintained by high-quality models from the existing code in the project or the tasks at hand.
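A minimal sketch of "agent = system prompt + tools, selected on the fly". Everything here is hypothetical (it is not nocodo's actual design); a real system would use a model rather than keyword matching to pick the agent:

```python
# Invented illustration: an agent is just a system prompt plus a tool list,
# and the orchestrator picks one per task.
from dataclasses import dataclass, field


@dataclass
class Agent:
    system_prompt: str
    tools: list[str] = field(default_factory=list)


# A small registry keyed by the kind of work at hand.
AGENTS = {
    "refactor": Agent("You are a careful refactoring agent.", ["read_file", "edit_file"]),
    "review": Agent("You are a strict code reviewer.", ["read_file"]),
}


def select_agent(task: str) -> Agent:
    """Pick an agent from keywords in the task; a real system would ask a model."""
    for kind, agent in AGENTS.items():
        if kind in task.lower():
            return agent
    return AGENTS["refactor"]  # default
```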
[1] Example prompt I recently used: Please read GitHub issue #9. We have phases clearly marked. Analyze the work and codebase. Use opencode, which is a coding agent installed. Check `opencode --help` about how to run a prompt in non-interactive mode. Pass each phase to opencode, one phase at a time. Add extra context you think is needed to get the work done. Wait for opencode to finish, then review the work for the phase. Do not work on the files directly; use opencode.
My product, nocodo: https://github.com/brainless/nocodo
Incredibly dumb question, but when they say this, what actually happens?
Is it using TeX? Is it producing output using the PDF file spec? Is there some print driver it's wired into?
Some frameworks/languages move really fast unfortunately.
I’ve been playing with doing this, but it doesn’t feel like the most natural fit.
They vary between British and American English. In this case, either would be acceptable depending on your dialect.
Also very noticeable with sports teams.
American: “Team Spain is going to the final.”
British: “Team Spain are going to the final.”
https://editorsmanual.com/articles/collective-nouns-singular...
Blame it on a messy divorce a few hundred years ago :)
The traffic jam are expanding.
The forest are growing.
That's just like the OpenAI case.
Back then they gave it folders with instructions and executable files, IIRC.
Here's the prompt within Codex CLI that does that: https://github.com/openai/codex/blob/ad7b9d63c326d5c92049abd...
I extracted that into a Gist to make it easier to read: https://gist.github.com/simonw/25f2c3a9e350274bc2b76a79bc8ae...
I know they didn’t dynamically scan for new skill folders, but they did mention the existing folders (slides, docs, …) in the system prompt.
But yeah, Codex will totally hold your hand and teach you Ghidra if you have a few hours to spare and the barest grasp of assembly.
- Augmenting the CLI with specific knowledge and processes: I love the ability to work on my files, but I can only call on a smart generalist to do the work. With skills, if I want, say, a design review, I can write down the process, what I'm looking for, and the design principles I want to highlight, rather than getting the average of every blog post about UX. I created custom Gems/Projects before (with PDFs of all my notes), but I couldn't replicate that on CLIs.
- A great way to build a library of prompts and build on it: in my org everyone is experimenting with AI, but it's hard to document and share good processes and tools. With this, the copywriters can work on a "tone of voice" skill, the UX writers can extend it with an "interface microcopy" skill, and I can add both to my "design review" agent.
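As a rough illustration, a shared "tone of voice" skill might live in a file like `tone-of-voice/SKILL.md` (the frontmatter-plus-instructions shape follows the Anthropic skills repo; the content here is invented):

```markdown
---
name: tone-of-voice
description: Apply our brand voice when writing or reviewing user-facing copy.
---

## Process
1. Read the draft copy.
2. Check it against the voice principles below.
3. Suggest rewrites, quoting the original line each time.

## Voice principles
- Plain language, short sentences.
- Confident but never hype-y.
```

Because it is just a file in the repo, the UX writers can extend it with their own sections and other agents can reference it by name.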
The Claude frontend-design skill seems pretty good too for getting better HTML+CSS: https://github.com/anthropics/skills/blob/main/skills/fronte...
Say I have a CMS (I use a thin layer over the Vercel AI SDK) and I want to let users interact with it via chat: tag a blog post, add an entry, etc. Should those be organized into discrete skill units like that? And how do we go about adding progressive discovery?
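One common pattern for progressive discovery (hedged sketch, all names invented, framework-agnostic rather than Vercel-AI-SDK-specific) is to show the model only skill names and one-line descriptions up front, and load a skill's full instructions only once it is selected:

```python
# Hypothetical progressive-disclosure sketch for CMS skills.
# Only name + description go into the initial context; the full body is
# loaded on demand, keeping the system prompt small.

SKILLS = {
    "tag-post": {
        "description": "Add or remove tags on a blog post.",
        "body": "Steps: look up the post by slug, validate the tag, save the post.",
    },
    "add-entry": {
        "description": "Create a new CMS entry from a title and body.",
        "body": "Steps: validate required fields, create the entry, save as draft.",
    },
}


def skill_index() -> str:
    """The short listing that always goes into the system prompt."""
    return "\n".join(f"- {name}: {s['description']}" for name, s in SKILLS.items())


def load_skill(name: str) -> str:
    """Fetched only after the model picks a skill from the index."""
    return SKILLS[name]["body"]
```

Whether each CMS action deserves its own skill, or several actions share one "content management" skill, mostly comes down to how much instruction each action needs.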