Posted by sshh12 2 days ago

How I use every Claude Code feature (blog.sshh.io)
517 points | 183 comments | page 2
mkagenius 2 days ago|
> The Takeaway: Skills are the right abstraction. They formalize the “scripting”-based agent model, which is more robust and flexible than the rigid, API-like model that MCP represents.

Just to avoid confusion: MCP is like an API, but the underlying API can execute a Skill. So it's not MCP vs. Skills as a contest. It's the broad concept of a "flexible" skill vs. a "parameter"-based API. And parameter-based APIs can also be flexible depending on how we write them, except that they lack the SKILL.md that, in the case of Skills, guides the LLM to be more generic than a pure API.
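To make that concrete, here is a minimal sketch (assuming the official Python MCP SDK's FastMCP helper; the tool name and skill script path are invented) of an MCP tool whose implementation just shells out to a Skill:

    from mcp.server.fastmcp import FastMCP
    import subprocess

    mcp = FastMCP("skill-bridge")

    @mcp.tool()
    def generate_report(topic: str) -> str:
        """Generate a report by delegating to a local skill script."""
        # The rigid part is the (topic: str) -> str signature;
        # the flexible part is whatever the skill does underneath.
        result = subprocess.run(
            ["python", "skills/report/scripts/generate.py", topic],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        mcp.run()

So the two aren't mutually exclusive: the MCP layer fixes the interface, and the Skill keeps the implementation flexible.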

By the way, if you are a Mac user, you can execute Skills locally via OpenSkills[1], which I created using Apple containers.

1. OpenSkills -https://github.com/BandarLabs/open-skills

thoughtsyntax 2 days ago||
Crazy how fast Claude Code is evolving; every week there's something new to learn, and it just keeps getting better.
prodigycorp 2 days ago|
Nothing crazy about it, judging by how much CPU and memory it uses. Now, if it managed to grow features without bringing my M4 Mac with 64GB of RAM to a crawl... that'd be magic.
ed_mercer 2 days ago|||
My M1 MacBook Pro works fine with 10+ Claude Code sessions open at the same time (iTerm2). Are you using a terminal with a memory leak, perhaps?
prodigycorp 1 day ago|||
I'm using iTerm2. Memory leak and CPU problems are very well documented in the GitHub issues.
mewpmewp2 2 days ago||||
I have a home server (cost around $150) with 16 GB RAM that also runs Claude Code fine.
swah 2 days ago|||
How are you managing 10 parallel agents??
reachableceo 2 days ago|||
I use Windows Terminal and rename the tabs.

For my current project I have a top-level chat, then one chat in each of the four component subdirectories.

I have a second terminal with QA-feature.

So 10 tabs total. Plus I have one to run occasional commands real quick (like docker ps).

I'm using Qwen.

yyhhsj0521 2 days ago||
That's a lot of cognitive load to manage, especially with how fast CC has become. Do you review the output at all?
blueside 1 day ago||
The users review the output
ed_mercer 1 day ago|||
Sorry, I'm not actively working on 10 at a time, but they are in memory and kept open for when I continue working on them. I'm only actively using 2 or 3 at the same time.
sunaookami 1 day ago||||
Huh, Claude Code barely uses any system resources. Are you sure it's Claude Code and not some Electron app that hasn't been updated for Tahoe?
caymanjim 1 day ago||||
Claude doesn't do much of anything on the local machine. I run it on a MacBook Air and a piddly 2vCPU 4GB VPS. Works fine.
cannonpalms 1 day ago|||
CC uses very little system resources.
netcraft 2 days ago||
I use Claude Code every day and haven't had a chance to dig super deep into skills, but even though I've read a lot of people describe them and say they're the best thing so far, I still don't get them. They're things the agent chooses to call, right? They have different permissions? Is it a tool call with different permissions and more context? I have yet to see a single post give an actual real-world, concrete example of how they're supposed to be used, or a compare-and-contrast with other approaches.
michaelbuckbee 2 days ago||
The prerequisite thought here is that you're using CC to invoke CLI tools.

So now you need to get CC to understand _how_ to do that for various tools in a way that's context efficient, because otherwise you're relying either on potentially outdated knowledge that Claude has built in (leading to errors b/c CC doesn't know about recent versions) or on chucking the entirety of a man page into your default context (inefficient).

What the Skill files do is then separate the when from the how.

Consider the git CLI.

The skill file has a couple of sentences on when to use the git cli and then a much longer section on how it's supposed to be used, and the "how" section isn't loaded until you actually need it.

I've got skills for stuff like invoking the native screenshot CLI tool on the Mac, for calling a custom shell script that uses the GitHub API to download and pull in screenshots from issues (b/c the CLI doesn't know how to do this), for accessing separate APIs for data, etc.
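As a rough illustration (a hypothetical sketch: the name/description frontmatter fields follow Anthropic's published skill format, but the body here is invented), a git skill might look like:

    ---
    name: git-workflow
    description: Use when committing, branching, or inspecting history with the git CLI.
    ---

    # Git workflow

    - Run `git status` and `git diff` before staging anything.
    - Commit with `git commit -m "<type>: <summary>"`.
    - Never force-push shared branches; prefer `git push --force-with-lease`.

Only the frontmatter sits in the default context; the body below it is loaded when the skill is actually invoked.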

WA 2 days ago|||
After CC has used that skill and it is now in the context, how do you get rid of it later, when you don't need the skill anymore and don't want your context stuffed with useless skill descriptions?
michaelbuckbee 2 days ago||
You'd need to do the "/clear" or other context manipulations.
majormajor 1 day ago|||
What I find works best for complex things is having one session generate the plan and then dispatching new sessions for each step to prevent context-rot. Not "parallel agents" but "sequential agents."
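A rough sketch of that pattern (assuming Claude Code's headless print mode, `claude -p`; the prompts and plan file are invented):

    import subprocess
    from pathlib import Path

    def fresh_session(prompt: str) -> str:
        # Each invocation is a brand-new session: no leftover context.
        out = subprocess.run(["claude", "-p", prompt],
                             capture_output=True, text=True, check=True)
        return out.stdout

    # Session 1: generate the plan.
    plan = fresh_session("Plan the refactor of the auth module "
                         "as a numbered list, one step per line.")
    Path("PLAN.md").write_text(plan)

    # Sessions 2..n: one clean-context session per step.
    for step in (line for line in plan.splitlines() if line.strip()):
        fresh_session(f"Read PLAN.md, then implement exactly this step: {step}")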
sshh12 2 days ago|||
Maybe these might be handy:

- https://github.com/anthropics/skills

- https://www.anthropic.com/engineering/equipping-agents-for-t...

I think of it literally as a collection of .md files and scripts that help perform some set of actions. I'm excited about it not really as a "new thing" (as mentioned in the post) but as effectively an endorsement of this pattern of agent-data interaction.

netcraft 2 days ago||
Apparently I missed Simon Willison's article; this at least somewhat explains them: https://simonwillison.net/2025/Oct/16/claude-skills/

So if you're building your own agent, this would be a directory of markdown documents with headers that you tell the agent to scan so that it's aware of them, and then if it thinks they could be useful it can choose to read the full instructions into its context? Is it any more than that?

I guess I don't understand how this isn't just RAG with an index you make the agent aware of?

brabel 2 days ago|||
It also looks a lot like a tool whose description mentions a more detailed MD file the LLM can read for instructions on complex workflows, doesn't it? MCP has the concept of resources for this sort of thing. Otherwise, I don't see any difference between calling a tool and calling a CLI.
nostrebored 2 days ago|||
I mean, it is technically RAG, as the LLM is deciding to retrieve a document. But it's very constrained.

The skills that I use all direct a next action and how to do it. Most of them instruct the model to use Tasks to isolate context. Some of them provide abstraction-specific context (when working with framework code, find all consumers before making changes; add integration tests for the desired state if they're missing, then run the tests to see…), and others just inject only the correct company-specific approach to solving only this problem into the Task context.

They are composable, and you can build the logic table of when an instance is "skilled" enough. I found them worse than hooks with subagents when I started, but now I see them as the coolest thing in Claude Code.

The last benefit is that nobody on your team even has to know they exist. You can just make them part of onboarding, and everyone can take advantage of what you've learned, even when working on greenfield projects that don't have a CLAUDE.md.

johnfn 1 day ago||
Does anyone else struggle with getting Claude to follow even the simplest commands in CLAUDE.md? I've completely given up on maintaining it because there's a coin-flip chance it disregards even the simplest instructions. For instance (after becoming increasingly exasperated and whittling it down over and over), my CLAUDE file now has a single instruction, which says that any AI-generated one-off script should be named <foo>.aigen.ts, and I can't get Claude to follow something as simple as that! It does sometimes, but half the time it disregards the instructions.

I use Claude all the time, and this is probably my biggest issue with it. I just workaround by manually supplying context in prompts, but it’s kind of annoying to do so.

Does anyone else struggle with this or am I just doing something horribly wrong?

brulard 1 day ago||
I have quite a long CLAUDE.md file, and Claude Code follows it almost all the time. When it was ignoring some instructions, I told it to update CLAUDE.md to make sure it does. It emphasized them with uppercase and IMPORTANT! ALWAYS DO..., and that makes it work for me like 95% of the time. And I do this for multiple projects with different CLAUDE.mds, with similar results. I don't know why your experience differs so much.
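For what it's worth, the emphasized entries end up looking something like this (an invented snippet, reusing the naming rule from the parent comment):

    # CLAUDE.md

    IMPORTANT: ALWAYS name AI-generated one-off scripts <name>.aigen.ts.
    IMPORTANT: ALWAYS run the test suite before declaring a task done.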
quinnjh 1 day ago|||
Definitely experience this. After losing over 2 hours fiddling between a few sessions and still having no actual control, I've settled on a handful of Python and Perl/regex scripts to make sure things follow my own conventions.
mhrmsn 1 day ago|||
Same experience, although I gave up a while ago, so I'm not sure whether CLAUDE.md is better followed in newer versions.
handfuloflight 1 day ago||
It's a known issue among us Claudemasters.
MangoToupe 2 days ago||
Blog posts like this would really benefit from specific examples. While I can get some mileage out of these tools for greenfield projects, I'm actually shocked that this has proven useful with projects of any substantial size or complexity. I'm very curious to understand the context where such tools are paying off.
rglover 2 days ago||
It seems to be relative to skill level. If you're less experienced, you're letting these things write most if not all of your code. If you're more experienced, that's inverted (you write most of the code and let the AI safely pepper things in).
Shocka1 1 hour ago|||
Had a newer employee rewriting some functions with LLMs, but not really admitting to it. I don't really care about the LLM aspect and think it can be quite useful for learning, but I would like to see newer people learn the system before unleashing LLMs entirely on it, if that makes any sense. The same developer had a pretty hard time getting an if statement right that simply checked the length of a string. Lots of mentoring ahead...

But I dunno. I kind of wonder how I would have acted with tech like this available when I first started, years ago. As a young engineer, and even now, I live and breathe the code, obsessing over testing and all the things that make software engineering so fun. But I can't say whether or not I would have overdepended on an LLM as a young engineer.

It's all kind of depressing IMO, but it is what it is at this point.

riskable 1 day ago|||
Don't rule out laziness! I'm a very experienced senior dev (full stack, embedded, Rust, Python, web everything, etc.)... Could I have spent a ton of time learning the ins and outs of Yjs (and the very special way in which you can integrate it with Tiptap/ProseMirror) in order to implement a concise, collaborative editor? Sure.

Or I could just tell Claude Code to do it and then spend some time cleaning it up afterwards. I had that thing working quite robustly in days! D A Y S!

(Then I had the bright idea of implementing a "track changes" mode which I'm still working on like a week and a half later, haha)

Even if you were already familiar with all that stuff, it's a lot of code to write to make it work! The stylesheets alone... Ugh! So glad I could tell the AI something like, "make sure it implements light and dark mode using VueUse's `useDark()` feature."

Almost all of my "cleanup" work was just telling it about CSS classes it missed when adding dark mode variants. In fact, most of my prompts are asking it to add features (why not?) or cleaning up the code (e.g. divide things into smaller, more concise files—all the LLMs really love to make big .vue files).

"Writing most of the code"? No. Telling it how to write the code with a robust architecture, using knowledge developed over two decades of coding experience: Yes.

I have to reject some things because they'd introduce security vulnerabilities, but for the most part I'm satisfied with what Claude Code spits out. GPT-5, on the other hand... Everything needs careful inspection.

sshh12 2 days ago|||
Makes sense. I work for a growth-stage startup, and most of these apply to our internal monorepo, so it's hard to share specifics. We use this for both new and legacy code, each with its own unique AI coding challenges.

If there's enough interest, I might replicate some examples in an open source project.

risyachka 2 days ago||
What's interesting to see is not the project setup but the resulting generated code in a mid-sized project.

To see if it is easy to digest, has no repeated code, etc., or if it's just slop that should be consumed by another agent and never by a human.

riskable 1 day ago||
I find the "slop" thing interesting because—to me—it looks like laziness. In the same way that anyone can tell ChatGPT to write something for them instead of writing it themselves and just having it check the work... Or going through multiple revisions before you're satisfied (with what it wrote).

Code is no different! You can tell an AI model to write something for you and that's fine! Except you have to review it! If the code is bad quality just take a moment to tell the AI to fix it!

Like, how hard is it to tell the AI that the code it just wrote is too terse and hard to read? Come on, folks! Take that extra moment! I mean, I'm pretty lazy when working on my hobby projects but even I'm going to get irritated if the code is a gigantic mess.

Just tell it, "this code is a gigantic mess. Refactor it into concise, human-readable files using a logical structure and make sure to add plenty of explanatory comments where anything might be non-obvious. Make sure that the code passes all the tests when you're done."

chickensong 1 day ago||
It's always laziness. The people that do the bare minimum will likely continue to do so, regardless of AI.

I think we'll be dealing with slop issues for quite some time, but I also have hopes that AI will raise the bar of code in general.

petesergeant 2 days ago||
> Generally my goal is to “shoot and forget”—to delegate, set the context, and let it work. Judging the tool by the final PR and not how it gets there.

This feels like a false economy to me for real-sized changes, but maybe I'm just a weak code reviewer. For code I really don't care about, I'm happy to do this, but if I ever need to understand that code I have an uphill battle. OTOH, reading intermediate diffs and treating the process like actual pair programming has worked well for me, leaving me with changes I'm happy with and codebases I understand well enough to debug.

jaggederest 2 days ago||
I treat everything I find in code review as something to integrate into the prompts. Eventually, on a given project, you end up getting correct PRs without manual intervention. That's what they mean. You still have to review your code of course!
krackers 1 day ago|||
No, I think it is normal. If it were easy to gain a mental model of the code simply by reading, then debugging would be trivial. The whole point of debugging is that there are differences between your mental model of the code and what the code is actually doing, that sometimes can't be uncovered unless you step through it line by line even if you're the one who wrote it.

That is why I am a bit puzzled by the people who use an LLM to generate code in anything other than a "tightly scoped" fashion (boilerplate, throwaway code, standalone scripts, single files, or at the function level). I'm not sure how that makes your job later on any easier if you have an even worse mental model of the code because you didn't even write it. And debugging is almost always more tedious than writing code, so you've traded off the fun/easy part for a more difficult one. Seems like a Faustian deal.

sshh12 2 days ago||
I've found planning to be key here for scaling to arbitrarily complex changes.

It's much easier to review larger changes when you've aligned on a Claude-generated plan up front.

juanre 2 days ago||
Skills are also a convenient way of writing self-documenting packages. They solve the problem of teaching the LLM how to use a library.

I have started experimenting with a skills/ directory in my open source software, and then made a plugin marketplace that just pulls them in. It works well, but I don't know how scalable it will be.

https://github.com/juanre/ai-tools
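For a sense of the shape (this layout is invented for illustration, not copied from the repo), a package shipping its own skills might look like:

    my-lib/
      src/
      skills/
        query-api/
          SKILL.md       # when/how to call the library's client
          examples.py    # runnable snippets the agent can crib from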

maddmann 2 days ago||
I really enjoyed reading this. One thought I had on the issue of paths in Claude.md:

My concern with hardcoding paths inside a doc is that they will likely become outdated as the codebase evolves.

One solution would be to script it and have it run pre-commit to regenerate the Claude.md with the new paths.

There's probably potential for even more dev tooling that 1. ensures reference paths are always correct, and 2. enforces a standard for how references are documented in Claude.md (and lints things like length).

Perhaps using some kind of inline documentation standard like JSDoc if it's a TS file, or a naming convention if it's an MD file.

Example:

    // @claude.md
    // For complex … usage or if you encounter a FooBarError, see ${path} for advanced troubleshooting steps
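A pre-commit version of that linter can be tiny. A sketch, assuming paths appear in Claude.md as backtick-quoted tokens:

    import re
    import sys
    from pathlib import Path

    # Collect backtick-quoted tokens that look like repo paths
    # and fail the commit if any of them no longer exist on disk.
    text = Path("Claude.md").read_text()
    candidates = set(re.findall(r"`([\w.\-/]+/[\w.\-]+)`", text))
    missing = sorted(p for p in candidates if not Path(p).exists())
    if missing:
        print("Stale paths referenced in Claude.md:")
        print("\n".join(f"  {p}" for p in missing))
        sys.exit(1)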

sshh12 1 day ago|
We have a linter that checks for this to help mitigate it.
maddmann 1 day ago||
You lint the file paths inside Claude.md?
layercakex 1 day ago||
Are "agents" really as useful as the hype makes them out to be?

Most of the time I'm just pasting code blocks directly into Raycast, and once I've fixed the bug or got the transformed code into the shape I aimed for, I paste it back into Neovim. Next I'm going to try out "opencode"[0], because I've heard some good things about it. For now, I'm happy with my current workflow.

[0] https://github.com/NickvanDyke/opencode.nvim

HDThoreaun 4 hours ago||
Yes, agents are really as useful as the hype makes them out to be. Certainly they're more useful on bigger code bases, but even on small ones the agent provides more value than you may expect. I have a small side project, probably around 10k LOC, and the AI agents still plow through tokens. At the end of the day, letting the AI read your entire code base really does let it make more intelligent design decisions. There are many other benefits as well; iteration is much faster without copy-paste.
dboon 1 day ago||
To be clear, that repo is simply a thin Neovim plugin for the truly excellent agent TUI opencode.

I recommend using opencode directly instead of via the plugin.

dcre 1 day ago|
"It shifts us from a “per-seat” license to “usage-based” pricing, which is a much better model for how we work. It accounts for the massive variance in developer usage (We’ve seen 1:100x differences between engineers)."

This is how we are doing it too, for the same reasons. For now, it's much easier to administer than trying to figure out who spends enough to get a real Claude Code seat. The other nice thing about using API keys is that you basically never hit rate limits.
