
Posted by sshh12 11/2/2025

How I use every Claude Code feature(blog.sshh.io)
534 points | 188 comments
idk1 11/7/2025|
If the author is reading this, I feel like it would be great if you could potentially open source your skills and Claude MD in a redacted way so these are actually viewable. The problem with a lot of these tutorials, if I can call it a tutorial, is that we never get to see the actual skills or the Claude MD themselves, just a description of them.
maddmann 11/2/2025||
I really enjoyed reading this. One thought I had on the issue of paths in Claude.md:

My concern with hardcoding paths inside a doc is that they will likely become outdated as the codebase evolves.

One solution would be to script it and have it run pre-commit to regenerate the Claude.md with the new paths.

There is probably potential for even more dev tooling that 1. Ensures reference paths are always correct, 2. Enforces standards for how references are documented in Claude.md (and lints things like length)

Perhaps using some kind of inline documentation standard like jsdoc if it's a ts file, or a naming convention if it's an .md file

Example:

// @claude.md // For complex … usage or if you encounter a FooBarError, see ${path} for advanced troubleshooting steps

sshh12 11/2/2025|
We have a linter that checks for this to help mitigate
maddmann 11/2/2025||
You lint the file paths inside Claude.md?
sshh12 11/5/2025||
All markdown files, yeah
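Such a markdown path linter could be sketched roughly as follows. This is a minimal, hypothetical Python version, not the author's actual tool; the backtick-quoted-path convention and the file-extension list are assumptions you would adapt to your own docs.

```python
# Sketch of a pre-commit-style linter that verifies that file paths
# referenced in markdown docs (e.g. CLAUDE.md) still exist on disk.
# The reference convention assumed here: paths appear in backticks,
# like `src/foo/bar.ts`. Adjust PATH_RE to match your own docs.
import re
from pathlib import Path

# Matches backtick-quoted relative paths with common source extensions.
PATH_RE = re.compile(r"`([\w./-]+\.(?:ts|js|py|md))`")

def lint_markdown(md_file: Path) -> list[str]:
    """Return the referenced paths in md_file that no longer exist."""
    missing = []
    for match in PATH_RE.finditer(md_file.read_text()):
        ref = Path(match.group(1))
        if not ref.exists():
            missing.append(str(ref))
    return missing

def main(argv: list[str]) -> int:
    """Lint each given markdown file; nonzero exit if anything is missing."""
    failed = False
    for name in argv:
        for path in lint_markdown(Path(name)):
            print(f"{name}: references missing path {path}")
            failed = True
    return 1 if failed else 0
```

Wiring `main(sys.argv[1:])` into a pre-commit hook over all `*.md` files would catch stale references at commit time rather than when the agent follows a dead path.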
ed_mercer 11/2/2025||
I don't understand how people use the `git worktree` workflow. I get that you want to isolate your work, but how do you deal with dev servers, port conflicts and npm installs? When I tried it, it was way more hassle than it was worth.
maddmann 11/2/2025||
Yeah, it's also a mystery to me how folks maintain context in more than two sessions. The code review would be brutal.

You’ll also end up dealing with merge conflicts if you haven’t carefully split the work or modularized the code.

larusso 11/2/2025|||
I generally like to use it. But there's one project in the org where it simply can't work, because the internal build system expects a normal .git directory at the root. That means I'd have to rewrite some of the build code that isn't aware of this git feature. And yes, we use a library to read from git rather than the git CLI, and not a more recent compatible one that understands that the current worktree is not the main one.
danmaz74 11/2/2025|||
I gave that a try, then I decided to use devcontainers instead, and I find that better, for the reasons you mentioned.
fabbbbb 11/2/2025|||
Agree, depending on the repo and the changes it's hard with local dev servers. It sometimes works well if you don't need local Docker containers and want to outsource the git workflow to CC as well. Then it can do whatever it wants on that branch, while the main work happens in another worktree with more steering and/or a Docker env.
ryandetzel 11/2/2025||
I have a bash script that creates the worktree, copies the env over, and changes the ports of the containers and services. I can then proxy the "real" port to any worktree; it's common for me to have 3 worktrees active to switch back and forth between.
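The bootstrap script described above could be sketched along these lines (a hypothetical Python version, not the commenter's actual script; the `.env` layout, `PORT`-style variable names, and per-worktree offset of 100 are all assumptions):

```python
# Sketch: create a git worktree for a branch, copy local .env config
# over (worktrees don't inherit untracked files), and bump port-like
# variables so parallel dev servers don't collide.
import re
import shutil
import subprocess
from pathlib import Path

def create_worktree(repo: Path, branch: str, index: int) -> Path:
    """Create ../<repo>-<branch> as a new worktree with unique ports."""
    tree = repo.parent / f"{repo.name}-{branch}"
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add", "-b", branch, str(tree)],
        check=True,
    )
    # .env is usually untracked, so copy it across by hand.
    env_src = repo / ".env"
    if env_src.exists():
        shutil.copy(env_src, tree / ".env")
    env_file = tree / ".env"
    if env_file.exists():
        # Offset every PORT-like variable by 100 per worktree index,
        # e.g. PORT=3000 -> PORT=3100 for index 1.
        bumped = re.sub(
            r"(?m)^(\w*PORT\w*)=(\d+)$",
            lambda m: f"{m.group(1)}={int(m.group(2)) + 100 * index}",
            env_file.read_text(),
        )
        env_file.write_text(bumped)
    return tree
```

Proxying the "real" port to whichever worktree is active would then be a separate reverse-proxy concern outside this sketch.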
mortsnort 11/4/2025||
I've been having good results with Roo Code/Cline + Claude Code as the provider. I feel like using CC CLI effectively requires manually recreating Roo/Cline
layercakex 11/2/2025||
Are "agents" really as useful as the hype makes them out to be?

Most of the time I'm just pasting code blocks directly into raycast, and once I've fixed the bug or got the transformed code into the shape I aimed for, I paste it back into neovim. Next I'm going to try out "opencode"[0], because I've heard some good things about it. For now, I'm happy with my current workflow.

[0] https://github.com/NickvanDyke/opencode.nvim

HDThoreaun 11/4/2025||
Yes, agents are really as useful as the hype makes them out to be. They're certainly more useful on bigger codebases, but even on small ones the agent provides more value than you may expect. I have a small side project, probably around 10k LOC, and the AI agents still plow through tokens. At the end of the day, letting the AI read your entire codebase really does let it make more intelligent design decisions. There are many other benefits as well; iteration is much faster without copy-paste.
dboon 11/2/2025||
To be clear, that repo is simply a thin neovim plugin for the truly excellent agent TUI opencode.

I recommend using it directly instead of via the plugin

ramoz 11/2/2025||
Hooks are underutilized and will be critical for long-running agents and better agent performance. Excited to release Cupcake in the coming weeks; it's what I started building when I made the feature request for hooks.

- https://github.com/eqtylab/cupcake

- https://github.com/anthropics/claude-code/issues/712
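For readers who haven't used the feature: Claude Code hooks are configured in `.claude/settings.json`, roughly as sketched below. The guard script path here is made up; per the docs, a `PreToolUse` command hook receives the pending tool call as JSON on stdin and can block it by exiting with code 2.

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "python3 .claude/hooks/guard_bash.py"
          }
        ]
      }
    ]
  }
}
```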

dcre 11/2/2025||
"It shifts us from a “per-seat” license to “usage-based” pricing, which is a much better model for how we work. It accounts for the massive variance in developer usage (We’ve seen 1:100x differences between engineers)."

This is how we are doing it too, for the same reasons. For now, much easier to administer than trying to figure out who spends enough to get a real Claude Code seat. The other nice thing about using API keys is that you basically never hit rate limits.

cheesedoodle 11/2/2025||
What type of projects are you guys building? I bought Max to try these features out on a more complex project (ROS2), and that does not seem to work out at all… HTML page, yes; embedded project, not so much.
riskable 11/2/2025|
With all the LLM coding assistants, you have to get a feel for each model and which extension/interface you're using with them. Not only that, but it's also dependent on your project!

For example, if you're writing a command line tool in Python, it doesn't really matter what model you use since they're all really great at Python (LOL). However, if you're writing a complicated SPA that uses say, Vue 3 with Vite (and some fancy CSS framework) and Python w/FastAPI... You want the "smartest" model that knows about all these things at once (and regularly gets updated knowledge of the latest versions of things). For me, that means Claude Code.

I am cheap though and only pay Anthropic $20/month. This means I run out of Claude Credits every damned week (haha). To work around this problem, I used to use OpenAI's pay-per-use API with gpt5-mini with VS Code's Copilot extension, switching to GPT5-codex (medium) with the Codex extension for more complicated tasks.

Now that I've got more experience, I've figured out that GPT5-codex costs way too much (in API credits) for what you get in nearly all situations. Seriously: why TF does it use that much "usage"? Anyway...

I've tried them all with my very, very complicated collaborative editor (CRDTs), specifically to learn how to better use AI coding assistants. So here's what I do now:

    * Ollama cloud for gpt-oss:120b (it's so fast!)
    * Claude Code for everything else. 
I cannot overstate how impressed I am with gpt-oss:120b... It's like 10x faster than gpt5-mini and yet seems to perform just as well. Maybe better, actually, because it forces you to narrow your prompts (due to the smaller context window). But because it's just so damned fast, that doesn't matter.

With Claude Code, it's like magic: You give it a really f'ing complicated thing to troubleshoot or implement and it just goes—and keeps going until it finishes or you run out of tokens! It's a, "the future is now!" experience for sure.

With gpt-oss:120b it's more like having an actual conversation, where the only time you stop typing is when you're reviewing what it did (which you have to do for all the models... Some more than others).

FYI: The worst is Gemini 2.5. I wouldn't even bother! It's such trash, I can't even fathom how Google is trying to pass it off as anything more than a toy. When it decides to actually run (as opposed to responding with, "Failed... Try again"), it'll either hallucinate things that have absolutely nothing to do with your prompt or behave like some petulant middle school kid that pretends to spend a lot of time thinking about something but ultimately does nothing at all.

distances 11/2/2025||
You do know that you can run Codex with the $20 ChatGPT subscription right? So the token waste doesn't matter that much. It's still slow though.
riskable 11/2/2025||
Tried that. I hit the limit way too fast. Faster than Claude Code, even!

GPT5-codex (medium) is such a token hog for some reason

synergy20 11/3/2025||
Just read it, and the author says "do not use NEVER" in claude.md, yet in his sample claude.md the word NEVER appears. Am I missing something?
comradesmith 11/3/2025|
They said don't __just__ use "never", and explained that as meaning you should give an alternative with every "never" clause, which is what their example does :)
zkmon 11/2/2025|
Just my curiosity: Why are you producing so much code? Is it because it is now possible to do so with AI, or because you have a genuine need (solid business usecase) that requires a lot of code?
TechSquidTV 11/2/2025||
I just started developing self-hosted services largely with AI.

It wasn't possible before for me to do any of this at this kind of scale. Before, getting stuck on a bug could mean hours, days, or maybe even weeks of debugging. I never made the kind of progress I wanted before.

Many of the things I want, do already exist, but are often older, not as efficient or flexible as they could be, or just plain _look_ dated.

But now I can pump out react/shadcn frontends easily, generate apis, and get going relatively quickly. It's still not pure magic. I'm still hitting issues and such, but they are not these demotivating, project-ending, roadblocks anymore.

I can now move at a speed that matches the ideas I have.

I am giving up something to achieve that, by allowing AI to take control so much, but it's a trade that seems worth it.

sshh12 11/2/2025|||
Often code in SaaS companies like ours is indeed how we solve customer problems. It's not so much the amount of code but the rate (code per time) we can effectively use to solve problems/build solutions. AI, when tuned correctly, lets us do this faster than ever possible before.
risyachka 11/2/2025||
>> Why are you producing so much code?

This is basically a "thinking tax".

If you don't want to think and offload it to an LLM, it burns through a lot of tokens to implement, in an inefficient way, something you could often do in 10 lines if you thought about it for a few minutes.

brabel 11/2/2025|||
I've just implemented a proof of concept that involved an API, an MCP server, an Authorization Server, a React frontend, token validation and proof of possession on the client, a CIBA flow for authentication… It took a week, and I don't even know the technologies used very well; it was all TypeScript, but I work on JVM languages normally. This was a one-off for a customer, and I was able to show a fairly complex workflow end to end and what each part involves. I let the LLM write most of it, but I understand every line and did have to make manual adjustments (though to be honest, I could easily explain to the LLM what I needed changed and, given my experience, it would eventually get there).

If you tell me I didn't really need an LLM to be able to do all that in a week, and that just some thought and 10 lines of code would do, I suspect you are not really familiar with the latest developments in AI and vastly underestimate their capabilities for tricky stuff.

risyachka 11/2/2025||
>> I don’t even know the technologies used very well

That's why it took a week with an LLM. And for you it makes sense, as this is new tech.

But for someone who knows those technologies, it would still take a week with an LLM and only like 2 days without.

Jnr 11/2/2025|||
In a large project with decent code structure there can be quite a bit of boilerplate, convention, and testing required. Also, we are not talking about a 10-line change; more like a 10k-line feature.

Before LLMs we simply wouldn't implement many of those features, since they were not exactly critical and required a lot of time. But now that the required development time is cut significantly, they suddenly make sense to implement.
