Posted by sshh12 2 days ago

How I use every Claude Code feature (blog.sshh.io)
520 points | 185 comments
ramoz 2 days ago|
Hooks are underutilized and will be critical for long-running agents and better agent performance. Excited to release Cupcake in the coming weeks; it's what I started building when I made the feature request for hooks.

- https://github.com/eqtylab/cupcake

- https://github.com/anthropics/claude-code/issues/712

ed_mercer 2 days ago||
I don't understand how people use the `git worktree` workflow. I get that you want to isolate your work, but how do you deal with dev servers, port conflicts and npm installs? When I tried it, it was way more hassle than it was worth.
maddmann 2 days ago||
Yeah it is a mystery to me how folks could also maintain context in more than two sessions. The code review would be brutal.

You’ll also end up dealing with merge conflicts if you haven’t carefully split the work or modularized the code.

larusso 2 days ago|||
I generally like to use it. But there's one project in the org where it simply can't work, because the internal build system expects a normal .git directory at the root. That means I have to rewrite some of the build code that isn't aware of this git feature. And yes, we use a library to read from git, but not the git CLI or a more recent compatible one that understands that the current worktree is not the main one.
fabbbbb 2 days ago|||
Agree, depending on the repo and the changes it's hard with local dev servers. It sometimes works well if you don't need local Docker containers and want to outsource the git workflow to CC as well. Then it can do whatever it wants on that branch, while the main work happens in another worktree with more steering and/or a Docker env.
ryandetzel 2 days ago|||
I have a bash script that creates the worktree, copies the env over, and changes the ports of the containers and services. I can then proxy the "real" port to any worktree; it's common for me to have 3 worktrees active and switch back and forth.
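A minimal sketch of that kind of setup helper (Python here instead of bash; the env file name, port scheme, and branch handling are assumptions):

    #!/usr/bin/env python3
    """Sketch of a worktree setup helper: create the worktree, copy the
    untracked env file, and offset ports so dev servers don't collide."""
    import shutil, subprocess, sys
    from pathlib import Path

    def setup(branch: str, index: int) -> None:
        repo = Path.cwd()
        wt = repo.parent / f"{repo.name}-{branch}"

        # Create a new worktree on its own branch
        subprocess.run(["git", "worktree", "add", "-b", branch, str(wt)], check=True)

        # Copy the env file git doesn't track (file name is an assumption)
        shutil.copy(repo / ".env", wt / ".env")

        # Give each worktree its own port range so several can run at once
        port = 3000 + index * 10
        env = (wt / ".env").read_text().replace("PORT=3000", f"PORT={port}")
        (wt / ".env").write_text(env)
        print(f"worktree at {wt}, dev server on port {port}")

    if __name__ == "__main__":
        setup(sys.argv[1], int(sys.argv[2]))

Giving each worktree a fixed port offset is what makes proxying the "real" port to whichever one is active straightforward.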
danmaz74 2 days ago||
I gave that a try, then I decided to use devcontainers instead, and I find that better, for the reasons you mentioned.
sublinear 2 days ago||
I feel like these posts are interesting, but become irrelevant quickly. Does anyone actually follow these as guides, or just consume them as feedback for how we wish we could interface with LLMs and the workarounds we currently use?

Right now these are reading like a guide to Prolog in the 1980s.

campbel 2 days ago||
Given that this space is so rapidly evolving, these kinds of posts are helpful just to make sure you aren't missing anything big. I've caught myself doing something the hard way after reading one of these. In this case, the framing of skills as basically man pages for CLIs was a helpful description, and it gives me some ideas about how to improve interaction with an in-house CLI my co. uses.
sshh12 2 days ago||
Yeah I like to think not everyone can spend their day exploring/tinkering with all these features so it's handy to just snapshot what exists and what works/doesn't.
epiccoleman 2 days ago|||
I wouldn't say I follow them as guides, but I think the field is changing quickly enough that it's good, or at least interesting, to read what's working well for other people.
adastra22 2 days ago|||
This one is already out of date. The bit at the top about allocating space in CLAUDE.md for each tool is largely a waste of tokens these days. Use the skills feature.
sshh12 2 days ago||
It's a balance and we use both.

Skills don't totally deprecate documenting things in CLAUDE.md, but I agree that a lot of these can be defined as skills instead.

Skill frontmatter also still sits in the global context so it's not really a token optimization either.

adastra22 2 days ago||
The skill lets you compress the amount loaded to just the briefest description, with the "where do I go to get more info" being implicit. You should use a SKILL.md for every added tool. At that point, putting instructions in CLAUDE.md becomes redundant and confusing to the LLM.
mewpmewp2 2 days ago||
I wouldn't use it as a guide necessarily, but I would use it as a way to sync my own findings and see if I have missed something important.
FooBarWidget 2 days ago||
Does anyone have any suggestions on making Claude prefer to use project internal abstractions and utility functions? My C++ project has a lot of them. If I just say something like "for I/O and networking code, check IOUtils.h for helpers" then it often doesn't do that. But mentioning all helper functions and classes in the context also seems like a bad idea. What's the best way? Are the new Skills a solution?
chickensong 1 day ago||
Skills seem like the way forward, but Claude still needs to be convinced to activate the skill. If that's not happening reliably, hooks should be able to help.

A sibling comment on hooks mentions some approaches. You could also try leveraging the UserPromptSubmit hook to do some prompt analysis and force relevant skill activation.
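For example, a UserPromptSubmit hook command could be a small script along these lines (the payload field name, the helper header, and the skill name are assumptions, so check the current hook docs):

    #!/usr/bin/env python3
    """Sketch of a UserPromptSubmit hook: scan the prompt and, if it looks
    I/O-related, inject a reminder to use the project's helpers/skill."""
    import json, sys

    KEYWORDS = ("socket", "network", "read file", "write file", "i/o")

    def main() -> None:
        # Hook input arrives as JSON on stdin; the "prompt" field name is assumed
        payload = json.load(sys.stdin)
        prompt = payload.get("prompt", "").lower()
        if any(k in prompt for k in KEYWORDS):
            # stdout from a UserPromptSubmit hook is added to the model's context
            print("Reminder: use the helpers in IOUtils.h for I/O and networking "
                  "(see the io-helpers skill) instead of writing raw calls.")

    if __name__ == "__main__":
        main()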

cannonpalms 2 days ago|||
I wonder how well a sentence or two in CLAUDE.md, saying to search the local project for examples of similar use cases or use of internal libraries, would work.
sshh12 2 days ago||
Hooks can also be useful for this. If it's using the wrong APIs, then you can hint on write or block on commit with some lint function that checks for this.
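A minimal sketch of such a lint function, runnable from a write hook or a pre-commit step (the banned patterns and the helper hints are made up for illustration):

    #!/usr/bin/env python3
    """Sketch of a lint check a hook can run: flag raw I/O calls in staged C++
    files that should go through IOUtils.h (patterns and hints are made up)."""
    import re, subprocess, sys
    from pathlib import Path

    BANNED = {
        r"\bstd::ifstream\b": "use the IOUtils.h file helpers instead",
        r"\bsocket\s*\(": "use the IOUtils.h networking wrappers instead",
    }

    def staged_cpp_files() -> list[str]:
        out = subprocess.run(["git", "diff", "--cached", "--name-only"],
                             capture_output=True, text=True, check=True).stdout
        return [f for f in out.splitlines() if f.endswith((".cc", ".cpp", ".h"))]

    def main() -> int:
        problems = []
        for name in staged_cpp_files():
            path = Path(name)
            if not path.exists():
                continue                      # file was deleted in this commit
            text = path.read_text(encoding="utf-8", errors="ignore")
            for pattern, hint in BANNED.items():
                if re.search(pattern, text):
                    problems.append(f"{name}: matches {pattern} ({hint})")
        print("\n".join(problems), file=sys.stderr)
        return 1 if problems else 0           # non-zero exit blocks the commit

    if __name__ == "__main__":
        sys.exit(main())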
synergy20 1 day ago||
Just read it, and the author says "do not use NEVER" in CLAUDE.md, but then in his sample CLAUDE.md the word NEVER appears. Am I missing something?
comradesmith 1 day ago|
They said don't __just__ use "never", and explained that to mean you should give an alternative with every "never" clause, which is what their example does :)
cheesedoodle 2 days ago||
What type of projects are you guys building? I bought Max and tried these features out to build a more complex project (ROS2), and that does not seem to work out at all… HTML page, yes; embedded project, not so much.
riskable 2 days ago|
With all the LLM coding assistants, you have to get a feel for each model and which extension/interface you're using with them. Not only that, but it's also dependent on your project!

For example, if you're writing a command line tool in Python, it doesn't really matter what model you use since they're all really great at Python (LOL). However, if you're writing a complicated SPA that uses, say, Vue 3 with Vite (and some fancy CSS framework) and Python w/FastAPI... You want the "smartest" model that knows about all these things at once (and regularly gets updated knowledge of the latest versions of things). For me, that means Claude Code.

I am cheap though and only pay Anthropic $20/month. This means I run out of Claude Credits every damned week (haha). To work around this problem, I used to use OpenAI's pay-per-use API with gpt5-mini with VS Code's Copilot extension, switching to GPT5-codex (medium) with the Codex extension for more complicated tasks.

Now that I've got more experience, I've figured out that GPT5-codex costs way too much (in API credits) for what you get in nearly all situations. Seriously: why TF does it use that much "usage"? Anyway...

I've tried them all with my very, very complicated collaborative editor (CRDTs), specifically to learn how to better use AI coding assistants. So here's what I do now:

    * Ollama cloud for gpt-oss:120b (it's so fast!)
    * Claude Code for everything else. 
I cannot overstate how impressed I am with gpt-oss:120b... It's like 10x faster than gpt5-mini and yet seems to perform just as well. Maybe better, actually, because it forces you to narrow your prompts (due to the smaller context window). But because it's just so damned fast, that doesn't matter.

With Claude Code, it's like magic: You give it a really f'ing complicated thing to troubleshoot or implement and it just goes—and keeps going until it finishes or you run out of tokens! It's a, "the future is now!" experience for sure.

With gpt-oss:120b it's more like having an actual conversation, where the only time you stop typing is when you're reviewing what it did (which you have to do for all the models... Some more than others).

FYI: The worst is Gemini 2.5. I wouldn't even bother! It's such trash, I can't even fathom how Google is trying to pass it off as anything more than a toy. When it decides to actually run (as opposed to responding with, "Failed... Try again"), it'll either hallucinate things that have absolutely nothing to do with your prompt or it'll behave like some petulant middle school kid who pretends to spend a lot of time thinking about something but ultimately does nothing at all.

distances 1 day ago||
You do know that you can run Codex with the $20 ChatGPT subscription, right? So the token waste doesn't matter that much. It's still slow though.
riskable 1 day ago||
Tried that. I hit the limit way too fast. Faster than Claude Code, even!

GPT5-codex (medium) is such a token hog for some reason

sylware 2 days ago||
Are there any of those CLI clients (coded in plain and simple C, or basic Python/Perl without a billion expensive dependencies) able to access those 'coding AI' prompts anonymously, even if rate limited?

If no anonymous access is provided, is there a way to create an account with a noscript/basic (x)html/classic web browser in order to get an API key secret?

Because I do not use web engines from the "whatwg" cartel.

To add insult to injury, my email is self-hosted with IP literals to avoid funding the DNS people, who are mostly now in strong partnership with the "whatwg" cartel (email with IP literals is "stronger" than SPF since it does the same and more). An email is often required for account registration.

zkmon 2 days ago||
Just my curiosity: Why are you producing so much code? Is it because it is now possible to do so with AI, or because you have a genuine need (solid business use case) that requires a lot of code?
TechSquidTV 2 days ago||
I just started developing self-hosted services largely with AI.

It wasn't possible before for me to do any of this at this kind of scale. Before, getting stuck on a bug could mean hours, days, or maybe even weeks of debugging. I never made the kind of progress I wanted before.

Many of the things I want, do already exist, but are often older, not as efficient or flexible as they could be, or just plain _look_ dated.

But now I can pump out React/shadcn frontends easily, generate APIs, and get going relatively quickly. It's still not pure magic. I'm still hitting issues and such, but they are not these demotivating, project-ending roadblocks anymore.

I can now move at a speed that matches the ideas I have.

I am giving up something to achieve that, by allowing AI to take control so much, but it's a trade that seems worth it.

sshh12 2 days ago|||
Often code in SaaS companies like ours is indeed how we solve customer problems. It's not so much the amount of code but the rate (code per time) we can effectively use to solve problems/build solutions. AI, when tuned correctly, lets us do this faster than ever possible before.
risyachka 2 days ago||
>> Why are you producing so much code?

This is basically a "thinking tax".

If you don't want to think and instead offload it to an LLM, it burns through a lot of tokens to implement, in an inefficient way, something you could often do in 10 lines if you thought about it for a few minutes.

brabel 2 days ago|||
I've just implemented a proof of concept that involved an API, an MCP server, an Authorization Server, a React frontend, token validation and proof of possession on the client, and a CIBA flow for authentication… it took a week, and I don't even know the technologies used very well; it was all TypeScript, but I work on JVM languages normally. This was a one-off for a customer, and I was able to show a fairly complex workflow end to end and what each part involves. I let the LLM write most of it, but I understand every line and did have to make manual adjustments (though to be honest, I could easily explain to the LLM what I needed changed and, given my experience, it would eventually get there).

If you tell me I didn't really need an LLM to be able to do all that in a week, and that just some thought and 10 lines of code would do, I suspect you are not really familiar with the latest developments in AI and vastly underestimate the capabilities they have to do tricky stuff.

risyachka 2 days ago||
>> I don’t even know the technologies used very well

That's why it took a week with an LLM. And for you it makes sense, as this is new tech.

But if someone knows those technologies, it would still take a week with an LLM and like 2 days without.

Jnr 2 days ago|||
In a large project with decent code structure there can be quite a bit of boilerplate, convention, and testing required. Also, we are not talking about a 10-line change; more like a 10k-line feature.

Before LLMs we simply wouldn't implement many of those features, since they were not exactly critical and required a lot of time. But now that the required development time is cut significantly, they suddenly make sense to implement.

hendry 2 days ago||
"All my stateless tools (like Jira, AWS, GitHub) have been migrated to simple CLIs." - How do you get Jira on the CLI?
rererereferred 2 days ago||
There's an Atlassian CLI with Jira support https://developer.atlassian.com/cloud/acli/reference/command...
greymalik 2 days ago||
Cloud only. My employer is still on an ancient data center version. But you can easily write a CLI that wraps the REST API.
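A minimal sketch of such a wrapper (env var names are illustrative; /rest/api/2/ is the endpoint prefix Server/Data Center versions expose):

    #!/usr/bin/env python3
    """Sketch of a tiny Jira CLI that wraps the REST API (env var names are made up)."""
    import os, sys, requests

    BASE = os.environ["JIRA_BASE_URL"]        # e.g. https://jira.example.internal
    AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"])

    def show(key: str) -> None:
        # Fetch a single issue and print its status and summary
        r = requests.get(f"{BASE}/rest/api/2/issue/{key}", auth=AUTH, timeout=30)
        r.raise_for_status()
        fields = r.json()["fields"]
        print(f"{key}  [{fields['status']['name']}]  {fields['summary']}")

    if __name__ == "__main__":
        show(sys.argv[1])                     # e.g. ./jira.py PROJ-123

Once it exists as a plain CLI like this, an agent can drive it the same way it drives git or gh.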
RMPR 2 days ago|||
Jiratui[0] has some support for basic automation. That's probably what OP is using, as it is the most popular Jira CLI tool out there.

0: https://github.com/whyisdifficult/jiratui

PhilippGille 2 days ago|||
First search result (on Kagi): https://github.com/ankitpokhrel/jira-cli

Latest version from 2 months ago, >4700 stars on GitHub

mewpmewp2 2 days ago||
At some point I vibe-coded everything into CLI commands; anything that has an API could be a CLI command.
vladsh 1 day ago|
Or use this solution and get fine-tuned context generated for each and every task, just-in-time: devly.ai

Please stop expecting every engineer on the team to be an AI engineer just to get started with coding agents.
