Posted by spencerldixon 8 hours ago

I found a useful Git one-liner buried in leaked CIA developer docs (spencer.wtf)
562 points | 199 comments
fphilipe 5 hours ago|
Here's my take on the one-liner that I use via a `git tidy` alias[1]. A few points:

* It ensures the default branch is not deleted (main, master)

* It does not touch the current branch

* It does not touch branches checked out in a different worktree[2]

* It also works with repos that don't use merge commits, by deleting the local branches that are gone on the remote

    git branch --merged "$(git config init.defaultBranch)" \
    | grep -Fv "$(git config init.defaultBranch)" \
    | grep -vF '*' \
    | grep -vF '+' \
    | xargs git branch -d \
    && git fetch \
    && git remote prune origin \
    && git branch -v \
    | grep -F '[gone]' \
    | grep -vF '*' \
    | grep -vF '+' \
    | awk '{print $1}' \
    | xargs git branch -D

[1]: https://github.com/fphilipe/dotfiles/blob/ba9187d7c895e44c35...

[2]: https://git-scm.com/docs/git-worktree

rubinlinux 3 hours ago||
The use of init.defaultBranch here is really problematic, because different repositories may use a different name for their default, and this is a global (your home directory scope) setting you have to pre-set.

I have an alias I use called git default which works like this:

  default = !git symbolic-ref refs/remotes/origin/HEAD | sed 's@^refs/remotes/origin/@@'
then it becomes

  ..."$(git default)"...
This figures out the actual default from the origin.
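One caveat: `origin/HEAD` isn't always set locally (e.g. on old clones or remotes added by hand), in which case the symbolic-ref call fails. If I remember correctly, this recreates it by asking the remote:

  git remote set-head origin --auto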
fphilipe 2 hours ago|||
I have a global setting for that. Whenever I work in a repo that deviates from that I override it locally. I have a few other aliases that rely on the default branch, such as “switch to the default branch”. So I usually notice it quite quickly when the value is off in a particular repo.
jeffrallen 2 hours ago|||
This is a great solution to a stupid problem.

I work at a company that was born and grew during the master->main transition. As a result, we have a 50/50 split of main and master.

No matter what you think about the reason for the transition, any reasonable person must admit that this was a stupid, user hostile, and needlessly complexifying change.

I am a trainer at my company. I literally teach git. And: I have no words.

Every time I decide to NOT explain to a new engineer why it's that way and say, "just learn that some are master, newer ones are main, there's no way to be sure" a little piece of me dies inside.

tonymet 2 hours ago||
This is a good one; you should contribute it to git-extras.
WickyNilliams 6 hours ago||
I have a cleanup command that integrates with fzf. It pre-selects every merged branch, so I can just hit return to delete them all. But it gives me the opportunity to deselect any branches I want to preserve. It also prunes any remote branches.

    # remove merged branches (local and remote)
    cleanup = "!git branch -vv | grep ': gone]' | awk '{print $1}' | fzf --multi --sync --bind start:select-all | xargs git branch -D; git remote prune origin;"
https://github.com/WickyNilliams/dotfiles/blob/c4154dd9b6980...

I've got a few aliases that integrate with fzf like an interactive cherry pick (choose branch, choose 1 or more commits), or a branch selector with a preview panel showing commits to the side. Super useful

The article also mentions that master has mostly changed to main, but some places use develop and other names as their primary branch. For that reason I always use a git config variable to reference such branches. In my global git config it's main. Then I override where necessary in a repo's local config. E.g. here's an update command that updates the primary branch and rebases the current branch on top:

    # switch to primary branch, pull, switch back, rebase
    update = !"git switch ${1:-$(git config user.primaryBranch)}; git pull; git switch -; git rebase -;"
https://github.com/WickyNilliams/dotfiles/blob/c4154dd9b6980...
lloeki 6 hours ago||
> For that reason I always use a git config variable to reference such branches. In my global git config it's main

    $(git config user.primaryBranch)
What about using git's own `init.defaultBranch`?

I mean, while useless in terms of `git init` because the repo's already init'd, this works:

    git config --local init.defaultBranch main
And if you have `init.defaultBranch` set up already globally for `git init` then it all just works
WickyNilliams 6 hours ago||
Hmm that might be nice actually. I like not conflating those two things, but as you say if the repo is already init'd then there's no chance it'll be used for the wrong purpose.

In any case the main thrust was just to avoid embedding assumptions about branch names in your scripts :)

MathiasPius 6 hours ago||
You can pull another branch without switching first:

  git switch my-test-branch
  ...
  git pull origin main:main
  git rebase main
hiccuphippo 5 hours ago|||
You can also rebase directly on the remote branch

    git fetch
    git rebase origin/main
WickyNilliams 6 hours ago||||
Nice. That'll make things a bit smoother. Changing branches often trips me up later when I `git switch -`.
mroche 5 hours ago||||
Likewise the other way around: just swap pull for push.
huntervang 5 hours ago|||
I have always done `git pull origin main -r`
lloeki 7 hours ago||
I've had essentially that - if a bit fancier to accept an optional argument as well as handle common "mainline" branch names - aliased as `git lint` for a while:

    [alias]
        lint = !git branch --merged ${1-} | grep -v -E -e '^[*]?[ ]*(main|master|[0-9]+[.]([0-9]+|x)-stable)$' -e '^[*][ ]+' | xargs -r -n 1 git branch --delete
so:

    git pull --prune && git lint
sits very high in my history stats
jakub_g 7 hours ago||
The main issue with `git branch --merged` is that if the repo enforces squash merges, it obviously won't work, because the SHA of the squash-merged commit on main != the SHA of the original branch HEAD.

What are the best tools to do the equivalent, but for detecting squash-merged branches?

Note: this problem is harder than it seems to do safely, because e.g. I can have a branch `foo` locally that was squash-merged on remote, but before it happened, I might have added a few more commits locally and forgot to push. So naively deleting `foo` locally may make me lose data.

otsaloma 3 hours ago||
I recently revised my script to rely on (1) no commits in the last 30 days and (2) branch not found on origin. This is obviously not perfect, but it's good enough for me. Just in case, my script prompts for confirmation before deleting each branch, although most of the time I just blindly hit yes.

To avoid losing any work, I have a habit of never keeping branches local-only for long. Additionally this relies on https://docs.github.com/en/repositories/configuring-branches...
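
A simplified sketch of the idea (not my exact script; the 30-day check and the origin check are the important bits):

    git fetch --prune
    for branch in $(git for-each-ref --format='%(refname:short)' refs/heads/); do
        # keep branches with commits in the last 30 days
        [ -n "$(git log -1 --since='30 days ago' --format=%h "$branch")" ] && continue
        # keep branches that still exist on origin
        git show-ref --verify --quiet "refs/remotes/origin/$branch" && continue
        read -p "Delete $branch? [y/N] " answer
        [ "$answer" = "y" ] && git branch -D "$branch"
    done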

masklinn 7 hours ago|||
Not just squash merges; rebase merges don't work either.

> What tools are the best to do the equivalent but for squash-merged branches detections?

Hooking on remote branch deletion is what most people do, under the assumption that you tend to clean out the branches of your PRs after a while. But of course if you don't do that it doesn't work.

laksdjf 5 hours ago|||
I have the same issue. Changes get pushed to gerrit and rebased on the server. This is what I have, though not perfected yet.

  prunable = "!f() { \
  : git log ; \
  target=\"$1\"; \
  [ -z \"$target\" ] && target=$(git for-each-ref --format=\"%(refname:short)\" --count=1 refs/remotes/m/); \
  if [ -z \"$target\" ]; then echo \"No remote branches found in refs/remotes/m/\"; return 1; fi; \
  echo \"# git branch --merged shows merged if same commit ID only\" ;\
  echo \"# if rebased, git cherry can show branch HEAD is merged\"  ;\
  echo \"# git log grep will check latest commit subject only.  if amended, this status won't be accurate\" ;\
  echo \"# Comparing against $target...\"; \
  echo \"# git branch --merged:\"; \
  git branch --merged $target  ;\
  echo \" ,- git cherry\" ; \
  echo \" |  ,- git log grep latest message\"; \
  for branch in $(git for-each-ref --format='%(refname:short)' refs/heads/); do \
   if git cherry \"$target\" \"$branch\" | tail -n 1 | grep -q \"^-\"; then \
    cr=""; \
   else \
    cr=""; \
   fi ; \
   c=$(git rev-parse --short $branch) ; \
   subject=$(git log -1 --format=%s \"$branch\" | sed 's/[][(){}.^$\*+?|\\/]/\\\\&/g') ; \
   if git log --grep=\"^$subject$\" --oneline \"$target\" | grep -q .; then \
    printf \"$cr  $c %-20s $subject\\n\"  $branch; \
   else \
    printf \"$cr  \\033[0;33m$c \\033[0;32m%-20s\\033[0m $subject\\n\"  $branch; \
   fi; \
  done; \
  }; f"
(some emojis missing in above. see gist) https://gist.github.com/lawm/8087252b4372759b2fe3b4052bf7e45...

It prints the results of 3 methods:

1. git branch --merged

2. git cherry

3. grep upstream git log for a commit with the same commit subject

It has some caveats: if upstream's commit was amended or the actual code change is different, it can give a false positive, and if there are multiple commits on your local branch, only the top commit is checked.

arccy 4 hours ago||
If you're using Gerrit then you have the Change-Id trailer you can match against?
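
Something along these lines ought to work (untested; $branch/$target as in the parent's script):

    # Change-Id trailer of the local branch's tip
    change_id=$(git log -1 --format=%B "$branch" | sed -n 's/^Change-Id: //p')
    # any commit on the target with the same Change-Id means it already landed
    git log --grep="Change-Id: $change_id" --oneline "$target"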
samhclark 6 hours ago|||
Depends on your workflow, I guess. I don't need to handle the case you noted, and we delete the branch on the remote after it's merged. So, it's good enough for me to delete my local branch if the upstream branch is gone. This is the alias I use for that, which I picked up from HN.

    # ~/.gitconfig
    [alias]
        gone = ! "git fetch -p && git for-each-ref --format '%(refname:short) %(upstream:track)' | awk '$2 == \"[gone]\" {print $1}' | xargs -r git branch -D"
Then you just `git gone` every once in a while, when you're between features.
WorldMaker 6 hours ago|||
This is my PowerShell variant for squash merge repos:

    function Rename-GitBranches {
        git branch --list "my-branch-prefix/*" | Out-GridView -Title "Branches to Zoo?" -OutputMode Multiple | % { git branch -m $_.Trim() "zoo/$($_.Trim())" }
    }
`Out-GridView` gives a very simple dialog box to (multi) select branch names I want to mark finished.

I'm a branch hoarder in a squash merge repo and just prepend a `zoo/` prefix. `zoo/` generally sorts to the bottom of branch lists and I can collapse it as a folder in many UIs. I have found this useful in several ways:

1) It makes `git rebase --interactive` much easier when working with stacked branches by taking advantage of `--update-refs`. Merges do all that work for you by finding their common base/ancestor; with squash merging you have to remember which commits were already merged and drop them from your branch. With `--update-refs`, if I find it trying to update a `zoo/` branch I know I can drop/delete every commit up to that update-ref line and also delete the update-ref (rough sketch of such a todo list below).

2) I sometimes do want to find code in intermediate commits that never made it into the squashed version. Maybe I tried an experiment in a commit in a branch, then deleted that experiment in switching directions in a later commit. Squashing removes all evidence of that deleted experiment, but I can still find it if I remember the `zoo/` branch name.

All this extra work for things that merge commits give you for free just makes me dislike squash-merge repos more.
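
To make (1) concrete, a `git rebase -i --update-refs main` todo in that situation looks roughly like this (hypothetical branch and commit names); I delete everything up to and including the zoo/ update-ref line:

    pick 1a2b3c4 spike that was already squash-merged
    pick 5d6e7f8 follow-up that was also merged
    # the two picks above and the next update-ref can all be deleted
    update-ref refs/heads/zoo/old-spike
    pick 9a8b7c6 new work on top
    update-ref refs/heads/current-feature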

de46le 5 hours ago||
Mine's this-ish (nushell, but easily bashified or pwshd) for finding all merged, including squashed:

    let t = "origin/dev"; git for-each-ref refs/heads/ --format="%(refname:short)" | lines | where {|b| $b !~ 'dev' and (git merge-tree --write-tree $t $b | lines | first) == (git rev-parse $"($t)^{tree}") }
Does a 3-way in-mem merge against (in my case) dev. If there's code in the branch that isn't in the target it won't show up.

Pipe right to deletion if brave, or to a choice-thingy if prudent :)
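
A rough bash equivalent, assuming git 2.38+ for `merge-tree --write-tree` (target branch name is illustrative):

    t="origin/dev"
    git for-each-ref refs/heads/ --format='%(refname:short)' | grep -v '^dev$' | while read -r b; do
        # the branch is already contained in the target if merging it changes nothing
        [ "$(git merge-tree --write-tree "$t" "$b" | head -n1)" = "$(git rev-parse "$t^{tree}")" ] && echo "$b"
    done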

gritzko 6 hours ago||
If something this natural requires several lines of bash, something is just not right. Maybe branches should be sorted by default, either chronologically or topologically? git's LoC budget is 20 LevelDBs, 30% of PostgreSQL, or 3 SQLites. It should be able to do these things out of the box, shouldn't it?

https://replicated.wiki/blog/partII.html

cloudfudge 2 hours ago|
"too many lines of bash" and "lines of code" seem like very strange metrics to use to form these types of opinions.
whazor 8 hours ago||
I currently have a TUI addiction. Each time I want something to be easier, I open claude-code and ask for a TUI. Now I have a git worktree manager where I can add/rebase/delete. As the TUI library I use Textual, which Claude handles quite well, especially as it can test-run quite a lot of the Python code.
eulers_secret 7 hours ago||
Tig is a nice and long-maintained git tui you might enjoy, then!

If nothing else maybe for inspiration

rw_panic0_0 7 hours ago|||
how do you trust the code claude wrote? don't you get anxiety "what if there's an error in tui code and it would mess up my git repo"?
freedomben 6 hours ago|||
I'm not GP, but I have backups, plus I always make sure I've committed and pushed all code I care about to the remote. I do this even when running a prompt in an agent. That goes for running most things actually, not just CC. If claude code runs a git push -f then that could really hurt, but from working with the agents I have enough confidence that they aren't going to do that, so it's worth it to me to take the risk in exchange for the convenience of using the agent.
embedding-shape 6 hours ago||||
> how do you trust the code claude wrote?

If that's something you're worried about, review the code before running it.

> don't you get anxiety "what if there's an error in tui code and it would mess up my git repo"?

I think you might want to not run untrusted programs in an environment like that, or alternatively find a way to start being able to trust the program. Either approach works, and which works best depends on what you're trying to do.

kaoD 5 hours ago||
> If that's something you're worried about, review the code before running it.

It takes more, not less, time to thoroughly review code you didn't write.

embedding-shape 5 hours ago||
Depends. If I was the one coming up with the implementation anyway, it's basically just the "coding" part that was replaced: "fingers hitting keyboard" becomes "agent writing to disk". So reviewing the code certainly is faster; you just have to "check" it, not understand it from scratch.

If we're talking about receiving random patches where first you have to understand the context, background and so on, then yeah, I agree it'll probably take longer than it took someone to hammer it out with their fingers. But again, I'm not sure that's how professionals use LLMs right now; vibe-coding is a small, hyped world that mostly non-programmers seem to engage in.

kaoD 5 hours ago||
> you just have to "check" it, not understand it from scratch.

How can you "check" that which you don't "understand"?

> I'm not sure that's how professionals use LLMs right now

I'm a professional and I can tell you how I use LLMs: I write code with their assistance, they don't write code for me.

The few times I let Claude or Copilot loose, the results were heartbreaking and I spent more time reviewing (and then discarding) the code than it took me to later write it from scratch.

embedding-shape 3 hours ago||
> How can you "check" that which you don't "understand"?

??? I do understand, since I literally just instructed it; how would I otherwise? I'm not letting the LLM do the design, it's all me still. So the "understanding" already exists before the LLM has even finished working.

> I'm a professional and I can tell you how I use LLMs: I write code with their assistance, they don't write code for me.

Hey, welcome to the club, me too :) I don't write code, I write English prose, yet nothing is vibe-coded, and I'll probably end up being able to spend more time than you thinking about the design and architecture and how it fits in, because the actual typing is no longer slowing me down. And again, every line is reviewed multiple times.

It's more about the person behind the tools than the tools themselves, I think, ultimately. Except for Copilot, probably; the times I've tried it I've just not been able to produce code that is even slightly up to my standards. It's a breeze with Codex though (5.2), and kind of hit and miss with Claude Code.

whazor 5 hours ago||||
I push my branches daily, so I wouldn't lose that much work. If it breaks then I ask it to fix it.

But I do quickly check the output to see what it does, and especially the commands it runs. Sometimes it throws all the code into a single file, so I ask for 'good architecture with abstractions'.

zenoprax 4 hours ago||
I see this regularly: "I use GitHub to backup my local repos."

If `gh repo ...` commands get run you can lose everything instantly. You can force push and be left with a single blank commit on both sides. The agent has full control of everything, not just your local data.

Just set up Rclone/restic and get your stuff into a system with some immutability.
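
A minimal restic sketch of what I mean (repo path and retention are just examples):

    restic -r /srv/backup/code init
    restic -r /srv/backup/code backup ~/code
    # keep a rolling window of snapshots
    restic -r /srv/backup/code forget --keep-daily 7 --keep-weekly 8 --prune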

tux1968 3 hours ago||
Force pushing doesn't actually remove anything from the remote repository; it only changes which commits the branch references point to. Plus, any forks on GitHub will be completely unaffected. It's not perfect, since GitHub doesn't seem to offer any history of such reference alterations (a la the reflog), but it's still a valuable offsite backup from a developer's perspective.
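
For example, if you (or any clone) can still find the old tip, restoring is just pointing the ref back (sketch; <old-sha> is whatever the branch used to point to):

    git push --force origin <old-sha>:refs/heads/main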
sclangdon 7 hours ago||||
Isn't this the case no matter who wrote the code? How do you ever run anything if you're worried about bugs?
phailhaus 7 hours ago|||
When I write the code myself, I'm not worried that I snuck a `git reset --hard` somewhere.
hennell 6 hours ago||||
Different type of creator, different type of bugs. I'd assume a human giving me a way to delete merged branches has probably had the same issue, solved the same problem, and understands unspecified context around the problem (e.g. protecting local data). They probably run it themselves, so bugs are most likely to occur in edge cases around non-standard use, since it works for them.

AIs are giving you what they get from common patterns, parsing documentation, etc. Depending what you're asking, this might be an entirely novel combination of commands never run before. And depending on the model/prompt it might solve it in a way any human would balk at (push main to origin, delete .git, re-clone from origin. Merged local branches are gone!)

It's like the AI art issues - people struggle with relative proportions and tones and making it look real. AI has no issues with tones, but will add extra fingers or arms etc. that humans rarely struggle with. You have to look for different things, and AI bugs are definitely more dangerous than (most) human bugs.

(Depends a little; it's pretty easy to tell if a human knows what they're talking about. There are for sure humans who could write super destructive code, but other elements usually make you suspicious and worried about the code before that.)

layer8 4 hours ago|||
It makes a difference whether an AI or a human wrote it. AIs make more random, inconsistent errors or omissions that a human wouldn't make. AIs also don't dogfood their code the way human developers of tools usually do, catching more errors or unfit/missing logic that way.
ithkuil 6 hours ago||||
I assume that whatever I type can be also flawed and take precautions like backups etc
fragmede 1 hour ago|||
It's a git repo. What sort of mess-ups are you worried about that you can't reflog your way out of (or ask claude code to fix)? It's certainly possible to lose uncommitted work, but once it's been committed, unless claude code goes and deletes .git entirely (which I've had codex do, so you'd better push it somewhere), you can't lose work.
firesteelrain 7 hours ago|||
Can you explain TUI? I have never heard this before
Bjartr 7 hours ago|||
Terminal User Interface, contrasting with a Graphical User Interface (GUI). Most often applied to programs that use the terminal as a pseudo-graphical canvas that they draw on with characters to provide an interactive page that can be navigated around with the keyboard.

Really, they're just a GUI drawn with Unicode instead of drawing primitives.

Like many restrictions, limiting oneself to just a fixed grid of colored Unicode characters for drawing lends itself to more creative solutions to problems. Some people prefer such UIs, some people don't.

Muvasa 7 hours ago|||
I prefer TUIs for two reasons: 1. I'm very used to vi keybindings. 2. I like low-resource software. I love the ability to open the software in less than a second, do what I need in a second using vi motions, and close it in less than a second.

Some people will say you save two seconds doing something simple, and you lose more time building the tool than you will ever spend using it in your life.

It's not about saving time. It's about eliminating the mental toll from having to context switch (I know it sounds AI; reading so much AI text has gotten to me).

kstrauser 5 hours ago|||
That’s an excellent way to explain it. I’m already in the shell doing stuff. Whenever I can stay there without sacrificing usability, it’s a big boost.
irl_zebra 7 hours ago|||
"It's not about saving time, it's about eliminating the mental toll from having to context switch"

This broke my brain! Woah!

criddell 7 hours ago|||
> an interactive page that can be navigated around with the keyboard

Or mouse / trackpad.

I really haven't seen anything better for making TUIs than Borland's Turbo Vision framework from 35ish years ago.

GCUMstlyHarmls 7 hours ago||||
Eg: lazygit https://github.com/jesseduffield/lazygit?tab=readme-ov-file#... https://github.com/sxyazi/yazi https://github.com/darrenburns/posting or I guess Vim would be a prominent example.

People's definitions will be on a gradient, but it's somewhere between a CLI (type into a terminal to use) and a GUI (use your mouse in a windowing system); a TUI runs in your terminal like a CLI but probably supports "graphical widgets" like buttons, bars, hotkeys, panes, etc.

giglamesh 7 hours ago||
So the acronym is for Terrible User Interface? ;)
worksonmine 6 hours ago|||
TUI is peak UI; anyone who disagrees just doesn't get it. Every program listens to the same keybindings, looks the same, and is composable to work with the others. You don't get that clicking buttons with the mouse. It's built to get the work done, not look pretty.
allarm 7 hours ago|||
No it's not.
layer8 4 hours ago||||
https://en.wikipedia.org/wiki/Text-based_user_interface
ses1984 7 hours ago||||
Terminal UI.
booleandilemma 7 hours ago||||
It's definitely an acronym that got popular in the last year or so, though I'm sure there are people out there who will pretend otherwise. I've been in the industry 15+ years now and never heard it before. Previously it was just UI, GUI, or CLI.
freedomben 6 hours ago|||
It's gotten more popular for sure, but it's definitely been around a long time. Even just on HN there have been conversations about gdb tui ever since I've been here (started browsing HN around 2011). For anyone who works in embedded systems it's a very common term and has been since I got into it in 2008-ish. I would guess it was much more of a linux/unix user thing until recently though, so people on Windows and Mac probably rarely if ever intersected with the term, so that's definitely a change. Just my observations.
0x1ch 5 hours ago||||
My friends and I have been active in the "CLI/TUI" world since middle school. Anyone tinkering on Linux who used tiling window managers is already very familiar with the domain.
snozolli 6 hours ago|||
As someone who came up using Borland's Turbo Pascal, Turbo C, and Turbo Vision (their OOP UI framework), it was called CUI (character-based user interface) to distinguish from GUI, which became relevant as Windows became dominant.

I never heard "TUI" until the last few years, but it may be due to my background being Microsoft-oriented.

One of the only references I can find is the PC Magazine encyclopedia: https://www.pcmag.com/encyclopedia/term/cui

KPGv2 7 hours ago|||
[flagged]
TarqDirtyToMe 7 hours ago|||
They aren’t the same thing. TUI refers to interactive ncurses-like interfaces. Vim has a TUI, ls does not

I’m fairly certain this terminology has been around since at least the early aughts.

cristoperb 7 hours ago||||
I don't know when the term became widespread for gui-style terminal programs, but the wikipedia entry has existed for more than 20 years so I think it is an older term than you imply.

https://en.wikipedia.org/w/index.php?title=Text-based_user_i...

philiplu 7 hours ago||||
Sorry, but this 65 yo grey-beard disagrees. A TUI to me, back in the 80s/90s, was something that ran in the terminal and was almost always ncurses-based. This was back when I was still using ADM-3A serial terminals, none of that new-fangled PC stuff.
bombcar 7 hours ago|||
Exactly. A CLI is a single line - like edlin. A TUI takes over all or most of the screen, like edit or vi or emacs.

Norton Commander (or Midnight Commander) is probably the quintessential example of a powerful TUI; it can do things that would be quite hard to replicate as easily in a CLI.

KPGv2 7 hours ago|||
We might've been caught on different parts of the wave. I checked Ngrams out of curiosity

https://books.google.com/ngrams/graph?content=TUI&year_start...

Basically it was never used, then it was heavily used, and then never used, and then in the early 00s it took off again.

That'd explain why you used it, why I never did, and why young kids now do.

marssaxman 6 hours ago|||
Thanks for looking that up! It makes sense, of course - the line starts to drop in 1984, with the release of the Macintosh, and hits a trough around the launch of Windows 95.

It's not a term I recall hearing at all when I started using computers in the mid-'80s - all that mattered back then was "shiny new GUI, or the clunky old thing?" I really thought it was a retroneologism when I first heard it, maybe twenty years ago.

rjmunro 5 hours ago|||
I don't think that search is very valid - the TUI Group travel company is likely mentioned much more than Terminal User Interface. They are pretty big around the world and have an airline, cruises, hotels etc.
john_strinlai 7 hours ago|||
[dead]
kqr 6 hours ago|||
In the case of Git, I can warmly recommend Magit as a TUI. Not only does it make frequent operations easier and rare operations doable -- it also teaches you Git!

I have a draft here about one aspect of Magit I enjoy: https://entropicthoughts.com/rebasing-in-magit

Trufa 7 hours ago|||
The number of little tools I'm creating for myself is incredible; 4.6 seems like it can properly one/two-shot them now without my attention.

Did you open source that one? I was thinking of this exact same thing but wanted to think a little about how to share deps, i.e. if I do a quick worktree to try a branch I don't wanna run an npm i that takes forever.

Also, if you share it with me, there's obviously no expectations, even if it's a half-baked vibecoded mess.

whazor 5 hours ago|||
This is how I solve the dependencies with Nix: https://gist.github.com/whazor/bca8b687e26081e77d818bc26033c...

Nix helps Claude a lot with dependencies; it can add stuff and execute the flake as well.

I will come back to you with the project itself.

unshavedyak 6 hours ago||||
I’ve been wanting similar but have instead been focused on GUIs. My #1 issue with TUIs is that I’ve never liked how code jumps rather than scrolling smoothly at a high FPS. Between that and terminals lacking variable font sizes: I’d vastly prefer TUIs, but I just struggle to get over those two issues.

I’ve been entirely terminal based for 20 years now and those issues have just worn me down. Yet I still love terminal for its simplicity. Rock and a hard place I guess.

SauntSolaire 7 hours ago||||
What's the point of open sourcing something you one shot with an LLM? At that point just open source the prompt you used to generate it.
freedomben 6 hours ago|||
Testing. If you share something you've tested and know works, that's way better than sharing a prompt which will generate untested code which then has to be tested. On top of that it seems wasteful to burn inference compute (and $) repeating the same thing when the previous output would be superior anyway.

That said, I do think it would be awesome if including prompts/history in the repos somehow became a thing. Not only would it help people learn and improve, but it would allow tweaking.

NetOpWibby 6 hours ago|||
To save time and energy?
elliotbnvl 7 hours ago|||
The deps question is huge; let me know if you solve it.
CalebJohn 6 hours ago||
If I'm understanding the problem correctly, this should be solved by pnpm [1]. It stores packages in a global cache and hardlinks them into the local node_modules. So running install in a new worktree should be instant.

[1]: https://pnpm.io/motivation
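
For example (hypothetical branch name), the install in the new worktree mostly just hardlinks out of pnpm's global store:

    git worktree add ../feature-x feature-x
    cd ../feature-x
    pnpm install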

hattmall 7 hours ago|||
What are some examples of useful TUIs you've made? I'm generally opposed to the concept.
lionkor 8 hours ago||
That sounds like a complete waste of time and tokens to me; what is the benefit? So each time you do something, you let Claude one-shot a TUI? This seems like a waste of compute and your time.
htnthrow11220 7 hours ago|||
They said each time they want something to be easier, not each time they do something. And they didn’t mention it has to be one-shot. You might have read too quickly and you’ve responded to something that didn’t actually exist.
MarsIronPI 7 hours ago||||
On the contrary. Once these tools exist they exist forever, independently of Claude or a Claude Code subscription. IMO this is the best way to use AI for personal use.
bmacho 7 hours ago||||
Now that I think about it, if Claude can put most useful functions in a TUI and make them discoverable (show them in a list), then this could be better than asking for one-liners (and forgetting them) every single time.

Maybe I'll try using small TUIs too.

duneisagoodbook 7 hours ago|||
yeah! they should focus on more productive pursuits, like telling people online what to do with their time and resources.
morissette 7 hours ago||
And these are things outside of our control.
parliament32 8 hours ago||
So effectively "I just discovered xargs"? Not to disparage OP but there isn't anything particularly novel here.
Someone1234 7 hours ago||
This feels like gatekeeping someone sharing something cool they've recently learned.

I personally lean more towards the "let's share cool little productivity tips and tricks with one another" instead of the "in order to share this you have to meet [entirely arbitrary line of novelty/cleverness/originality]."

But each to their own I suppose. I wonder how you learned about using xargs? Maybe a blog-post or article not dissimilar to this one?

parliament32 6 hours ago|||
I don't think there's anything wrong with sharing something cool, even if it's trivial to other people. The problem is framing a blog post with "ooh this was buried in the secret leaked CIA material".. and then the reader opens it to find out it's just xargs. It feels very clickbaity. Akin to "here's one simple trick to gain a treasure trove of information about all the secret processes running on your system!!" and it's just ps.
audience_mem 2 hours ago||
It felt almost like satire to me, especially with the name "ciaclean".
superxpro12 7 hours ago||||
No, I agree with you. This whole aura of "well IIIII knew this and YOUUUUU didn't" needs to die. I get that it's sometimes redundant and frustrating to encounter the same question a few times... but there are always new people learning in this world, and they deserve a chance to learn too.

Why do people constantly have to be looking for any way to justify their sense of superiority over others? Collaborative attitudes are so much better for all involved.

jimmydoe 7 hours ago|||
And they have to learn that from the CIA?

That says so much about the generation we're in: don't go to school, just learn math from the mafia.

vntok 7 hours ago||
Where else would you learn about triple-entry bookkeeping?
oldestofsports 1 hour ago|||
It's cool that it comes from the CIA, and someone who doesn't know about xargs may just learn something new. What's not to like?
ggrab 4 hours ago|||
Lots of negative sentiment on your comment, but I was going to write the same. Hopefully AI won't make us forget that good command-line tools are designed to be chained together if you want to achieve something that's perhaps too niche a use case to make it into a native command. It's worth learning about Swiss-army utilities like xargs that make this easy (and fun).
mrexcess 2 hours ago|||
Seriously, this seems like someone in awe of xargs. Maybe it's the Bell Labs in me, but this is boilerplate stuff.
skydhash 7 hours ago|||
People really do need to read the “Unix Power Tools” book and realize their problem has been solved for decades.
gosub100 7 hours ago||
"People just need to find the info they don't know about, so then they'll know it."
gavinray 5 hours ago|||
I don't find that to be the insinuation of the parent comment at all.

Saying "If you read X book, you'll realize it's a solved problem" IS the information -- the name of the book you need to read

SoftTalker 5 hours ago|||
People need to be curious. Then they seek out the info they don't know about.
cgfjtynzdrfht 7 hours ago||
[dead]
jo-m 7 hours ago||
I have something similar, but open fzf to select the branches to delete [1].

    function fcleanb -d "fzf git select branches to delete where the upstream has disappeared"
        set -l branches_to_delete (
            git for-each-ref --sort=committerdate --format='%(refname:lstrip=2) %(upstream:track)' refs/heads/ | \
            egrep '\[gone\]$' | grep -v "master" | \
            awk '{print $1}' | $_FZF_BINARY --multi --exit-0 \
        )

        for branch in $branches_to_delete
            git branch -D "$branch"
        end
    end
[1]: https://github.com/jo-m/dotfiles/blob/29d4cab4ba6a18dc44dcf9...
arusahni 7 hours ago||
I use this alias:

    prune-local = "!git fetch -p && for branch in $(git branch -vv | awk '/: gone]/{if ($1!=\"\*\") print $1}'); do git branch -d $branch; done"
1. Fetch the latest from my remote, removing any remote tracking branches that no longer exist

2. Enumerate local branches, selecting each that has been marked as no longer having a remote version (ignoring the current branch)

3. Delete the local branch safely
