Posted by TonyStr 1/27/2026

I made my own Git (tonystr.net)
380 points | 176 comments
nasretdinov 1/27/2026|
Nice work! On a complete tangent, Git is the only SCM known to me that supports recursive merge strategy [1] (instead of the regular 3-way merge), which essentially always remembers resolved conflicts without you needing to do anything. This is a very underrated feature of Git and somehow people still manage to choose rebase over it. If you ever get to implementing merges, please make sure you have a mechanism for remembering the conflict resolution history :).

[1] https://stackoverflow.com/questions/55998614/merge-made-by-r...
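
A minimal sketch of the effect (branch names are placeholders; the point is that the merged-in commit becomes the merge base next time, so the resolution travels with the snapshot):

    git checkout feature
    git merge master     # conflict: resolve it once, commit the merge
    # ...more commits land on both feature and master...
    git merge master     # the merge base is now the master commit you already
                         # merged, so the resolved conflict does not reappear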

arunix 1/27/2026||
I remember in a previous job having to enable git rerere, otherwise it wouldn't remember previously resolved conflicts.

https://git-scm.com/book/en/v2/Git-Tools-Rerere
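
Enabling it is just a config flag; the recorded resolutions live under .git/rr-cache, which is what makes them per-machine:

    git config rerere.enabled true
    # resolved hunks are recorded under .git/rr-cache and replayed the
    # next time the same conflict shows up -- on this machine only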

nasretdinov 1/27/2026|||
I believe rerere is a local cache, so you'd still have to resolve the conflicts again on another machine. The recursive merge doesn't have this issue — the conflict resolution inside the merge commits is effectively remembered (although due to how Git operates it actually never even considers it a conflict to be remembered — just a snapshot of the closest state to the merged branches)
Guvante 1/28/2026||
Are people repeatedly handling merge conflicts on multiple machines?

If there were a better way to handle "I needed to merge in the middle of my PR work" without permanently introducing reverse merges into the history, I wouldn't mind merge commits.

But tools will sometimes skip over others' work if you `git pull` a change into your local repo, because they get confused about which leg of the merge to follow.

nasretdinov 1/28/2026||
One place where it mattered was when I was working on a large PHP web site, where backend devs and frontend devs would be working in the same branch — this way you don't have to go back and forth to get the new API, and this workflow was quite unique and, in my mind, quite efficient. The branches could also live for some time (e.g. in case of large refactorings), and it's a good idea to merge in the master branch frequently, so recursive merge was really nice. Nowadays, of course, you design the API for your frontend, mobile, etc. upfront, so there's little reason to do that anymore.
Guvante 5 days ago||
Honestly, if the tooling were better at keeping upstream on the left I wouldn't mind as much, but IIRC `git pull` puts your branch on the left, which means walking history requires analysing each merge commit to figure out where the mainline actually is vs where a temporary branch is.

That is my main problem with merge, I think the commit ballooning is annoying too but that is easier to ignore.
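
For what it's worth, git can do that first-parent walk mechanically; it just only matches the mainline if the merges were recorded from the mainline's side (a hedged sketch):

    # follow only the first parent of each merge commit:
    git log --first-parent --oneline
    # this tracks the mainline only if merges were made *from* it
    # ("git merge feature" on master), not via "git pull" on a feature branch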

direwolf20 1/27/2026||||
The recursive merge is about merging branches that already have merges in them, while rerere is about repeating the same merge several times.
pyrolistical 1/27/2026||||
Would be nice if centralized git platforms shared rerere caches
lmm 1/28/2026|||
Rerere is dangerous and counterproductive - it tries to give rebase the same functionality that merge has, but since rebase is fundamentally wrong it only stacks the wrongness.
seba_dos1 1/28/2026||
Cherry-picks being "fundamentally wrong" is certainly an interesting git take.
ezst 1/27/2026|||
On recursive merging, by the author of mercurial

https://www.mercurial-scm.org/pipermail/mercurial/2012-Janua...

nasretdinov 1/28/2026||
Yeah, the point about high complexity of the recursive merge is valid, and that's what I would expect from the Mercurial devs too. I personally find it a bit unfortunate that Git ended up winning tbh, but since it did, I think it makes sense to at least cherish what it has out of the box :)
ezst 1/28/2026||
In some ways, the legacy of Mercurial lives on through jujutsu/jj, which offers some sanity and familiarity on top of git's UI. That said, Mercurial is far from dead: major under-the-hood work is going strong (including a rewrite in Rust), and the hosting situation is getting good with Heptapod (a fork of GitLab with native Mercurial support).

I really don't see any downside to recommending Mercurial in 2026. Git isn't just inferior as a VCS in the subjective sense of "oh… I don't like this or that inconsistent aspect of its UI", but in very practical and meaningful ways (on technical merit) that are increasingly forgotten the more it solidifies as a monopoly (a sketch of the Mercurial equivalents follows the list):

- still no support for branches (in the traditional sense, as a commit-level marker that delineates series of related commits), which means a branchy DAG is borderline useless, and tools like bisect can't use that info to stop you at series boundaries

- still no support for phasing (to mark which commits have been exchanged or are local-only and safe to edit)

- still no support for evolve (to record history rewrites in a side-storage, making concurrent/distributed history rewrites safe and mostly automatic)
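
A rough sketch of what those three look like on the Mercurial side (treat the exact commands as approximate; the last line assumes the evolve extension is enabled):

    hg branch my-feature       # branch name is recorded in each commit, permanently
    hg phase --public -r .     # mark a commit as published, i.e. immutable
    hg amend && hg evolve      # rewrite a commit; descendants follow automatically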

mkleczek 1/27/2026|||
A much more principled (and hence less foot-gun-prone) way of handling conflicts is making them first-class objects in the repository, like https://pijul.org does.
jcgl 1/27/2026|||
Jujutsu too[0]:

> Jujutsu keeps track of conflicts as first-class objects in its model; they are first-class in the same way commits are, while alternatives like Git simply think of conflicts as textual diffs. While not as rigorous as systems like Darcs (which is based on a formalized theory of patches, as opposed to snapshots), the effect is that many forms of conflict resolution can be performed and propagated automatically.

[0] https://github.com/jj-vcs/jj

PunchyHamster 1/27/2026||||
I feel like people making new VCSes should just re-use Git's storage/network layer and innovate on top of that. Git storage is flexible enough for that, and that way you can just... use it on existing repos, with a very easy migration path for both workflows (CI/CD never needs to care about what frontend you use) and users.
zaphar 1/27/2026|||
Git storage is just a Merkle tree. It's a technology that's been around forever and was independently chosen by more than one VCS around the same time. It's incredibly effective, so it makes sense that it would get used.
storystarling 1/27/2026||||
The bottleneck with git is actually the on-the-fly packfile generation. The server has to burn CPU calculating deltas for every clone. For a distributed system it seems much better to use a simple content-addressable store where you just serve static blobs.
3eb7988a1663 1/28/2026|||
It is my understanding that, under the hood, the repository has quite a bit of state that can get mangled. That is why naively syncing a git repo with, say, Dropbox is not a surefire operation.
theLiminator 1/27/2026|||
It's very cool, though I imagine it's DOA due to lack of git compatibility...
speed_spread 1/27/2026||
Lack of compatibility with the incumbent SCM can be an advantage. Linus decided to explicitly do the reverse of every SVN decision when designing git. He even reversed CLI usability!
rob74 1/27/2026|||
Pssst! I think Linus didn't as much design Git as he cloned BitKeeper (or at least the parts of it he liked). I have never used it, but if you look at the BitKeeper documentation, it sounds strangely familiar: https://www.bitkeeper.org/testdrive.html . Of course, that made sense for him and for the rest of the Linux developers, as they were already familiar with BitKeeper. Not so much for the rest of us though, who are now stuck with the usability (or lack thereof) you mentioned...
theLiminator 1/27/2026|||
I think the network effects of git are too large to overcome now. Hence why we see jj get a lot more adoption than pijul.
pwdisswordfishy 1/27/2026|||
New to me was discovering within the last month that git-merge doesn't have a merge strategy of "null": don't try to resolve any merge conflicts, because I've already taken care of them; just know that this is a merge between the current branch and the one specified on the command-line, so be a dutiful little tool and just add it to your records. Don't try to "help". Don't fuck with the index or the worktree. Just record that this is happening. That's it. Nothing else.
valleyer 1/28/2026|||
Doesn't `git merge -s ours` do this?

    This resolves any number of heads, but the resulting tree of the merge is always
    that of the current branch head, effectively ignoring all changes from all other
    branches. It is meant to be used to supersede old development history of side
    branches. Note that this is different from the -Xours option to the ort merge strategy.
Brian_K_White 1/28/2026|||
What does that even mean? There already is reset hard.
kbolino 1/28/2026|||
The name "null" is confusing; you have to pick something. However, I think what is desired here is the "theirs" strategy, i.e. to replace the current branch's tree entirely with the incoming branch's tree. The end result would be similar to a hard reset onto the incoming branch, except that it would also create a merge commit. Unfortunately, the "theirs" strategy does not exist, even though the "ours" strategy does exist, apparently to avoid confusion with the "theirs" option [1], but it is possible to emulate it with a sequence of commands [2].

[1]: https://git-scm.com/docs/merge-strategies#Documentation/merg...

[2]: https://stackoverflow.com/a/4969679/814422
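
Roughly, the emulation (per [2]) goes through a temporary vantage point on the other branch (branch names are placeholders):

    git checkout other-branch
    git merge -s ours main       # merge commit whose tree is other-branch's
    git checkout main
    git merge other-branch       # fast-forwards main onto that merge commit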

pwdisswordfishy 1/28/2026|||
What do you mean, "What does it mean?" It means what I wrote.

> There already is reset hard.

That's not... remotely relevant? What does that have to do with merging? We're talking about merging.

Brian_K_White 1/28/2026||
Neither of these are answers or explanations. So you said nothing, and then said nothing again.

I also "mean what I wrote". Man that was sure easy to say. It's almost like saying nothing at all. Which is anyone's righ to do, but it's not an argument, nor a definition of terms, nor communication at all. Well, it does communicate one thing.

pwdisswordfishy 1/28/2026||
This:

> don't try to resolve any merge conflicts ... Don't try to "help". Don't fuck with the index or the worktree.

... certainly is "nothing" in the literal sense--that's what git-merge is being asked to do--but it's not "nothing" in the sense that you're saying.

git reset --hard has nothing to do with merging. Nothing. They're not even in the same class of operations. It's absolutely irrelevant to this use case. And saying so isn't "not an argument" or not communicating anything at all. git reset --hard does not in any sense effect a merge. What more needs to be (or can be) said?

If you want someone to help explain something to you, it's up to you to give them an anchor point that they can use to bridge the gap in understanding. As it stands, it's you who's given nothing at all, so one can only repeat what has already been described--

A resolution strategy for merge conflicts that involves doing nothing: nothing to the files in the current directory, staging nothing to be committed, and in fact not even bothering to check for conflicts in the first place. Just notate that it's going to be a merge between two parents X and Y, and wait for the human so they have an opportunity to resolve the conflicts by hand (if they haven't already), for them to add the changes to the staging area, and for them to issue the git-commit command that completes the merge between X and Y. What's unclear about this?
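
For what it's worth, the plumbing can already express exactly this, bypassing git-merge entirely; a sketch (placeholder branch name):

    # after resolving by hand and staging the result, record a merge
    # commit with the index's tree and two parents -- no conflict
    # detection, no worktree changes:
    git update-ref HEAD "$(git commit-tree "$(git write-tree)" \
        -p HEAD -p other-branch -m 'merge other-branch')"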

kbolino 1/28/2026|||
I think this is what you want:

  git merge -s ours --no-ff --no-commit <branch>
This will initiate a merge, take nothing from the incoming branch, and allow you to decide how to proceed. This leaves git waiting for your next commit, and the two branches will be considered merged when that commit happens. What you may want to do next is:

  git checkout -p <branch>
This will interactively review each incoming change, giving you the power to decide how each one should be handled. Once you've completed that process, commit the result and the merge is done.
seba_dos1 5 days ago|||
You know that you can edit your merge commits any way you want and you don't have to rely on resolution strategies to do it for you, right?
pwdisswordfishy 3 days ago||
Right. That's the entire basis for the discussion here. So why is this a question?
seba_dos1 3 days ago||
Because you already have all the needed tools to handle your special little edge case (in multiple ways!), so the discussion seems rather pointless.
pwdisswordfishy 3 days ago||
You are confused. It's frightening that someone would be able to reach a point this deep into the discussion and think that "You know that you can edit your merge commits any way you want and you don't have to rely on resolution strategies to do it for you" is revealing something new or insightful.
seba_dos1 2 days ago||
So it is pointless indeed, gotcha.
pwdisswordfishy 2 days ago||
Your zero-insight comment was, indeed, pointless.
giancarlostoro 1/27/2026|||
I hate git squash; it only goes one direction, and personally I don't give a crap if it took you 100 commits to do one thing. At least now we can see what you may have tried, so we don't repeat your mistakes. With git squash it all turns into "this is the last thing they did that mattered", and btw we can't merge it backwards without it being weird; you have to check out an entirely new branch. I like to continue adding changes to branches I have already merged. Not every PR is the full solution, but a piece of the puzzle. No one can tell me that they only need 1 PR per task because they never have a bug, ever.

Give me normal boring git merges over git squash merges.

p0w3n3d 1/27/2026|||
That's something new to me (using git for 10 years, always rebased)
iberator 1/27/2026||
I'm even more lazy. I almost always clone from scratch after merging or after not touching the project for some time. So easy and silly :)

I always forget all the flags and I work with literally just: clone, branch, checkout, push.

(Each feature is a fresh branch tho)

chungy 1/27/2026||
As far as I understand the problem (sorry, the SO answer isn't the clearest around), Fossil should support this operation. It does one better, since it even tracks exactly where merges come from. In Git, you have a merge commit that shows up with more than one parent, but Fossil will also show you where it branched off.

Take out the last "/timeline" component of the URL to clone via Fossil: https://chiselapp.com/user/chungy/repository/test/timeline

See also, the upstream documentation on branches and merging: https://fossil-scm.org/home/doc/trunk/www/branching.wiki
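
For anyone who wants to poke at it, Fossil's clone flow is two steps (the local repo file name is arbitrary):

    fossil clone https://chiselapp.com/user/chungy/repository/test test.fossil
    fossil open test.fossil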

darkryder 1/27/2026||
Great writeup! It's always fun to learn the details of the tools we use daily.

For others, I highly recommend Git from the Bottom Up[1]. It is a very well-written piece on internal data structures and does a great job of demystifying the opaque git commands that most beginners blindly follow. Best thing you'll learn in 20ish minutes.

1. https://jwiegley.github.io/git-from-the-bottom-up/

MarsIronPI 1/27/2026||
Oh, I hadn't ever seen that one. I "grokked" Git thanks to The Git Parable[0] several years ago.

[0]: https://tom.preston-werner.com/2009/05/19/the-git-parable

spuz 1/27/2026|||
Thanks - I think this is the article I was thinking of that really helped me to understand git when I first started using it back in the day. I tried to find it again and couldn't.
sanufar 1/27/2026||
Ooh, this looks fun! I didn’t know you could cat-file on a hash id, that’s actually quite cool.
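For example (works with any object id, not just commits; the hashes are placeholders):

    git cat-file -t <hash>    # object type: blob, tree, commit, or tag
    git cat-file -p <hash>    # pretty-print the object's contents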
teiferer 1/27/2026||
If you ever wonder how coding agents know how to plan things etc., this is the kind of article they get that training from.

Ends up being circular if the author used LLM help for this writeup, though there are no obvious signs of that.

TonyStr 1/27/2026||
Interestingly, I looked at github insights and found that this repo had 49 clones, and 28 unique cloners, before I published this article. I definitely did not clone it 49 times, and certainly not with 28 unique users. It's unlikely that the handful of friends who follow me on github all cloned the repo. So I can only speculate that there are bots scraping new public github repos and training on everything.

Maybe that's obvious to most people, but it was a bit surprising to see it myself. It feels weird to think that LLMs are being trained on my code, especially when I'm painfully aware of every corner I'm cutting.

The article doesn't contain any LLM output. I use LLMs to ask for advice on coding conventions (especially in rust, since I'm bad at it), and sometimes as part of research (zstd was suggested by chatgpt along with comparisons to similar algorithms).

tonnydourado 1/27/2026|||
Particularly on GitHub, might not even be LLMs, just regular bots looking for committed secrets (AWS keypairs, passwords, etc.)
Phelinofist 1/27/2026||||
I self-host Gitea. The instance is crawled by AI crawlers (I checked the IPs). They never cloned; they just browse and take it directly from there.
Phelinofist 1/27/2026|||
For reference, this is how I do it in my Caddyfile:

   (block_ai) {
       @ai_bots {
           header_regexp User-Agent (?i)(anthropic-ai|ClaudeBot|Claude-Web|Claude-SearchBot|GPTBot|ChatGPT-User|Google-Extended|CCBot|PerplexityBot|ImagesiftBot)
       }

       abort @ai_bots
   }
Then, in a specific app block include it via

   import block_ai
seba_dos1 1/28/2026|||
Most of them pretend to be real users, though, and don't identify themselves with their user agent strings.
zaphar 1/27/2026|||
I have almost exactly this in my own caddyfile :-D The order of the items in the regex is a little different but mostly the same items. I just pulled them from my web access logs over time and update it every once in a while.
Zambyte 1/27/2026|||
i run a cgit server on an r720 in my apartment with my code on it and that puppy screams whenever sam wants his code

blocking openai ips did wonders for the ambient noise levels in my apartment. they're not the only ones obviously, but they're the only ones i had to block to stay sane

MarsIronPI 1/27/2026||
Have you considered putting it behind Anubis or an equivalent?
Zambyte 1/27/2026||
Yes, but I haven't and would prefer not to
MarsIronPI 1/27/2026||
Understandable. It's an outrage that we even have to consider such measures.
nerdponx 1/27/2026||||
Time to start including deliberate bugs. The correct version is in a private repository.
teiferer 1/27/2026|||
And what purpose would this serve, exactly?
adastra22 1/27/2026||
Spite.
below43 1/27/2026||||
They used to do this with maps - e.g. fake islands - to pick up when they were copied.
program_whiz 1/27/2026|||
While I think this is a fun idea -- we are in such a dystopian timeline that I fear you will end up being prosecuted under a digital equivalent of various laws like "why did you attack the intruder instead of fleeing" or "you can't simply remove a squatter just because it's your house, so you get an assault charge."

A kind of "they found this code, therefore you have a duty not to poison their model as they take it." Meanwhile if I scrape a website and discover data I'm not supposed to see (e.g. bank details being publicly visible) then I will go to jail for pointing it out. :(

nerdponx 1/27/2026|||
I think if we're at the point where posting deliberate mistakes to poison training data is considered a crime, we would be far far far down the path of authoritarian corporate regulatory capture, much farther than we are now (fortunately).
wredcoll 1/27/2026|||
Look, I get the fantasy of someday pulling out my musket^W ar15 and rushing downstairs to blow away my wife^W an evil intruder, but, like, we live in a society. And it has a lot of benefits, but it does mean you don't get to be "king of your castle" any more.

Living in a country with hundreds of millions of other civilians or a city with tens of thousands means compromising what you're allowed to do when it affects other people.

There's a reason we have attractive nuisance laws and you aren't allowed to put a slide in your yard that electrocutes anyone who touches it.

None of this, of course, applies to "poisoning" llms, that's whatever. But all your examples involved actual humans being attacked, not some database.

program_whiz 1/27/2026||
Thanks, that was the term I was looking for: "attractive nuisance". I wouldn't be surprised if a tech company could make that case -- this user caused us tangible harm and cost (training, poisoned models) and left their data out for us to consume. It's the equivalent of putting poisoned candy on a park table, your honor!
teo_zero 1/27/2026||
That reminds me of the protagonist of Charles Stross's novel "Accelerando", a prolific inventor who is accused by the IRS of causing millions in losses because he releases all his ideas into the public domain instead of profiting from them and paying taxes on those profits.
0x696C6961 1/27/2026||||
This has been happening before LLMs too.
teiferer 1/27/2026|||
I don't really get why they need to clone in order to scrape ...?

> It feels weird to think that LLMs are being trained on my code, especially when I'm painfully aware of every corner I'm cutting.

That's very much expected. That's why the quality of LLM coding agents is like it is. (No offense.)

The "asking LLMs for advice" part is where the circular aspect starts to come into the picture. Not worse than looking at StackOverflow though which then links to other people who in turn turned to StackOverflow for advice.

storystarling 1/27/2026|||
Cloning gets you the raw text objects directly. If you scrape the web UI you're dealing with a lot of markup overhead that just burns compute during ingestion. For training data you usually want the structure to be as clean as possible from the start.
teiferer 1/28/2026||
Sure, cloning a local copy. But why clone on github?
adastra22 1/27/2026|||
The quality of LLM coding agents is pretty good now.
wasmainiac 1/27/2026|||
Maybe we can poison LLMs with loops of 2 or more self referencing blogs.
jdiff 1/27/2026|||
Only need one; they're not thinking critically about the media they consume during training.
falcor84 1/27/2026|||
Here's a sad prediction: over the coming few years, AIs will get significantly better at critical evaluation of sources, while humans will get even worse at it.
whstl 1/27/2026|||
I wish I could disagree with you, but what I'm seeing on average (especially at work) is exactly that: people asking ChatGPT stuff and accepting hallucinations as fact, and then fighting me when I say it's not true.
prmoustache 1/27/2026||
There is "death by GPS" for people dying after blindly following their GPS instruction. There will definitely be a "death by AI" expression very soon.
stevekemp 1/27/2026||
Tesla-related fatalities probably count already, albeit without that label/name.
sailfast 1/27/2026||||
Hot take: Humans have always been bad at this (in the aggregate, without training). Only a certain percentage of the population took the time to investigate.

For most people throughout history, whatever is presented to you is what you believe is the right answer. AI just brings them source information faster, so what you're seeing is mostly just the usual behavior, but faster. Before AI, people would not have bothered to try to figure out an answer to some of these questions. It would've been too much work.

topaz0 1/27/2026||||
My sad prediction is that LLMs and humans will both get worse. Humans might get worse faster though.
keybored 1/27/2026|||
HN commenters will be techno-optimistic misanthropes. Status quo ante bellum.
andy_ppp 1/27/2026||||
The secret sauce of good understanding, taste and style (both for coding and writing) has always been in the fine-tuning and RLHF steps. I'd be skeptical that the signals a few GitHub repos or blogs generate at the initial stages of learning are that critical. There's probably also a filter for good taste on the initial training set, and these sets are so large that not even a single full epoch is done on the data these days.
jama211 1/27/2026|||
It wouldn’t work at all.
jama211 1/27/2026|||
I see the AI hating part of HN has come out again
mexicocitinluez 1/27/2026|||
> Ends up being circular if the author used LLM help for this writeup though there are no obvious signs of that.

Great argument for not using AI-assisted tools to write blog posts (especially if you DO use these tools). I wonder how much we're taking for granted in these early phases before it starts to eat itself.

jama211 1/27/2026||
What does eating itself even look like? It doesn’t take much salt to change a hash.
mexicocitinluez 1/27/2026||
Being trained on its own results?
jama211 1/28/2026||
Pretty easy to detect, surely.
anu7df 1/27/2026|||
I understand that model output put back into training would be an issue, but if the model output is guided by multiple prompts and edited by the author to his/her liking, wouldn't it at least be marginally useful?
prodigycorp 1/27/2026||
Random aside about training data:

One of the funniest things I've started to notice from Gemini in particular is that in random situations, it talks English with an agreeable affect that I can only describe as... Indian? I've never noticed such a thing leak through before. There must be a ton of people in India generating new datasets for training.

evntdrvn 1/27/2026|||
There was a really great article or blog post published in the last few months about the author's very personal experience whose gist was "People complain that I sound/write like an LLM, but it's actually the inverse because I grew up in X where people are taught formal English to sound educated/western, and those areas are now heavily used for LLM training."

I wish I could find it again, if someone else knows the link please post it!

gxnxcxcx 1/27/2026|||
I'm Kenyan. I don't write like ChatGPT, ChatGPT writes like me

https://news.ycombinator.com/item?id=46273466

tverbeure 1/28/2026|||
Thanks for that link.

This part made me laugh though:

> These detectors, as I understand them, often work by measuring two key things: ‘Perplexity’ and ‘burstiness’. Perplexity gauges how predictable a text is. If I start a sentence, "The cat sat on the...", your brain, and the AI, will predict the word "floor."

I can't be the only one whose brain predicted "mat"?

cozzyd 1/28/2026||
And I thought it would be a hat...
tverbeure 1/29/2026||
No, that would be "in the hat."
evntdrvn 1/29/2026|||
Thank you!!! :)
awesome_dude 1/27/2026|||
I've been critical of people who default to "an em dash being used means the content is generated by an LLM", or "they've numbered their points, must be an LLM".

I do know that LLMs generate content heavy with those constructs, but they didn't create the ideas out of thin air; they were in the training set, and existed strongly enough that LLMs saw them as commonplace/best practice.

blenderob 1/27/2026|||
That's very interesting. Any examples you can share that have that agreeable affect?
prodigycorp 1/27/2026||
I'm going to do a cursory look through my antigrav history; I want to find it too. I remember it's primarily in the exclamations of agreement/revelation, and one time expressing concern, which I remember was slightly off from natural American English.
prodigycorp 1/27/2026||
Can't find anything; too many messages telling the agent "please do NOT thosec changes". I'm going to remember to save them going forward.
p4bl0 1/27/2026||
Nice post :). It made me think of ugit: DIY Git in Python [1], which is still by far my favorite of this kind of post. It really goes deep into Git internals while managing to stay easy to follow along the way.

[1] https://www.leshenko.net/p/ugit/

UltraSane 1/27/2026||
I mapped git operations to Neo4j and it really helped me understand how it works.
TonyStr 1/27/2026|||
This page is beautiful!

Bookmarked for later

mfashby 1/27/2026||
In a similar vein: Write yourself a Git was fun to follow https://wyag.thb.lt/
gkbrk 1/27/2026||
CodeCrafters has an amazing "Build your own Git" [1] tutorial too. Jon Gjengset has a nice video [2] doing this challenge live with Rust.

[1]: https://app.codecrafters.io/courses/git/overview

[2]: https://www.youtube.com/watch?v=u0VotuGzD_w

brendoncarroll 1/27/2026||
Me too. Version control is great, it should get more use outside of software.

https://github.com/gotvc/got

Notable differences: E2E encryption, parallel imports (Got will light up all your cores), and a data structure that supports large files and directories.

rtkwe 1/27/2026||
The problem is that when you move beyond text files, it gets hard to tell what changed between two versions without opening both versions in whatever program they come from and comparing.
brendoncarroll 1/27/2026||
> The problem is that when you move beyond text files, it gets hard to tell what changed between two versions without opening both versions in whatever program they come from and comparing.

Yeah, totally agree. Got has not solved conflict resolution for arbitrary files. However, we can tell the user where the files differ, and that the file has changed.

There is still value in being able to import files and directories of arbitrary sizes, and having the data encrypted. This is the necessary infrastructure to be able to do distributed version control on large amounts of private data. You can't do that easily with Git. It's very clunky even with remote helpers and LFS.

I talk about that in the Why Got? section of the docs.

https://github.com/gotvc/got/blob/master/doc/1.1_Why_Got.md

DASD 1/27/2026||
Nice! Not sure if you're aware of Got (Game of Trees), which appears to pre-date your Got.

https://gameoftrees.org/index.html

brendoncarroll 1/27/2026||
Yes, the author reached out. There has not yet been any confusion among real users that I am aware of.

https://github.com/gotvc/got/issues/20

sluongng 1/27/2026||
Zstd dictionary compression is essentially how Meta's Mercurial fork (Sapling VCS) stores blobs: https://sapling-scm.com/docs/dev/internals/zstdelta. The source code is available on GitHub if folks want to study the tradeoffs vs git's delta-compressed packfiles.

I think that, theoretically, Git delta compression is still a lot more optimized for smaller repos. But for bigger repos where sharded storage is required, path-based delta dictionary compression does much better. Git recently (in the last year) got something called "path-walk" which is fairly similar, though.
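
The stock zstd CLI can demo the dictionary idea (paths are hypothetical; Sapling's actual storage layer is more involved than this):

    zstd --train blobs/* -o blobs.dict      # build a shared dictionary from samples
    zstd -D blobs.dict some-blob            # compress one blob against the dictionary
    zstd -D blobs.dict -d some-blob.zst     # decompressing requires the same dictionary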

sublinear 1/27/2026||
> If I were to do this again, I would probably use a well-defined language like yaml or json to store object information.

I know this is only meant to be an educational project, but please avoid yaml (especially for anything generated). It may be a superset of json, but that should strongly suggest that json is enough.

I am aware I'm making a decade-old complaint now, but we already have such an absurd mess with every tool that decided to prefer yaml (docker/k8s, swagger, etc.), and it never got any better. Let's not make that mistake again.

People just learned to cope or avoid yaml where they can, and luckily these are such widely used tools that we have plenty of boilerplate examples to cheat from. A new tool lacking docs or examples that only accepts yaml would be anywhere from mildly frustrating to borderline unusable.

oldestofsports 1/27/2026||
Nice job, great article!

I had a go at it as well a while back, I call it "shit" https://github.com/emanueldonalds/shit

hahahahhaah 1/27/2026||
Fast Useful Change Keeper
tpoacher 1/27/2026||
THE shit, in fact.
temporallobe 1/27/2026|
Reminds me of when I tried to invent a SPA framework. So much hidden complexity I hadn’t thought of and I found myself going down rabbit holes that I am sure the creators of React and Angular went down. Git seems to be like this and I am often reminded of how impressive it is at hiding underlying complexity.
alsetmusic 1/27/2026|
> at hiding underlying complexity.

It's only in the context of recreating Git that this comment makes sense.
