[1] https://stackoverflow.com/questions/55998614/merge-made-by-r...
If there were a better way to handle "I needed to merge in the middle of my PR work" without permanently introducing reverse merges into the history, I wouldn't mind merge commits.
But tools will sometimes skip over others' work when you `git pull` a change into your local repo, because they get confused about which leg of the merge to follow.
That is my main problem with merge; I think the commit ballooning is annoying too, but that is easier to ignore.
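For what it's worth, the usual way to pick up upstream changes mid-PR without a reverse merge is to rebase; a minimal sketch, assuming the PR targets main on origin:

  git fetch origin
  git rebase origin/main   # replay the PR commits on top; no merge commit ends up in history

The trade-off is that the PR's commits get rewritten, so the branch needs a force-push afterwards.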
https://www.mercurial-scm.org/pipermail/mercurial/2012-Janua...
I really don't see any downside to recommending Mercurial in 2026. Git isn't just inferior as a VCS in the subjective sense of "oh… I don't like this or that inconsistent aspect of its UI", but in very practical and meaningful ways, on technical merit, that are increasingly forgotten the more it solidifies as a monopoly:
- still no support for branches (in the traditional sense: a commit-level marker that delineates a series of related commits), which means a branchy DAG is borderline useless, and tools like bisect can't use that information to take you to the series boundaries
- still no support for phasing (to mark which commits have been exchanged or are local-only and safe to edit)
- still no support for evolve (to record history rewrites in a side-storage, making concurrent/distributed history rewrites safe and mostly automatic)
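For anyone who hasn't touched Mercurial in a while, a rough sketch of what those three look like there (the last part assumes the evolve extension is enabled):

  # named branches: the branch name is recorded in the commit itself
  hg branch feature-x
  hg commit -m "start feature-x"
  hg log -r "branch(feature-x)"

  # phases: shows whether a commit is public (already exchanged) or draft (still safe to rewrite)
  hg phase -r .

  # evolve: amend a commit and let its descendants follow automatically
  hg commit --amend -m "better message"
  hg evolve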
> Jujutsu keeps track of conflicts as first-class objects in its model; they are first-class in the same way commits are, while alternatives like Git simply think of conflicts as textual diffs. While not as rigorous as systems like Darcs (which is based on a formalized theory of patches, as opposed to snapshots), the effect is that many forms of conflict resolution can be performed and propagated automatically.
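Concretely, that means a conflicted operation doesn't stop the world; a rough jj sketch (command details from memory, so treat them as an assumption, and the change ID is a placeholder):

  jj rebase -d main     # rebase a stack; conflicted commits are recorded, not rejected
  jj log                # conflicted changes are flagged in the log
  jj edit <change-id>   # jump back to the commit that introduced the conflict
  jj resolve            # fix it once; descendants are rebased and the resolution propagates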
This resolves any number of heads, but the resulting tree of the merge is always that of the current branch head, effectively ignoring all changes from all other branches. It is meant to be used to supersede old development history of side branches. Note that this is different from the -Xours option to the ort merge strategy.[1]

[1]: https://git-scm.com/docs/merge-strategies#Documentation/merg...
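A minimal illustration of that distinction, with hypothetical branch names:

  # merge *strategy* "ours": record a merge but keep our tree exactly as it is
  git merge -s ours old-topic

  # strategy *option* "ours" on the default (ort) strategy: do a real merge,
  # resolving only the conflicting hunks in favor of our side
  git merge -X ours other-topic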
> There already is reset hard.
That's not... remotely relevant? What does that have to do with merging? We're talking about merging.
I also "mean what I wrote". Man that was sure easy to say. It's almost like saying nothing at all. Which is anyone's righ to do, but it's not an argument, nor a definition of terms, nor communication at all. Well, it does communicate one thing.
> don't try to resolve any merge conflicts ... Don't try to "help". Don't fuck with the index or the worktree.
... certainly is "nothing" in the literal sense--that is what git-merge is being asked to do--but it's not "nothing" in the sense that you're saying.
git reset --hard has nothing to do with merging. Nothing. They're not even in the same class of operations. It's absolutely irrelevant to this use case. And saying so isn't "not an argument" or not communicating anything at all. git reset --hard does not in any sense effect a merge. What more needs to be (or can be) said?
If you want someone to help explain something to you, it's up to you to give them an anchor point that they can use to bridge the gap in understanding. As it stands, it's you who's given nothing at all, so one can only repeat what has already been described--
A resolution strategy for merge conflicts that involves doing nothing: nothing to the files in the current directory, staging nothing to be committed, and in fact not even bothering to check for conflicts in the first place. Just notate that it's going to be a merge between two parents X and Y, and wait for the human so they have an opportunity to resolve the conflicts by hand (if they haven't already), for them to add the changes to the staging area, and for them to issue the git-commit command that completes the merge between X and Y. What's unclear about this?
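For what it's worth, the plumbing can already express exactly that; a rough sketch, with X and Y standing in for the two branches:

  git checkout X
  # ...resolve anything by hand and `git add` the results yourself...
  tree=$(git write-tree)
  commit=$(git commit-tree "$tree" -p X -p Y -m "Merge Y into X")
  git update-ref refs/heads/X "$commit"

(The complaint upthread is essentially that git-merge doesn't expose this as a mode.)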
  git merge -s ours --no-ff --no-commit <branch>
This will initiate a merge, take nothing from the incoming branch, and allow you to decide how to proceed. It leaves git waiting for your next commit, and the two branches will be considered merged when that commit happens. What you may want to do next is:

  git checkout -p <branch>
This will interactively review each incoming change, giving you the power to decide how each one should be handled. Once you've completed that process, commit the result and the merge is done.

Give me normal boring git merges over git squash merges.
I always forget all the flags and I work with literally just: clone, branch, checkout, push.
(Each feature is a fresh branch tho)
Take out the last "/timeline" component of the URL to clone via Fossil: https://chiselapp.com/user/chungy/repository/test/timeline
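i.e. something like:

  fossil clone https://chiselapp.com/user/chungy/repository/test test.fossil
  fossil open test.fossil

(standard Fossil workflow; the local filename is arbitrary)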
See also the upstream documentation on branches and merging: https://fossil-scm.org/home/doc/trunk/www/branching.wiki
For others, I highly recommend Git from the Bottom Up[1]. It is a very well-written piece on internal data structures and does a great job of demystifying the opaque git commands that most beginners blindly follow. Best thing you'll learn in 20ish minutes.
[0]: https://tom.preston-werner.com/2009/05/19/the-git-parable
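As a taste of what the book makes concrete, the plumbing lays the data structures bare (safe to run in any repo):

  git cat-file -t HEAD              # HEAD is a commit object...
  git cat-file -p HEAD              # ...which points at a tree and its parent(s)
  git cat-file -p 'HEAD^{tree}'     # the tree lists blobs and subtrees by hash
  git rev-parse HEAD                # everything is addressed by its hash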
Ends up being circular if the author used LLM help for this writeup, though there are no obvious signs of that.
Maybe that's obvious to most people, but it was a bit surprising to see it myself. It feels weird to think that LLMs are being trained on my code, especially when I'm painfully aware of every corner I'm cutting.
The article doesn't contain any LLM output. I use LLMs to ask for advice on coding conventions (especially in rust, since I'm bad at it), and sometimes as part of research (zstd was suggested by chatgpt along with comparisons to similar algorithms).
  (block_ai) {
    @ai_bots {
      header_regexp User-Agent (?i)(anthropic-ai|ClaudeBot|Claude-Web|Claude-SearchBot|GPTBot|ChatGPT-User|Google-Extended|CCBot|PerplexityBot|ImagesiftBot)
    }
    abort @ai_bots
  }
Then, in a specific app block, include it via:

  import block_ai

blocking openai ips did wonders for the ambient noise levels in my apartment. they're not the only ones obviously, but they're the only ones i had to block to stay sane
A kind of "they found this code, therefore you have a duty not to poison their model as they take it." Meanwhile if I scrape a website and discover data I'm not supposed to see (e.g. bank details being publicly visible) then I will go to jail for pointing it out. :(
Living in a country with hundreds of millions of other civilians or a city with tens of thousands means compromising what you're allowed to do when it affects other people.
There's a reason we have attractive nuisance laws and you aren't allowed to put a slide on your yard that electrocutes anyone who touches it.
None of this, of course, applies to "poisoning" LLMs; that's whatever. But all your examples involved actual humans being attacked, not some database.
> It feels weird to think that LLMs are being trained on my code, especially when I'm painfully aware of every corner I'm cutting.
That's very much expected. That's why the quality of LLM coding agents is what it is. (No offense.)
The "asking LLMs for advice" part is where the circular aspect starts to come into the picture. Not worse than looking at StackOverflow, though, which then links to other people who in turn turned to StackOverflow for advice.
For most people throughout history, whatever answer is presented to them is the one they believe is right. AI just brings them source information faster, so what you're seeing is mostly just the usual behavior, but faster. Before AI, people would not have bothered to try to figure out an answer to some of these questions. It would've been too much work.
Great argument for not using AI-assisted tools to write blog posts (especially if you DO use these tools). I wonder how much we're taking for granted in these early phases before it starts to eat itself.
One of the funniest things I've started to notice from Gemini in particular is that in random situations it talks in English with an agreeable affect that I can only describe as... Indian? I've never noticed such a thing leak through before. There must be a ton of people in India who are generating new datasets for training.
I wish I could find it again, if someone else knows the link please post it!
This part made me laugh though:
> These detectors, as I understand them, often work by measuring two key things: ‘Perplexity’ and ‘burstiness’. Perplexity gauges how predictable a text is. If I start a sentence, "The cat sat on the...", your brain, and the AI, will predict the word "floor."
I can't be the only one whose brain predicted "mat"?
I do know that LLMs generate content heavy with those constructs, but they didn't create the ideas out of thin air; they were in the training set, and present strongly enough that LLMs saw them as commonplace/best practice.
Bookmarked for later
Notable differences: E2E encryption, parallel imports (Got will light up all your cores), and a data structure that supports large files and directories.
Yeah, totally agree. Got has not solved conflict resolution for arbitrary files. However, we can tell the user where the files differ, and that the file has changed.
There is still value in being able to import files and directories of arbitrary sizes, and having the data encrypted. This is the necessary infrastructure to be able to do distributed version control on large amounts of private data. You can't do that easily with Git. It's very clunky even with remote helpers and LFS.
I talk about that in the Why Got? section of the docs.
I think that, theoretically, Git's delta compression is still a lot more optimized for smaller repos. But for bigger repos where sharded storage is required, path-based delta dictionary compression does much better. Git recently (in the last year) got something called "path-walk", which is fairly similar, though.
I know this is only meant to be an educational project, but please avoid yaml (especially for anything generated). It may be a superset of json, but that should strongly suggest that json is enough.
I am aware I'm making a decade-old complaint now, but we already have such an absurd mess with every tool that decided to prefer yaml (docker/k8s, swagger, etc.), and it never got any better. Let's not make that mistake again.
People just learned to cope or avoid yaml where they can, and luckily these are such widely used tools that we have plenty of boilerplate examples to cheat from. A new tool lacking docs or examples that only accepts yaml would be anywhere from mildly frustrating to borderline unusable.
I had a go at it as well a while back; I call it "shit": https://github.com/emanueldonalds/shit
It's only in the context of recreating Git that this comment makes sense.