Posted by kwantaz 10/27/2024

We shrunk our Javascript monorepo git size (www.jonathancreamer.com)
334 points | 213 comments
tux3 10/27/2024|
For those wondering where this new git-survey command is, it's actually not in git.git yet!

The author is using Microsoft's git fork; they added this new command just this summer: https://github.com/microsoft/git/pull/667

masklinn 10/27/2024||
I assume full-name-hash and path-walk are also only in the fork as well (or in git HEAD)? Can't see them in the man pages, or in the 2.47 changelog.
tux3 10/27/2024||
Yep. Path-walk is currently pending review here: https://lore.kernel.org/all/pull.1813.git.1728396723.gitgitg...

It more or less replaces the --full-name-hash option (again a very good cover letter that explains the differences and pros/cons of each very well!)

clktmr 10/27/2024||
[flagged]
PoignardAzur 10/27/2024|||
Oh for crying out loud.

"EEE" isn't a magic incantation, it's the name of an actual policy with actual tangible steps that their executives were implementing back when the CEO thought open source was the greatest threat to their business model.

Microsoft contributing to a project doesn't automatically make it EEE. For one thing, EEE was about adopting open standards in proprietary software. Microsoft during the EEE era didn't publish GPL code the way this is published.

haolez 10/27/2024||
Well, most of their extensions to VSCode are proprietary. When their dominance in software development becomes irreversible, it's obvious that they will close things down and create new sources of income. The incentives are clear.
mb7733 10/27/2024|||
VSCode is _their_ product. It doesn't make sense to say that they are EEEing their own product. EEE is when you take some existing open standard, support it in a proprietary product, and then extend it in proprietary ways, thereby taking over the standard. It doesn't apply to a product that you originally created.
jasonm23 10/28/2024||
.... Fork, not an original creation

that's how effective EEE is: you don't know (or likely care) where MS ripped all this code.

This is not an accident. It's the point.

mb7733 10/29/2024||
Are you saying that VSCode is a fork of a non-Microsoft product? Which one?
thiht 10/27/2024||||
So what? You can use VSCodium and the OpenVSX marketplace if you like, no one is stopping you. It DOES mean you won’t be able to use some extensions that are published exclusively on the VSCode marketplace but guess what? You’re not entitled to every extension being accessible from all the stores, and you’re even less entitled to demand that all extensions are open source.

If Microsoft want to develop some proprietary extensions for VSCode it’s fine, everyone has this right. It has nothing to do with EEE.

madeofpalk 10/27/2024|||
What has VS Code got to do with any of this?
atombender 10/27/2024||||
Why? You do realize their fork is open source?

The fix described in this post has been submitted as a patch to the official Git project. The fix improves a legitimate inefficiency in Git, and does nothing towards "embracing", "extending", or "extinguishing" anything.

clktmr 10/27/2024|||
Can you imagine their fork extending git with a feature that's incompatible with mainline git, and then forcing users to switch to their fork via GitHub? I can, and it would give them the power to extinguish mainline git and force everything they want on their users (telemetry, licence agreements, online registration...). That might be the reason they're embracing git right now. The fork being open source doesn't help at all.

I'm not saying this shouldn't be merged, but I think people should be aware and see the early signs.

arp242 10/28/2024|||
There is no fork. It's some new stuff they're working on and have sent patches to upstream git for (and will presumably get merged in due time – or at least, it's certainly written with the intent to get merged upstream).

https://lore.kernel.org/git/7d43a1634bbe2d2efa96a806e3de1f1f...

atombender 10/27/2024|||
Sure, I can imagine. But this isn't what's happening.
dijit 10/27/2024||||
you are in the early extend phase.

it will look good, until the extensions get more and more proprietary- but absurdly useful.

acdha 10/27/2024||
The extend phase starts when they make extensions which only work in their proprietary version. Putting extensive work into contributing them back is not the same.
dijit 10/27/2024||
Ok. There are a dozen examples of exactly this behaviour, and exactly this argumentation in response over the years.

Right now the most important thing for them is for people to start thinking the microsoft fork is the superior one, even if things are “backported”.

acdha 10/27/2024|||
I note the conspicuous lack of examples, and it’s irrelevant in this case where they are working to get the changes merged upstream exactly the way, say, Red Hat might have something they work on for a while before it merges upstream.

VS Code is the most common example people have, but it's not the same: that's always been their project, so while I don't love how some things like Pylance are not open, it's not like those were ever promised as such, and the core product runs like a normal corporate open source project. It's not like they forked Emacs and started breaking compatibility to prevent people from switching back and forth. One key conceptual difference is that code is inherently portable, unlike office documents 30 years ago: if VSC started charging, you could switch to any of dozens of other editors in seconds with no changes to your source code.

I would recommend thinking about your comment in the context of the boy who cried wolf. If people trot out EEE every time Microsoft does something in open source, it’s just lowering their credibility among anyone who didn’t say Micro$oft in the 90s and we’ll feel that loss when there’s a real problem.

dijit 10/27/2024||
Ok, examples:

* SMTP

* Kerberos (there was a time you could use KRB4 with Windows because AD is just krb4 with extensions: now you have to use AD).

* HTML (activex etc)

* CALDAV // CARDDAV

* Javas portability breakage

* MSN and AOL compatibility.

“oh, but it's not the same”. It never is, which is why I didn't want to give examples and preferred you speak to someone who knows the history more than a tiny internet comment that is unable to convey proper context.

wbl 10/27/2024|||
You understand in these cases the issue was not contributing back right?
szundi 10/27/2024|||
Part of the game
dijit 10/27/2024|||
Yeah… that was the issue.
nixosbestos 10/27/2024||||
No examples offered. And zero that I know of with respect to Git. This is how all open source development is done with big features - iterated on in a fork and proposed and merged in.

There are so many good things to criticize Microsoft for. When this is what people come with, it serves as a signal of emotion-based ignorance, and something to ignore.

Dylan16807 10/29/2024||||
If you cry foul every time microsoft Embraces something, you'll be proven right about EEE a lot of the time.

But you'll also be wrong a lot of the time.

This is not the Extend in EEE. We might get there, and we should be generally wary of microsoft, but this doesn't show that we're already there.

atombender 10/27/2024|||
Which examples?
dijit 10/27/2024||
VSCode is a prominent one that is in everyone's mind; it's starting its journey into extinguish.

For more examples I would consult your local greybeard; since the pattern is broad enough that you can reasonably argue that “this time, its different” which is also what you hear every single time it happens.

mb7733 10/27/2024||
What is being embraced, extended and extinguished by vscode?
dagw 10/27/2024||
A lot of new and popular features in VSCode are only available in the official MS version of VSCode. Using any of the forks of VSCode thus becomes a lesser experience.

Microsoft Embraced by making VSCode free and open source. Then they Extended by using their resources to make VSCode the go-to open source IDE/editor for most use cases and languages, killing much of the development momentum for non-VSCode-based alternatives. Now they're Extinguishing the competition by making it harder and harder to use the ostensibly open source VSCode codebase to build competing tools.

mb7733 10/27/2024|||
From the wikipedia definition EEE goes like this:

> Embrace: Development of software substantially compatible with an Open Standard.

> Extend: Addition of features not supported by the Open Standard, creating interoperability problems.

> Extinguish: When extensions become a de facto standard because of their dominant market share, they marginalize competitors who are unable to support the new extensions.

As I see it, there's no open standard that Microsoft is rendering proprietary through VSCode. VSCode is their own product.

I see your point that VSCode may have stalled development of other open source editors, and has proprietary extensions... but I don't think EEE really fits. It's just competition.

opticfluorine 10/27/2024|||
To add to this, there are also official Microsoft extensions to VSCode which add absurdly useful capabilities behind subtle paywalls. For example, the C# extension is actually governed by the Visual Studio license terms and requires a paid VS subscription if your organization does not qualify for Visual Studio Community Edition.

I'm not totally sold on embrace-extend-extinguish here, but learning about this case was eyebrow-raising for me.

neonsunset 10/27/2024||
The C# extension is MIT, even though vsdbg, which it ships with, is closed-source. There's a fork that replaces it with netcoredbg, which is open.

C# DevKit is, however, based on the VS license. It builds on top of the base C# extension; the core features like debugger, language server, auto-complete and auto-fixer integration, etc. are in the base extension.

Alifatisk 10/27/2024|||
It’s open-source at start, later it turns into open-core.
atombender 10/27/2024||
Is this fix evidence of that?
Alifatisk 10/27/2024||
No, pure speculation
atombender 10/27/2024||
Not relevant, then.
throwuxiytayq 10/27/2024||||
Can you elaborate how exactly git is at risk here? These posts never do.
coliveira 10/27/2024||
They will extend git so that it works extremely well with their proprietary products, and just average with other tools and operating systems. That's always the goal for MS.
wbl 10/27/2024||
You know who the maintainer of Git is right?
semiquaver 10/28/2024||
Junio Hamano. Or did you confuse git and GitHub?
szundi 10/27/2024||||
This comment is downvoted, however you can be sure that managers in these corporations make these decisions deliberately - like half the time.

I find these insightful reminders. Use the vanilla free versions if the difference is negligible.

maccard 10/27/2024|||
No, this is cathedral vs bazaar development
yunusabd 10/27/2024||
> For many reasons, that's just too big, we have folks in Europe that can't even clone the repo due to it's size.

What's up with folks in Europe that they can't clone a big repo, but others can? Also it sounds like they still won't be able to clone, until the change is implemented on the server side?

> This meant we were in many occasions just pushing the entire file again and again, which could be 10s of MBs per file in some cases, and you can imagine in a repo

The sentence seems to be cut off.

Also, the gifs are incredibly distracting while trying to read the article, and they are there even in reader mode.

anon-3988 10/27/2024||
> For many reasons, that's just too big, we have folks in Europe that can't even clone the repo due to it's size.

I read that as an anecdote; a more complete sentence would be "We had a story where someone from Europe couldn't clone the whole repo on his laptop to use on a journey across Europe because his disk was full at the time. He has since cleared up the disk and is able to clone the repo".

I don't think it points to a larger issue with Europe not being able to handle 180GB files... I surely hope not.

peebeebee 10/27/2024||
The European Union doesn't like when a file get too big and powerful. It needs to be broken apart in order to give smaller files a chance of success.
wizzwizz4 10/27/2024|||
Ever since they enshrined the Unix Philosophy into law, it's been touch-and-go for monorepotic corporations.
_joel 10/27/2024|||
People foolishly thought the G in GDPR stood for "general" when it's actually GIANT.
acdha 10/27/2024|||
My guess is that “Europe” is being used as a proxy for “high latency, low bandwidth” – especially if the person in question uses a VPN (especially one of those terrible “SSL VPN” kludges). It’s still surprisingly common to encounter software with poor latency handling or servers with broken window scaling because most of the people who work on them are relatively close and have high bandwidth connection.
jerf 10/27/2024|||
And given the way of internal corporate networks, probably also "high failure rate", not because of "the internet", but the pile of corporate infrastructure needed for auditability, logging, security access control, intrusion detection, maxed out internal links... it's amazing any of this ever functions.
acdha 10/27/2024||
Or simply how those multiply latency - I’ve seen enterprise IT dudes try to say 300ms LAN latency is good because nobody wants to troubleshoot their twisted mess of network appliances and it’s not technically down if you’re not getting an error…

(Bonus game: count the number of annual zero days they’re exposed to because each of those vendors still ships 90s-style C code)

sroussey 10/27/2024||||
Or high packet loss.

Every once in a while, my router used to go crazy with what seemed like packet loss (I think a memory issue).

Normal websites would become super slow for any pc or phone in the house.

But git… git would fail to clone anything not really small.

My fix was to unplug the modem and router and plug back in. :)

It took a long time to discover the router was reporting packet loss, and that the slowness the browsers were experiencing had to do with some retries, and that git just crapped out.

Eventually when git started misbehaving I restarted the router to fix.

And now I have a new router. :)

hinkley 10/27/2024|||
Sounds, based on other responders, like high latency high bandwidth, which is a problem many of us have trouble wrapping our heads around. Maybe complicated by packet loss.

After COVID I had to set up a compressing proxy for Artifactory and file a bug with JFrog about it because some of my coworkers with packet loss were getting request timeouts that npm didn’t handle well at all. Npm of that era didn’t bother to check bytes received versus content-length and then would cache the wrong answer. One of my many, many complaints about what total garbage npm was prior to ~8 when the refactoring work first started paying dividends.

benkaiser 10/28/2024|||
I can actually weigh in here. Working from Australia for another team inside Microsoft with a large monorepo on Azure DevOps. I pretty much cannot do a full (unshallow) clone of our repo because Azure DevOps cloning gets nowhere close to saturating my gigabit wired connection, and eventually, due to the sheer time it takes, cloning something will hang up on either my end or the Azure DevOps end, to the point I would just give up.

Thankfully, we do our work almost entirely in shallow clones inside codespaces, so it's not a big deal. I hope the problems presented in the 1JS repo in this blog post are causing the similar size blowout in our repo and can be fixed.
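The shallow-clone workaround described above is easy to demo with stock git; here is a self-contained sketch on a throwaway repo (names and paths are made up for illustration):

```shell
set -e
# build a tiny repo with several commits, then shallow-clone it:
# --depth=1 fetches only the tip commit, skipping older (possibly huge) history
tmp=$(mktemp -d) && cd "$tmp"
git init -q origin-repo && cd origin-repo
for i in 1 2 3; do
  echo "revision $i" > file.txt
  git add file.txt
  git -c user.name=demo -c user.email=demo@example.com commit -qm "rev $i"
done
cd ..
git clone -q --depth=1 "file://$tmp/origin-repo" shallow-copy
cd shallow-copy
git rev-list --count HEAD   # prints 1: only the tip commit was fetched
```

Note the `file://` URL: git ignores `--depth` for plain local-path clones, so the URL form is needed to exercise the shallow transport locally.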

thrance 10/27/2024|||
The repo is probably hosted on the west coast, meaning it has to cross the Atlantic whenever you clone it from Europe?
tazjin 10/27/2024||
> What's up with folks in Europe that they can't clone a big repo, but others can?

They might be in a country with underdeveloped internet infrastructure, e.g. Germany))

avianlyric 10/27/2024||
I don't think there's any country in Europe with internet infrastructure as underdeveloped as the US. Most of Europe has fibre-to-the-premises, and all of Europe has consumer internet packages that are faster and cheaper than you're gonna find anywhere in the U.S.
tazjin 10/27/2024||
There's (almost) no FTTH in Germany. The US used to be as bad as Germany, but it has improved significantly and is actually pretty decent these days (though connection speed is unevenly distributed).

Both countries are behind e.g. Sweden or Russia, but Germany by a much larger margin.

There's some trickery done in official statistics (e.g. by factoring in private connections that are unavailable to consumers) to make this seem better than it is, but ask anyone who lives there and you'll be surprised.

rurban 10/28/2024||
The east has fibre everywhere, but the west is still a developing country(side). Shipping code on a truck would be faster, if you are not on some academic fibre net
eviks 10/27/2024||
upd: silly mistake - file name does not include its full path

The explanation probably got lost among all the gifs, but the last 16 chars here are different:

> was actually only checking the last 16 characters of a filename

> For example, if you changed repo/packages/foo/CHANGELOG.md, when git was getting ready to do the push, it was generating a diff against repo/packages/bar/CHANGELOG.md!

tux3 10/27/2024||
Derrick provides a better explanation in this cover letter: https://lore.kernel.org/git/pull.1785.git.1725890210.gitgitg...

(See also the path-walk API cover letter: https://lore.kernel.org/all/pull.1786.git.1725935335.gitgitg...)

The example in the blog post isn't super clear, but Git was essentially taking all the versions of all the files in the repo, putting the last 16 bytes of the path (not filename) in a hash table, and using that to group what they expected to be different versions of the same file together for delta compression.

Indeed, in the blog's example it doesn't quite work, because foo/CHANGELOG.md and bar/CHANGELOG.md share only the 13-character suffix /CHANGELOG.md, but you have to imagine the paths have a longer common suffix. That part is fixed by the --full-name-hash option: now you compare the full path instead of just 16 bytes.

Then they talk about increasing the window size. That's kind of a hack to work around bad file grouping, but it's not the real fix. You're still giving terrible inputs to the compressor and working around it by consuming huge amounts of memory. So it was a bit confusing to present that as the solution. The path-walk API and/or --full-name-hash are the real interesting parts here =)

lastdong 10/27/2024||
Thank you! I ended up having to look at the PR to make any sense of the blog post, but your explanation and links makes things much clearer
jonathancreamer 10/28/2024||
I'll update the post with this clarity too. Thanks!
derriz 10/27/2024|||
I wish they had provided an actual explanation of what exactly was happening and skipped all the “color” in the story. By filename do they mean path? Or is it that git will just pick any file with a matching name to generate a diff? Is there any pattern to the choice of other file to use?
snthpy 10/27/2024||
+1
js2 10/27/2024|||
> file name does not include its full path

No, it is the full path that's considered. Look at the commit message on the first commit in the `--full-name-hash` PR:

https://github.com/git-for-windows/git/pull/5157/commits/d5c...

Excerpt: "/CHANGELOG.json" is 15 characters, and is created by the beachball [1] tool. Only the final character of the parent directory can differntiate different versions of this file, but also only the two most-significant digits. If that character is a letter, then this is always a collision. Similar issues occur with the similar "/CHANGELOG.md" path, though there is more opportunity for differences in the parent directory.

The grouping algorithm puts less weight on each character the further it is from the right-side of the name:

  hash = (hash >> 2) + (c << 24)
The hash is 32 bits. Each 8-bit char (from the full path) is in turn added to the 8 most significant bits of the hash, after shifting the previous hash bits right by two (which is why only the final 16 chars affect the final hash). Look at what happens in practice:

https://go.dev/play/p/JQpdUGXdQs7

Here I've translated it to Go and compared the final value of "aaa/CHANGELOG.md" to "zzz/CHANGELOG.md". Plug in various values for "aaa" and "zzz" and see how little they influence the final value.
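For readers who prefer Python, here is a rough port of the same legacy name-hash (a sketch, not git's actual code; as far as I can tell the real function also skips whitespace characters, which is reproduced here). It shows how two changelogs from unrelated packages land on the same hash:

```python
def pack_name_hash(path: str) -> int:
    """Rough Python port of git's legacy pack name-hash (32-bit)."""
    h = 0
    for ch in path:
        if ch.isspace():              # git's version skips whitespace
            continue
        h = ((h >> 2) + (ord(ch) << 24)) & 0xFFFFFFFF
    return h

a = pack_name_hash("repo/packages/foo/CHANGELOG.md")
b = pack_name_hash("repo/packages/bar/CHANGELOG.md")
print(hex(a), hex(b), a == b)
# in this particular case the two values collide exactly; in general, paths
# sharing a long suffix come out equal or nearly equal, so unrelated
# CHANGELOGs get grouped together as delta candidates
```

Since objects are grouped by this value for delta search, an exact collision isn't even required: nearly-equal values are enough to put the wrong files next to each other.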

rurban 10/28/2024|||
Sounds like it needs to be fixed to FNV1a
js2 10/29/2024|||
No, the problem isn't the hash. It does what it was designed to do. It's just that it was optimal for a particular use case that fits the Linux kernel better than Microsoft's use case. Switching the hash wouldn't improve either situation. If you want to understand this deeper, see the linked PRs.
eviks 10/28/2024|||
Thanks for the deep dive!
daenney 10/27/2024|||
File name doesn’t necessarily include the whole path. The last 16 characters of CHANGELOG.md is the full file name.

If we interpret it that way, that also explains why the path-walk solution solves the problem.

But if it’s really based on the last 16 characters of just the file name, not the whole path, then it feels like this problem should be a lot more common. At least in monorepos.

floam 10/27/2024|||
It did shrink Chromium’s repo quite a bit!
eviks 10/27/2024|||
yes, this makes sense, thanks for pointing it out, silly confusion on my part
p4bl0 10/27/2024|||
I was also bugged by that. I imagine that the meta variables foo and bar are at fault here, and that probably the actual package names had a common suffix like firstPkg and secondPkg. A common suffix of length three is enough in this case to get 16 chars in common as "/CHANGELOG.md" is already 13 chars long.
jonathancreamer 10/28/2024||
Sorry about the gifs. Haha. And yeah I guess my understanding wasn't quite right either reading the reply to this thread, I'll try to clean it up in the post.
tazjin 10/27/2024||
I just tried this on nixpkgs (~5GB when cloned straight from Github).

The first option mentioned in the post (--window 250) reduced the size to 1.7GB. The new --path-walk option from the Microsoft git fork was less effective, resulting in 1.9GB total size.

Both of these are less than half of the initial size. Would be great if there was a way to get Github to run these, and even greater if people started hosting stuff in a way that gives them control over this ...
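For anyone who wants to try the same experiment, the `--window` part needs only stock git (unlike `--path-walk`, which needs the Microsoft fork). A self-contained sketch on a throwaway repo:

```shell
set -e
# build a toy repo with a few revisions of the same file
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
for i in 1 2 3 4 5; do
  seq 1 5000 > data.txt && echo "rev $i" >> data.txt
  git add data.txt
  git -c user.name=demo -c user.email=demo@example.com commit -qm "rev $i"
done
git count-objects -vH              # loose/pack sizes before
# -a: repack everything, -d: drop old packs, -f: recompute deltas,
# --window=250: consider up to 250 delta candidates per object (default 10)
git repack -adf --window=250
git count-objects -vH              # sizes after
```

On a real multi-gigabyte repo the repack step can take a long time and a lot of memory, which is exactly the trade-off tux3 describes above.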

jakub_g 10/27/2024||
The article mentions Derrick Stolee, who did the digging and shipped the necessary changes. If you're interested in git internals, shrinking git clone sizes locally and in CI, etc., Derrick wrote some amazing posts on the GitHub blog:

https://github.blog/author/dstolee/

See also his website:

https://stolee.dev/

Kudos to Derrick, I learnt so much from those!

fragmede 10/27/2024||
> Large blobs happens when someone accidentally checks in some binary, so, not much you can do

> Retroactively, once the file is there though, it's semi stuck in history.

Arguably, the fix for that is to run filter-branch, remove the offending binary, teach and get everyone set up to use git-lfs for binaries, force push, and help everyone get their workstation to a good place.

Far from ideal, but better than having a large not-even-used file in git.
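Before any rewrite you first have to find the offending blobs. A common plain-git idiom for listing the largest objects in history (shown here on a throwaway repo with a simulated stray binary; no extra tooling assumed):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q demo && cd demo
head -c 200000 /dev/urandom > accidental.bin   # simulate a checked-in binary
echo "hello" > small.txt
git add . && git -c user.name=demo -c user.email=demo@example.com commit -qm "oops"
# rev-list emits every reachable object, cat-file annotates type/size/path,
# then keep blobs only and show the largest first
git rev-list --objects --all |
  git cat-file --batch-check='%(objecttype) %(objectsize) %(objectname) %(rest)' |
  awk '$1 == "blob"' |
  sort -k2 -rn |
  head -n 3
```

Once identified, a tool like git filter-repo (a separate install) can rewrite them out of history, e.g. `git filter-repo --invert-paths --path accidental.bin`.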

abound 10/27/2024||
There's also BFG (https://rtyley.github.io/bfg-repo-cleaner/) for people like me who are scared of filter-branch.

As someone else noted, this is about small, frequently changing files, so you could remove old versions from the history to save space, and use LFS going forward.

larusso 10/27/2024|||
The main issue is not a binary file that never changes. It’s the small binary file that changes often.
cocok 10/27/2024|||
filter-repo is the recommended way these days:

https://github.com/newren/git-filter-repo

lastdong 10/27/2024||
It’s easier to blame Linus.
develatio 10/27/2024||
Hacking Git sounds fun, but isn't there a way to just not have 2,500 packages in a monorepo?
hinkley 10/27/2024||
Code line count tends to grow exponentially. The bigger the code base, the more unreasonable it is to expect people not to reinvent an existing wheel, due to ignorance of the code or fear of breaking what exists by altering it to handle your use case (ignorance of the uses of the code).

IME it takes less time to go from 100 modules to 200 than it takes to go from 50 to 100.

Cthulhu_ 10/27/2024|||
Yeah, have 2500 separate Git repos with all the associated overhead.
develatio 10/27/2024|||
Can’t we split the packages into logical groups and maybe have 20 or 30 monorepos of 70-100 packages? I doubt that all the devs involved in that monorepo have to deal with all the 2500 packages. And I doubt that there is a circular dependency that requires all of these packages to be managed in a single monorepo.
smashedtoatoms 10/27/2024||
People act like managing lots of git repos is hard, then run into monorepo problems requiring them to fix esoteric bugs in C that have been in git for a decade, all while still arguing monorepos are easy and great and managing multiple repos is complicated and hard.

It's like hammering a nail through your hand, and then buying a different hammer with a softer handle to make it hurt less.

crazygringo 10/27/2024|||
> all while still arguing monorepos are easy and great

I don't know anyone who says monorepos are easy.

To the contrary, the tooling is precisely the hard part.

But the point is that the difficulty of the tooling is a lot less than the difficulty of managing compatibility conflicts between tons of separate repos.

Each esoteric bug in C only needs to be fixed once. Whereas your version compatibility conflict this week is going to be followed by another one next week.

wavemode 10/27/2024|||
At Amazon, there is no monorepo.

And the tooling to handle this is not even particularly conceptually complicated - a "versionset" is a set of versions - a set of pointers to a particular commit of a repository. When you build and deploy an application, what you're building is a versionset containing the correct versions of all its dependencies. And pull requests can span across multiple repositories.

Working at Amazon had its annoyances, but dependency management across repos was not one of them.
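A toy sketch of the versionset concept as described above (purely illustrative: hypothetical names, not Amazon's actual tooling):

```python
from dataclasses import dataclass, field

@dataclass
class VersionSet:
    """A named set of pins: package name -> the commit it is pinned to."""
    name: str
    pins: dict[str, str] = field(default_factory=dict)

    def pin(self, package: str, commit: str) -> None:
        self.pins[package] = commit

    def merge_from(self, other: "VersionSet") -> None:
        """Adopt pins from another versionset (e.g. a shared baseline)."""
        self.pins.update(other.pins)

# an application's build is defined by its versionset,
# not by the HEAD of any single repository
live = VersionSet("live", {"libfoo": "a1b2c3", "libbar": "d4e5f6"})
app = VersionSet("my-app")
app.merge_from(live)
app.pin("libfoo", "0f9e8d")   # bump one dependency deliberately
print(app.pins["libfoo"])     # -> 0f9e8d
print(app.pins["libbar"])     # -> d4e5f6 (inherited from the baseline)
```

The point of the structure is that building and deploying resolves every dependency through the set of pins, so cross-repo changes become an update to one versionset rather than a coordinated dance across repositories.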

spankalee 10/28/2024|||
> And pull requests can span across multiple repositories

This bit is doing a lot of work here.

How do you make commits atomic? Is there a central commit queue? Do you run the tests of every dependent repo? How do you track cross-repo dependencies to do that? Is there a central database? How do you manage rollbacks?

HdS84 10/27/2024|||
That's exactly the problem. At least tooling can solve monorepo problems. But commits which should span multiple repos have no tooling at all. Except pain. Lots of pain.
Vilian 10/28/2024|||
Don't forget that git was made for Linux and Linux isn't a monorepo and works great with tens of thousands of devs per release
arp242 10/28/2024||
> Linux isn't a monorepo

I assume you meant to write "is" there?

hinkley 10/27/2024|||
Changing 100 CI pipelines is a giant pain in the ass. The third time I split the work with two other people. The 4th time someone wrote a tool and switched to a config file in the repo. 2500 is nuts. How do you even track red builds?
lopkeny12ko 10/27/2024||
This was exactly my first thought as well. This seems like an entirely self-manufactured problem.
hinkley 10/27/2024||
When you have hundreds of developers you're going to get millions of lines of code. That's partly Parkinson's Law, but also we have not fully perfected the three-way merge, encouraging devs to spread out more than intrinsically necessary in order to avoid tripping over each other.

If you really dig down into why we code the way we do, the “best practices” in software development, about half of them are heavily influenced by merge conflict, if not the primary cause.

If I group like functions together in a large file, then I (probably) won’t conflict with another person doing an unrelated ticket that touches the same file. But if we both add new functions at the bottom of the file, we’ll conflict. As long as one of us does the right thing everything is fine.

oftenwrong 10/31/2024||
This is one of the interesting benefits of https://www.unison-lang.org/ . A codebase of immutable functions inherently cannot have merge conflicts.
snthpy 10/27/2024||
Thanks for this post. Really interesting and a great win for OSS!

I've been watching all the recent GitMerge talks put up by GitButler and following the monorepo / scaling developments - lots of great things being put out there by Microsoft, Github, and Gitlab.

I'd like to understand this last 16 char vs full path check issue better. How does this fit in with delta compression, pack indexes, multi-pack indexes etc ... ?

_joel 10/27/2024|
> Really interesting and a great win for OSS!

Are they going to be opening a merge request to get their custom git command back in git proper then?

acdha 10/27/2024||
It appears so: https://lore.kernel.org/git/pull.1785.git.1725890210.gitgitg...
wodenokoto 10/27/2024||
Nice to see that Microsoft is dog-fooding Azure DevOps. It seems that more and more Azure services only have native connectors to GitHub so I actually thought it was moving towards abandonware.
issung 10/27/2024|
Having someone within arm's reach who knows the inner workings of Git so well must be a lovely perk of working on such projects at companies of this scale.
jonathanlydall 10/27/2024|
Certainly being in an org which has close ties to entities like GitHub helps, but any team in any org with that number of developers can justify the cost of bringing in a highly specialized consultant to solve an almost niche problem like this.