
Posted by kwantaz 10/27/2024

We shrunk our Javascript monorepo git size (www.jonathancreamer.com)
334 points | 213 comments | page 2
nkmnz 10/27/2024|
> we have folks in Europe that can't even clone the repo due to it's size

Officer, I'd like to report a murder committed in a side note!

dizhn 10/27/2024||
They call him Linux Torvalds over there?
bubblesnort 10/27/2024||

    > We work in a very large Javascript monorepo at Microsoft we colloquially call 1JS.
I used to call it office.com. Teams is the worst offender there. Even a website with a cryptominer on it runs faster than that junk.
wodenokoto 10/27/2024||
We were all impressed with google docs, but office.com is way more impressive.

Collaborative editing between a web app, two mobile apps and a desktop app with 30 years of backwards compatibility, and it pretty much just works. No wonder that took a lot of JavaScript!

esperent 10/27/2024|||
We use MS Teams at my company. The Word and Excel in the Windows Teams app are so buggy that I can almost never successfully open a file. It just times out and eventually shows a "please try again later" message nearly every time. I've uninstalled and reinstalled the Teams app four or five times trying to fix this.

We've totally given up any kind of collaborative document editing because it's too frustrating, or we use Notion instead, which for all its faults at least gets the basic stuff right, like loading a bloody file...

acdha 10/27/2024||
This is specific to your company’s configuration - likely something related to EDR or firewall policies.
esperent 10/27/2024|||
I'm the one who set it up. It's a small team of 20 people. I've done basically no setup beyond the minimum of following docs to get things running. We've had nonstop problems like this since the very start. Files don't upload, anytime I try to fix it I'm confronted with confusing error messages and cryptic things like people telling me "something related to EDR". What the hell is EDR? I just want to view a Word doc.

I've come to realize that Teams should only be used in large companies who can afford dedicated staff to manage it. But it was certainly sold to us as being easy to use and suitable for a small company.

acdha 10/27/2024||
EDR: https://en.wikipedia.org/wiki/Endpoint_detection_and_respons...

I mentioned that because security software blocking things locally or at the network level is such a common source of friction. I don’t think Teams is perfect by any means but the core functionality has been quite stable in personal use, both of my wife’s schools, and my professional use so I wouldn’t conclude that it’s hopeless and always like that.

esperent 10/28/2024||
Thank you, I appreciate the support. But this doesn't explain the intermittent nature of the issues. For example, just now I tried to open a word file. I got the error message. But then I tried several times and restarted the app twice, and eventually the file did load. It just took five+ minutes of trying over and over.

I also had to add a new user yesterday, so I went to admin.microsoft.com in Edge. 403 error. Tried Chrome and Firefox. Same. Went back to Edge and suddenly it loaded. Then like an idiot I refreshed: 403 error again. Another five or six refreshes and it finally loaded again and I was able to add the new user. There are never any real error messages that would help me debug anything, it's just endless frustration and slowness.

bubblesnort 10/27/2024|||
Really it's anyone using teams on older or cheaper hardware.
acdha 10/27/2024||
So you’ve tested this with clean installs on unfiltered networks? Just how old is your hardware? It works well on, say, the devices they issue students here so I’m guessing it’d have to be extremely old.
matrss 10/27/2024||||
> [...] and it pretty much just works.

I beg to differ. Last time I had to use PowerPoint (granted, that was ~3 years ago), math on the slides broke when you touched it with a client that wasn't of the same type as the one that initially put it there. So you would need to use either the web app or the desktop app to edit it, but you couldn't switch between them. Since we were working on the slides with multiple people you also never knew what you had to use if someone else wrote that part initially.

hu3 10/27/2024||
could it be a font issue?
matrss 10/27/2024||
If I remember correctly I had created the math parts with the windows PowerPoint app and it was shown more or less correctly in the web app, until I double clicked on it and it completely broke; something like it being a singular element that wasn't editable at all when it should have been a longer expression, I don't remember the details. But I am pretty sure it wasn't just a font issue.
ezst 10/27/2024||||
That's the thing, though, the compat story is terrible. I can't say much about the backwards one, but Microsoft has started the process of removing features from the native versions just to lower the bar for the web one catching up. Even my most Microsoft-enamoured colleagues are getting annoyed by this (and the state of all-MS things going downhill, but that's another story)
lostlogin 10/27/2024||
> That's the thing, though, the compat story is terrible.

It really is. With shared documents you just have to give up. If someone edits them on the web, in Teams, in the actual app or some other way like on iOS, it all goes to hell.

Pages get added or removed, images jump about, fonts change and various other horrors occur.

If you care, you’ll get ground into the earth.

tinco 10/27/2024||||
To be fair, we were impressed with Google Docs 15 years ago. Not saying office.com isn't impressive, but Google Docs certainly isn't impressive today. My company still uses GSuite, as I don't like being in Microsoft's ecosystem and we don't need any advanced features of our office suite but Google Docs and the rest of the GSuite seem to be intentionally held back to technology of the early 2010's.
alexanderchr 10/27/2024||
Google Docs certainly hasn't changed much in the last 5-10 years. I wonder if that's an intentional choice, or if it is because those that built it and understand how it works have long since moved on to other things.
jakub_g 10/27/2024|||
Actually I did see a few long awaited improvements landing in gdocs lately (e.g. better markdown support, pageless mode).

I think they didn't deliver many new features in the early 2020s because they were busy with a big refactoring from DOM to canvas rendering [0].

[0] https://news.ycombinator.com/item?id=27129858

sexy_seedbox 10/27/2024|||
No more development? Time for Google to kill Google Docs!
fulafel 10/27/2024||||
What's impressive is that MS has such well-trained customers that it can get away with extremely buggy and broken web apps. Fundamental brokenness like collaborative editing frequently losing data, plus a thousand cuts of more mundane bugs.
coliveira 10/27/2024||||
You must be kidding about "just works". There are so many bugs in Word and Excel that you could spend the rest of your life fixing them. And the performance is disastrous.
Cthulhu_ 10/27/2024|||
> No wonder that took a lot of JavaScript!

To the point where they quickly found the flaws in JS for large codebases and came up with TypeScript. I think. It makes sense that TS came out of the Office for the web project.

inglor 10/27/2024|||
Hey, I worked with Jonathan on 1JS a while ago (on a team, Excel).

Just a note: OMR (the Office monorepo) is a different (and actually much larger) monorepo than 1JS (which is big on its own).

To be fair I suspect a lot of the bloat in both originates from the amount of home grown tooling.

IshKebab 10/27/2024||
I thought Microsoft had one monorepo. Isn't that kind of the point? How many do they have?
lbriner 10/27/2024||
The point of a monorepo is that all the dependencies for a suite of related products are all in a single repo, not that everything your company produces is in a single repo.
cjpearson 10/27/2024||
Most people use the "suite of related products" definition of monorepo, but some companies like Google and Meta have a single company-wide repository. It's unfortunate that the two distinct strategies have the same name.
coliveira 10/27/2024||
Teams is the running version of that repository... It is hard for them even to store on git.
triyambakam 10/27/2024||
> we have folks in Europe that can't even clone the repo due to it's size.

What is it about Europe that makes it more difficult? That internet in Europe isn't as good? Actually, I have heard that some primary schools in Europe lack internet. My grandson's elementary school in rural California (population <10k) had internet as far back as 1998.

_kidlike 10/27/2024||
Let's pretend you didn't write the last 2 sentences...

First of all, arguing about "internet in Europe" makes close to zero sense. The article just uses it as a shortcut to avoid listing countries.

I live in a country where I have 10 Gbps full-duplex and I pay $50/month, in "Europe".

The issue is that some countries have telecom lobbies which are still milking their copper networks. Then the "competition committees" in most of these countries actually work AGAINST the benefit of the public, because they don't allow a single company to start offering fiber, since that would be a competitive advantage. So the whole system is in a kind of deadlock. To unblock it, at least two telecoms have to agree to release fiber deals together. It has happened in some countries.

0points 10/27/2024||
What European countries still don't have fiber?

//Confused Swede with 10G fiber all over the place. Writing from literally the countryside next to nowhere.

zelphirkalt 10/27/2024|||
If you really need it pointed out, take it from a German neighbor: Telekom is running some extortion scheme or so here. Oh, we could have gotten fiber to our house already ... if we paid them 800+ Euro! So we'd rather stick with our 100 Mbit/s or so connection that is not fiber but copper. If the German state does not intervene here, or the practices of ISPs and whoever has the power to build fiber change, we will still be on copper for the foreseeable future.

Then there are villages that were promised fiber connections, but somehow switching to fiber left them with unstable Internet and often no Internet at all. Saw some documentary about that; it could be fixed by now.

Putting fiber into the ground also requires a whole lot of effort opening up roads and replacing what's there. Those costs they try to push to the consumers with their 800+ Euro extortion scheme.

But to be honest, I am also OK with my current connection. All I worry about is it being stable, with no packet loss and no ping spikes. Consistently good connection stability is more important than throughput. Sadly, I cannot buy any of those guarantees from any ISP.

0points 10/29/2024|||
FWIW, Sweden subsidized fiber digging but we still had to pay 2000 EUR to get it connected.

The government will pay the extra fees, which can easily end up close to 10000 EUR due to large distances.

If all you need to pay is 800 EUR, then I don't understand what your issue is. Just pay it.

singron 10/27/2024|||
Is 800 euros that bad? In the US, we were quoted $10k a few years back. Even if fiber is already at the road, $800 is probably a fair price just to trench the line from the road to your home and install an entry point. If they provide free installation, then they have to make up the cost by raising your rates.
zelphirkalt 10/27/2024||
I think private households paying 800 Euro for what should be public infrastructure, being milked by ISPs is pretty bad.
holowoodman 10/27/2024||||
Germany.

Deutsche Telekom is the former monopoly that was half-privatized around 1995 or something. The state still owns quite a large stake of it.

They milk their ancient copper crap for everything they can while keeping prices high.

They are refusing useful backbone interconnects to monopolize access to their customers. (Actually they are not allowed to refuse. They just offer interconnections only in their data centers in the middle of nowhere, where you need to rent their (outrageously priced) rackspace and fibres because there is nothing else. They have been refusing for decades to do anything useful at the big exchanges like DECIX.)

And if there should ever be a small competitor that tries to lay fibre somewhere on their own, they quickly lay their own fibre into the open ditches (they are allowed to do that) and offer just enough rebates for their former copper customers to switch to their fibre that the competitor cannot recoup the investment and goes bankrupt. Since that dance is now known to everyone, even the announcement of Telekom laying their own fibres kills the competitors' projects there. So after a competitor's announcement of a fibre rollout, Telekom does the same; project dead, no fibre rollout at all.

Oh, and since it is a partially-state-owned former monopoly/ministry, the state and competition authorities turn a blind eye to all that, when not actively promoting them...

Then there is the problem of "5G reception" vs. "5G reception with usable bandwidth". A lot of overbooking goes on, many cells don't have sufficient capacity allocated, so there are reports of 4G actually being faster in many places.

And also, yes, you can get 5G in a lot of actually populated areas. But you certainly will pay through the nose for that, usually you get a low-GB amount of traffic included, so maybe a tenth of the Microsoft monorepo in question. The rest is pay-10Eur-per-GB or something.

ahartmetz 10/27/2024|||
It is almost as bad as you say, except that I recently noticed several instances of competitors offering cheaper fiber than Telekom and surviving. Still, overall fiber buildout is low, like... I looked it up, reportedly 36% now.
immibis 10/27/2024|||
Wait, I live in that area. Does that mean I'm allowed to lay my own fiber into their open ditches too, or do they have special rights no one else has?
holowoodman 10/27/2024||
AFAIK the special right is granted to everyone providing fibre services to the public: they are informed about any ditches being dug on public ground and get the opportunity to throw their fibre in before the ditch is closed again.
SSLy 10/27/2024||||
Germany, GP's situation smells like their policies.
ahoka 10/27/2024|||
I pay 42USD for 250Mbit in a larger Swedish city. What is that magic ISP I should be using?
0points 10/30/2024|||
Change landlord. I used to pay about 100 SEK for Bahnhof in Svenska Bostäder before I moved away. It came with a public IP and everything.
BenjiWiebe 10/28/2024|||
Sounds like you are already using a magic ISP (rural USA here).
yashap 10/27/2024|||
They’re probably downloading from a server in the States; being much further away makes a big difference with a massive download.
jonathancreamer 10/28/2024||
This.
nyanpasu64 10/27/2024|||
I've experienced interruptions mid-clone (with no apparent way to resume them) when trying to clone repos on unreliable connections, and perhaps a similar issue is happening with connections between continents.
joshvm 10/27/2024||
The only reliable route I’ve found is to use SSH clone. HTTPS is lousy and, as you mention, not resumable. Works fine in Antarctica, even over our slower satellite. Doesn’t help if you actually drop, but you can clone to a remote and then rsync everything over time.
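Roughly (a sketch; hostnames and paths are placeholders), the clone-to-a-remote-then-rsync approach looks like this - the nice part is that rsync with --partial is resumable, unlike the git transfer itself:

    # clone on a machine with a good connection to the git server
    ssh user@well-connected-box 'git clone https://example.com/big-monorepo.git'

    # then pull the whole repo (including .git) over at your own pace;
    # if the link drops, re-running picks up where it left off
    rsync -az --partial --progress user@well-connected-box:big-monorepo/ ./big-monorepo/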
p_l 10/27/2024|||
It's issues cloning a super huge repo over crappy protocols across an ocean, especially when VPNs get included in the problem.
59nadir 10/27/2024|||
Most European countries have connections with more bandwidth and lower base latency for cheaper than the US; it's not a connection issue. If there is an issue, it's that the repo itself is hosted on the other side of the world, but even so the side note itself is odd.
tom_ 10/27/2024||
I wouldn't say it's odd at all - it's basically what's justifying actually trying to solve the problem rather than just going "huh... that's weird..." then putting it on the backlog due to it not being a showstopper.

This sort of thing has been a problem on every project I've worked on that's involved people in America. (I'm in the UK.) Throughput is inconsistent, latency is inconsistent, and long-running downloads aren't reliable. Perhaps I'm over-simplifying, but I always figured the problem was fairly obvious: it's a lot of miles from America to Europe, west coast America especially, and a lot of them are underwater, and you're sharing the conduit with everybody else in Europe. Many ways for packets to get lost (or get held up long enough to count), and frankly it's quite surprising more of them don't.

(Usual thing for Perforce is to leave it running overnight/weekend with a retry count of 1 million. I'm not sure what you'd do with Git, though? It seems to do the whole transfer as one big non-retryable lump. There must be something, though.)
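One thing that might work (a sketch I haven't tried at this scale; the URL is a placeholder): do a shallow clone first and then deepen the history in chunks, so each transfer is smaller and a failed fetch can simply be re-run:

    # grab only the tip commit first - a much smaller initial transfer
    git clone --depth 1 https://example.com/big-monorepo.git
    cd big-monorepo

    # deepen history 5000 commits at a time, retrying until that chunk succeeds
    until git fetch --deepen=5000; do sleep 60; done

    # once most of it is local, fetch whatever remains
    git fetch --unshallow

Partial clone (git clone --filter=blob:none) is another option if the server supports it, since old blobs are then only fetched on demand.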

gnrlst 10/27/2024|||
In most EU countries we have multi-gigabit internet (for cheap too). Current offers are around ~5 Gbit speeds for 20 bucks a month.
jillesvangurp 10/27/2024|||
Sadly, I'm in Germany, which is a third-world country when it comes to decent connectivity. They are rolling out some fiber now in Berlin. Finally. But very slowly, and not to my building any time soon. Most of the country is limited to DSL speeds. Mobile coverage is getting better but still nonexistent outside of cities. Germany borders nine countries. Each of them has better connectivity than Germany.

I'm from the Netherlands where over 90% of households now have fiber connections, for example. Here in Berlin it's very hard to get that. They are starting to roll it out in some areas but it's taking very long and each building has to then get connected, which is up to the building owners.

aniviacat 10/27/2024|||
> Mobile coverage is getting better but still non existent outside of cities.

According to the Bundesnetzagentur over 90% [1] of Germany has 5G coverage (and almost all of the rest has 4G [2]).

[1] https://www.bundesnetzagentur.de/SharedDocs/Pressemitteilung...

[2] https://gigabitgrundbuch.bund.de/GIGA/DE/MobilfunkMonitoring...

holowoodman 10/27/2024|||
Those statistics are a half-truth at best.

The "coverage" they are reporting is not by area but by population. So all the villages and fields that the train or autobahn goes by won't have 5G, because they are in the other 10% because of their very low population density.

And the reporting comes out of the mobile phone operators' reports and simulations (they don't have to do actual measurements). Since their license depends on meeting a coverage goal, massive over-reporting is rampant. The biggest provider (Deutsche Telekom) is also partially state-owned, so the regulators don't look as closely...

Edit: accidentally posted this in the wrong comment: Then there is the problem of "5G reception" vs. "5G reception with usable bandwidth". A lot of overbooking goes on, many cells don't have sufficient capacity allocated, so there are reports of 4G actually being faster in many places.

And also, yes, you can get 5G in a lot of actually populated areas. But you certainly will pay through the nose for that, usually you get a low-GB amount of traffic included, so maybe a tenth of the Microsoft monorepo in question. The rest is pay-10Eur-per-GB or something.

jillesvangurp 10/27/2024|||
I usually lose connectivity on train journeys across Germany. I'm offline most of the way. Even the in train wifi gets quite bad in remote areas. Because they depend on the same shitty mobile networks. There's a stark difference as soon as you cross the borders with other countries. Suddenly stuff works again. Things stop timing out.

I also deal with commercial customers that have companies in areas with either no or poor mobile connectivity, and since we sell mobile apps to them, we always need to double check they actually have a good connection. One of our customers is on the edge of a city with very spotty 4G at best. I recently recommended Starlink to another company that is operating in rural areas. They were asking about offline capabilities of our app, because they deal with poor connectivity all the time. I made the point that you can get internet anywhere you want now for a fairly reasonable price.

barrkel 10/27/2024|||
When I travel in Germany I use a Deutsche Telekom pay-as-you-go SIM in a 5G hotspot, and generally get about 200 Mbit throughput, which is far higher than you can expect any place you're staying to provide. It's €7 a day (or €100 a month) but it's worth it to avoid the terrible internet.
zelphirkalt 10/27/2024||
Oh, that is an incentive for them not to improve anything. Wouldn't want customers to stop purchasing mobile Internet for 100 Euro a month.
n_ary 10/27/2024||||
Well, good for you. On my side of Europe, I pay €50 for a cheap 50 Mbps connection (1-month cancellation notice period). I could get a slightly cheaper 100 Mbps from a predator for €20 for the first 6 months, but then it goes up to €50, and they pull BS about not being able to cancel even if you move, because your new location is also in their coverage area (over garbage copper) and suffers at least 20 outages per month, while there are other providers with much cheaper rates and better service.

Some of the EU is still suffering from Telekom copper barons.

badgersnake 10/27/2024|||
Not in the UK. Still on 80Mbit VDSL here.
_joel 10/27/2024|||
You must be unlucky, according to Openreach "fibre broadband is already available in more than 96.59 per cent of the UK."
mattlondon 10/27/2024|||
Is that "fibre" or "full fibre".

They lied a lot for a good few years saying "OMG fibre broadband!" when in reality it was still copper for the last mile, so that "fibre" connection was in reality some ADSL variant limited to 80/20 Mbps.

Actual full fibre all the way from your home to the internet is I think still quite a way behind. Even in London (London! The capital city with high density) there are places where there are no full fibre options.

Deathmax 10/27/2024|||
According to ThinkBroadband's tracking [1], the headline figures are 85.20% of premises are gigabit capable (FTTP/FTTH/Cable [DOCSIS]) with 71.86% being full fibre.

[1]: https://www.thinkbroadband.com/news/10343-85-gigabit-coverag...

_joel 10/27/2024|||
Maybe my friends and I are lucky, as we're all on FTTH.
mattlondon 10/27/2024||
Only a few people I know are on FTTH. I guess I live in a fairly affluent area in Zone 3 which is lower density than average - zero flats etc, all just individual houses - so perhaps not worth their effort rolling out.
badgersnake 10/27/2024|||
Coming next year apparently. I won’t hold my breath.
sirsinsalot 10/27/2024|||
I and many I know have Gb fiber in the UK
RadiozRadioz 10/27/2024|||
At least here in Western Europe, in general the internet is great. Though coverage in rural areas varies by country.
johnisgood 10/27/2024|||
Some countries in Europe (even Poland) definitely offer faster Internet and for cheaper than the US, and without most of the privacy issues that US ISPs have.
mattlondon 10/27/2024|||
I was not sure what this meant either. I know personally I have downloaded and uploaded some very very large files transatlantic (e.g. syncing to cloud storage) with absolutely no issues, so not sure what they are talking about. I guess perhaps there are issues with git cloning such a large amount of data, but that is a problem with git and not the infrastructure.

FWIW every school I've seen (and I recently toured a bunch looking at them for my kids to start at) all had the internet and the kids were using iPads etc for various things.

Anecdotally, my secondary school (11-18y, in the UK) in rural Hertfordshire was online around 1995. It was via, I think, a 14.4 modem, and there actually wasn't that much useful material for kids then, to be honest. I remember looking at the "non-professional style" NASA website for instance (the current one is obviously quite fancy in comparison, but it used to be very rustic and at some obscure domain). CD-based encyclopedias were all the rage instead around that time IIRC - Encarta et al.

heisenbit 10/27/2024|||
Effective bandwidth can be influenced by round-trip time. Fewer IPv4 addresses means more NAT, with more delay and yet another point where occasionally something can go wrong. Last but not least, there are some areas in the EU, like the Canary Islands, where the internet feels like it's going over a sat.
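Some back-of-envelope numbers to illustrate the RTT effect (assuming a single TCP stream whose effective window stays around 1 MiB, which isn't unusual on a lossy path):

    max throughput ≈ window size / RTT

    1 MiB / 150 ms (transatlantic) ≈  7 MB/s ≈  56 Mbit/s
    1 MiB /  20 ms (same region)   ≈ 52 MB/s ≈ 420 Mbit/s

So the same line can feel an order of magnitude slower just because the server is an ocean away, before NAT or packet loss even enter the picture.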
nemetroid 10/27/2024|||
The problem is probably that the repo is not hosted in Europe.
o11c 10/27/2024|||
My knowledge is a bit outdated, but we used to say:

* in America, peering between ISPs is great, but the last-mile connection is terrible

* In Europe, the last-mile connection is great, but peering between the ISPs is terrible (ISPs are at war with each other). Often you could massively improve performance by renting a VPS in the correct city and routing your traffic manually.
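For example (a sketch; hostnames are placeholders, and it assumes the VPS really does have the better peering), you can route just git's HTTPS traffic through such a VPS with an SSH SOCKS tunnel:

    # open a SOCKS proxy through the well-peered VPS
    ssh -N -D 1080 user@vps.example.com &

    # send git's HTTP(S) traffic through that tunnel
    git config --global http.proxy socks5h://127.0.0.1:1080

    git clone https://example.com/big-monorepo.git

(And git config --global --unset http.proxy afterwards, since this affects all repos.)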

teo_zero 10/27/2024||
> > we have folks in Europe that can't even clone the repo due to it's size.

> I have heard that some primary schools in Europe lack internet.

Maybe they lack internet but teach their pupils how to write "its".

rettichschnidi 10/27/2024||
I'm surprised they are actually using Azure DevOps internally. Creating your own hell I guess.
jonathanlydall 10/27/2024||
I find the “Boards” part of DevOps doesn’t work well for us as a small org wanting a less structured backlog, but for components like Pipelines and the Git repositories it’s neither here nor there for us.

What aspects of Azure DevOps are hell to you?

rettichschnidi 10/27/2024||
Some examples, in no particular order.

Hampering the productivity:

- Review messages get sent out before the review is actually finished. They should be sent out only once the reviewer has finished the work.

- Code reviews are implemented in a terrible way compared to GitHub or GitLab.

  - Re-requesting a review once you have implemented the proposed changes? Takes a single click on GitHub, but cannot be done in Azure DevOps. I need to e.g. send a Slack message to the reviewer or remove and re-add them as a reviewer.

  - Knowing what line of code a reviewer was giving feedback on? Not possible after the PR gets updated, because the reviewer's feedback sticks to the original line number, which might now contain something entirely different.
- Reviewing the commit messages in a PR takes way too many clicks. This causes people to not review the commit messages, letting bad commit messages pass and thus making it harder for future developers trying to figure out why something got implemented the way it did. Examples:

  - Too many clicks to review a commit message: PR -> Commits -> Commit -> Details

  - Comments on a specific commit are not shown in the commit's PR
- Unreliable servers. E.g. "remote: TF401035: The object '<snip>' does not exist.\nfatal: the remote end hung up unexpectedly" happens too often on git fetch. Usually works on a 2nd try.

- Interprets IPv6 addresses in commit messages as emoji. E.g. fc00::6:100:0:0 becomes fc00::60:0.

- Cannot cancel a stage before it has actually started (wasting time and cycles)

- Terrible diffs (can not give a public example)

- Network issues. E.g. checkouts that should take a few seconds take 15+ minutes (can not give a public example)

- Step "checkout": Changes working folder for following steps (shitty docs, shitty behaviour)

- The documentation reads as if their creators get paid by the number of words, but not for actually being useful. Whereas GitHub for example has actually useful documentation.

- PRs always open on "Show everything" instead of "Active comments" (what I want). Resets itself on every reload.

- Tabs are hardcoded (?) to be displayed as 4 chars - but we want 8 (Zephyr)

- Re-running a pipeline run (manually) does not retain the resources selected in the last run

Security:

- DevOps does not support modern SSH keys; one has to use RSA keys (https://developercommunity.visualstudio.com/t/support-non-rs...). It took them multiple years to allow RSA keys which are not deprecated by OpenSSH due to security concerns (https://devblogs.microsoft.com/devops/ssh-rsa-deprecation/), yet there is still no support for modern algos. This also rules out the usage of hardware tokens, e.g. YubiKeys. (A key generation workaround is sketched at the end of this comment.)

Azure DevOps is dying. Thus, things will not get better:

- New, useful features get implemented by Microsoft for GitHub, but not for DevOps. E.g. https://devblogs.microsoft.com/devops/static-web-app-pr-work...

- "Nearly everyone who works on AzDevOps today became a GitHub employee last year or was hired directly by GitHub since then." (Reddit, https://www.reddit.com/r/azuredevops/comments/nvyuvp/comment...)

- Looking at Azure DevOps Released Features (https://learn.microsoft.com/en-us/azure/devops/release-notes...) it is quite obvious how much things have slowed down since e.g. 2019.

Lastly - their support is ridiculously bad.
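On the SSH key point above, the practical consequence (a sketch; the comment string and file name are placeholders) is that you have to generate a long RSA key instead of the usual ed25519 one:

    # what you'd normally generate, but which (per the complaint above) Azure DevOps won't accept
    ssh-keygen -t ed25519 -C "you@example.com"

    # what you end up using instead
    ssh-keygen -t rsa -b 4096 -C "you@example.com" -f ~/.ssh/id_rsa_azuredevops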

sshine 10/27/2024||
> I'm surprised they are actually using Azure DevOps internally. Creating your own hell I guess.

Even the hounds of hell may benefit from dogfooding.

tazjin 10/27/2024||
houndfooding?
sshine 10/27/2024||
Ain't nothing but a hound dog.
nixosbestos 10/27/2024||
Oh hey I know that name, Stolee. Fellow JSR grad here.
jbverschoor 10/27/2024||
> those branches that only change CHANGELOG.md and CHANGELOG.json, we were fetching 125GB of extra git data?! HOW THO??

Unrecognized 100x programmer somewhere lol
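For anyone who wants to see where their own repo's weight comes from, a couple of generic commands help (not the article's method, just standard git plumbing):

    # how much data the local object database is holding
    git count-objects -vH

    # the 20 largest blobs in the repository, biggest last
    git cat-file --batch-all-objects --batch-check='%(objecttype) %(objectsize) %(objectname)' \
        | awk '$1 == "blob"' | sort -k2 -n | tail -20

Running git count-objects -vH before and after a fetch also makes it obvious when a "tiny" branch drags gigabytes of pack data along with it.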

mattlondon 10/27/2024||
I recently had a similar moment of WTF for git in a JavaScript repo.

Much, much smaller of course, though. A Raspberry Pi had died and I was trying to recover some projects that had not been pushed to GitHub for a while.

Holy crap. A few small JavaScript projects with perhaps 20 or 30 code files, a few thousand lines of code (a couple of tens of KB of actual code at most), had tens of gigabytes of data in the .git/ folder. Insane.

In the end I killed the recovery of the entire home dir and had to manually select folders to avoid accidentally trying to recover a .git/ dir. It was taking forever on a poorly SD card that was already in a bad way, and I did not want to finally kill it for good by trying to salvage countless gigabytes of trash for git.
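A rough sketch of how that triage could be scripted instead of hand-picking folders (paths are made up):

    # see which projects' .git dirs on the card are the space hogs
    du -sh /mnt/dying-sd/home/pi/projects/*/.git | sort -h

    # copy just the working trees off the dying card, skipping .git entirely
    rsync -a --exclude='.git/' /mnt/dying-sd/home/pi/projects/ ~/recovered/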

Vilian 10/28/2024|
People who use git in monorepos don't understand git