Posted by salkahfi 8 hours ago

An Update on GitHub Availability(github.blog)
260 points | 191 comments
jameskilton 6 hours ago|
Nice, they have availability numbers on their status page now, but they aren't aggregating them.

If you multiply all current numbers together (as of Apr 28), you find out that GitHub has a 97.26% uptime.

One ... single ... 9.

They can do better.

embedding-shape 6 hours ago|
Kind of unfair though, do the same for any platform with multiple services and you'd probably get <99% for most of them.

> you find out that GitHub has a 97.26% uptime

Converting that to downtime, you get ~40 minutes of downtime per day, roughly 10 days per year. Crazy stuff for something as essential as this.
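The arithmetic above can be sketched in a few lines; the 97.26% figure is the aggregate quoted earlier in the thread, and everything else follows from it:

```python
# Downtime implied by the quoted aggregate uptime of 97.26%.
uptime = 0.9726
downtime_frac = 1 - uptime

minutes_per_day = downtime_frac * 24 * 60  # share of a day, in minutes
days_per_year = downtime_frac * 365        # share of a year, in days

print(f"~{minutes_per_day:.1f} min of downtime per day")   # ~39.5 min
print(f"~{days_per_year:.1f} days of downtime per year")   # ~10.0 days
```

Note this assumes the per-service outages never overlap; in practice they often do, so the real aggregate downtime is somewhat lower than a straight product suggests.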

lousken 6 hours ago||
Availability is a priority? Doesn't seem like it: https://mrshu.github.io/github-statuses/
devmor 3 hours ago||
Microsoft has been an abysmal steward of Github - the few nice features it has over self-hosting just aren't worth losing an hour or more of CI/CD downtime during daylight hours every week.

Yesterday was the last straw for me - I've begun migrating my personal private projects and my contracting firm's projects off of github.

sltr 7 hours ago||
One thing is clear: an LLM wrote this.
imrozim 7 hours ago||
As a solo dev, GitHub going down is scary: all my code, all my history, one platform. This makes me want to take local backups more seriously.
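A minimal sketch of such a backup, assuming git is installed and with `you/project` as a placeholder URL: a `--mirror` clone keeps every ref (branches, tags, notes), and a periodic `remote update` keeps it fresh.

```shell
# One-time: create a full local mirror (all refs, not just one checked-out branch)
git clone --mirror https://github.com/you/project.git project-backup.git

# Later (e.g. from cron): refresh the mirror and drop refs deleted upstream
git -C project-backup.git remote update --prune
```

Since the mirror is a complete copy of the history, restoring after a disaster is just a clone or push from `project-backup.git`.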
tosti 5 hours ago|||
Sorry to ask but... Do you have any idea how git works???
2ndorderthought 7 hours ago||
Yea or use another provider like codeberg
maccard 6 hours ago|||
Personally I'd never use codeberg. Their FAQ on licensing [0] is basically everything that anyone who supports free software should abhor - it's "we might allow you to do what you want to".

[0] https://docs.codeberg.org/getting-started/faq/#how-about-pri...

imrozim 6 hours ago|||
True, but switching is not that easy when all your CI pipelines and integrations are in GitHub.
embedding-shape 6 hours ago||
I don't think it's 100% compatible, but Gitea's/Forgejo's (which Codeberg runs on) own Action implementation is pretty much the same as GitHub Actions, with minor differences.
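As an illustration (hypothetical repo, not from the thread): a plain workflow like the one below will in most cases run unchanged on Forgejo Actions, which reads `.github/workflows/` in addition to its own `.forgejo/workflows/` directory; the main things to check are that the `runs-on` label matches a registered runner and that the instance can resolve upstream actions like `actions/checkout`.

```yaml
# .github/workflows/ci.yml (Forgejo also picks workflows up from this path)
name: ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest  # on Forgejo, must match a registered runner's label
    steps:
      - uses: actions/checkout@v4
      - run: make test
```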
imrozim 4 hours ago||
Good to know, might actually try it for one project first before switching.
everfrustrated 7 hours ago||
So they haven't even finished migrating from their datacenters to Azure and have now started a project to add another cloud provider ("multi cloud")? Madness.
bananapub 7 hours ago||
anyone who's actually worked there, could you explain why they're finding scalability and reliability so hard? naively it seems like 'repo groups', ie clusters of repositories linked by being mutual forks, would be fairly isolated for the whole git storage layer, and everything else feels pretty easily parallelisable (issues, actions, etc, modulo taking locks now and then to submit results or whatever). and given that, surely you can incrementally deploy changes across those many shards to avoid most big outages?

are there big conceptual serialisations that I've missed? is it just not well factored? was the move to Azure just a catastrophically bad idea? some other thing?

fontain 6 hours ago||
Almost every high-volume service on the internet is write-a-little, read-a-lot, and when there are writes they're relatively small: a few bytes into a database that can fan out. GitHub is very different: constant writes, large files; it is under far more pressure than the systems the rest of us build. And then, as the article says, vibecoding happens, and suddenly they're receiving 30x the volume of expensive operations. GitHub is responsible for many of the performance improvements made to Git over the years; Git scales today because of work GitHub did, but that work was never intended for today's volume.

Even as recently as 18 months ago, Lovable appeared, seemingly overnight, and caused huge problems for GitHub because they were creating repositories on GitHub for every single Lovable project, offloading the very high cost onto GitHub, hundreds of thousands of repositories. A couple of years before that, Homebrew used GitHub as a de facto CDN and that was a huge problem, too.

Nowadays it is easy to imagine how to scale out a service like Twitter or YouTube or Facebook because it has all been done before, but that's not true of Git: it has never scaled like this, and there are very few examples of a service with GitHub's characteristics.

https://lovable.dev/blog/incident-github-outage

https://news.ycombinator.com/item?id=42659111

dist-epoch 7 hours ago||
Recently there was a tweet about how GitHub PR diffs had 10 React components PER LINE, and how they optimized that down to only 2 React components per line or something.

> To summarize, for every v1 diff line there would be:

> - Minimum of 10-15 DOM tree elements

> - Minimum of 8-13 React Components

> - Minimum of 20 React Event Handlers

> - Lots of small re-usable React Components

https://github.blog/engineering/architecture-optimization/th...

bananapub 7 hours ago||
I'm asking about the infrastructure; obviously they chose, for some reason, to make my computer fans turn on to show some red and green lines on a text file.
dist-epoch 6 hours ago||
terrible frontend architecture suggests poor engineering culture which typically spreads to all teams, including the infrastructure team
JimmaDaRustla 4 hours ago||
AS IF THEY POST THIS WHILE THEIR SEARCH IS BROKEN, what a circus
agluszak 5 hours ago||
Regarding their image with stats (https://github.blog/wp-content/uploads/2026/04/record-accell...) - what exactly are the ranges on y-axes? I doubt they had close to 0 PRs merged in 2023 ;)
OutOfHere 5 hours ago|
> we accelerated parts of migrating performance or scale sensitive code out of Ruby monolith into Go.

I am surprised that Microsoft is allowed to use Go. How long will it be before a bean counter forces a rewrite to a Microsoft favored language?

senderista 21 minutes ago|
They used Go for the new TypeScript compiler!