In my case I don't want my tools to assume git; they should work whether I open SVN, TFS, Git, or a zip file. They should also sync back into my 'human' tooling, which is what I do currently. Still working on it, but it's also free, just like Beads.
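Roughly, the shape I'm aiming for is a thin adapter layer; a minimal sketch (the names are illustrative, not a real library):

    # Each backend (Git, SVN, TFS, zip) only has to expose the same tiny surface.
    from typing import Iterator, Protocol
    import zipfile

    class Workspace(Protocol):
        def list_files(self) -> Iterator[str]: ...
        def read(self, path: str) -> bytes: ...
        def write_back(self, path: str, data: bytes) -> None: ...

    class ZipWorkspace:
        """The zip backend; Git/SVN/TFS adapters implement the same shape."""
        def __init__(self, archive_path: str) -> None:
            self._zf = zipfile.ZipFile(archive_path)  # opened read-only

        def list_files(self) -> Iterator[str]:
            return iter(self._zf.namelist())

        def read(self, path: str) -> bytes:
            return self._zf.read(path)

        def write_back(self, path: str, data: bytes) -> None:
            # A read-only archive can't take edits; syncing back into the
            # 'human' tooling is the job of whichever backend owns the files.
            raise NotImplementedError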
On the one hand they think these things provide 1337x productivity gains, can be run autonomously, and will one day lead to "the first 1 person billion dollar company".
And, in complete cognitive dissonance, they somehow still have fantasies of a future 'acquisition' by their oppressors.
Why acquire your trash dev tool?
They'll just have the agents copy it. Hell, you could even outright steal it, because apparently laundering any licensing issues through LLMs short-circuits the brains of judges down to protohuman clacking-rocks-together levels.
As to why those companies would acquire a startup instead of having an agent generate it for them: why has big tech ever acquired startups when it could have always funded the work in house? It's not always a technical answer. Sometimes it's internal political fights, time to market, reducing competition, PR reasons, or they just want to hire the founder to lead a team for that internally, and the only way he'll agree is if there is an exit plan for his employees. I've sat in "acquire or build" discussions before. The "how hard would it be to just do that?" question was just one of many inputs into the discussion. Ever wondered why big companies acquire a smaller one instead of investing in it, then shut it down a few years later?
1. Tom Preston-Werner (Co-founder). 2008 – 2014 (Out for, eh... look it up)
2. Chris Wanstrath (Co-founder). 2014 – 2018
(2018: Acquisition by Microsoft: https://news.ycombinator.com/item?id=17227286)
3. Nat Friedman (Gnome/Ximian/Microsoft). 2018 – 2021
4. Thomas Dohmke (Founder of HockeyApp, some A/B testing thing, acquired by Microsoft in 2014). 2021 – 2025
There is no GitHub CEO now; it's just a team/org inside Microsoft. (https://mrshu.github.io/github-statuses/)
Nat's company Xamarin was acquired by Microsoft in 2016.
HockeyApp wasn't A/B testing; it was a platform for iPhone, Mac, Android, and Windows Phone developers to distribute their beta versions (what TestFlight is today to the App Store), collect crash reports (what Sentry is today), gather user feedback, and get basic analytics.
I wrote the Ximian bit from obviously faulty memory (I now wonder if it was influenced by Miguel's early-2000s Bonobo obsession); the rest came from various Google searches. Should have gone deeper.
https://en.wikipedia.org/wiki/Ximian
Ximian, Inc. (previously called Helix Code and originally named International Gnome Support) was an American company that developed, sold and supported application software for Linux and Unix based on the GNOME platform. It was founded by Miguel de Icaza and Nat Friedman in 1999 and was bought by Novell in 2003.
...
Novell was in turn acquired by The Attachmate Group on 27 April 2011. In May 2011 The Attachmate Group laid off all its US staff working on Mono, which included De Icaza. He and Friedman then founded Xamarin on 16 May 2011, a new company to continue the development of Mono. On 24 February 2016, Microsoft announced that they had signed an agreement to acquire Xamarin.
The same very-online group endlessly hyping messy tech and frontend JS frameworks, oblivious to the Facebook- and Google-sized mechanics driving said frameworks, is now 100x-ing itself with things like "specs" and "tests" and dreaming big about type systems and compilers we've had for decades.
I don't wanna say this cycle is us watching Node jockeys discover systems programming in slow motion through LLMs, but it feels like that sometimes.
and there's even a "hide" link.
But I'm skeptical of building this as a separate platform rather than as tooling on top of git. The most useful AI dev-workflow improvements I've seen (Cursor rules, Aider conventions, Claude hooks) all succeeded precisely because they stayed close to existing tools. The moment you ask developers to switch their entire SDLC stack, adoption becomes the real engineering challenge, not the tech.
Curious whether the open source commitment means the checkpoint format itself will be an open spec that other tools can build on.
The answer, in case anyone wonders: because OpenAI is providing a general-purpose tool that has the potential to subsume most of the software industry; "we" are merely setting up toll gates around what will ultimately become a bunch of tools for LLMs, and trying to pass it off as a "product".
Or does it not work that way?
GitHub has always been mediocre and forgettable outside of the convenience that you might already have an account on the site. SVN was just shitty compared to git, and CVS was a crime against humanity.
Completely agree. I moved off GitHub for my personal projects and I don't miss it for a single nanosecond.
I mean, git was '05 and GitHub was '08, so it's not like the stats will say much one way or another. StackOverflow only added it to their survey in 2015. No source of truth, only anecdotes.
It's interesting to me that the only thing that made me vastly prefer using GitHub over Bitbucket is that GitHub prioritised showing the README over showing the source tree. Such a little thing, but it made all the difference.
"Buy my fancy oil for your coal shovel and the coal will turn into gold. If you pay for premium, you don't have to shovel yourself."
If everything goes right, there won't be a coal mine needed.
Versioning and tracking the true source code, my thoughts, or even the thoughts of other agents and their findings seems like a logical next step. A hosted central place for it, plus the infrastructure required to store the immense data created by constantly churning agents on the way to a result, is the challenge many here seem to be missing.
I wish you the best of luck with your startup.
I'm not just running it on code, but on my daily journal, and it produces actionable plans for building infrastructure to help me plan and execute better as a result.
This lesson has been learned over and over (see AppleScript) but it seems people need to keep learning it.
We use simple programming languages composed of logic and maths not just to talk to the machine but to codify our thoughts within a strict, internally consistent, and deterministic system.
So in no sense are the vague, imprecise instructions fed to LLMs the true source code.
I agree, at least with the thesis, that the more we "encode" the fuzzy ideas (as translated by an engineer) into the codebase, the better. This isn't the same thing as an "English compiler". It'd be closer to git commit messages: understanding why a change was happening, and what product decisions and compromises were being designed against.
You could ask that question about all the billions that went into crypto projects.
At the time, there were multiple code-hosting platforms like SourceForge, FSF Savannah, and Canonical's Launchpad.net, and most development was still done in SVN, with Git, Bazaar, and Mercurial being the upstart "distributed" VCSes, all with similar penetration.
A DVCS was definitely required. And I would say git won out due to Linus inventing and then backing it, not because of a platform that would serve it.
SVN didn't need checkouts to edit, as far as I recall? Perforce had that kind of model.
I am not sure; it seems I did misremember. Though it's possible I was actually working with needs-lock files. I can definitely see a certain coworker from that time putting this on all files :/
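(For anyone who never ran into it: svn:needs-lock makes the file read-only in every working copy until you take an explicit lock, which is exactly what made it feel like mandatory checkouts. A sketch of the workflow; the property and subcommands are real SVN, the file name is illustrative:)

    import subprocess

    def svn(*args: str) -> None:
        subprocess.run(["svn", *args], check=True)

    svn("propset", "svn:needs-lock", "yes", "design.psd")  # read-only in working copies from now on
    svn("commit", "-m", "require a lock on design.psd", "design.psd")
    svn("lock", "design.psd", "-m", "editing")  # writable for me, locked for everyone else
    # ... edit the file ...
    svn("commit", "-m", "update artwork", "design.psd")  # committing releases the lock by default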
Nobody cares if it makes sense, it just has to appear futuristic and avant-garde.
This is the point of that post, and helpfully it was added at the top as a TL;DR; it made up half of that two-sentence TL;DR. Will it succeed or not? Well, that's a coin toss; it always has been.
My code is 90% AI-generated at this point.
The described situation for human-written code isn't much better. What actually works is putting a ticket (or project) number in the commit message, and making sure everything relevant gets written up and saved to that centralized repository.
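A minimal sketch of enforcing that with a commit-msg hook (the PROJ-123 key style is an assumption; adjust the regex to your tracker):

    #!/usr/bin/env python3
    # Git commit-msg hook: reject commits that don't reference a ticket.
    # Install as .git/hooks/commit-msg and make it executable.
    import re
    import sys

    with open(sys.argv[1]) as f:  # git passes the path to the message file
        message = f.read()

    if not re.search(r"\b[A-Z][A-Z0-9]+-\d+\b", message):
        sys.stderr.write("commit rejected: no ticket number (e.g. PROJ-123)\n")
        sys.exit(1)  # any nonzero exit aborts the commit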
And once you have that, the level of detail you'd get from saving agent chats won't add much. Maybe unless you're doing deliberate study of how to prompt more effectively (but even then, the next iteration of models is just a couple months away)?
I think "provenance gap" or temporal history can be helped by understanding what you have asked agentic systems to write, understand things written, and verified them.
We aren't yet at a point where something large or extended can easily be pushed to agentic coding management; your point about provenance and memory is key here.
Doesn't that presume no value is being delivered by current models?
I can understand applying this logic to building a startup that solves today's AI shortcomings… but value delivered today is still valuable even if it becomes more effective tomorrow.
How do you simultaneously hold that the technology is so revolutionary because of its productivity gains, yet so esoteric that you'd better be on top of everything all the time?
This stuff is all like a weird toy compared to other things I have taken the time to learn in my career; the sense of expertise people claim comes off to me like a guy who knows the Taco Bell secret menu, or the best set of coupons to use at Target. It's the opposite of intimidating!
I kinda regret going through the SELU paper back in the late 2010s lol.