
Posted by latexr 7 days ago

Let us git rid of it, angry GitHub users say of forced Copilot features(www.theregister.com)
421 points | 302 comments
daemin 7 days ago|
I read this article, then looked at my GitHub and a few other projects, and found no issues created by Copilot. As someone else has said, they need to be triggered manually, so it's the same sort of problem as the Curl project's bug bounty, where people were spamming automatically generated, fictional LLM bug reports. In that case it was because there's potential money to be made; in the GitHub Copilot case, I guess, because they're trying to contribute to open source for whatever reason.

As far as Visual Studio Code goes, I've not really used it much, but it makes sense since it's Microsoft's free editor: you will be a product and you will be marketed to. I do use Visual Studio, though, and it does show Copilot in the UI by default, but there is an option to "hide Copilot" from the UI which does what is advertised. I will probably remove my important projects from GitHub anyway, though more to keep them out of LLM training than anything else.

latexr 7 days ago||
> and in the Github copilot case because I guess they're trying to contribute to open source for whatever reason.

The “whatever reason” can be to build a portfolio to apply for jobs. Or worse, to more quickly build trust to exploit vulnerable projects.

https://www.techdirt.com/2025/09/04/why-powerful-but-hard-to...

pvtmert 6 days ago|||
Whether or not GitHub itself creates these issues or pull requests, some folks will surely do it (manually). Hacktoberfest is coming soon, and so are the low-quality typo fixes. Now that there are Claude Code, Cursor, et al., I am really curious how people are going to fight the pull-request spam, especially open-source projects which claim they do not accept LLM-generated content.

P.S.: Most people just do it either to "light up" their GitHub profile for job applications or just to get cheap swag...

oefrha 7 days ago|||
Yeah, as a maintainer with fairly popular projects (at least more popular than any project from the linked issue reporters; I've checked), I've gotten exactly zero Copilot issues or PRs. As for useless review comments, lol, nothing beats useless comments from users (+1, entitled complaints, random drive-by review approvals serving god knows what purpose, etc.); you probably shouldn't be doing open source if you're annoyed by useless comments.

And good luck stopping people from pasting from ChatGPT or Gemini or whatever. Those are free, unlike Copilot agent PRs which cost money, which is part of why I don’t see any.

I guess some people just have too much time and will happily waste it on useless complaints.

the__alchemist 7 days ago|||
Same experience. Does anyone have info on this discrepancy in observations?
daemin 7 days ago|||
I read this article after it was shared on social media by the Codeberg.org account, so I thought it was a PR piece, as it doesn't mention self-hosting at all, just moving to another hosted platform.
TiredOfLife 6 days ago|||
The article is by theregister.com, basically The Onion of tech media.
pier25 6 days ago||
You can hide/disable all Copilot features in VSCode.
benrutter 7 days ago||
Tangential, but I think github's secret weapon of inertia is. . .(drumroll) github stars.

They're still seen by a lot of people as a sign of project maturity and use. My unfounded suspicion is that if they all disappeared tomorrow, people would be a lot more likely to try alternative code forges.

I've been using codeberg of late, more because of their politics than anything, but in all honesty the user experience between github/gitlab/codeberg/sourcehut/gitea is near identical.

ivanjermakov 7 days ago||
I think it's a lot harder for an OSS project not hosted on GitHub to find contributors and gain traction in general. Network effect, as always.
pornel 7 days ago|||
It depends where your contributors are coming from. For example for Rust, the crates index is the discovery mechanism. Contributors will come to your repo by whatever link you put in your package's metadata. I've split my Rust packages between GitHub and GitLab and don't see a difference in participation.
LtWorf 6 days ago|||
It's the way I want it, to be honest. It keeps the low-effort garbage away for the moment.
IshKebab 7 days ago|||
It's one factor but I think they have more important "secret weapons":

1. Network effects; people already have an account.

2. Free CI, especially free Mac and Windows CI.

wiether 7 days ago|||
You can add two more things:

- 2000 minutes of free compute time with GitHub Actions

- a free Docker Hub alternative with unlimited pulls (they say you're limited to 500 MB, but I currently have probably 20+ GB of images on my free account)

They have the community aspect AND the freebies

jazzyjackson 6 days ago|||
I never understood going by stars when there's a much stronger signal in how many issues are being tracked and closed. Very easy to see if it's software people actually use.
zahlman 6 days ago|||
> a much stronger signal in how many issues are being tracked and closed

This is a strong signal, but what it signals is confused. How much of it is the nature of the user base in actually reporting issues? Suppose the project receives regular fixes and issues are promptly closed on average — how much of that is because the project has to constantly respond to external factors, and how much is due to developers doing constant fire-fighting on an intrinsically poor design and not getting around to new functionality? Suppose there are lots of outstanding issues — how many of them are effectively duplicates that nobody's bothered to close yet?

dvfjsdhgfv 6 days ago|||
Yeah, this is the first thing I check, and also I verify a few closed ones to understand how they were handled.
LtWorf 6 days ago||
There are websites to buy stars :D It's like fake reviews.
3np 7 days ago||
I've had an ongoing support ticket with GH for several months now asking them to actually disable Copilot. Copilot is all over the place, and it's clear from inlined JSON on github.com pages when signed in that my account is actually opted in to Copilot features, despite the Settings page saying those features should be disabled. I've never ever opted in to anything related to GH AI and am not a VS Code user.

They keep closing the ticket and saying it's "with the engineering team". I keep reopening and asking for resolution, escalation, or progress.

GitHub did have working and professional support in the past but in 2025 they are just malicious.

It's surreal.

e40 7 days ago||
Please point to the ticket so we can add to your voice.
3np 6 days ago||
GitHub Support tickets are private and not publicly sharable.

I did mention it in this Discussions thread a while back (which the support agent at one point in July hilariously linked me to, asking if I had read it, making it clear they hadn't done so themselves).

https://github.com/orgs/community/discussions/147437#discuss...

Initial support response mentioned here:

https://github.com/orgs/community/discussions/147437#discuss...

progval 7 days ago||
Do you have an example of "inlined JSON on github.com pages"? I can't imagine what this looks like.
3np 7 days ago||
Not near an account right now, but literally just "View Source" when signed in and search for "copilot": it's there, along with some other feature flags for the user, in a JSON blob inside a script tag.
63stack 6 days ago|||
Not sure why this is downvoted; I just checked, and I do see a JSON object inside a <script type="application/json" id="client-env"> tag that has all kinds of copilot-related keys.

I checked my profile and copilot is enabled with a "lock" icon, I cannot disable it. I have never enabled it.
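A quick way to check this yourself is to grep the page source for the flag blob. A minimal sketch, with a made-up sample file standing in for "View Source" on a signed-in github.com page (the flag names below are illustrative, not actual GitHub feature flags):

```shell
# Sample page source resembling what the commenters describe: per-user
# feature flags in a JSON blob inside a script tag. Written to a temp
# file as a stand-in for the real page source.
page=$(mktemp)
cat > "$page" <<'EOF'
<script type="application/json" id="client-env">
{"featureFlags":["copilot_chat","copilot_new_ux","issues_react"]}
</script>
EOF

# List any copilot-related flags found in the blob.
grep -o '"copilot[^"]*"' "$page" | sort -u
```

On a real page you'd save the source from the browser and run the same grep over it.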

YJfcboaDaJRDw 7 days ago|||
[dead]
fatchan 7 days ago||
GitHub is my push --mirror location, nothing more. My main is a popular GitLab instance, gitgud.io, and I host my own secondary mirror.
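A minimal sketch of this mirroring arrangement, using local throwaway repos in place of the primary host and the GitHub-side mirror:

```shell
# Primary repo with one commit, plus a bare repo standing in for the
# GitHub-side mirror (all local throwaway paths).
set -e
work=$(mktemp -d)
git init -q "$work/primary"
git -C "$work/primary" -c user.email=dev@example.org -c user.name=dev \
    commit -q --allow-empty -m "initial commit"
git init -q --bare "$work/mirror.git"

# `push --mirror` forces the mirror's refs (all branches and tags) to
# exactly match the source, deletions included.
git -C "$work/primary" remote add mirror "$work/mirror.git"
git -C "$work/primary" push -q --mirror mirror

git -C "$work/mirror.git" log --oneline
```

In practice the `mirror` remote URL would be the GitHub repo, and the push can run from a post-commit hook or a CI job so the mirror stays passive.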

GitLab is of course adding more AI and corpo garbage, and once they prevent disabling these "features" in the community edition, we'll see a fork of GitLab, probably.

The assertion that GitHub is some bustling hub of opportunity is a strange one. At best you get people more likely to contribute because they've already signed up; and a contribution from somebody not willing to sign up for another free service, or to simply email you an issue report, is a contribution worth missing.

clickety_clack 7 days ago||
Yep, the headline on the Gitlab landing page is now “Build software, not toolchains. With native AI at every step.”

I’d love to find a stripped-down solution focused on hosting code repos. I don’t think GitHub sees it as its core business anymore.

Hasnep 6 days ago|||
I've been using Codeberg.org recently and really enjoying it. I just wish the CI situation was better, but I'll probably just host my own instead.
chaz6 5 days ago|||
My preference is Forgejo:

https://forgejo.org/

skydhash 7 days ago||
I think it’s mostly people around the JS/Go/Rust ecosystems that tend to be vocal about GitHub being a community. For a lot of projects I couldn’t care less if it was just cgit or gitea.

It’s quite easy to set up git to send patches via email. And you can always use a pastebin to host the diff if you’re sharing ideas. But I guess that’s not as visible as the GitHub dashboard.
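The email patch flow really is just two standard git commands, sketched here against a throwaway repo (the recipient address is a placeholder, and actually sending mail needs `sendemail.*` SMTP settings configured):

```shell
# Throwaway repo with a single commit to generate a patch from.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/demo"
echo "hello" > "$tmp/demo/README"
git -C "$tmp/demo" add README
git -C "$tmp/demo" -c user.email=dev@example.org -c user.name=dev \
    commit -q -m "demo: add README"

# format-patch writes one mbox-style file per commit, named after the
# commit subject.
git -C "$tmp/demo" format-patch -1 HEAD -o "$tmp/out"

# Sending would then be (placeholder address, SMTP config required):
#   git send-email --to=maintainer@example.org "$tmp"/out/0001-*.patch
ls "$tmp/out"
```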

api 7 days ago||
What is the actual rationale behind some companies literally shoving AI down people’s throats?

It’s fascinating stuff and can be very useful. Why does it have to be rammed so hard? I’ve never quite seen anything like this.

Or maybe I have. It reminds me a little of the obviously astroturfed effort to ram crypto down people’s throats. But crypto was something most people didn’t have any actual utility for. A magic tireless junior intern who had memorized the entire Internet is actually useful.

marginalia_nu 7 days ago||
KPIs are likely the missing part of the puzzle. CEO wants AI engagement to go up, organization makes AI engagement go up.

If users don't want to engage with new AI features, the new AI features become unavoidable so that engagement goes up despite user preferences.

KPIs are a fantastic way for an organization to lose any touch with reality and can drive some truly bizarre decision-making.

api 7 days ago||
Ahh, the reason we lost the Vietnam war.

https://en.m.wikipedia.org/wiki/McNamara_fallacy

marginalia_nu 7 days ago||
"AI is the Vietnam of product management" is a blog post that almost writes itself. Not really my field so if anyone wants to take it for a spin, go ham.
a4isms 7 days ago|||
Is AI the Vietnam of Product Management in the same way that Object-Relational Mapping is the Vietnam of Computer Science?

https://web.archive.org/web/20220823105749/http://blogs.tedn...

marginalia_nu 7 days ago||
Yeah that's what I was alluding to.
a4isms 7 days ago||
A classic of its era, thanks! And there's something enduring about the basic issue of integrating systems that are fundamentally misaligned.
api 7 days ago|||
Have ChatGPT write it

This isn’t AI specific though. The whole industry runs this way because thoughtful decision making doesn’t scale easily. KPIs are easier.

marginalia_nu 7 days ago||
Kinda weird though: like with many things in the industry, we seem to be doing things that are far from optimal, but for some reason organizations that do things differently aren't winning the competition.

Ostensibly, most successful software is written in languages that aren't very good, with development methodologies that aren't very good, in organizational structures that aren't very good. Where is the existence proof? Why isn't software written in a good language, using a good methodology, with sane management, winning the race?

grayhatter 7 days ago||
Because quality has never been the driver of survival. It has always been survival. The race is a race of endurance, not a competition on merits. Humans would undoubtedly be better without scars, but the same anomaly that gave us scar tissue gave us faster wound healing, increasing survival. But scars don't go away, even when it'd be better if they did; there's no selection pressure there because it's good enough. IBM still exists despite its inability to make good decisions; it makes decisions that allow it to survive. It could just as easily make different decisions, and I would have named a different company. Not because it made different decisions, but because it survived. Why do companies that survive make these weird decisions? They don't; there is no back pressure. Decisions don't matter, survival does.

The race isn't a competition, it's a death march. If you want to 'win' the death march, prioritize survival above everything else, especially quality and correctness.

(I don't strictly follow this philosophy myself; a good engineer will always ask "why not both?" Just make sure you identify endurance as the most important strategy.)

marginalia_nu 7 days ago||
This seems like an orthogonal concern. I don't see why quality and correctness would anti-correlate with survivability. I'll ask the same question again: out of all the highly survivable businesses, why are so many seemingly dysfunctional?
grayhatter 7 days ago||
That orthogonality is my exact point. I believe you're correct; quality and correctness aren't negative pressures on survival. If anything, they should support survival, and I'd assume they should also have a slight positive pressure on adoption/growth.

But I'd hope you'd admit quality and correctness aren't free attributes? They do have a cost. I can churn out low-quality code way faster than I can produce code I'm proud of. If I attach myself to the quality of the code, get stumped by some bug, become frustrated, and take a break from project_a to work on something else, and while working on project_b to "clear my mind" I fall in love with project_b, or it gets more popular, or whatever that "pressure" happens to be... project_a has no remaining developers, and it is dead now. Thus, quality has had a negative impact on its survival.

Survivability has a tenuous connection to, and dependence on, quality and correctness (which I believe are synonyms for the same core idea?).

But why are so many businesses (the ones that still survive) so demoralizingly dysfunctional? Because they're run by individuals who don't value quality and correctness above [other attribute]. When given the choice between increasing money (which is effectively the exact same thing as market share; and when talking about survival, popularity is the same thing as survivability) and increasing quality, they will always make the decision that ensures their survival (by chance, not by intent; that's the orthogonality). Eventually, they'll turn that knob too far, degrade their quality enough, and create an ecological niche for someone else to take over. (A competitor that maybe they acquire before it causes a real risk to their survival/popularity, again choosing to make money/survive over a decision targeting quality.)

Would *you* rather make money, or write something high quality? I use and love Marginalia, so I think I can guess the answer. (Thank you so much for building something that actually meaningfully improves the internet, btw!) Are there decisions you could make that would trade quality to become more popular, or make more money? Yes, I'm sure, but you don't seem to be trying to become the next Google.

acuozzo 5 days ago|||
> What is the actual rationale behind some companies literally shoving AI down people’s throats?

It's propping up the US economy, and businesses mostly look at B2B signals. Keeping "demand" for AI high at, e.g., Microsoft keeps "demand" high at NVIDIA, CoreWeave, et al.

All of the boats are floating in the bathtub and nobody wants to be the one to pull the drain plug.

Traubenfuchs 7 days ago||
Not old enough for MongoDB? Big Data?
api 7 days ago||
I lived through that but it wasn’t like this. Hype isn’t the same thing as having something rammed down your throat with constant nag pop ups and dark patterns.
dboreham 7 days ago||
It's more like the pop-under era.
AlexandrB 7 days ago||
I find it weird how companies talk out both sides of their mouth on AI. On the one hand it's this magical tool that will make you 10x more efficient at your job, and on the other it's something they have to market heavily and shove in your face at every turn - sometimes outright forcing you to engage with it. These two things don't seem compatible - if the tool was that good people would be beating down their doors to get it.
hyperpape 7 days ago||
From Dan Luu (https://danluu.com/wat/):

> When I joined this company, my team didn't use version control for months and it was a real fight to get everyone to use version control. Although I won that fight, I lost the fight to get people to run a build, let alone run tests, before checking in, so the build is broken multiple times per day. When I mentioned that I thought this was a problem for our productivity, I was told that it's fine because it affects everyone equally. Since the only thing that mattered was my stack ranked productivity, so I shouldn't care that it impacts the entire team, the fact that it's normal for everyone means that there's no cause for concern.

Do not underestimate the ability of developers to ignore good ideas. I am not going to argue that AI is as good as version control. Version control is a more important idea than AI. I sometimes want to argue it's the most important idea in software engineering.

All I'm saying is that you can't assume that good ideas will (quickly) win. Your argument that AI isn't valuable is invalid, whether or not your conclusion is true.

P.S. Dan Luu wrote that in 2015, and it may have been a company that he had already left. Version control has mostly won. Still, post-2020, I talked to a friend whose organization did use git, but whose deployed software didn't correspond to any version checked into git, because they were manually rebuilding components and copying them to production servers piecemeal.

rossdavidh 7 days ago|||
All true, but the argument for AI is that it makes you far more productive as an individual, which if true should be an easy sell. In fact, some developers are quite committed to it, with a fervor I've not seen since the "I'm never going back to the office" fervor a few years ago. Version control is more of a "short term pain for long term gain" kind of concept; it is not surprising some people were hard to convince. But "AI" promises increased productivity as an individual, in the here and now. It should not be a hard sell if people found it to work as advertised.
crazygringo 7 days ago|||
> it makes you far more productive as an individual, which if true should be an easy sell

Writing unit tests where needed makes you more productive in the long run. Writing in modern languages makes you more productive. Remember how people writing assembly thought compiled languages would rot your brain!

But people just resist change and new ways of doing things. They don't care about actual productivity, they care about feeling productive with the tools they already know.

It's a hard sell when an application moves a button! People don't like change. Change is always a hard sell to a lot of people, even when it benefits them.

jacobolus 6 days ago|||
To the contrary, people resist change for good reasons: changes to tools rob attention and focus from the work, often for completely arbitrary or decorative reasons. Sometimes changes remove or break important aspects of the tool and force someone to waste time developing a new workflow which is, on average, no better than the previous one. It is vanishingly rare that the software team making the changes in question did sufficiently rigorous testing to show that the new version is a net "benefit" for most users of the software; they don't have time for that. All too often, no significant group of users was even consulted about the changes, which were made for reasons like advancing someone's career ("shipped X feature changes") or looking different for the sake of marketing something merely re-arranged as new ("the old style was so 2018").

The teams making changes to software are, on average, moderately worse than the teams who originally developed the software, if only because they missed out on the early development experience, and often don't fully understand the context and reasons for the original design and don't reason from first principles when making updates, but copy the aspects they notice superficially while undermining the principles they were originally established on.

Even when the changes are independently advantageous, it is common for changes to one part of a system to gratuitously break a variety of other parts that are dependent on it. Trying to manage and fix a complex web of inter-dependent software which is constantly changing and breaking is an overwhelming challenge for individual humans, and unfortunately often not a sufficient priority for groups and organizations.

ThrowawayR2 6 days ago||||
> "Remember how people writing assembly thought compiled languages would rot your brain!"

No, I don't remember that, and I've been around a while. (I'm sure one could find a handful of examples of people saying that, but one can find examples of people sincerely saying the earth is flat.) It was generally understood that the code emitted by early, simple compilers on early CISC processors wasn't nearly as good as hand-tuned assembly code, but that the trade-off could be worthwhile. Eventually, compilers did get good enough to reduce the cases where hand-tuned assembly could make a difference to essentially nothing, but this was identified through benchmarking by the people who used assembly the most themselves.

If you want to sell us on change, please stop lying right to our faces.

compiler-guy 6 days ago||
Note also that it took, more or less, a hardware revolution in the form of RISC, to make compilers able to compete. A big piece of the RISC philosophy was to make it easier for compiler writers.

They eventually got there, (and I expect AI will eventually get there too), but it took a lot of evolution.

tines 6 days ago||
Really? X86 isn’t RISC and it ruled the world during, not before, the time of compilers.
hyperman1 6 days ago|||
Starting with the 386, the ISA got a lot more compiler-friendly. Up to the 286, each register had a specialised task (AX, CX, DX, BX mean Accumulator, Count, Data, Base register). Instructions worked with specific regs only (xlat, loop). When the 386 and 32 bits happened, the instructions became more generic and easily combinable with any register. I remember people raving over the power of the SIB byte, or the possibility of multiplying any pair of registers. While not RISC, the ISA clearly got easier for compilers to work with, and I remember reading in magazines that this was an explicit design intention.
compiler-guy 6 days ago|||
Lots of x86 assembly out there from that time period. Beating the compiler in the eighties and nineties was a bit of a hobby and lots of people could do it.

Modern ISA designers (including those evolving the x86_64 ISA) absolutely take into account just how easy it is for a compiler to target their new instructions. x86 in modern times has a lot of RISC influence once you get past instruction decode.

non_aligned 6 days ago||||
> Writing unit tests where needed makes you more productive in the long run.

Debatable? It has positive effects for organizations and for the society, but from a selfish point of view, you gain relatively little from writing tests. In your own code, a test might save you debugging time once in a blue moon, but the gains are almost certainly offset by the considerable effort of writing a comprehensive suite of tests in the first place.

Again, it's prudent to have tests for more altruistic reasons, but individual productivity probably ain't it.

> Writing in modern languages makes you more productive.

With two big caveats. First, for every successful modern language that actually makes you more productive, there are 20 that make waves on HN but turn out to be duds. So some reluctance is rational. Otherwise, you end up wasting time learning dead-end languages over and over again.

Second, it's perfectly reasonable to say that Rust or whatever makes an average programmer more productive, but it won't necessarily make a programmer with 30 years of C++ experience more productive. This is simply because it will take them a long time to unlearn old habits and reach the same level of mastery in the new thing.

My point is, you can view these through the prism of rational thinking, not stubbornness. In a corporate setting, the interests of the many might override the preferences of the few. But if you're an open-source developer and don't want to use $new_thing, I don't think we have the moral high ground to force you.

hamburglar 6 days ago|||
> In your own code, a test might save you debugging time once in a blue moon

It’s much more than this. You feel it when you make a change and you are super confident you don’t have to do a bunch of testing to make sure everything still behaves correctly. This is the main thing good automated tests get you.

genghisjahn 6 days ago|||
What are 20 dud languages that have been hyped on HN? Not meaning to snark, serious question.
compiler-guy 6 days ago||||
Early compilers really did suck. They were long term big wins for sure, but it wasn't unreasonable for someone who was really good at hand assembly, on tightly constrained systems, to think they could beat the compiler at metrics that mattered.

Compilers did get better, and continue to--just look at my username. But in the early days one could make very strong, very reasonable, cases for sticking with assembly.

haskellshill 7 days ago||||
> Remember how people writing assembly thought compiled languages would rot your brain!

Well, how'd you describe web apps of today if not precisely brainrot?

> They don't care about actual productivity, they care about feeling productive

Funny you'd say that, because that describes a large portion of "AI coders". Sure they pump out a lot of lines of code, and it might even work initially, but in the long run it's hardly more productive.

> It's a hard sell when an application moves a button!

Because usually that is just change for the sake of change. How many updates are there every day that add nothing at all? More than updates that actually add something useful, at least.

bigstrat2003 6 days ago|||
> Change is always a hard sell to a lot of people, even when it benefits them.

You're assuming that the change is beneficial to people when you say this, but more often than not that just isn't true. Most of the time, change in software doesn't benefit people. Software companies love to move stuff around just to look busy, ruin features that were working just fine, add user hostile things (like forcing Copilot on people!), etc. It should be no surprise that users are sick of it.

krinchan 7 days ago|||
As someone who started out a GenAI skeptic, I’ve found the truth is in the middle.

I write a TON of one off scripts now at work. For instance, if I fight with a Splunk query for more than five minutes, I’ll just export the entire time frame in question and have GHCP (work mandates we use only GHCP) spit out a Python script that gets me what I want.

I use it with our internal MCP tools to review pull requests. It surfaces questions I didn’t think to ask about half the time.

I don’t know that it makes me more productive, but it definitely makes me more attentive. It works great for brainstorming design ideas.

The code generation isn’t entirely slop either. For the vast majority of corporate devs below Principal, it’s better than what they write, and it’s basic CRUD code. So that’s where all the hyper-productive magical claims come from. I spend most of my days lately bailing these folks out of a dead-end foxhole GHCP led them into.

Unfortunately, it’s very much a huge time sink in another way. I’ve seen pretty linear growth in M365 Copilot surfacing 5-year-old Word documents to managers, resulting in big emails of outdated GenAI slop best summarized as “I have no clue what I’m talking about and I’m going to make a terrible technical decision that we already decided against.”

clickety_clack 7 days ago||
What is GHCP?
hydhyd 7 days ago||
It appears to be GitHub Copilot
clickety_clack 7 days ago||
Ah! I was trying to fit 4 words into the acronym, like “GitHub Hosting Cloud Platform” or something.
dcminter 7 days ago||||
It's an excellent point - but a lot of the pressure to use AI in orgs is top-down and I've never seen that with useful tech tools before; they always percolated outward from the more adventurous developers. This makes me wary of the AI enthusiasm, even though I acknowledge that there is some genuine value here.
bwfan123 7 days ago|||
I felt the same way. The analogy I use is management dictating the tech stack to use across the org. It does not make any sense! They need to stay in their lanes and let engineering teams decide what is best for their work.

Big tech's general strategy is get-big-fast and then become too-big-to-fail. This was followed by Facebook, Uber, PayPal, etc. The idea is to embed AI into people's daily behaviors whether they like it or not, and hook them. Then, once hooked, developers will clamor for it whether it is useful or not.

wetpaws 7 days ago||
[dead]
crazygringo 7 days ago||||
> I've never seen that with useful tech tools before

I've seen it all the time. Version control, code review, unit testing, all of these are top-down.

Tech tools like git instead of CVS and Subversion, or Node instead of Java, may be bottom-up. But practices are very much top-down, so I see AI fitting the pattern very well here. It feels very similar to code review in terms of the degree to which it changes developer practices.

dcminter 7 days ago|||
Nope, all of those things were dev driven until they'd diffused out as far as management and only then did they start getting enforced top-down. Often in awful enterprise software ways actually.
crazygringo 7 days ago||
But that's what I'm saying.

Obviously developers invented these things and initially diffused the knowledge.

But you're agreeing with me that they then got enforced top-down. Just like AI. AI isn't new or different like this. Developers started using LLM's for coding, it "diffused" so management became aware, and then it becomes enforced.

There's a top-down mandate to use version control or unit testing or code review or LLM's. Despite plenty of developers initially hating unit tests. Initially hating code review. These things are all standard now, but weren't for a long time.

In contrast to things like "use git not Subversion" where management doesn't care, they just want you to use a version control.

dcminter 6 days ago|||
Sigh, enforced is always top down, sure, if you want to be pedantic. But normally the process starts with enthusiastic devs, propagates out through other devs until a consensus is reached (e.g. source control is the only sane way) and then management starts to enforce it - often with a crappy enterprise take on the basic idea (I'm looking at you IBM Team Connection and Microsoft Visual SourceSafe).

AI seems to have primarily been pushed top-down from management long before any consensus has been reached from the devs on what it's even good for.

This is unusual; I suspect the reason is that (for once) the tech is more suitable for management functions than the dev stuff. Judging from the amount of bulletpointese generation and condensation I've seen lately anyway.

crazygringo 6 days ago||
It's not pedantic, it's the very issue being discussed.

And there have been plenty of enthusiastic devs regarding LLM's.

And the idea that practices wait "until a consensus is reached" is just not true. These practices are often adopted with 1/3 of devs on board and 2/3 against. The whole point of top-down directives is that they're necessary because there isn't broad consensus among employees.

It was the same thing with mobile-first. A lot of devs hated it while others evangelized it, but management would impose it and it made phones usable for a ton of things that had previously been difficult. On the balance, it was a helpful paradigm shift imposed top-down even if it sometimes went overboard.

dcminter 6 days ago||
Do you know a lot of devs who, having tried VCS, were against it?
crazygringo 6 days ago||
I lived through the transition, so absolutely.

Early VCS was clunky and slow. If one dev checked out some files, another dev couldn't work on them. People wouldn't check them back in quickly, they'd "hoard" them. Then merges introduced all sorts of tooling difficulties.

People's contributions were now centrally tracked and could be easily turned into metrics, and people worried (sometimes correctly) management would weaponize this.

It was seen by many as a top-down bureaucratic Big Brother mandate that slowed things down for no good reason and interfered with developers' autonomy and productivity. Or even if it had some value, it wasn't worth the price devs paid in using it.

This attitude wasn't universal of course. Other devs thought it was a necessary and helpful tool. But the point is that tons of devs were against it.

It really wasn't until git that VCS became "cool", with a feeling of being developer-led rather than management-led. But even then there was significant resistance to its new complexity, in how complicated it was to reason about its distributed nature, and the difficulty of its interface.

watwut 6 days ago|||
No, not just like AI. The difference is that these things were pushed by people on the bottom for years and run successfully before management top caught up. Like, years and years.

AI does not have such a curve. It is top-down, from the start.

watwut 7 days ago|||
None of them was top down in companies I worked in at the time. They were all stuff developers read about and then pressured management and peers to start using.

Management caught up and started to talk about them only years later.

dcminter 6 days ago||
Yeah, RCS was what, early 80s? Devs I knew were mostly on CVS by mid 90s and around the time Subversion became common (late 90s) things like PVCS and Visual Source Safe were starting to be required by management. Perhaps a bit earlier with super technical orgs. That's a much more typical flow.
nlawalker 7 days ago||||
I think it's coming from both places, it's just that the top-down exhortations are so loud and insistent.

I wasn't around to experience it but my understanding is that this is what happened in the 90's with object oriented programming - it was a legitimately useful idea that had some real grassroots traction and good uses, but it got sold to non-technical leadership as a silver bullet for productivity through reuse.

The problem then, as it is now, is that developer productivity is hard to measure, so if management gets sold on something that's "guaranteed" to boost it, it becomes a mandate and a proxy measure.

bink 7 days ago||
I think that's a good comparison. I was around in the 90s and I do remember OOP being pushed by all sorts of people who weren't coders. It was being pushed as the "proper" way to code regardless of the language, size, platform, or purpose of the program in question.
kace91 7 days ago||||
We might be in the rare case where the current smoke and mirrors fad in leadership happens to be something actually useful.

Let’s not let the smoke and mirrors dictate how we use the tool, but let us also not dismiss the tool just because it’s causing a fad.

dcminter 6 days ago||
I'm wary rather than skeptical I think. There's clearly value here. Whether we're paying the true costs or not, however, won't be clear until all the VC fumes have cleared.

Much like the internet era actually - obviously loads of value, but picking out the pets.coms from the amazon.coms ... well, it wasn't clear at the time which was which; probably both really (we buy our petfood online) except that only one of them had the cash reserves to make it past the dot com crash.

groby_b 7 days ago|||
AI is the first dev tool that makes a difference that is immediately noticeable even for higher layers, that's why they apply pressure.

The core problem, as OP called out, is change aversion. It's just that for many previous useful changes, management couldn't immediately see the usefulness, or there would've been pressure too.

Let's not forget that well-defined development processes with things like CI/CD, testing, etc only became widespread after DORA made the positive impact clearly visible.

Let's face it: Most humans are perfectly fine with the status quo, whatever the status quo. The outward percolation of good ideas is limited unless a forcing function is applied.

whateveracct 6 days ago||
Can execs suddenly reliably measure productivity? Or does AI just give them easier-to-measure, short-term benefits?
groby_b 5 days ago||
They certainly realize when work is significantly accelerated. They might miss that it doesn't apply to all kinds of work equally, but they see some things go multiples faster, they see the folks behind those things using AI, and they draw conclusions.
whateveracct 2 days ago||
Their view of work is roadmap chicanery tho
hn_throwaway_99 7 days ago||||
Surprisingly enough, and pretty ironically given this discussion is about GitHub, the company Dan Luu is talking about there is Microsoft (specifically the SmartNIC team), based on his LinkedIn description of his 2015-2016 job.
bgwalter 7 days ago||||
Version control has quickly won. It was so popular that people kept writing new systems all the time. CI was popular. Most major open source projects had their own CI systems before GitHub.

"AI" on the other hand is shoved down people's throats by management and by those who profit from it in some way. There is nothing organic about it.

hyperpape 7 days ago||
Version control is almost 50 years old. It has very slowly won.

AI adoption is, for better or worse, voluntarily or not, very fast compared to other technologies.

Calavar 6 days ago|||
Version control took a while because most early version control systems were brittle and had poor developer UX. Once we got mercurial and git and nice web UIs, the transition was actually pretty fast IMHO.

The same could be true for coding agents too, or maybe not. Time will tell.

jabwd 7 days ago||||
.... which is the problem here. The internet took decades. The iPhone didn't change anything this quickly either. We're seeing massive brain rot in many studies, and no real-world data that actually shows productivity gains.

This adoption rate / shoving is insane. It is not based on anything but dollars.

utyop22 7 days ago||
The way I think of it is the difference between financial wealth and real wealth.

No new real wealth can be created but financial wealth may transfer from the firms buying stuff to the large tech firms - thereby creating new financial wealth for big tech stockholders. In the long run the two should converge - but in the short run they can diverge. And I think that’s what we are seeing.

therein 6 days ago|||
The fervor with which some feel the need to defend AI is what is incredible. Adoption, innovation, impact, not so much.

The attempt to compare it with Version Control, with sliced bread, with plumbing and sanitation practices. Think of any big innovation and compare AI with it until people give in and accept this is the biggest bestest thing ever to have happened and it is spreading like wildfire.

Even AI wouldn't defend itself this passionately but it conquered some people's hearts and minds.

CamperBob2 7 days ago||||
Sounds like a company full of seriously-terrible developers, from which no valid general conclusions can be drawn.

I use AI a lot myself, but being forced to incorporate it into my workflow is a nonstarter. I'd actively fight against that. It's not even remotely the same thing as fighting source control adoption in general, or refusing to test code before checking it in.

sys_64738 7 days ago|||
[flagged]
hluska 7 days ago||
[flagged]
tho2342o349423 7 days ago|||
At this point, pretty much all of the US markets (and the USD) are hinging on the "unlimited upside" promised by techbros and their magic AGIs & robots. They probably get orders from all the way up the food chain to keep the show going.

Wonder what'll happen to JPY once the Yen-carry unwinds from this massive hype-cycle - will probably hit 70 JPY to the dollar! Currently Sony Bank in Japan offers USD time-deposits at 8% pa. - that's just insanely high for what is supposed to be a stable developed economy.

chubot 7 days ago|||
    They probably get orders from all the way up the food chain to keep the show going.

Honestly I think the same thing happened with self-driving cars ~10 years ago.

Larry Page and Google's "submarine" marketing convinced investors and CEOs of automakers and tech companies [1] that they were going to become obsolete, and that Google would be taking all that profit.

In 2016, GM acquired Cruise for $1 billion or so. It seems like the whole thing was cancelled in 2023, written off, and the CEO was let go.

How much profit is Waymo making now? I'm pretty sure it's $0. And they've probably gone through hundreds of billions in funding.

How's Tesla Autopilot doing? Larry also "negatively inspired" Elon to start OpenAI with other people

I think if investors/CEOs/automakers had known how it was going to turn out, and how much money they were going to lose 10 years later, they might not have jumped on the FOMO train

But it turns out that AI is a plausible "magic box" that you extrapolate all sorts of economic consequences from

(on the other hand, hype cycles aren't necessarily bad; they're probably necessary to get things done. But I also think this one is masking the fact that software is getting worse and more user hostile at the same time. Probably one of the best ways to increase AI adoption is to make the underlying software more user hostile.)

[1] I think even Apple did some kind of self-driving car thing at one point.

bookofjoe 7 days ago|||
Apple car project

https://en.wikipedia.org/wiki/Apple_car_project

>From 2014 until 2024, Apple undertook a research and development effort to develop an electric and self-driving car,[1] codenamed "Project Titan".[2][3] Apple never openly discussed any of its automotive research,[4] but around 5,000 employees were reported to be working on the project as of 2018.[5] In May 2018, Apple reportedly partnered with Volkswagen to produce an autonomous employee shuttle van based on the T6 Transporter commercial vehicle platform.[6] In August 2018, the BBC reported that Apple had 66 road-registered driverless cars, with 111 drivers registered to operate those cars.[7] In 2020, it was believed that Apple was still working on self-driving related hardware, software and service as a potential product, instead of actual Apple-branded cars.[8] In December 2020, Reuters reported that Apple was planning on a possible launch date of 2024,[9] but analyst Ming-Chi Kuo claimed it would not be launched before 2025 and might not be launched until 2028 or later.[10]

In February 2024, Apple executives canceled their plans to release the autonomous electric vehicle, instead shifting resources on the project to the company's generative artificial intelligence efforts.[11][12] The project had reportedly cost the company over $1 billion per year, with other parts of Apple collaborating and costing hundreds of millions of dollars in additional spend. Additionally, over 600 employees were laid off due to the cancellation of the project.[13]

andrepd 7 days ago||
[flagged]
tomhow 6 days ago|||
Please don't post ideological flamebait like this on HN. We've had to ask you repeatedly to observe the guidelines in recent months. Hacker News is only a place where people want to participate because others make an effort to keep the standards up. Please do your part to raise the level, rather than dragging it down. This line from the guidelines is particularly relevant:

Please don't use Hacker News for political or ideological battle. It tramples curiosity.

https://news.ycombinator.com/newsguidelines.html

ch4s3 7 days ago|||
Ahh yes, capitalists, noteworthy haters of building trains. If you ignore the private companies that built the NYC subway system(s), all of US freight rail, and invented trains.
andrepd 7 days ago||
It's no coincidence all your examples are over 100 years old lol. The rise of the individual automobile (a very lucrative business proposition in many aspects) was done alongside massive sabotage of competing alternatives, with the disastrous consequences that are today plain.

It's also no coincidence America has built no rail in many decades while centrally planned China built a massive HSR network in the past 15 years.

ch4s3 5 days ago|||
Surely you don’t think economics is the only difference here. You could even more easily explain it with a lack of property protection in China or the population density along the east coast.
evidencetamper 6 days ago|||
Do you have any references?
chubot 7 days ago||||
Also, I think Hacker News mostly believed the hype about self-driving cars, with relatively little pushback. Many people were influenced by what the CEOs/investors said, and of course the prospect of jobs and "cool tech"

e.g. in 2018, over 7 years ago, I was simply pointing out that people like Chris Urmson (who had WORKED ON self-driving for decades) and Bill Gurley said self-driving would take 25+ years to deploy (which seems totally accurate now)

https://news.ycombinator.com/item?id=16353541

And I got significant pushback

Actually I remember some in-person conversations with MUCH MORE push back than that, including from some close friends.

They believed things because they were told by the media it would happen

People told me in 2018 that their 16 year old would not need to learn how to drive, etc. (In 2025, self-driving is not available in even ONE of their end points for a trip, let alone two end points)

Likewise, at least some people are convinced now that "coding as a job is going away" -- some people are even deathly depressed about it

pessimizer 7 days ago|||
Hacker News fell for f'n 3D TVs.

Hacker News goes for anything that they think they might be able to make money off of, just like all middle-class people. They evaluate events based on how they could affect them personally. Actual plausibility isn't even secondary, they simply defer to the salesmen (whom they admire and hope one day to be.)

hluska 7 days ago|||
[flagged]
chubot 7 days ago||
[flagged]
scns 6 days ago||
People believe what they want. Ergo they side with the people who state that those beliefs are true, which makes them feel good. Listening to experts who know better and oppose the held beliefs makes the believers feel bad, ergo they won't believe them.

Sometimes it is good to disregard the opinion of experts who are absolutely sure something can't be done; it might be a prerequisite to making it happen.

The four minute mile comes to mind.

Beliefs are powerful, they can enable you to reach goals, become prisons of the mind trapping you or become delusions when feedback is disregarded.

rozab 6 days ago||||
I recently watched Not Just Bikes' video on the disastrous future side effects of self-driving cars[0]. Of course it made me think about the massive PR push that made us think they were around the corner, but also about the manufactured consent for these technologies in the first place. Right now this kind of discussion is hitting the mainstream with the 'clanker'[1] backlash. I think it's really obvious to a lot of people that the AI push is not organic and is not based around consumer needs, and this manipulation is making people genuinely angry[2] (ok jreg is a performance artist, but just because something is performative doesn't mean it's not real).

[0]: https://youtu.be/040ejWnFkj0?si=7yI3eKkirJdTWPwR [1]: https://en.wikipedia.org/wiki/Clanker [2]: https://youtu.be/RpRRejhgtVI?si=aZUVcsY8VyR_jbBA

bee_rider 7 days ago||||
Wonder how far along we’d be on the path to self driving without any hype cycles…

I suspect stuff like lane following assist and adaptive cruise control

1) will ultimately provide the path to self driving eventually

2) wasn’t particularly helped by the hype cycle

1 is impossible to say at this point; for 2, I guess somebody who works in the field can come along and correct me.

marcosdumay 7 days ago||
There are commercial self-driving cab services operating in a couple dozen cities right now.

That's where we are.

jibe 7 days ago||||
GM has a 6-month attention span; abandoning self-driving is suicidal short-term thinking.

Waymo has been slow and steady, and has built something pretty great.

AlotOfReading 6 days ago||||

    In 2016, GM acquired Cruise for $1 billion or so. It seems like the whole thing was cancelled in 2023, written off, and the CEO was let go
It was shut down because they had a collision that made front page news across the country which was followed by a cover-up. Their production lines were shut down, all revenue operations ceased, and the permits they needed to operate were withdrawn. It's not like the decision was random.

    How much profit is Waymo making now? I'm pretty sure it's $0. 
Profit is a fuzzy concept for even the most transparent private companies, but Waymo's revenue is likely in the hundreds of millions. They've received around $12B in funding, not hundreds of billions.
fragmede 6 days ago||
Waymo's done 10M rides*. If we hand wave $10 per ride, that's $100M. Which is way more money than I have, but not all that much. It's still much bigger than $0 though!

* https://www.cbtnews.com/waymo-hits-10m-driverless-rides-eyes...

AlotOfReading 6 days ago||
That number was originally announced in May. They've completed an additional 3M rides since then if they've done no additional scaling. Their average fares are also closer to $15-20 than $10.
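For what it's worth, the revenue guess pencils out either way; a quick sketch, with the ride counts and average fares treated as rough assumptions rather than reported figures:

```python
# Back-of-the-envelope Waymo cumulative revenue from the figures above.
# Ride counts and average fares are rough public estimates, not company numbers.
rides_low, rides_high = 10_000_000, 13_000_000  # cumulative paid driverless rides
fare_low, fare_high = 15, 20                    # assumed average fare in USD

revenue_low = rides_low * fare_low    # low end: $150M
revenue_high = rides_high * fare_high # high end: $260M

print(f"${revenue_low / 1e6:.0f}M to ${revenue_high / 1e6:.0f}M")
```

Either way it lands in the hundreds of millions, cumulative, consistent with the claim above.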
Rover222 7 days ago|||
Not really your main point, but Tesla self driving is quite incredible, despite what internet clickbait says. They have a clear path to full autonomy with vision-only systems.

But yeah, certainly 5-7 years behind the initial schedule. Which I guess was more of your point.

karlshea 7 days ago|||
You’re still falling for it. They have a clear path to vision-only autonomy IN THE BAY AREA.

Let’s see it work in Minnesota in the winter where you can’t see lane markings, everything is white, and the camera lenses immediately get covered with road salt spray.

bink 7 days ago|||
Heck, I'm concerned how they're going to work in the Bay Area after an earthquake or cell network outage.
Rover222 6 days ago||||
Yeah I'm falling for it by using it every day in 95% of my driving. You're falling for media stories.
j45 7 days ago|||
For now.

It's important to not confuse activity, with progress, with results.

At the same time, it's important to not confuse or downplay results, with progress, with activity.

There seems to be activity, progress, and results. It seems to be speeding up.

I don't have any preference for or against Tesla. Just observing.

jcgrillo 7 days ago||
> For now.

What can incremental progress do to make a camera see through road salt deposited on its lens? I call bullshit. There isn't any incremental path because it's not physically possible. The photons are stopped by the salt. No amount of "AI" or what the fuck ever else will change this. There is no path towards "progress" here.

j45 7 days ago|||
My understanding is lenses should be inside the windshield, and a system should not operate if it can't see.

I don't operate from an assumption that cameras will remain the same as they are today.

Your comment did remind me about Comma, though.

https://comma.ai/

paradox460 7 days ago||||
Just for the sake of argument, they could use spinning lenses like you do on a camera in inclement weather
jcgrillo 7 days ago||
Yeah or some sort of washer/wiper system, but there's much better, safer technology for this. They could just use it.
Rover222 6 days ago||
The front bumper cameras already have a spray wash
Rover222 6 days ago|||
You'll eat your words much sooner than you think. The cameras don't need much clarity to work effectively (they work quite well in intense rain). The main forward camera is behind the windshield already.
chubot 7 days ago||||
OK, but I think it will end up being more than 25 years behind schedule, taking into account the claims

which is what people like Chris Urmson and Bill Gurley already said prior to 2018 (see my sibling comment)

https://en.wikipedia.org/wiki/List_of_predictions_for_autono...

    We're going to end up with complete autonomy

    Ultimately you'll be able to summon your car anywhere … your car can get to you. I think that within two years, you'll be able to summon your car from across the country

---

(Also, in 2018 I said I'd be the first to buy a car where I could sleep behind the wheel while going from SF to Portland or LA. That obviously doesn't exist now.

Anyone want to take a bet on whether this will be possible in 2032, 7 years from now? I'd bet NO, but we can check in 2032 :-) )

bgwalter 7 days ago|||
It will coincide with the Year of the Linux Desktop.
Rover222 7 days ago||||
Teslas are already driving an hour alone to deliver themselves from the factory.
hluska 7 days ago|||
25 years is far fetched. Again, you obviously have a bone to pick because HN disagreed with you, but this obsession with yours is such that you’re no longer making sense. 25 years?? Totally insane.
goku12 7 days ago||
To be fair, I don't see any sort of technical arguments on either side to justify their claims. You need some clue about its internal design and its current state to make an educated guess about the expected development time. Without that, 25 years is as valid a guess as 5 years. But I won't dismiss any claims outright. I'm all ears if anyone has any explanation to offer.
wetpaws 7 days ago|||
[dead]
bwfan123 7 days ago||||
I would say the same of the following:

1) crypto: raise funding, buy crypto as collateral, raise more funding with said collateral, rinse and repeat.

2) gpu datacenters: raise funding, buy gpus as collateral, raise more funding, buy more gpus, rinse and repeat.

3) zero day options: average folks want a daily lottery thrill. rinse and repeat.

All of the above are fed by FOMO and to some extent hype, and are ripe for a reckoning.

bookofjoe 7 days ago|||
FWIW when I lived in Japan in 1968-69 it was 360 JPY to the dollar. I felt like a millionaire!
jsheard 7 days ago|||
GitHub isn't even the worst example of this at Microsoft, they didn't just force AI on Office users but also tricked them into paying extra for it. They unilaterally switched all personal and family accounts over to AI-enabled plans that were 30-40% more expensive, and hid the option to revert back to the old plan such that it's only offered as a last resort if you try to cancel your subscription.
alphazard 7 days ago|||
Organizations, once they reach a certain size, are usually not self consistent. Organizations are made up of people, and each person wants different things and has different incentives. It takes an excellent leader to make an organization appear consistent, it's not the default at all.

Marketers are trying to keep their jobs, sales people are trying to keep their jobs, etc.

mhh__ 7 days ago|||
I bet this was true of computers back in the day too. The processes that are native to computers are magical, but adding computers to old processes is actually quite bad, e.g. paperwork is better done on paper
skydhash 7 days ago||
You bet wrong. Computers were pricey enough that if you wanted one, you had to really need it to justify the price. It was not forced on any business.
mhh__ 7 days ago|||
How long ago? I was born after the pedants' Millennium, so I'm expecting my definition of back in the day is different to yours

I think my time frame is firmly after the invention of Excel but before the web was its own thing

warmedcookie 7 days ago||||
Yep, electronics in general too. People today complain about GPU prices, but that was the norm for everything electronic related.
brabel 7 days ago|||
What are you talking about?? There was lots of resistance to computers at office jobs. Even through the 90s lots of people were still avoiding moving away from the old way and companies had to spend lots of money on training because without that people quickly reverted back to their old ways! I remember that and was taught how to provide such training and how to convince people to adopt new tech! It’s always been a challenge.
cmiles74 7 days ago|||
The less I know about a thing, the more useful an LLM seems to be. I'm working with a new-to-me enterprise code base, and the LLM helps me find related (and duplicate) code. Even here its usefulness has an expiration date: eventually I'll know where stuff lives and I'll use it less and less. Life experience tells me I'm not unique, and I suspect the constant cram-AI-into-the-thing is because the vendors are hoping, eventually, they'll find a use-case for LLMs that sticks.
immibis 7 days ago|||
Yes, they're legitimately good at a few things (e.g. very fuzzy search), but that doesn't justify the amount of investment.
goku12 7 days ago||
The environmental damage is even worse than the investment. Those who have the money to invest in it usually care only about the returns and not the environment.
tempodox 7 days ago||
Hardly anyone even mentions it in discussions. Even as far as these tools are actually useful, nobody ever asks whether it’s worth the environmental costs.
cshores 7 days ago||
For Google's Gemini LLM, the energy impact is negligible, with the average prompt consuming the equivalent energy of just three seconds of a microwave's operation.
tempodox 6 days ago|||
All those data centers full of GPUs aren’t running on solar or wind power.
cshores 6 days ago||
I did a bit of research on the environmental impact with regards to the United States. Recent numbers suggest that ChatGPT handles about 2.5 billion prompts per day worldwide, with roughly 330 million of those coming from the United States. Since the U.S. population is about 335 million, that works out to about one prompt per person per day on average, though actual users issue several times more.

On the energy side, Google recently estimated that an average Gemini inference consumes around 0.24 Wh, which is roughly the same as running a microwave for a single second. Older rule-of-thumb comparisons put the figure closer to 3–6 seconds of microwave use, or about 0.8–1.7 Wh per prompt. If you apply those numbers to U.S. usage, you get somewhere between 79 MWh and 550 MWh per day nationally, which translates to only a few to a few dozen megawatts of continuous load. Spread across the population, that works out to between 0.09 and 0.6 kWh per person per year — just pennies worth of electricity, comparable to a few minutes of running a clothes dryer. The bigger concern for the grid is not individual prompts but the growth of AI data centers and the energy cost of training ever-larger models.
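The arithmetic in that paragraph checks out; a minimal sketch, taking the quoted prompt counts and per-prompt energy figures as given (they are estimates, not measured values):

```python
# Sanity check of the US-wide LLM energy figures quoted above.
us_prompts_per_day = 330e6    # estimated US ChatGPT prompts per day
us_population = 335e6
wh_low, wh_high = 0.24, 1.67  # Wh/prompt: Google's Gemini estimate vs. older rule of thumb

# Daily national energy use, in MWh.
mwh_per_day_low = us_prompts_per_day * wh_low / 1e6    # ~79 MWh/day
mwh_per_day_high = us_prompts_per_day * wh_high / 1e6  # ~551 MWh/day

# Equivalent continuous grid load, in MW.
mw_continuous_low = mwh_per_day_low / 24    # ~3 MW
mw_continuous_high = mwh_per_day_high / 24  # ~23 MW

# Spread across the population, in kWh per person per year.
kwh_person_year_low = mwh_per_day_low * 1000 * 365 / us_population    # ~0.09 kWh
kwh_person_year_high = mwh_per_day_high * 1000 * 365 / us_population  # ~0.60 kWh
```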

immibis 6 days ago|||
How much energy is otherwise consumed by a Google search?
cshores 6 days ago||
I’m not entirely sure, since it seems that a very slimmed-down version of Gemini has been attached to search. It’s definitely not the full Gemini 2.5-Pro that engineers use to carefully reason through answers. Instead, it relies mostly on tool calling to stitch together a response.
giancarlostoro 7 days ago|||
This is the correct way. Use the LLM dont let it become the only way you work.
pylua 7 days ago|||
Is it really that weird that a company would speak out of both sides of its mouth? That is essentially what corporate speak is. That should be the assumed default when a major company says anything.
garyfirestorm 7 days ago|||
I would like to point out that in certain scenarios people are not very smart. For example, many car enthusiasts who care about speed and 0-60 will still look down on EVs, despite EVs being ridiculously fast and cheap at attaining those metrics. A 40k EV is faster than a 150k Porsche. But these guys will never adopt it.
vluft 7 days ago||
that's not mostly what car enthusiasts care about, especially somebody buying a 150k Porsche; they care about handling and road feel, and a fat pig of an EV will never match a lighter, well-set-up car on that; you can't beat physics when you're slinging that much weight. Even set up as well as can be, a 5000lb Taycan doesn't come close to the handling feel of a 3250lb 911.
hedora 6 days ago||
A BMW i3 weighs about 3000 lbs. They're mostly carbon-fiber-reinforced plastic and have a small battery. The center of gravity is probably comparable to the 911 even though it's tall and goofy looking (all the weight is in the battery, under the seats).

The tire geometry causes a bit of oversteering, but they generally corner well, etc.

j45 7 days ago|||
Just like someone who can figure out how to write code to solve a problem can do it while other programmers say it's not possible and just do it some other way, the same is true of AI.

One of the major issues I'm seeing is how much technical people haven't been involved in the application of AI, which leaves non-technical people to pontificate and try.

With any new tech, after the hype is gone, what remains, is adopted and used.

The internet, social media, smartphones, all seemed foreign.

LLMs are no different. They will solve problems other tools haven't before.

LLMs are only as good as the users using them. Users only get better at using AI by putting in the repetitions. It's not a tool that's alive or a psychic.

cshores 6 days ago|||
I generally agree, but I think there is a real disconnect. Middle and upper management often do not understand how developers and engineers are actually supposed to use these tools.

For example, I work in operations, so most of what I touch is bash, Ansible, Terraform, GitHub workflows and actions, and some Python. Recently, our development team demonstrated a proposed strategy to use GitHub Copilot: assign it a JIRA ticket, let it generate code within our repos, and then have it automatically come back with a pull request.

That approach makes sense if you are building web or client-side applications. My team, however, focuses on infrastructure configuration code. It is software in the sense that we are managing discrete components that interact, but not in a way where you can simply hand off a task, run tests, and expect a PR to appear.

Large language models are more like asking a genie. Even if you give perfectly clear instructions, the result is often not exactly what you wanted. That is why I use Copilot and Gemini Code Assist in VS Code as assistive tools. I guide them step by step, and when they go off track, I can nudge them back in the right direction.

To me, that highlights the gap between management’s expectations and the reality of how these tools actually work in practice.

nativeit 6 days ago|||
LLMs are very different. All of the other things you mentioned found wide, eager adoption among the broader public within two years.
nativeit 6 days ago||
They were/are also profitable.
bigstrat2003 6 days ago|||
You're 100% correct. People love tools which make them more productive. If AI was actually as good as the companies pushing it claim it is, they wouldn't have to push it.
delusional 7 days ago|||
The critical realization to connect those two ideas is that they don't believe in what they tell you. They are telling you what _needs_ to be true for them to be geniuses.
yunwal 7 days ago|||
The other explanation is that it’s everywhere because AI pushers would like to integrate it with everything in order for it to be its most useful. Most of them don’t own your OS and your password manager, so they push it instead into a million different little places.

Doesn’t change the fact that it’s stupid, annoying, and bad design, but I don’t know that outright deception is needed to explain it.

mrandish 6 days ago|||
> if the tool was that good people would be beating down their doors to get it.

Yes! "Forced features" are a misguided effort to drive internal usage metrics. There are other ways to let users know about new features, short of forcing it on them obnoxiously.

johndhi 6 days ago||
I think investors like to see adoption. So companies force it on users - and then brag that their users love their AI so much they're all using it.

It's a rather perverse cycle.

_Algernon_ 7 days ago|||
Sunk cost fallacy. If they admit that people don't want it (at least not enough to cover its costs), shareholders will question all the investment into Copilot. So instead they push it down our throats.
tempodox 7 days ago|||
That’s what you get for making yourself dependent on profit-driven entities on a one-sided basis (namely that they have all the power and you have all the risk). Of course they will force the stuff on you that they hope will make them the most profit, up to the limit where you’d eat the switching costs and run away.
throwawa14223 7 days ago||
Firefox also shipped their battery draining ai feature and they don’t have identical motives.
nativeit 6 days ago|||
Yes, this is what’s happening. I think it pretty well speaks for itself. It’s almost entirely hype. It’s got significant utility, just nowhere near enough utility to justify its astronomical costs and wastefulness.
gtsop 6 days ago|||
> if the tool was that good people would be beating down their doors to get it.

Microsoft is a software company. If this tool gave them an extreme competitive edge, they wouldn't have released it to begin with!

pkaeding 7 days ago|||
They aren't really talking out of both sides, it is just all full-court-press marketing.
madeofpalk 6 days ago|||
I don’t think a company marketing a product/feature is an indicator that it’s bad.
progval 7 days ago|||
Or they believe that people are too stupid to understand how good their product is.
redox99 7 days ago|||
Why wouldn't they do everything in their power to increase customer base?
goku12 7 days ago||
I don't think that this is an effective strategy to achieve that. At some point, the same customer base is going to feel the AI fatigue and yearn for something cleaner.
kogasa240p 6 days ago|||
Because the current LLM hype bubble is due to Silicon Valley wanting another SaaS service to keep the VC money flowing.
chickenpotpie 7 days ago|||
When the shopping cart was first introduced to grocery stores, nobody wanted to use it. People preferred to continue lugging around heavy baskets rather than push a cart. Actors had to be hired to walk around the stores pushing carts to convince people it was normal and valuable to use them.

Sometimes people are resistant to using things that improve their life and have to be convinced to act in their own self-interest.

https://www.cnn.com/2022/05/14/business/grocery-shopping-car...

goku12 7 days ago||
There are places around the world where shopping carts were introduced successfully without accompanying actors to convince the customers to use them. The actual criterion must be whether the new addition boosts or hampers the customers' productivity, at least in the long run.

When I first heard about git, I knew that it would be very useful in the future, even if I had to spend some time and effort in mastering it. Same with CI, project planners, release engineering, etc. Nobody had to convince me to use them. But AI just doesn't belong to that category, at least in my experience. It misses results that a simple web/site search reveals. And it makes mistakes or outright hallucinates in ways even junior developers don't. It's in an uncanny valley between the classic non-AI services and plain old manual effort with disadvantages of both and advantages of neither. Again, others may not agree with this experience. But it's definitely not unique to me. The net gain/loss that AI brings to this field is not clear. At least not yet.

dijit 7 days ago||
I agree, if you’ll allow me a diatribe about why this could be (without these being my actual firm opinions);

Since right now there is an air of competition, I would guess that these companies believe it's winner-take-all, and are doing their “one monopoly to aid another” to grab this market before there's another verb-leader (like ChatGPT for LLMs, or Google for search).

It could also be that they think that people won’t know how good they are until they try it, that it has to be seen to be believed. So getting people to touch it is important to them.

But, I think I agree with you, it's so heavy-handed that it makes me want to abandon the tools that force it on me.

zzzeek 7 days ago||
can someone explain to me if this is real? I run many high profile OSS projects that are all hosted on Github. I've yet to see any issues or PRs generated by AI, when PRs come in, I've never seen an AI code review pop in. I've seen maybe one or two people trying to answer discussion questions where they obviously used an LLM but that wasn't copilot, it was just individual people trying to be clever. Why am I not seeing this happen on my repos?

it's just the copilot popups that are hardcoded in vscode right now despite no extension being installed, that are very annoying and I'd like those to go away.
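For what it's worth, the popups can be toned down from settings.json. These keys existed in recent VS Code releases, but setting names shift between versions, so treat this as a hedged starting point rather than a guaranteed fix:

```jsonc
// settings.json — verify each key against your VS Code version
{
  // Hides the chat/Copilot button from the title bar's command center
  "chat.commandCenter.enabled": false,
  // Turns off Copilot completions everywhere (a no-op if the extension isn't installed)
  "github.copilot.enable": { "*": false }
}
```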

Leynos 7 days ago|
I suspect it's either fantasy or fabrication.
Y_Y 7 days ago||
How on earth was Microsoft allowed to buy such a critical piece of tech infrastructure?
politelemon 7 days ago||
It wasn't critical at that time.

But then who made it critical over the intervening years? That's on us.

It's easy to knee jerk on HN but let's try to do better than this.

gchamonlive 7 days ago|||
When Microsoft bought GH it was already the most popular forge by far, which is why it was bought in the first place.

> But then who made it critical over the intervening years? That's on us.

That's blaming the victim. The vast majority of the opensource projects were hosted on GH since before Microsoft's acquisition. I remember back in 2018 when my team made the decision to move from bitbucket to GitHub, the main consideration was the platform quality but also the community we were getting access to.

topaz0 7 days ago|||
Not me.
layer8 7 days ago|||
GitHub isn’t critical infrastructure, it’s only real USP is network effects.
transcriptase 7 days ago||
If outages make headlines and stop whole companies in their tracks worldwide, that’s critical infrastructure, not just network effects.
netsharc 7 days ago|||
Gotta love the genius of creating a single point of failure out of a distributed (version control) system...
transcriptase 4 days ago||
I’ve had the same thought about crypto. The anonymous p2p financial system where the only realistic way for the average person to participate is to send photocopies of government issued IDs to one of the few remaining large exchanges who will happily provide any government with your identity and a paper trail of your actions upon request.
_Algernon_ 7 days ago||||
Git is designed so that you always have the full code you're working on copied to your local machine. Github being down for a short time from time to time should be only a minor inconvenience.
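A quick way to see this, using throwaway local paths as stand-ins for github.com URLs so the whole sketch runs offline:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the repository you would normally host on GitHub.
git init -q upstream
git -C upstream -c user.email=dev@example.com -c user.name=dev \
  commit -q --allow-empty -m "some work"

# Your everyday clone: the full history travels with it.
git clone -q upstream work
cd work
git log --oneline            # works with no server involved

# If GitHub stays down, repoint at any other host (path here is hypothetical):
git remote add fallback ../mirror.git
git remote -v
```

Pushing, branching, and reviewing history all keep working locally; only the collaboration layer (PRs, issues, Actions) is actually tied to the hosted service.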
wiether 7 days ago||
Sure, but GitHub is much more than a git repository. Otherwise companies wouldn't pay for it.

As the centralized git repo, it allows devs to collaborate by exchanging code/features, tracking issues and doing code reviews. It also provides dependency management ("Packages") and code building/shipping (GH Actions).

Sure, if you usually spend one day or more writing code locally, you're fine. But if you work on multiple features a day, an outage, even of 30 minutes, can have a big impact on a company because of the multiplier effect on all the people affected.

ReptileMan 7 days ago||||
>and stop whole companies in their tracks worldwide

This is a sign that their CTOs should be replaced. Not that github is critical.

Disposal8433 7 days ago|||
> If outages [...] stop whole companies in their tracks

They should fucking learn how to code because no one in their right mind would depend on such an external service that can be easily replaced by cloning repos locally or using proxies like Artifactory. Even worse when you know that Microsoft is behind it.

Yes, most companies don't have good practices and suck at maintaining a basic infrastructure, but it doesn't mean GitHub is the center of the internet. It's only a stupid git server with PRs.
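A belt-and-braces version of "clone it locally" is a bare mirror refreshed periodically from cron. A self-contained sketch with local stand-in paths (in practice the clone source would be your github.com URL):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the GitHub-hosted repository.
git init -q upstream
git -C upstream -c user.email=dev@example.com -c user.name=dev \
  commit -q --allow-empty -m "initial"

# One-time setup: a bare mirror captures every branch and tag.
git clone -q --mirror upstream backup.git

# Periodic refresh (idempotent; a one-line cron job in practice):
git -C backup.git remote update >/dev/null 2>&1

git -C backup.git show-ref   # everything is still here if GitHub disappears
```

This only covers the git data, of course; issues, PR discussions, and Actions history live outside the repository and need their own export strategy.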

yunwal 7 days ago||
> It's only a stupid git server with PRs.

I feel like you’re missing a few features here

Disposal8433 6 days ago||
Which ones and what are those exclusive features that GitLab doesn't have?
Spooky23 7 days ago|||
How on earth did anyone believe Microsoft was different this time?
diggan 7 days ago|||
They used Emojis and printed "Microsoft <3 Open Source" on posters for conferences, so clearly they really had changed...
reaperducer 7 days ago|||
How on earth did anyone believe Microsoft was different this time?

There's a whole generation on HN who came up after Microsoft's worst phase, and have spent the last five years defending MS on this very forum.

They're convinced that any bad thing Microsoft does is a "boomer" grudge, and will defend MS to the end.

I hope I'm never so weak-minded that I tie my identity and allegiance to a trillion-dollar company. Or any company that I haven't founded, for that matter.

Spooky23 7 days ago||
End of the day, PR works. Even in peak “friendly” Microsoft, they were hard-nosed and noxious to negotiate with.
andrewinardeer 7 days ago|||
Was GitHub really critical at time of purchase? Or has Microsoft turned it into critical infrastructure?
daemin 7 days ago|||
Even though Git is decentralised, people like having a simple client-server model for version control. So with Github being the most funded free Git hosting service it grew to being the biggest. They also built out the extra services on top of git hosting, the issue tracker, CI/CD, discussion board, integrated wiki, github-pages, etc.

I would say all of those things were present before the acquisition, enough that Microsoft itself started to use the site for its own open source code hosting.

rs186 7 days ago||||
If you travel back to 2018 and ask random software engineers "are git and github developed and owned by the same company", a fair number of them would say yes, just like today.
diggan 7 days ago||||
> Was GitHub really critical at time of purchase?

Do you think they would have bought it otherwise? Same for NPM, they got bought for huge sums of money because they were "critical" already.

andrewinardeer 5 days ago||
I am of the opinion that it wasn't critical infra, but it was at least unique infra. Similar to LinkedIn, which MS acquired. It wasn't that LI was critical; it was that it was unique.

And since the acquisition, they have built it out to be critical. Similar to what Meta did with Instagram. Instagram wasn't critical when Meta purchased it, but now it is the cornerstone of any business's online presence as it has been built out.

oytis 7 days ago|||
It was the leading git storage at the time of acquisition, for many people synonymous with git itself
airstrike 7 days ago|||
There is no law against that, so I'm not sure what you're suggesting.

And git lives on regardless of GitHub

latexr 7 days ago||
> There is no law against that

Regulators can (and do) stop purchases which can be considered harmful to consumers. Just look at the Adobe/Figma deal.

bapak 7 days ago|||
If GitHub were to close tomorrow, you'd lose out on the social part temporarily, but there are effectively dozens of providers and solutions that could replace it.

The same could not be said for Figma, where if lost, you'd end up looking at the company that tried to buy it. That's what those laws are for.

airstrike 7 days ago||||
No, Adobe/Figma was stopped because it would severely reduce competition in a market where there are already very few relevant players. That's all they can block.
dboreham 7 days ago|||
[flagged]
gitaarik 6 days ago|||
They were allowed to buy it because GitHub is not licensed under a FOSS licence. How on earth did we all settle on such a proprietary piece of tech infrastructure? No wonder Microsoft bought it.
mvdtnz 7 days ago|||
"Critical"? "Infrastructure"? What do you think Github is?
chamomeal 7 days ago|||
Critical piece of tech infrastructure. Which it absolutely is.

When GitHub goes down, the company I work at is pretty much kneecapped for the duration of the outage. If you’re in the middle of a PR, waiting for GitHub actions, doing work in a codespace, or just need to pull/fetch/push changes before you can work, you’re just stuck!

1over137 7 days ago||
Wow. Why would your company do that? It's easy to self-host gitlab for example.
wiether 7 days ago||
It's probably easy to self-host Gitlab for a small team working on a limited number of projects.

It's definitely not easy to self-host Gitlab for hundreds of devs working on hundreds of projects. Especially if you use it as your CI/CD pipeline, because now you also have to manage your workers.

Why do companies choose to pay GitHub instead of self-hosting their own Gitlab instance? For the same reason they pay Microsoft for their email instead of self-hosting it.

zbentley 7 days ago|||
Among other things, a CDN. If it were to take a sustained outage, lots of important online systems would stop working shortly thereafter. And I’m not talking about developer tools; bigger sites/apps than you think are reliant on GH being up. Stupid to do that, sure, but widespread.
9cb14c1ec0 7 days ago|||
Microsoft either owns or hosts on Azure a lot of critical pieces of tech infrastructure apart from just Github.
dboreham 7 days ago||
Who would disallow them to do so?
djoldman 7 days ago||
> The second most popular discussion – where popularity is measured in upvotes – is a bug report that seeks a fix for the inability of users to disable Copilot code reviews.

From the discussion:

> Allow us to block Copilot-generated issues (and PRs) from our own repositories

> ... This says to me that github will soon start allowing github users to submit issues which they did not write themselves and were machine-generated. I would consider these issues/PRs to be both a waste of my time and a violation of my projects' code of conduct¹.

> Note: Because it appears that both issues and PRs written this way are posted by the "copilot" bot, a straightforward way to implement this would be if users could simply block the "copilot" bot. In my testing, it appears that you have special-cased "copilot" so that it is exempt from the block feature.
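The block described here is GitHub's standard "Block a user" REST endpoint (PUT /user/blocks/{username}). A hedged sketch of what such a call looks like, defaulting to a dry run; per the note above, GitHub appears to special-case the copilot account, so the request may succeed without actually blocking anything:

```shell
# Attempt to block the "copilot" bot the same way you would block any user.
BOT="copilot"
ENDPOINT="https://api.github.com/user/blocks/${BOT}"

if [ "${DRY_RUN:-1}" = "1" ]; then
  # Dry run by default: show the request that would be sent, without sending it.
  echo "curl -X PUT -H 'Authorization: Bearer \$GITHUB_TOKEN' \\"
  echo "     -H 'Accept: application/vnd.github+json' ${ENDPOINT}"
else
  # Real call: needs a token with the user:follow (block) scope.
  curl -sf -X PUT \
    -H "Authorization: Bearer ${GITHUB_TOKEN}" \
    -H "Accept: application/vnd.github+json" \
    "${ENDPOINT}"
fi
```

A 204 response means the block was recorded; whether it then has any effect on Copilot-authored issues and PRs is exactly what the commenter's testing calls into question.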

How does one see that a user, e.g. "chickenpants" submitted an issue or PR that was generated by "Copilot"? Isn't there only one creator?

rdm_blackhole 7 days ago|
I am not sure if it's just me but the Github UI has become incredibly slow.

On bigger PRs, I regularly have diffs that take seconds to load. The actions have also started hanging a lot more often and will run for 30 minutes stuck in some kind of loop unless they time out or I cancel them manually. This did not use to happen before, or at least not as frequently as now.

Finally when I try to cancel the hung actions, the cancel button never gets disabled after I click it and it is possible to click it multiple times without any effect. Once clicked, surely it shouldn't be possible to click it again unless the API calls failed.

Clearly there is a quality decrease happening here.

rsynnott 6 days ago|
> I am not sure if it's just me but the Github UI has become incredibly slow.

Making things work properly is terribly passé in this brave new world of magic nonsense-generating robots.

You see this with Google Docs, too; after about a decade of stagnation, Google _finally_ started adding a few features (basic Markdown support, say, better comments, a few other bits and pieces) around 2022... And it finally got a bit less slow. But now that seems to have come to a shuddering halt; once more Docs stagnates, but it has about a hundred Gemini buttons now! It also feels like it's getting slower and buggier again.
