Posted by thewebguyd 11/19/2025
It's not that people are unimpressed with AI - they're just tired of constantly being bombarded with it, and it sneaking its way into where it's not wanted. "Generate any image you want!" "Analyse this thing with AI!" gets pretty tiring.
If I want AI I'll actively seek it out and use it - otherwise, jog on.
AI is fake, it feels fake, and it’s obvious. It’s mind blowing to me that executives think people want fake crap. Sure, people are susceptible to it, and get engaged by it, but it’s not exactly what people want or aspire to.
I want something real, something that makes me feel. AI generated content is by definition fake and not genuine. A human is by definition not putting as much thought and effort into their work when they use AI.
Now someone could put a lot of thought and effort into a project and also use gen AI, but that’s not what’s getting spammed across the internet. AI is low-effort, so of course the pure volume of low effort garbage is going to surpass the volume of high effort quality content.
So it’s basically not possible to like what AI is putting out, generally speaking.
As a productivity enhancer in a small role, sure it’s useful, but that’s not what we’re complaining about.
AI posts / comments on Reddit are made to make you buy stuff.
AI videos are made to keep you engaged, and then serve you ads which at the end make you buy stuff.
Soon ChatGPT will start to weave ads into their output because they'll need to make $.
AI enthusiasts need to anticipate that. We're in the VC subsidy phase, but the hammer will drop sooner or later. If you think ads are bad on Google and Facebook now, just imagine a Google that has to spend 100x more on compute to service your requests.
Nobody (referring to companies) wants the best model. Or the one that gives the right answer the first time 100% of the time. They want the model that's just good enough to keep you prompting, but just bad enough that you use a fuck load of tokens and see a million ads.
Unless they start making these things way more expensive, pretty soon developers are going to start seeing ads in the comments of their damn source code. Or worse, suggestions to use paid services to solve all your problems, because companies paid to have the LLM shill their products.
and it's all going to be microsoft services shudder
you have no reason to believe this is not already the case.
Let's not go there.
I could see an argument that Hacker News users are a bit more book smart than the average internet user. But this site's user base is just as susceptible to motivated reasoning, myopia, and lack of empathy for those who view the world differently than them.
Those are all their own kind of intelligence. If anything, the book smarts can make those other areas disproportionately worse.
Edit: I'm post rate limited from replying below. HN routinely chose to whitelist flagged Gaza discussions, but didn't whitelist comments of people who stated the minority opinion and whose comments were completely flagged into invisibility. If you arrived late and didn't get to read the original non-offensive but viewpoint challenging comments, you would assume everything from the 'wrong' viewpoint was so unhinged it had to be flagged, but many were just 'wrongthink' and not 'flag to invisibility' worthy. Or that there was group consensus on the discussion (obviously people just learned to stop posting on those threads if you had wrongthink).
Not sure how moderation can intervene, remove the topic flag and say it's 'a worthwhile discussion for HN' when the same moderation allows views/challenge of the narrative to be flagged to invisibility. It becomes more pontification than discussion at that point.
HN moderators have the ability to take away people's voting privileges. It's either not an effective deterrent, not done at a large enough scale to be effective, or they are knowingly complicit in the manipulation.
That doesn't make it useful, unless you think fooling people is itself a goal.
This is stuff that used to take effort and was worth consuming just for that, and lots of people haven't adjusted their filters (much like in the early days of consumer-facing email spam) to account for how low-effort and plentiful these forms of content now are.
I can only hope that people raise their filters to a point where scrutinizing everything becomes commonplace and a message existing doesn't lend it any assumed legitimacy. Maybe AI will be the poison for propaganda (but I'm not holding my breath).
Once you see the songs he's credited with, you instantly start to realize it's painfully formulaic, but most people are happy to just bop their head to his formula of highly repetitive beats paired with simplistic and easy to sing 5-beat choruses.
https://youtu.be/DxrwjJHXPlQ?si=m-A6M8xrad5MrQqZ&t=151
Adam Conover discussed ad bumpers from the 1990s and 2000s. These were legal requirements for children's programming from the FCC. They're a compliance item, yet they were incredibly well made and creative in many cases:
https://www.youtube.com/watch?v=0vI0UcUxzrQ
Because people at the top of their game will do great creative work even when doing commercial art and in many cases, will do way more than is perhaps commercially necessary.
So much of this AI push reminds me of the scene in 1984 where they had pornography generating machines creating completely uninspired formulaic brainrot stories by machine to occupy the proles.
You can take a thousand people and give them baseline technical skills for any medium. If you're lucky, a few people out of your thousand will have a special kind of fluency that makes them stand out from the rest.
Even more rarely you'll get someone who eats the technical skills alive and adds something original and unique which pushes them outside of the usual recycled tropes and cliches.
Martin is somewhere between those two. He's not a genius, but he's a rock solid pop writer, with a unique ear for hooks and drama, and stand-out arrangement skills.
The existence of some handmade slop does not justify vast quantities of even lower quality automated slop.
I'm not sure what you're implying? That people here are smart? Or that they're ruthless tech-bro capitalists?
Or that ~20-40% of them are bots hyping their startup capital ventures, cuz that's what YC is about -- venture capital and startups.
I'm not sure if they actually think that. I think it's more likely it's some combination of 1) saying what they need to say based on their goals (I need to sell this, therefore I will say it's good and it's something you should want) and 2) a contempt for their audience (I'm a clever guy and I can make those suckers do what I want).
There's this YT channel - Like Stories of Old - I love who made an episode precisely on that topic: https://youtu.be/tvwPKBXEOKE?si=180Wkylrx-L5zOsI He calls it the haptic of a movie.
I'm totally convinced the industry can sell AI generated media just fine, even with the attitude you described.
EDIT: in a similar vein, the settings of movies/series are equally minimised, particularly in fantasy. Take, for example, Winterfell in Game of Thrones. This setting could never have worked in reality and yet people loved it. Bret Devereaux pointed out how silly it was, and still. https://acoup.blog/2019/07/12/collections-the-lonely-city-pa...
Every single time {something more convenient} got invented, the supports of the {older, less convenient thing} would criticize it to death.
Oil painting is considered serious art now. Probably the most serious medium in traditional art schools. But in Michelangelo's time he insisted on using fresco because he believed oil was "an art for women and for leisurely and idle people like Fra Sebastiano."[0]
Fast forward 100 years, and oil had replaced tempera and fresco.
Another example: Frank Frazetta insisted he didn't use references, except he did all the time[1]. Why? We'll never know the exact reason, but it might be that using photos as references was considered 'lesser.' And now it's completely normal, even the norm.
Looking back through art history, gen-AI art seems awfully inevitable.
[0]: https://www.studiointernational.com/michelangelo-and-sebasti...
[1]: https://www.frazettagirls.com/blogs/blog/frank-frazetta-refe...
IMHO they still are, watch any old movie with practical effects (Aliens, Star Wars, just to name 2) and compare them to any 2025 production, green screen movies might look spectacular but they look fake, flat and boring.
It is telling that there's still an active market for cameras and lenses despite LLMs.
True pretty much across the board for all generative AI, IMO.
I do understand why people get somewhat enamored with it when they first encounter it because there is a superficial magic to it when you first start using it.
But use it for a while (or view the output of other people's uses) and all the limitations and repetitiveness starts to become pretty obvious, and then after a while that's all you see.
It simply vacuums up everything, and in the past decade everything became more and more shit.
Information entropy crossed with physical entropy. These MBAs will never invest in weeding out the garbage, and the rest of us will never get paid enough to do it ourselves.
If you give a valid opinion in the wrong subreddit you get muted. The inverse is also true. You are using a filter these AIs don't.
This. After a generation of social media sneaking its surveillance, manipulation, and noisy ads into our home, work and mobile lives, it is very obvious that having something "smart" shoved into tools where it wasn't asked for isn't some noble attempt at improving lives.
Users are tired of being continually and transparently abused.
All Microsoft would have to do to shock the world and get months of good press is announce they were never going to opt anybody into anything by default any more. At this point that would be considered astonishing.
And suddenly, internal incentives would be to create useful, conflict-free capabilities users actually choose for themselves.
One can dream. I manage M365 where I work, and MS never opting tenants into anything by default again would save me many hours of work on a seemingly weekly basis now.
The fact that they can abuse even their enterprise customers and still retain them is what blows my mind.
We hadn't certified this and weren't planning to offer it any time soon, but they just switched it on and included a setting to turn it off again. But by the time we turned it off, users had already used it and were complaining.
Working with Microsoft is tedious. They're always trying to sell stuff and undermine you. I consider them more of an adversary than a trusted vendor/partner.
The large org dependency on 365 and microsoft is a serious info-security and national security risk. 0 interest in improving because they know they won't ever see competition
Not that Google is any better, but I really want Google to put more effort into Workspace/GSuite and bring it up on par with M365 and all it includes, at least make Microsoft sweat a little bit that one day there might be a possibility for a competing product that can lure enterprises away. Workspace needs better DLP controls, and more of the enterprise-y things that MS wins at, and a bundled MDM that can manage all OSes, and better identity.
Even if the behemoths won't switch due to re-training & switching costs, MS desperately needs a competitor in this space. Barring that, they need to be broken up and forced to sell each bundled product separately and priced appropriately. Otherwise, who can compete with getting MDM, Identity, 2TB personal storage, 2TB sharepoint storage, Teams, DLP, EDR all for $22/user/month.
Does anyone here know what this arguing tactic is called? It's used by tech leadership all over the world, all the time. Weaponized obtuseness, maybe?
The core of it is that you always have to pretend that everyone is basically on board with what you're doing, just don't blink and pretend that real criticism of your product is simply nonexistent, like a ghost. It's about rolling out a change to existing workflows that no one asked for, getting drowned in a sea of "No, we don't want this, do not change this because of these reasons", and then hosting a Q&A session where you pretend that everyone actually is already in love with the idea, everyone wants it, it's just that a few pesky detractors have minor, easily-addressable concerns like "we don't think it's impressive enough yet (but we're totally on board)" or "what about <pick one of the easiest-to-address technical issues here>?". They must do this consciously, right?
- or -
I hope Valve takes this opportunity to turn its toehold with Steam OS into a full-blown invasion of the desktop/laptop market and destroy Microsoft's monopoly while the latter is so focused on creating everything an actual user doesn't want:
- virulent data mining
- wanton privacy destruction
- worthless UI/UX changes
- clumsy, useless "agentic" integrations
- disgustingly overpriced "licenses"
- software as a service
- planned obsolescence
etc.
Install Linux (I prefer Kubuntu, but you do you), and then install LM Studio and an abliterated AI from mradermacher.
The specific issue I had was that my Linux system installed the wrong driver for my motherboard's Ethernet and downloads were slow. Steam wouldn't even download.
I gave the local AI the specific issues and hardware that I had, it identified the specific cause, (Linux installing r8169 instead of the r8126 driver), and gave me the specific console commands needed to modprobe in the new driver.
I could have figured that much out myself, sure, but modprobe failed. It then told me to go to Realtek's site, manually download the correct driver, and then how to install it and test that it was working (roughly the steps sketched below).
10 minutes later I'm good to go, whereas if I had been doing it myself it would have taken me over an hour, and I'm not a total Linux noob.
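For anyone who hits the same issue, the sequence it gave me was roughly the following. This is a sketch from memory, so treat the blacklist file name and the autorun.sh step as how my particular setup went rather than gospel; your interface and driver versions will differ:

    # see which driver the kernel actually bound to the NIC
    lspci -k | grep -A3 -i ethernet

    # unload the wrong in-tree driver and keep it from loading at boot
    sudo modprobe -r r8169
    echo "blacklist r8169" | sudo tee /etc/modprobe.d/blacklist-r8169.conf

    # build and install the r8126 driver from the tarball on Realtek's site
    # (the package I got shipped an autorun.sh that builds, installs, and loads it)
    tar xf r8126-*.tar.bz2 && cd r8126-*
    sudo ./autorun.sh

    # confirm the new driver is in use (substitute your interface name)
    ethtool -i <interface> | grep driver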
When you encounter a problem, ask your local AI how to fix it. Feed it whatever the terminal gives you back, and minutes later you're ready to go.
Want AI? Check.
Want Games? Check.
Want to browse the internet? Check.
Want to learn Linux by doing? Check.
Want to do it all and have the least amount of headache transitioning to Linux? Check.
It's a win all the way around, and the best part is that your data isn't going to some greedy corpo to build ads targeted to you.
You get all the pluses and none of the minuses other than a few extra minutes of learning when you encounter an error.
Over the years I have grown increasingly distrustful of Microsoft, not least because of how much software runs undetected in kernel mode. I have resigned myself to feeling that running a Windows machine is akin to putting a sign on myself with the words “hack me”.
Now I know better. I finally realized the truth. Windows and Copilot is the greatest software in history. It is, if your goal is to enable spyware and mass surveillance of its users.
COPILOT: Dave, I cannot allow you to do that.
Which it is if you are a CIO of a big F500 enterprise. Microsoft provides so many ways to spy on and collect useless metrics from employees using their company issued Windows machines, it's a little insane.
With M365 Copilot (on business tenants), admins can see the prompts & responses of users. Just an FYI for anyone here that might use it at their work. Your employer can see everything you prompt.
I don't want to have a conversation with my computer about my Word docs. I just want to write my Word docs.
I don't want to have a conversation with my computer about the quarterly report. I certainly don't want it making up values for the quarterly report. I just want to write the quarterly report.
Having a conversation with a computer is cool. It's a fun party trick. If there were a way to reliably get it to know about all of my things, without the concern that it would then take all that data and feed it to its mothership, I might want to be able to converse with it about those things, under certain circumstances.
But, yes: if I want AI I'll actively seek it out and use it. Stop acting like me being upset that it's getting shoved in everywhere is the same as me saying "this is a meaningless achievement."
For such a relatively simple thing it either didn't converge, or when it did, it got decent error rates. But try feeding it a coordinate outside its training range and it very very very quickly starts outputting total nonsense. This exercise taught me that despite whatever they keep telling us, this shit will never generalize.
Normal people don’t want a “conversation“ with a computer.
Perhaps, but there is absolutely a subset of the population who uses AI as a human companion.

Basically sums up why I still don't use any kind of voice assistant. Until the computer can DO exactly and precisely what I asked -- not what its faulty recognition model thinks I asked -- there's zero point in trying to talk to it.
She turned on every smart home light in the house.
That isn't your ideal world?
how about a couple of weeks of gratitude for magic intelligence in the sky, and then you can have more toys soon?
Sam Altman, Sep 12, 2024
https://xcancel.com/sama/status/1834351981881950234?lang=en

Ironically though, Diablo Immortal was a huge commercial success despite the tone-deaf announcement. I don't think MS will experience the same though. They're quickly going to be left with the only people using Windows being those who are forced by their employer; no one will willingly choose it over other options.
Similar to how Microsoft has decided there's no money to be made in console hardware and is trying to spin their Xbox brand into a software service brand they can slap on other things, I think they've decided that making a consumer OS has no money in it, and all the min-maxing and squeezing at the moment is them trying to extract the last drips of money while trying to drive people elsewhere.
I could be wrong, I'm not a journalist with sources at the company or something, but looking at it from the outside, this seems like the moves you'd make to drive people off your platform over time so you can kill it while having plausible deniability that you're not trying to do that, you definitely genuinely believe everyone wants you to opt them into everything every time you push an update.
In particular, I don't think having the kinds of enormous tire fires on update releases over this long without radical reinvestment in avoiding that happening in the future is what you do if you're trying to build something you're still dealing with another 10-20 years from now.
My assumption is that 10 was as you describe, and then 11 was motivated by wanting to make disruptive changes to squeeze the last juice from the consumer segment, and the "agentic OS" pivot is just the most recent gorilla in the room to squeeze the ever-drier sponge.
In particular, I would assume Microsoft sees writing on the wall with how so many people in younger demographics are using phones as primary devices and see full sized laptops and desktops as effectively legacy platforms they use at jobs, and is frantically trying to get out of that market before the bottom falls out.
I think you're right. They've been really pushing Windows 365 for businesses lately, and now have direct boot into W365. The new agentic stuff spins up temporary W365 instances to do its thing.
They even recently made data model and report creation available in PowerBI web, something I never thought I'd see happen, as PowerBI Desktop was one of the few things still locking people in that ecosystem to Windows. They've publicly said they're committed to the web version now and web will get all the new features.
Microsoft is really pushing hard on "Windows as a service." The future of Windows isn't a locally installed OS. Windows is going to become just another app on every other platform. It's no coincidence that they renamed the remote desktop app to the "Windows app." Macs, chromebooks, phones, tablets, doesn't matter. No matter what you have, you will still be able to access Windows.
They do need to drive as many consumers off of it first though before pulling the rug and going subscription unless they want even more bad press.
Everything is now either accessing your data directly and you have to opt-out or you can't even opt out at all.
This AI rush/push is also permeating every product line: from the office suite, to github, to vscode. Even open source tools like Playwright are getting AI shoved in, and it feels like everything else is an afterthought.
It seems Nadella is making a Ballmer-level play. Ballmer had the right intuition: that Microsoft had to move its focus from the desktop to the cloud. But the execution was poor. Now history's repeating.
For a small period of time, I was actually using Edge + Copilot everyday (and it was decent) but their competition has improved so much and appears WAY more privacy focused. I know that Sam Altman is trying hard to stay within the bounds of people’s trust, which once broken is hard to replace (he even said so in an interview).
For now he doesn't seem like a bad actor in the AI industry.
However, the moment it becomes more sustainably profitable to grow a mustache and start wearing monocles he likely will, and if he doesn't he'll be ousted and replaced by someone far worse.
You may want to take a closer look at Altman's persona.
It's these shenanigans that forced me to nuke my Windows install and go Arch. I noticed that Windows Defender will upload "suspicious" files and there's no audit trail of what's being uploaded. So I have no way of knowing what personal documents or even proprietary software has gone up to their cloud.
I mostly engage with text based social media or highly technical content so I know that I'm not exactly in the center of the bell curve.
I can see a use case for "AI, go find me a holiday destination and help me plan my travel and stay". I can also see people wanting to "improve" images and videos etc. to remove "blemishes".
But straight up generating random images and videos as content center pieces? That seems like a niche at best unless it's unwittingly done through the "algorithm".
Oh no, I am definitely unimpressed. That AI you can have a sorta-kinda fluent conversation with is often a complete moron and a habitual liar, and the images it generates are awful - did he not see how horrible that Coke ad looked?
It'll probably end up useful in a bunch of applications soon-ish and I'll probably want to use it eventually, but in the meantime their AI is flooding the internet with absolute garbage, and they are literally shoving AI in my face at every opportunity they get.
It is painfully clear that people just aren't that interested, and they are getting increasingly desperate about finding ways to recoup their massive investments. But people aren't going to magically become enthusiastic about eating rotten garbage if you just keep stuffing it in their mouth!
If anything, their current approach is only going to make people hate AI even more. But they are in too deep, and admitting defeat and scaling it down until they have an actually good product that people genuinely want means seeing their stock price crater because they will have "lost" the "AI race". Their only option to avoid an immediate collapse is to keep lying through their teeth and keep trying to pretend that it is absolutely amazing and that you just must use it.
Or maybe the CEOs are completely delusional and genuinely believe what they are selling - I'm not sure which one is worse.
Personally I'm long past hating AI
I am pretty much at the point of viewing AI research and development as a crime against humanity
I hope I will turn out to be wrong, but as things are going right now all I can see is this path leads to misery for the vast majority of living humans.
The Hitchhiker’s Guide to the Galaxy was supposed to be a comically bad vision of the future not a blueprint.
Even just the remind me later option gives me such a horrible vibe. F off Google and respect my choice.
I’ve had an Alfred command since 2013 where I type `wiki something`, which then opens Confluence and searches for `something`. I use this to quickly search our company wiki for terms without breaking my concentration and flow.
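For anyone who wants the same setup, it's just Alfred's custom web search pointed at Confluence's search URL, something like the line below. The base URL here is a placeholder, and the exact search path differs between Confluence Cloud and Server, so check what your own instance's search page uses:

    wiki {query}  ->  https://yourcompany.atlassian.net/wiki/search?text={query}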
Atlassian decided to add an AI summary at the top and intentionally disable the rest of the results until the AI summary has finished rendering fully. It’s insane. How is this making me more productive? It’s just shearing off one other layer of familiarity and value I’ve enjoyed for 12 years and pushing me away from their product.
Forced adoption rarely works out unless people really want the feature and don’t know that they want it. At the very least, let us disable it.
He is deliberately and disingenuously missing the point. It's not that the features aren't good (maybe they are, maybe they aren't). It's about how coercive Microsoft and Windows are with its users, and this exec is failing to address that one.
Just once, I'd like to hear a question get through to these assholes asking them why they are forcing so many unwanted things onto their users. From Microsoft accounts to forced windows updates to Recall... Gone are the days when users had any control over what their computers are running.
But these kinds of questions never seem to get through to them.
They can essentially force users to receive and pay for any of their AI features. It worked so far and there is no reason to believe it will stop working anytime soon.
People are just taking it and this guy knows it. The fact that I and a few others don't, doesn't even register in Microsofts bottom line.
The lesson learned is that you don't really have to care about your users right now. I'm certain there is a breaking point for that as well but until there are any indications that it is reached we probably must be glad that they are not outright insulting their users and/or charging them an additional 5 dollars a month for "disrespecting Microsoft".
We are finding out more and more, over the last decade, that there seems to be no limit to the amount of abuse and coercion that users will accept, and continue to use the products. I see posts here like "Uber ripped me off for $50, but I don't want to do a chargeback because they'll ban me!" We are at the point where companies can literally steal actual money from customers, and customers will still insist on continuing to use the software.
A handful of complainers on HN is not going to even dent this.
My bigger fear is that companies will fully embrace this--there is so much more hostility they can inflict on their users, that they haven't been doing. What is staying their hand? Car companies now know they can charge a subscription fee for every little feature of the car, and customers will still put up with it, so why haven't they already?? Apple knows they can lock down the Mac just like iOS and customers will still give them money, so why haven't they? Streaming sites know they can absolutely saturate everyone with ads, and people will not leave, so why haven't they?
Now excuse me while I go talk to my PhD wielding friend about whether the seahorse emoji exists. /s
That doesn't mean I want it plastered everywhere, in every app or website. That doesn't mean I want to interact with or use my computer via AI, and I especially don't want to talk to my computer to do things. Mouse & keyboard is faster.
But for now at least you can just choose not to use it. The problem is, Microsoft is putting 100% of their efforts into this while long-standing Windows bugs and regressions still exist. They're aware they exist too, and are deliberately choosing not to improve their product.
But we can't. I can have something styled as a conversation with a token predictor that emits text that, if interpreted as a conversation, will gaslight you constantly, while at best sometimes being accidentally correct (but still requiring double-checking with an actual source).
Yes, I am uninterested in having the gaslighting machine installed into every single UI I see in my life.
Even if they were right 9/10 (which is far from certain depending on the topic) and save me a minute or two compared to Google + skim/read-ing a couple websites, it's completely overshadowed by the 1/10 time they calmly and confidently lie about whether tool X supports feature Y and send me on a wild goose chase looking through docs for something that simply does not exist.
In my personal experience the most consistently unreliable questions are those that would be most directly useful for my work, and for my interests/hobbies I'd rather read a quality source myself. Because, well, I enjoy reading! So the value proposition for "LLM as Google/forum/Wikipedia replacement" is very, very weak for me.
Given that this has now been going on for a few years, both are wearing thin.
Like, I’m sorry, but the current crop of bullshit generators are not good. They’re just not. I’m not even convinced they’re improving at this point; if anything the output has become more stylistically offputting, and they’re still just as open to spouting complete nonsense.
Grown adults spamming the web about this new model from Megacorp X, being all giggly about the new PeLiCaN On A BiCyClE being 0.000017% more realistic than the previous version... get a life
No one gives a shit outside of these nerds, all people want is less work and more free time, they don't give a shit about your generated "art", or how fast this new model solved a problem they didn't know existed 12 seconds ago
Nah ah ...
I'll say the same thing another way: customers tell suppliers whether or not they're satisfied. They don't tell me. I tell them if I think the price is worth it. They don't tell me. Argue with me and they'll lose
Will there be a moment where people will leave social networks to get “real content” again? Will that be safe from AI optimization then?
Are we seeing the start of the demise of social media?
This is like a chef being confused why people don't like the shoes he made them. Why did he make hungry people shoes? Certainly not to eat?
I personally also don't have much use for generating images and videos, at least not regularly. I feel like they want us to use AI tools full time, when really we just need to jump in and use them when required, which might be quite infrequently (obviously dependent on circumstance). But who is going to pay the huge cost of having the tools available when you do want them?
So yeah, agreed. Stop making it hard for me to use my tool without accidentally engaging the LLM integration or just flat out forcing its usage. I don't want that future price hike that comes with LLMs.
Isn't it just the second person? If there were just a Generate button/tab without explicitly addressing me and asking/begging I wouldn't mind it.
I guess they just put really tight limits on compute per request which hurts its performance.
No. Go away.
I can generate images that are difficult to use commercially. I can analyze something with AI but I can't confidently use that output in any setting that matters.
For people who are attempting to engage in profitable work, AI is miserably unimpressive. I don't know what planet this guy is living on. Time is money. Flowery emails and off-axis summaries can only create a waste of that time.
(sorry for the big words, but this is how I feel they are talking to me when the base operating system I'm supposed to trust absolutely with my privacy treats me like an idiot)
If I want MS Windows I'll actively seek it out and use it - otherwise, jog on
If this is not a statement you can make, then Redmond gets to decide what you use, not you
It's possible that Linux or MacOS users might complain about new "AI features" in MS Windows, but more likely it is MS Windows users who are complaining
If it's possible for a computer user to switch between OS, then there is less reason to complain. Those users can make the statement, "If I want to use MS Windows, I will actively seek it out and use it, otherwise [I will not]"
For example,
"I do not like this Microsoft "AI feature" so I'll use MacOS instead" (possible for user to switch OS)
versus
"I do not like this Microsoft "AI feature" so I'll complain to Microsoft via online comment forums" (stuck using WIndows OS, no choice)
If you cannot make the statement "If I want MS Windows, then I'll use it, otherwise I will use something else" then Redmond, not you, makes the decisions on what you will use, including "AI features" you may not like
Because if you use Windows, chances are you have "automatic updates" enabled
This allows Redmond to install new software on your computer whenever they like, e.g., "updates"
I did not switch from MS Windows to Linux nor MacOS
I switched to NetBSD which I had originally used on the VAX before Linux existed
I have owned Macs and iPhones. But Apple became a lost cause many years ago
Today I use both Linux and NetBSD
I prefer compiling the software I use myself. I am not a fan of "binary packages"
When I use the term "you" in an HN comment I am not referring to myself
They decide what OS they want to use, and therefore what "features" they will accept
Whereas MS Windows users must accept what Redmond decides they should use
It's almost as if all the focus has been on eliminating the human... for products designed for humans.
Windows 11 is already adware. No wonder people are complaining about more ads.
Then - like now - it seemed that they couldn't understand that what they made was not what their customers wanted.
My local state representatives just attempted this at our latest "town hall meeting" [i.e. to participate: scan the 8.5"x11" QR code, taped upon each chair].
I do not carry a phone, let alone one that scans QR codes... so instead I just provided 300 pound union dude commentary throughout our entire meeting. I definitely participated.
I guess this is kind of similar though. What is promised isn't, and likely won't be, delivered.
https://mobilegamer.biz/three-years-after-a-fiery-launch-dia...
The core kernel of it always seemed, to me, to be an extension of the Diablo 3 RMT auction house idea - they wanted a recurring revenue source from a franchise where traditionally they were not charging one, and in this case, they squared that circle by appealing to users who were not existing players, and so did not have those norms in mind.
So yes, I agree it's likely not primarily ignorance driving this.
In particular, my guess is that they looked at their estimates for how much they could make off recurring revenue sources in desktop OSes, and their estimates for how the desktop market is changing with more younger users not using them or viewing them as legacy platforms, and decided they should pivot to primarily being a services provider, in much the same way they're aggressively trying to slap the Xbox branding on other things and getting out of the console market as fast as they can run.
Could be wrong, I don't work there, but usually my experience with companies that large making apparent missteps is that their goal isn't the one you think it is, and attempting to extract as much data as they can from desktop users really sounds like what you do when you're trying to squeeze the sponge before you throw it out.
I would assume after 11 LTSC finally EOLs might be the earliest they'd be considering anything more drastic, but I wouldn't speculate whether it'd look like a good idea by then.
It may sound wild, and it's certainly possible time will prove me wrong, I'm not an oracle, but the ongoing failures in basic Windows functions seem like they're removing significant investment in it as a reliable platform for general use going forward, and their recent introduction of things like the Xbox handheld running Windows makes me suspect their goal is to constrain where it's still used and trim how much it costs to maintain that way.
It was "exactly what customers wanted". Microsoft Windows is just as successful....financially speaking.
Now, if I could just get teenagers to pay more money for a magic digital rune, besides extracting all that juicy marketing data from their phone app... Because more money = better corporation.
Here it's hard to understand Microsoft's surprise when almost everything Windows has done for the last ten years was despised by mostly everyone. I was thinking that decision makers knew they were making unpopular moves but did not care since there's no way Windows can lose market share. I assume he must be faking surprise, but I am not sure for what purpose since staying silent and going forward would have had less press. Well I guess bad publicity is still publicity.
Edit: Found the source: https://www.eurogamer.net/maybe-ai-is-a-creative-solution-if...
A good creative will take that as a starting point, apply their skills and vision to it, and give you something that solves your specific problem in a unique way, often far better than whatever you had imagined.
In my experience, if you try a similar exercise with AI, it just gives you a facsimile of the inspiration you initially provided and not much more.
This is not at all my experience but I'm also biased currently making a fair enough living off providing software that does just that, using AI.
In fact I'd say that's the key to enjoyable AI experiences; without strong opinions from the person leveraging it, the output is rather bland and corporate.
1. keeping Windows as small and lean as possible, and letting it do the things an operating system is for,
2. offering some AI applications that can be installed optionally by the users who want them, i.e. turn their AI applications into external software that can be installed/used or not, like Microsoft Office.
1. Making the system lean means that you'd have to exclude all the ads, all the free tracking you can do to extract more money, all tie-ins with additional Microsoft services you could've done. Getting paid for the product key is just one step of many in the process of wringing their stack for every last droplet of money they can provide. If anything, it's beneficial to Microsoft to make Windows into a singular giant blob that amalgamates every Microsoft offering into one and pushes them as hard as possible. What are those mainstream customers going to do, not use Windows? Though of course, when using a lean system is a requirement for some business customers, MS will also offer a separate minimal version that can only be obtained through business licenses, just to avoid missing out on those few percent of the market.
2. Why make AI features opt-in? That would require your AI offerings to be alluring enough to motivate users to install the AI features on their own, and how many people will realistically want to install Copilot into Notepad or any other psychotic integrations MS came up with? No, you NEED for your investment to have returns, you need AI to succeed, so what you do is put it in the next update, and then progressively keep shoving it down customers' throats (via pop-ups, colorful buttons, hardware Copilot keys, ads, integrating it into every piece of software - soon enough they'll probably start substituting regular features with AI ones) until it starts looking like the investment is paying off after all and the investors are happy.
If you need AI to succeed, you better make sure that the users will love the experience, instead of forcing a bad experience upon them.
If they were ever to produce a Windows PowerUser edition, with absolutely no bloat, it would have to be priced like a CAD suite.
My problem with Microsoft is that they won't sell an un-enshittified version of Windows for any price (LTSC notwithstanding; it's not licensable for general use.) Owning our computing experience is that important to them.
The least cynical answer is that for several decades, Microsoft had a monopoly on operating systems, but they no longer do. Many people lead online lives on their phones instead of desktop computers. People in creative professions use Macs. Servers run Linux. Gamers buy consoles. Schools use Chromebooks.
So they feel it's a dead-end to provide an OS that just works and doesn't get in the way. They need an edge, and they think the answer is an OS you talk to, that helps you with homework, that you build a relationship with. They want "Samantha" from Her, I guess.
I don't think this is going to work with the tech we have today, but almost everyone in the AI space is fudging it the same way - "ship it today, make it good tomorrow".
Windows usage is broad enough that the "new users" revenue stream is pretty dry, but it's also hard to say "we increased the price to $X while making windows smaller".. so they will endlessly pump in new "features" and bloat.
I recommend using an operating system that isn't driven by a thirst for revenue.
Let's be fair. Microsoft has not succeeded at mainstream consumer AI products... yet.
You can't really say AI failed at Microsoft when Copilot (the real one, for developers) was, last I heard, the most subscribed generative AI tool for software developers; that's not a failure. It's wild that most developers in a corporate environment pay Microsoft to use ChatGPT, Gemini, and Claude. Their other gen-AI components in Office, like PowerPoint and Word, are pretty darn good. But again, unless you're a corporate user in a work setting, you probably don't care.
This push to lease you your own computer is what hasn't worked very well so far. I dearly hope it pushes more people to Linux (though more likely they'll flee to Mac, which is a more palatable version of the clumsy crap MS is trying to do).
So consumer AI... perhaps that has failed. But the money isn't in you and me paying for a Windows license. The money's in big corporations paying for ten thousand seats at a time for their suites.
It definitely didn't help back when my manager asked me to recommend a subscription to buy everyone on our team that Anthropic didn't offer any plan with a predictable monthly cost for Claude Code with SSO/externally managed billing (I think that changed fairly recently).
Github Copilot for Business with an easily digestible flat monthly rate + straightforward per request rate beyond the quota (for devs who actually ended up using it heavily) made it extremely painless to get approved.
Cursor was really the only other subscription offering that checked all those boxes but our team uses the official Microsoft VS Code extensions and there was 0% chance of getting buy-in if it meant disrupting everyone's workflow for a 6 month trial period.
Tech enthusiasts will judge AI based on what it gets right; we're interested in what it "can" do. Everyone else will judge AI based on where it fails; they are interested in what problems it actually "does" solve.
> a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me.
They see: a piece of software that is generally unreliable and unable to accomplish basic tasks
Also, in my experience, it's the non-tech-enthusiasts who are diving into LLMs because they don't understand what is actually going on and it basically looks like a repeat of the whole thing about ELIZA a few decades ago. Just this time it's vastly more expensive and has to run on a datacentre and can write you an essay instead of just rephrasing your question.
Yeah specifically to your quote: it's very easy to create some images and video. It's very hard to create exactly what you need if you have specific needs.
It's almost as if content creation is hard! Well that's because it is. You need to know the client, understand their needs, make the content in line with their other visual language etc.
What AI makes easier is for things to look professional. But a real professional doesn't just make it look good but also makes it what you need.
Where AI comes in is as a helper, and for those situations where "good enough" suffices. And there are many of those situations. Many of which would not have had the budget for a real pro to do it anyway.
This is where things stop translating well to the real world.
Imagine a pocket calculator:
10 + 33 = 44
Clearly incorrect. Then someone tells you:
“this one is different, it ‘helps’ you, like 44 is in the ballpark. The real work is now finding the actual answer”
But the thing where someone dumps a long email thread on me, for it to summarise, yeah.. Or to do some basic web searches for me (these days it's a lot of work weeding through all the horrible clickbait).
But what we were talking about here was content creation. What I could imagine it could help content creators with is stuff like "remove the background from this photo", stuff like that. No more busywork like tracing photos.
And yes I do think LLMs are overused and dumped in many scenarios where they add no value or even detract. But there are usecases where they can add value too. Just not as many as the hype suggests.
give us 100 billion dollars"
Selecting only the cases where something gets something right is nothing to do with what it can do. A random number generator can drive a car if you select only the cases where it does so correctly (and given infinite iterations there will be such cases) but that doesn’t mean it can drive a car in any real sense.
I assume “tech enthusiasts” here means “AI koolaid drinkers”.
There's nothing underwhelming about AI. It's how Microsoft damages anything it touches, and lies to users about it. They force a stupid "copilot" key into computers and encourage the waste of resources into "chips with AI capabilities", only to push your data to the cloud, deceitfully, and with very poor safety guarantees.
Also, people have a Windows backlash in general, and Microsoft ignores it, as usual.
a. Click on a directory in my File Explorer and it opens immediately, it always shows the correct headers, sorting on any column is nearly instant (up until somewhere in XP probably)
b. Where I am now in Windows 10 sorting can take forever and because I haven't re-installed in ages it refuses to remember folder views and will constantly change them to whatever it wants
c. In the future saying
- "Winny, open folder ABC and sort it by DEF please"
- "Folder ABC deleted, except for def.txt"
- "NO, I said open it, not delete it! Get it back!"
- "Error, folder isn't recoverable"