Posted by todsacerdoti 2 days ago
Haven’t used Windows in five years or so, but I’ve kept hearing bad things. This really is the icing on the cake, though. Yeah, the AI stuff is dumb, but if an OS manufacturer can’t be bothered to interact with their own UI libraries to build native UIs, something has gone horribly wrong.
But this results in chasing a new paradigm every few years to elicit new excitement from the developers. It'll always be more newsworthy to bring in a new framework than add incremental fixes to the old one.
React has had tremendous success in the web world, so why not try and get those developers more comfortable producing apps for your platform?
(Tangentially, see also the mixed reaction to Mac native apps switching from AppKit to SwiftUI)
I've seen, time and again, things like apps rewritten from scratch because nobody knew C++ and they only had C# devs. Or a massive runaround because the last guy on the team who knew C++ wrote a bunch of stuff and left a couple of years back, and now nobody really knows how any of that stuff works.
> React has had tremendous success in the web world, so why not try and get those developers more comfortable producing apps for your platform?
IMO - this is worth talking about. Zune, Windows Phone, and some others died when they did not, in fact, suck, and were pretty good products which while late to the game, could have competed if there had just been a decent app ecosystem.
Their only misstep was making one of their colorways poop brown! That, and being too late to market with a phone that used the same design language.
There was also the crap that was Windows Media Player 11 which I tried to like for about a month.
There was also the incompatibility with Microsoft’s own DRM ecosystem, PlaysForSure, which was full of subscription music services, some of which were quite popular with exactly the kind of people inclined to buy a Zune: folks in Microsoft’s ecosystem who had passed on an iPod and used something from SanDisk, Creative, Toshiba or iRiver instead. Microsoft broke that compatibility because they wanted to replicate the iPod+iTunes model wholesale.
The 2006 lineup of iPods was also particularly strong, and included the first aluminum iPod nanos. When Microsoft announced and released the Zune, they were counter-programming against that, right into the holiday season, with a new brand that had no name recognition and a product that was just like the iPod, couldn’t play any of your music from iTunes or Rhapsody, but had… HD radio.
More than a few missteps were made.
> Their only misstep was making one of their colorways poop brown
i think the other big issue was calling it a 'zune', but that's just me... I mean, iPhone is a really ridiculous name as well if you stop to think about it.
Google was extremely aggressive in muscling Microsoft out. They refused to release a Gmail, YouTube or Maps client for Windows Phone, and made sure the web versions of those services did not work (properly) either.
And indeed, on top of that, Microsoft switched UI frameworks 3 or 4 times. And they repeatedly left phones behind on old OS releases that couldn't run the new frameworks.
Still, Windows Phone's UI concept was really great, and I sorely miss the durability of polycarbonate bodies versus the glass, metal and standard plastic bodies of today.
They basically couldn't stick to a strategy and alienated every potential audience one by one. I was trying to make a Windows Phone app back then, and they forced developers through an extremely difficult series of migrations where some APIs were supported on some versions and others on other versions, and they were extremely unhelpful throughout the process.
They had a great opportunity with low-end phones because Nokia managed to make a very good ~$50 Windows Phone. Microsoft decided there was no money in that; after they bought Nokia, they immediately hard-pivoted to compete head-to-head with Apple at Apple-like prices. They then proceeded to churn through 'flagships' that were broken and undermined by updates shortly after release, thus alienating high-end users as well.
Having worked at Microsoft I think the greatest problem with the culture there is that everyone is trying to appeal to a higher up rather than customers, and higher ups don't care because they're doing the same. I think that works out OK when defending incumbency but when battling in a competitive landscape Microsoft has no follow through because most shot callers are focused on their career trajectory over a <5 year time frame.
That was my impression of one of the major problems when I worked there 2008-2011. But I don't think it's just one problem.
Besides, as an older company it probably has more organizational entropy, i.e. dysfunctional middle management. As you say, there are probably several other causes too. But still, it's hard to understand how they can create F#, F*, and Dafny, to name a few, and fail with their mainstream products.
I thought about this a lot while working at a high-growth company recently.
Decided that regular (quarterly) manager rankings (HR-supported, anonymous) by 2-3 levels of subordinates is the only way to solve this at scale.
The central problem is: assuming a CEO accidentally promoted a bad middle manager, then how do they ever find out?
Most companies (top-down rankings-only) use project success as their primary manager performance signal.
Unfortunately, this has 3 problems: (1) project success doesn't prove a manager isn't bad, (2) above-managers only hear from managers, and (3) it incentivizes managers to hack project success metrics / definitions.
Adding a servant-leader skip-level metric surfaces a critical piece of information: "Oh, this person is toxic and everyone thinks poorly of them, despite the fact that they say everyone loves them."
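As a sketch of how a skip-level signal could be cut against the usual project metric (all names, scores and thresholds here are hypothetical illustrations, not a claim about how any company actually does this):

```python
from statistics import median

# Hypothetical anonymous quarterly ratings (1-5) from reports one and two
# levels below each manager, alongside a self-reported project score.
ratings = {
    "manager_a": {"skip_level": [2, 1, 2, 3, 2], "project_score": 5},
    "manager_b": {"skip_level": [4, 5, 4, 4, 5], "project_score": 4},
}

def flag_discrepancies(data, threshold=2):
    """Flag managers whose project metrics look great but whose
    subordinates consistently rate them poorly."""
    flagged = []
    for name, d in data.items():
        gap = d["project_score"] - median(d["skip_level"])
        if gap >= threshold:
            flagged.append(name)
    return flagged

print(flag_discrepancies(ratings))  # -> ['manager_a']
```

The interesting signal is exactly the gap: a glowing project score paired with a poor skip-level median is the case top-down reviews never surface.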
Certainly, few companies have managed to avoid this trap. It's largely an unsolved problem.
I've often met managers and execs two levels above me that had a completely delusional view of what was going on below them due to lies spread by middle-management.
I think a few years after I left when more Big Tech opened offices in Seattle, competing companies started paying Bay Area salaries for Seattle living, removing this argument. I haven't watched this closely in recent years.
But fwiw, I was able to save and invest a lot in my Seattle days, despite a salary that was lower than in the bay.
Basically the housing price difference can mean buying a nice house close to your job vs renting a room in a share-apartment.
Best of both worlds is to save in a high-cost area then move to a cheaper area.
Amazon also isn't a restaurant, and while they do sort of sell groceries through Whole Foods and Amazon Fresh, those are again priced locally.
All different business units.
I don't know. I know there are a lot of people who want to work on the OS source code, given the chance, but need some hand holding in the beginning. Companies in general are not willing to give them the chance, because they don't want to hand hold them.
It is my opinion that developer ability follows a Pareto distribution, like the 80/20 rule where 80% of the work is done by 20% of the people. The job market is more liquid for those who are extremely productive, so it’s pretty easy for them to get a pay rise of 30% by switching companies. In the worst case you can often come back with a promotion because, like many companies, Microsoft is more likely to promote you when trying to poach you back. Doing a 2-year stint at Amazon was quite common. The other problem is that the process is iterative: when your best people leave, not only are you getting paid less, you are now working with the people who couldn’t easily switch jobs. You start being surrounded by incompetence. Stack ranking, which I hear is still done unofficially, also means that you put your promotion and career in danger by joining a highly productive team. So it is rather difficult to get highly productive people to work on the same team.
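For what it's worth, the 80/20 shape corresponds to a Pareto distribution with shape parameter α ≈ 1.16; a quick simulation shows the claimed concentration (purely illustrative, not real productivity data):

```python
import random

random.seed(42)

# Sample "developer output" from a Pareto distribution. alpha ≈ 1.16 is the
# shape parameter that yields the classic 80/20 split in expectation.
outputs = sorted((random.paretovariate(1.16) for _ in range(10_000)), reverse=True)
top_20_percent = outputs[: len(outputs) // 5]
share = sum(top_20_percent) / sum(outputs)
print(f"top 20% of devs produce {share:.0%} of total output")
```

With a heavy tail like this, the sampled share bounces around 80% from run to run, which is the point: a handful of people dominate total output.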
Being paid less, being surrounded by incompetence, and being forced to engage in constant high stakes politicking really sucks.
Otherwise as you said the only way is to offer the best compensation so that people don't leave. But again those people probably would leave for different reasons (culture e.g.).
And the fact that it's impossible to poach people from companies offering a higher salary than you do. Unless you give them something more, like better conditions, or "mission", or the idea to work on something cool, but I don't think any of those apply to Microsoft.
But the actual issue is that if you underpay people they will not feel respected and valued so they will either not be motivated or leave. So you cannot pay below market, but you do not need to pay FB salaries either.
Also, compensation is a sign of respect and influences motivation. If you position yourself lower in the market, there is no reason to deliver top results for less money, correct? This attracts mediocrity, especially in management, and slowly kills companies. Usually there is no way back: no large company can replace its entire management at once, and the mediocre ones will reject new, better ones.
Just go watch a few recordings on their YouTube channel.
IIRC .NET was banned from core Windows components after Longhorn died, but it's been 20 years. .NET is fast now, and C++ is faster still. Externally developed web frameworks shouldn’t be required for Windows.
Microsoft has produced some great technology and when I was last there I was definitely focusing on getting as much of the good stuff out into open source as possible.
Back in the early V8 days the execs imagined JavaScript would keep getting exponentially faster. I tried to explain that with a similar investment, anything V8 could do, dotnet could do better, since we had more information available for optimization.
Windows team even refuses to have managed bindings for DirectX, like Apple and Google do on their platforms.
Managed DirectX and XNA were pushed by highly motivated individuals, and lasted only as long as they stayed at Microsoft.
DevDiv is a "here C++ rules!" silo; even Rust adoption is being widely embraced at Azure, much less so on the Windows team.
Basically you have tight OS integration vs developer friendly cross platform.
I think Microsoft’s framework chasing has been a betrayal of that philosophy. Internal divisional politics have been major drivers behind fracturing and refusing to unify the stack and its UI approach, and without clear guidance from the Office team the direction of the entire platform's UI is opaque. Short-term résumé and divisional wins at the expense of the whole ecosystem.
A developer-centric platform would let developers easily create platform-specific UIs that look modern and normal. As is, the answer to how to ‘hello world’ a Windows desktop app is a five-hour “well, akshully…” debate that can reasonably resolve to using Meta’s stack. “VB6 for the buttons, C++ for the hard stuff” is a short conversation, at least.
my Windows API knowledge (essentially: just Win32) is still exactly as useful as it was then, having missed the 7 or 8 different UI frameworks in the interim
Since Vista most newer APIs are done in COM, or WinRT nowadays.
I've heard a Microsoft executive talk about win32 as legacy that they want to replace. I don't think that's realistic though, it's probably the last piece of technology keeping people on the platform.
https://learn.microsoft.com/en-us/uwp/win32-and-com/win32-an...
Win32, the C API, has been stagnant since Windows XP, other than some ...Ex and ...ExN kinds of additions.
As mentioned above, the new APIs are mostly delivered as COM, occasionally with some .NET bindings.
There is still a silo trying to push WinRT now on Win32 side, although given how they made a mess of the developer experience only those with Microsoft salaries care about it.
This oldie shows some of the background,
https://arstechnica.com/features/2012/10/windows-8-and-winrt...
Smells like Microsoft was trying to create APIs based on assumptions versus a 1:1 method that exposes managed code and hides unmanaged.
WinRT can be avoided if you don't do any modern stuff like the new context menu, WinUI, or Windows ML.
UWP as a separate subsystem, yes, is deprecated, and no one should be using it, although Microsoft was forced to introduce .NET 9 support on UWP because many refuse to move from UWP to WinUI 3.0, given the lack of feature parity.
Now, when WinRT was made to also work on the Win32 side, it brought with it the concept of package identity, which forms part of the whole UWP-style app isolation (similar to Android), now on Win32 as well; hence MSIX.
https://learn.microsoft.com/en-us/windows/win32/secauthz/app...
On the specific case of the context menu, it depends on what is being done,
https://blogs.windows.com/windowsdeveloper/2021/07/19/extend...
You can work around app identity when using the suggested alternative of unpackaged Win32 apps with sparse manifests.
I think MFC is now long-time dead and buried, but at the time I liked it despite the ugly macros.
But anyways, it is not the problem. The problem is just that Microsoft today is doing a terrible job.
The best example, I think, is the control panel. Starting from Windows 8, they changed it. Ok fine, you may like it or hate it, but to be honest, it is not a big deal. The problem is that they never finished their job, more than a decade later! Not all options are available in the new UI, so sometimes the old control panel pops up in a completely different style with many overlaps in features. And every now and then, they shuffle things around in hope that one day, the old control panel won't be needed anymore.
If you make a change, commit to it! If you decide to replace the old control panel, I don't want to see it anymore. It is certainly easier said than done, but if you are a many-billion dollar company and that's your flagship product, you can afford to do hard things!
Using a web engine to build UIs is fine too. As an OS-wide component, a web engine is not that big in terms of resource use, if properly optimized. The problem with Electron is that every app ships with its own engine, so if you have 10 apps, you have 10 engines loaded. But if everything uses the same engine, and apps don't abuse it, then the overhead is, I think, almost negligible. It is rare not to have a browser loaded nowadays, so system components can take advantage of this. But again, you need to do it right, and that takes skill, effort and resources.
UX was fine in the Windows Forms days, and WPF was a large step forward (responsive layouts, etc...). The problem was after that it all fell apart with Windows 8 and the attempt to switch to Metro, followed by the Windows Store fiasco and attempting to move to a sandboxed application model.
It all comes down to Microsoft's failure to adapt to mobile/tablets in so many ways. Which is kind of hilarious, because they had some really cool tech for the time (the PocketPCs were pretty fun back in the day, before touch came along).
Because web stuff utterly sucks for making UIs on the desktop. Microsoft either doesn't know this (bad sign), or is choosing to use the trendy thing even though it'll make their software worse for customers (a worse sign). Either way it's a very bad look from MS.
Sadly, I loved Azure Data Studio despite its being afflicted with Electron, but it became so bug-infested they had to completely abandon it.
I attempted to use WinUI 3, and could not even get PNGs to render the colors correctly, no matter what setting I tried.
Then I gave Tauri a try, and everything worked out of the box, with the PNGs rendering even better than in the Windows Photos app.
Building the UI was much easier, it looked better, and you get to build the "backend" in Rust.
Nothing about this sucked.
How long did it last? Ironically, it still gives me the shits because you can't select text on Netflix's front end.
Building a macOS 26 only app in SwiftUI today is a great UX, just as fast as AppKit.
But it takes quite some effort to turn an iOS SwiftUI app into a real macOS experience. Though most macOS optimizations help for iPadOS as well.
I'll take AppKit -> SwiftUI over Win32-> windowsx.h -> VB6 -> MFC -> ATL -> WTL -> WFC -> WinForms -> WPF -> WinRT -> UWP -> WinUI3 -> MAUI.
Even with all that Microsoft still went outside and used React Native for the start menu and Electron for the Visual Studio installer and Visual Studio Code.
It’s been this way for over a decade. The year of the Linux desktop was 2009; the world is only just catching up.
Yeah, that’s a misconfigured system. I bet you can fuck up Linux enough to get a similar experience.
I’ve always been using Windows and the only time I ever had to wait that long was around the Win98 times on slow hardware.
After login, I can instantly use everything on Win 11, and the only delay is a bunch of apps starting (that I chose to start on boot).
And I say this hating everything about Microsoft and Windows. That phone clicked just right with the tile design and overall usability. Of course, MS having pulled the plug, it's basically a DRM brick now.
MeeGo from Nokia was pretty amazing as well and I'm sure it could have launched Linux phones into actual competitors to iOS and Android - if only Microsoft and Elop didn't manage to kill Linux at Nokia.
The game was broken from the start. Microsoft had no chance.
If it wasn’t for 3rd parties sunsetting their apps, there would have been no reason to give it up.
Despite being a highly underpowered dirt cheap phone it was incredibly smooth and fast to use.
Unfortunately it was a big ball of mud in mismanagement.
One can only imagine what the product managers of like .NET think of all this.
At least in Windows 10, there was even still the occasional Windows 3.1 file picker hanging around in the really dusty locations
You know, like KDE Plasma in 2026.
https://www.windowscentral.com/microsoft/windows-11/after-30...
If I type "Visual Std" instead of "Visual Stu" it goes to the Bing results.
Alternatively it shows No results if you disable Bing in the Search settings found in the top right meatballs menu.
I also would expect fuzzy search by default instead of typos sending users to Bing.
When I use the Windows key to open the Start menu I cannot reproduce this, as eg. Win + E opens the Explorer instead of the Start menu.
It does not appear on my machine as if this could possibly happen when opening the Start menu during regular use. Can you reproduce this on your machine?
> correlates with missing the first letter off

Intended? Still, that shows the issue of using fuzzy search for Bing but not for programs. Local items should take precedence. A typo is far more likely than a web search, especially when the web search ends up suggesting the intended application.
Did no one think of that feedback loop? If the web search is suggesting an installed app, then that installed app should be prioritized.
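The fix seems almost trivial to sketch: rank fuzzy matches against locally installed apps first, and only fall back to the web when nothing local clears a similarity bar. (The app list and cutoff below are made up for illustration; this just uses Python's stdlib matcher.)

```python
import difflib

# Hypothetical list of locally installed apps, for illustration only.
INSTALLED_APPS = ["Visual Studio", "Visual Studio Code", "Word", "Excel", "Paint"]

def search(query, apps=INSTALLED_APPS, cutoff=0.6):
    """Return local fuzzy matches first; fall back to a web search
    only when nothing local clears the similarity cutoff."""
    local = difflib.get_close_matches(query, apps, n=3, cutoff=cutoff)
    if local:
        return ("local", local)
    return ("web", [query])

print(search("Visual Std"))     # the typo still resolves to local apps
print(search("weather radar"))  # nothing local -> web fallback
```

Even this naive ranking would send "Visual Std" to Visual Studio instead of Bing, which is the behavior users expect.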
But you don't. So it doesn't.
(I've pinned Visual Studio to the start menu.)
---
Type something in the Start menu
Top right meatballs menu button
'Search settings'
'Let search apps show results' -> Off (or disable only Bing)
---
I don't know about the Home edition.
You can also launch that Settings page by running in powershell:
Start-Process "ms-settings:cortana-windowssearch"
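If you would rather turn the web suggestions off outright, there is a documented Explorer policy registry value for it. A .reg sketch (value name per Microsoft's Windows Search policy documentation; apply at your own risk and sign out/in afterwards):

```
Windows Registry Editor Version 5.00

; Disables Bing/web suggestions in the Start menu search.
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\Explorer]
"DisableSearchBoxSuggestions"=dword:00000001
```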
Or just open 'Settings' and, in the left navigation, 'Privacy & Security' -> 'Search'.

Then I hear that now Ctrl+Alt+Delete is a webview. It's difficult to believe. Do you have a reference?
how the OS implements what is displayed is irrelevant
windows has all kinds of virtualizations today, it can literally run web views in separate (invisible) VMs for security purposes
I've noticed Microsoft has introduced things like programs hijacking the screen (e.g. the first launch of Edge, even if the launch was unintentional), and they have been making it increasingly difficult to make a local account on installation (even in the Pro version). Things like promotions for Xbox whatever popping up while I'm at work also rub me the wrong way. Of course they don't know I'm at work, which is all the more reason not to do it!
As an operating system, I would rate it as fine. Compared to Linux, it appears to have performance issues in some areas, with file access being the main one I notice. They have made progress in some areas (improved terminal, winget for software management). Compared to the Windows of 20 years ago, the base operating system appears to be much better. But all of that means little when your main goal is for the "operating system" to get out of your way and let you use what matters.
With SwiftUI you’ve been able to pick and choose where to integrate it over the years, it’s not like you had to go whole-hog.
I use both OSes daily and neither is remotely laggy; both look nice, support all my hardware and software, and I don't have to be surprised or spend hours downloading drivers to make things work.
I've always wondered what things would be like if the Microsoft breakup had gone through. I really do think personal computing would be better off, and the people involved would probably have even more money to boot.
I've worked with all the major GUI frameworks, from MFC to Qt; they all suck compared with React/Vue.
I remember when people argued that, because the time spent running an app was so much greater than the time spent developing it, one should be more conscientious about a user's time than a developer's.
After all, wasting a minute of time from 20 million users is 38 man-years of lost life. Doing that just to save a developer a week or a month is ethically troubling.
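The arithmetic checks out, roughly:

```python
# One wasted minute for each of 20 million users, converted to person-years.
wasted_minutes = 20_000_000
minutes_per_year = 60 * 24 * 365  # 525,600
print(round(wasted_minutes / minutes_per_year))  # ≈ 38 person-years
```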
Of course, people also upgraded their computers a lot less frequently and you'd publish minimum machine requirements for software which probably made it easier to make such arguments as you'd also lose customers if software was slow or had minimum hardware requirements a lot of people didn't have.
That largely went out the window with web developers where users were just as likely to blame browser makers or their ISP for poor performance. Now with app developers and OS makers doing it, I guess there's just so many users at this point that losing a few with older hardware just doesn't matter.
Every single web or mobile app does its own custom thing nowadays. As a user I couldn't care less how it's implemented; what I want is consistency in behavior and style across the board.
It feels like this has been completely lost, even on platforms like mac where consistency used to be important.
I'd take MFC everything over random behavior if I could.
There are two kinds of consistency: across apps within a platform and across platforms within the same app. As someone who uses multiple platforms regularly, I have forever been annoyed when eg keyboard shortcuts change when I switch to a different computer, especially when I’m using the same app.
Apps like Discord, Spotify and VSCode are consistently the most pleasurable to use because they are largely the same.
For a unique piece of hardware like the old iPod, it made more sense to do your special custom UX as a unified product. But we’re talking about general purpose computers. The ”platform” shouldn’t be special imo, it should simply be predictable and stay out of the way. They mostly provide the same thing, like copy paste and maximizing a window, yet have different controls. This differentiation adds no value, at least to me.
Even the most complete “UI frameworks” on the web are full of holes, leaving you to build a patchwork monster out of a laundry list of third party widgets (all of which themselves are full of shortcomings and concessions) or build your own.
As an aside, this gripe isn’t exclusive to the web. It’s a problem with many others such as Windows App SDK (aka WinUI) and Flutter, among others. At least for the things I build, they’re unsuitable at best.
Late millennials and gen Z have been spoiled by declarative, reactive frameworks that work identically whether you're doing a local UI or the Web, and the tools (for example Figma) that have grown up around these frameworks. Using C++, Objective-C, or even Swift will be just fine for a personal project, but if you're talking something that needs to be maintained and refined over the long term by a team, you will have a much worse time finding people competent in those languages than in JavaScript+React+Electron.
This is also one of the reasons behind rewriting everything in Rust: C is so dangerous, people who don't already know it inside and out are unwilling to touch it. Virtually all of the younger system developers are already working in Rust, and would vastly prefer it over working in C given the choice, so keeping a C project maintained has gotten a whole lot harder.
I'd love to know where you get your statistics from.
FYI, as an anecdote, I am 'younger', in the sense of 'only recently joined the workforce', and I write 100+ lines of C and C++ a day, both at work and in side projects. Haven't touched Rust once, although I would like to get into it.
And funnily, the one UI framework I did use at work is Avalonia, which is strongly inspired by Windows Presentation Foundation.
Declarative UI has its upsides, but it’s hardly a panacea. There are places where it’s a straightforward dramatic improvement and then others where it’s an awkward contortion at best. Reactivity can be great but works in imperative setups too.
The explosion of popularity of front end web frameworks comes down almost entirely to two things: accessibility and commodity talent. It has a low bar to entry and JS+React is the closest the industry has come yet to achieving its undying dream of cheap, easily replaceable, interchangeable developers. In most other aspects it’s objectively worse than alternatives.
There was a cross-platform Qt tool, running on macOS, Windows, and Linux, for debugging and updating the firmware of an embedded platform solution. macOS and Linux were both quick to develop for. Windows needed more work, plus an abstracted write-management layer, because the application was bringing the OS to a screeching halt while writing debug messages to a SQLite database. The write issue was Windows-only. HTML pages/reports were saved into the SQLite database and were viewable within the application. This was all packed into a single-file executable, so nothing had to be installed; it was just copied to the computer and run.
Often low-end hardware is sold in product solutions, and frameworks like Qt are better suited than HTML5 to making the end user happy with load and response times. The only reason I find bloated frameworks being used on such hardware is that the developer only understood one programming language and one UI framework. The former developer whose job I took over jumped ship because he did not want to learn WPF and only knew WinForms.
Qt, HTML5, React, WinForms, Gtk... are all tools in a toolbox, and each has its proper usage. Hell, if I ever make an iPhone application I will be learning Swift and the Apple frameworks for the task.
If you need a lot of graphical elements and customization to get the look and feel you want, then yeah, nothing really beats html/css/js for both its flexibility and its available ecosystem.
But if what you need is an application with a button that does magic things when you push it, or a text box or table that allows for customization of the text color, then all the other types of UX frameworks work just fine. You just can't expect to do something like make a pretty chart.
Now we are talking about entire apps being built with that stuff, down to the window border (or lack of it). It's impossible to have a consistent-looking and consistent-working OS with this approach. It's impossible to share code between these things and the actual native apps, and often things have to be written from scratch and end up using 10x the memory of the native solution.
Not very useful because you quickly realize you mostly obscure your desktop with actual applications you want to use on your computer.
Why they thought it couldn't be done with the .NET stack they already had (this was after the purchase of Xamarin and Blazor becoming a thing, mind you) still baffles me.
I honestly think that has way less to do with Microsoft, more of a representation of "software engineering" practices these days.
For example, Gnome Shell has a bunch of JavaScript in it, GTK has layout and styling defined in some flavour of CSS, etc.
I'm of the opinion that if you start writing OS userland in either JavaScript or Python (or both), you should be fired on the spot, but I don't call the shots.
Most technical decisions aren't really driven by what makes a better end-user experience or a better product; they're mostly driven by the convenience and familiarity of substandard software developers, mostly and primarily with a web-slop background.
Compared to Windows it's of course absolutely unreal.
KDE Plasma, which is in my opinion the most advanced desktop environment, is written with Qt QML, which is JavaScript. There are advantages to that over C++, namely that your session won't simply crash.
(While you can use some JavaScript from QML, such applications still have a C++ core. QML applications can still crash. And there is no DOM with QML, no browser overhead.)
The software industry has always had more juniors than seniors so this issue of juniors calling the shots is not a new one but it does feel like it's been getting worse and worse... Now it's basically AI slop vibe coders calling the shots about coding best-practices.
Neglecting the fact that almost everyone else is doing similar things.
> For example, Gnome Shell has a bunch of JavaScript in it, GTK has layout and styling defined in some flavour of CSS, etc.
What GTK is doing isn't really any different than how many UI framework work and have done so for quite a while now.
Almost every desktop UI toolkit/library/framework in the past 15-20 years has the following:
- Markup interface for defining the layout. If they don't have that they have a declarative way of defining the UI.
- Some sort of bindings for popular scripting language that hook into native code.
- Some sort of styling language that isn't that different from CSS.
This has been the norm for quite some time now. It works reasonably well.
Furthermore, there isn't much difference between what desktop developers are doing and what web developers are doing.
> I'm of the opinion that if you start writing OS userland in either JavaScript or Python (or both), you should be fired on the spot, but I don't call the shots.
Why? I find Gnome works really well on Linux. I have a pretty nice desktop environment after adding two extensions (Dash To Dock and App Indicators). Gnome runs well on relatively ancient hardware I own (2011 Dell E6410) with a garbage GPU (it isn't OpenGL 3.3 compliant). It actually performs a lot better than some other DEs that are 100% native.
JavaScript is indeed a slow language. However in Gnome that isn't the bottleneck. People have been making UIs with JScript (basically JavaScript) using WSH back in the 90s on Windows 98.
> Most technical decisions aren't really driven by what makes a better end-user experience or a better product; they're mostly driven by the convenience and familiarity of substandard software developers, mostly and primarily with a web-slop background.
What makes a better end user experience has nothing to do with any of this. There has to be an incentive to create a good end user experience and there simply isn't in the vast majority of cases.
In many cases it doesn't really matter what the tech behind something is. Most popular programming languages and their associated frameworks all work reasonably well on machines that are over a decade old. I am running Discord on a 15-year-old dual-core laptop and it works "ok".
So I've been hearing this sort of complaining about "modern devs" for almost 20 years now. The issues I've faced with doing quality work have almost always had to do with how projects are (mis)managed.
> OS manufacturer can’t be bothered to interact with their own UI libraries to build native UIs
i wonder if they ever thought about using copilot to fix that (insert thinking-face)
But if they don’t use web tech it would be too expensive to build the start menu in a way that works cross platform!
Oh wait
And that's just one example. I curse Microsoft every day.
Newer versions of Windows seem to add latency where there was none before.
Yet somehow I am OK with gnome shell.
I am considering writing software specifically to feed random junk into Microsoft's telemetry cloud. I will call it "fusk-MS" and it will send random searches to Bing and fake screenshots of a linux desktop to copilot ten times a second until Microsoft stops acting like such a jerk.
It doesn't help that their own UI libraries are unfinished, unpolished, hot garbage.
I commend them for using React, though. Like it or hate it, React is the closest thing to one true framework for everything.
But no. We cannot have nice things. Microsoft has lost the ability and management capability to release nice things. For some reason, Microsoft is trying to reinvent the wheel with UWP (aka WinUI2) and WinUI3. They are trying to replace everything with these half-arsed libraries when very complete, well-thought-out, future-proof stuff already exists in Windows' DNA. They are shitting on the work of their earlier engineering.
It is inconsequential, until it isn't. In front of me I've got a 2017 lenovo thinkpad running the latest Fedora+KDE, as well as a 2025 HP elitebook running "last corporate-friendly-stable version of W11". I can pop open the lenovo, key in my session password and hit enter, and I'm instantly productive, with shortcuts like meta+E giving me a working file explorer within milliseconds. On the Windows' side, there are several seconds of delay between typing my password and the on-screen feedback. Once finally unlocked, I've got a laggy environment where OS-essentials like the start menu and file explorers take whole seconds to render and respond.
It's a shame, if you ask me, that a dozen-or-so CPU and "general hardware" generations between those two devices went to waste due to poor software engineering and practices. And I'm not even talking about quality/reliability, which is another sore point for Windowses of late.
I even ran Windows 10 on Thinkpad x240 a couple of years ago, it also ran fine.
I think this is a real thing and I think a combination of MS demanding everyone get new hardware and Valve really polishing a lot of linux has gone a long way to get non-technical users to start seriously considering linux.
It's a huge added bonus that old hardware simply flies with linux. I have a 5 year old laptop that feels about 10x more responsive since I killed the windows install and put linux on it.
And I know that laptop will continue to fly because, unlike windows, it's never going to get any sort of serious bloatware added on as I update it.
Given how rough and uncertain the economy is, this creates a large group of people who can't or aren't comfortable upgrading their computer, but at the same time don't want to be stuck on EOL Windows 10 forever either.
It’s literally the ads and bloatware. Windows is horrible unless you are technical enough to strategically disable the bloatware, and keep on disabling it as the updates continually reenable it. And if you are technical enough to disable it then Linux isn’t a problem.
Microsoft really is enterprise, cloud, and GitHub / AI tools. Windows for personal users is harvesting as much cash as possible from boomers and gamers, but the gamers are leaving en masse now. Software professionals only use macOS or Linux unless they are a MS shop that has to use Windows stack.
It is an incredible shift for those of us who have been around forever. But it’s a true look at how seemingly impossible things shift, bit by bit, until all of a sudden it all washes away. Never believe the tech companies on top today can’t be beaten. It can and will happen someday.
I hope more companies and MBAs open their eyes to this: user-hostile changes cost more in the long run than respecting users and building good products.
Also currently it helps to stand out from the sea of crap products.
Play the long game. Make good products. Bring joy and positive experience to peoples lives. Sleep well at night.
Indeed, quarterly earnings make people think short term and disregard the long term, focusing on growth above all else.
What we’re seeing instead is open-source becoming the real alternative. People used to look for other proprietary tools, but now open-source options are getting good enough, and more people are building personal software that fits their needs instead of bloated do-everything apps.
That’s the shift. Open-source is rising, and I don’t think these companies can reverse course fast enough.
All that to say that I am interested enough in a Linux machine, but don't feel I have the requisite knowledge to drive one.
I would say that’s absolutely the most normal gamer way of playing PC games. As someone who has mostly given up on playing games on a computer and prefers consoles, I’ve thought of doing the same thing.
I agree it’s really impressive that lots of people have decided to try Linux, far more than I remember ever before.
But I’m worried this is “the moment“. Possibly the best shot that’s gonna happen for a long time. And if people find things aren’t as ready as they think from what they hear they’re going to be burned and they’re not coming back. The next time around not only will they not come, they’ll push other people away from trying.
I don’t know if we’ve reached that magical inflection point or not, and I think some people are looking through rose-tinted glasses again. The momentum really has never been this strong, but it’s not a done deal.
But my sense is that the sorts of games that really benefited from a desktop system--primarily Windows--like serious simulations and resource-allocation games, are increasingly fringe.
Certainly there are games on Linux today but I also wonder if a lot of people won't decide, as you say, that consoles are just easier.
Still there are a huge number of games from indies or small publishers that may not make it to console but would still work fantastically with a controller. Or maybe they’re successful and they will make it over, you just don’t want to wait the two years.
Those kind of games are the ones that make me consider getting a Steam Deck.
In fact, Linux is much easier to run on somewhat older hardware, because drivers are often a bit slow to land and Ubuntu and its derivatives always lag in kernel versions.
Older hardware becoming more valuable because of price hikes doubly benefits Linux.
As I wrote on HN just yesterday, I've been working on the Linux desktop for 20 years and the momentum has never been higher. 2026 will be fun.
Thing is that explicitly asking for money works, it gets results. If you can get people to pay money to watch you screaming at video games on Twitch, you can definitely get people to pay money for working on useful software.
That, plus (what feels like) a lot of recent advances in Linux. When I tried it... 2-3ish years ago? I recall e.g. fractional display scaling being basically nonfunctional. But when I tried again early 2025, it pretty much Just Worked (arguably even better than it did on windows), I just had to manually enable wayland. Pretty sure even that's just the default nowadays.
Which basically sums up my personal windows -> linux pipeline: bought a steam deck, was impressed at how well it ran my steam library; had my old laptop finally die on me, ran my life off the steam deck for a while; decided to eventually build a new machine, and figured I might as well try installing linux from the get-go. Everything worked fine on the first try, and I ended up not even installing windows.
certainly within my friend groups, I'm seeing more and more people entertaining the idea of making the switch as well. Admittedly, that's primarily "tech-savvy" folks though.
Proton was good, but SteamDeck did 2 things:
* put it in the public eye that, hey, Linux is good enough for the vast majority of games and gamers
* more importantly, *made developers care* about their stuff working on Steam Deck. And if it works on Steam Deck, very good chance it will work on <generic linux distro> just fine
I had Dillo for a web browser, a stripped down version of VLC that could play 360p Youtube videos without issue, downloaded via Youtube-DL. I had XMMS which looked just like Winamp, and Sega/Nintendo emulation and even Duke Nukem 3D. For programs I had epub/pdf/djview readers, xpaint which is like classic MS Paint, feh as a hyperlightweight all purpose image viewer and background manager, a super lightweight RSI break popup program, and even a fully functional web server stack. It also had a window manager (JWM) that handled multiple desktops more intuitively and effortlessly than Windows does now.
Good for checking which photo of a dozen is clearest, while zoomed in 800%.
- macOS is kind of crapifying, with Liquid Glass UI, iCloud services pushed down your throat…
- Windows 11…
- (some) Europeans are getting concerned about their complete lack of sovereignty over the tech stack, and Linux is one way to reclaim a small part of it.
- LLM agents like Claude Code have lowered the bar so much for any setup operation and bash commands.
All in all, it seems like a good time for Linux to broaden a bit its adoption.
My "year of the Linux desktop" was in 2010, because even then everything was much, much faster on Ubuntu. (It helps major browsers were shipping 64-bit versions for Linux only, but Minecraft simply did not run on my laptop under Windows).
Does anyone else feel kind of sick (something like pity?) when they see people using Windows 11? Right click menus which have a loading spinner, advertisements littered throughout, and headlines from right-wing tabloids spammed in news widgets.
These past six years have been absolutely bonkers incredible for Linux, and it can all be attributed to Microsoft shooting themselves in the head with Windows. Proton work started after Windows 8 and really became usable in late 2019. Now we're seeing something again with Windows 11. It's awesome, hope it sticks.
It can’t all be attributed to Microsoft. There have been huge efforts by many parties to make this happen. Folks working on the Kernel, desktop environments, distros, applications, tooling, advocacy, and more.
I believe people who say they are being pushed away from ms because of disillusionment with windows 11. But there also needs to be someone to pick up the ball after it was dropped — and those people deserve equal if not more credit
Microsoft is one of Valve's direct competitors, and Valve is totally dependent on Microsoft. Among the notoriously poorly-received changes in Windows 8, Microsoft also started to clamp down on who can run software. Valve saw the writing on the wall and released their first Steam Machines. When those flopped due to the state of Linux gaming at the time, they started pouring resources into Proton, which was distinguished from WINE in that Valve would develop Linux-specific patches.
For sure, Valve would have nothing if WINE hadn't already done the bulk of the work, if Vulkan didn't exist, if Linux didn't exist, etc. But there's a world where Microsoft decided not to rock the boat with Windows, and in that world, Linux gamers would almost exclusively be dual booting.
I'd argue that it's drips and papercuts all over. Everything is trying to extract rent, and that makes things unreliable enough that even basic users are starting to notice.
Um, can't connect to the Internet? Nope, you can't play a game on your machine, and you may not even be able to log in. Service hiccup? Booted from whatever you were doing, because we can't extort you if we leave data on your machine. And, oh, if you have the nerve to complain, you ungrateful serf, we will kickban you with no recourse. etc.
And this is before we even bring the AI bukkake into the picture ...
Now, two in five PCs worldwide are running Windows 10, an unsupported OS. What are the user's options? Either buy a new PC, switch to Mac or run Linux.
[1] https://www.notebookcheck.net/2025-could-finally-be-the-year...
When prices are going nuts and the economy is tanking the option that doesn't cost you money starts to look a lot more appealing, and for some the first isn't even an option; they're completely priced out of the new market for the foreseeable future.
I predict a rise in antivirus company share prices.
If Apple do make the rumoured cheap A-series based MacBook, it could be a hit.
[1] : https://betterbird.eu/
MS fucked up
I just bought a laptop that came with Fedora installed. This isn't anything new, but what really blew me away is that everything... just worked. No tinkering. No alternative modules built from source (hopefully with a good DKMS script). Everything... just worked. I'd blocked out a few hours to get everything working in a satisfactory state and... I had nothing to do, really.
And when I say everything I mean EVERYTHING, not just the features that were significant to my own use cases. Mind-blowing, if you think about it.
For a laptop user who likes to game, you’ll definitely encounter some issues based on my experience. Better than it was 2 years ago, but it’s not a seamless experience (laptops!!) that you’d expect from posts like these.
For a Linux savvy user, it’s definitely worth the switch. I haven’t had any ads in months and it’s magical
Things are improving, and I assume this will be fixed in the next few years. That's the good thing about it: annoying bugs on Linux do tend to get fixed.
I had the same problem on my new Yoga laptop with Fedora and an Intel BE200 Wi-Fi card.
The only exception is when we got a really new batch of Lenovo P1 laptops for work, and the patches likely were not fully merged yet. So as long as you’re not getting the first batch it is generally pretty good.
This is true. I've been using Ubuntu since 2006, but still see issues with:
- Wifi: Ubuntu 22 didn't work out of the box with a 2014 MacBook Air.
- Bluetooth: maddening trying to set "listening" mode instead of headset mode on JBL earphones - it seems to choose randomly every time it connects, and the setting isn't exposed in any UI.
- Sleep: I don't think I've ever seen sleep/suspend working reliably on a Linux laptop, to the point I don't know the difference between the two. I have one thinkpad which never wakes from sleep, and also never fully shuts down on system shutdown without a long press of the power button.
I accept all this so that I don't have to wait seconds for basic UI things to happen, like switching virtual desktop (osx) and opening the application launcher (windows).
This year I got a T14s Gen6 AMD as a replacement, and it's essentially unusable on Debian-based distros (Ubuntu, Mint), but works fine with Fedora and with Windows.
On Ubuntu and Mint, X just locks up every 80 seconds or so, and I have to hard-reboot it (or switch ttys and restart X). Nothing in syslog, nothing in dmesg, nothing in X.org.log to show what might be going on.
I have never had any issues with any Linux-distro regarding WiFi. Most hardware I have used has been largely compatible even. Maybe I have just been lucky, but it seems there’s millions of us who are really lucky these days.
What has also changed from 2010’s is that the documentation like Arch wiki is a lot better. You can also ask an LLM to help you configure things - obvs the docs are better and safer - so if and when you do have a problem, there’s actually sources to help you fix it.
This has mostly been solved by either putting them in the nonfree repos or just the fact that WiFi hardware vendors aren't using such stuff anymore.
I still remember pulling firmware blobs for my Broadcom cards, then it magically worked fine. It was far from trivial and I think that's what caused a lot of people who tried Linux on laptops in early 2000s to turn away.
I can't remember the last time I tried a distro that didn't just work on a random computer with a random wifi but it has been several years now.
Nvidia cards on the other hand...last year I had to try about 10 distros before I found something that wasn't a huge pain in the ass.
This is why I don't use Windows. Early last year I paid for a copy of "Windows 10", and it didn't support most of the hardware in my laptop. Even plugging in a mouse I had to use keyboard shortcuts to let it load a "driver", and after that it still didn't support the scroll wheel. Wifi didn't work at all, and wired network was painfully slow. It did at least support FHD resolution in 24-bit colour but very slowly.
My audio interface was completely unsupported, my MIDI interfaces were completely unsupported. Eventually I gave up attempting to run it, wiped the laptop again, reinstalled Ubuntu, and went for Bitwig instead of Ableton, and I've had no problems since.
Maybe one day we'll see the year of Windows on the desktop, but this isn't it yet.
Turned me off Fedora completely.
Tried two other distros on the same machine right afterwards with no problems though.
Beside that though, I'm happy to have left Windows behind completely.
I've done more than a handful of major version updates since then, and almost don't bother to backup any more.
Now I have a Thinkpad T440p with a GeForce GT 730M dGPU, for which NVIDIA no longer provides drivers on newer Linux kernels, so I have to use the slower nouveau driver.
Ah, some things never change.
Back when X was XFree86, you had to create the X configuration without internet access.
Nowadays when the same thing happens, you can just grab the Internet phone sitting right in front of you and search away. Technology really has changed life.
It has improved greatly over the years. When I was using it relatively regularly in the mid-00's it still took a lot of effort to get everything to work.
But long-time users being amazed that buying a brand new linux laptop in 2026 'just works' says a lot about how far behind it is/was. PCs that 'just work' have been available for 40 years. That should be the starting point for any shipping product.
But that's the kind of product they're shipping, because that's the kind of people they're employing, and that's the kind of decisions they're allowed to make. It permeates everything.
Additionally, there is no reliable mechanism to do so, as doing it through Task Scheduler causes a race condition - will your script be allowed to run and finish before S0 sleep cuts power to it? You cannot be sure.
Additionally, if you got cornered into making an online account, Task Scheduler doesn't even work reliably with that (for tasks that require privileges, like turning off BT on lock and turning it on on unlock), so then you have to disable the online account Microsoft manipulated you into making. Of course the failure is silent, so you have to discover all that by yourself.
That is a driver issue, but Windows can also crash during S0 sleep because of its own updater failing to update some random app (like Microsoft Phone, whatever that is).
On Linux it's just not an issue. The script runs on events and is guaranteed to finish. Random updates at random times won't happen either.
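For example, here is a rough sketch of the event-driven approach. It assumes a GNOME session plus the `dbus-monitor` and `bluetoothctl` CLIs; signal names vary by desktop (KDE and logind expose different ones), so treat the specifics as illustrative:

```python
import subprocess

def classify(lines):
    """Turn dbus-monitor output lines into 'lock'/'unlock' events.
    GNOME's ActiveChanged signal carries a boolean:
    true = screen locked, false = unlocked."""
    pending = False
    for line in lines:
        if "member=ActiveChanged" in line:
            pending = True
        elif pending and "boolean" in line:
            yield "lock" if "true" in line else "unlock"
            pending = False

def watch():
    # Stream screensaver signals from the session bus (GNOME assumed).
    proc = subprocess.Popen(
        ["dbus-monitor", "--session",
         "interface='org.gnome.ScreenSaver',member='ActiveChanged'"],
        stdout=subprocess.PIPE, text=True)
    for event in classify(proc.stdout):
        # bluetoothctl is assumed here; substitute whatever your task is.
        subprocess.run(["bluetoothctl", "power",
                        "off" if event == "lock" else "on"])

# watch()  # uncomment to run on a real desktop session
```

The point is that the script reacts to the lock/unlock event itself and runs to completion; there is no scheduler racing a power state.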
It was faster to rg to search files, drop into WSL and run find for file name searches. The start menu was laggy, explorer was laggy (open up a folder with a couple dozen OGG files and it won't render for a solid minute). Mystery memory usage from privileged processes I had little control over. Once I realized that the one game I play (Overwatch) ran on Linux I decided to swap back.
I installed Linux Mint earlier this year and I've been extremely happy. The memory consumption is stable and low, and if something is broken I have the control to fix it. It just feels so much less hostile. This is largely possible thanks to the work Steam has done with Proton. The last real barrier is kernel level anti-cheat which prevented me from trying out this years Call of Duty. Oh well!
Fixed via the Everything app - instant search of any file in a nice resizable/sortable table
> if something is broken I have the control to fix it.
Instant search doesn't exist, how do you fix it?
This continuously drives me crazy on Windows and macOS. I am befuddled at the number of times where I'm searching for a top level subdirectory that starts with 'foo' but the search bar spins and spins..
Eventually I get fed up and just sort by name and perform an alphabetical visual search in meat-space.
I've been a SE for 25 years now, sticking with C#. Microsoft always built great tech platforms and left the missing 20% to the developers. Look at the .NET Framework (the old one), Windows until Win11, Office until 2025, and even Excel, which can't open CSV files because the delimiter is a region setting.
On one side I hated this attitude; on the other side it allowed and enabled developers to get their own businesses running - see JetBrains ReSharper functionality - Visual Studio up until 2024 was a mess without it...
I have found Beyond Compare to be very good on Linux, even on large files/directories.
Well said. I wonder what the kernel team thinks about it.
I have zero issues with the platform day in and day out with heavy workloads like Pro Tools and Unreal Engine devkit. Games run without stutter and issue, all my features are snappy, Explorer loads instantly, etc. Even search is performant and gives decent results. I have tweaked a few settings but nothing you can't find in settings menus.
I'm not sure a lot of the people having issues with a pretty damn stable platform are going to have a better experience on something they have zero familiarity with, and which isn't exactly going to be intuitive when things go sideways, as they undoubtedly will.
There is likely too big of a gap in "terminology".
For example, the file explorer startup is so "Instant" that even Microsoft officially added an option to preload the app to fix the delay. But if you don't notice / don't appreciate real instant, then sure, you won't understand the complaints. (or maybe your hardware masks it well enough)
Similarly, if you've never used Everything or a better file manager for search, you might get used to the bad search results and call them "decent", since you're not aware of how awesome it can be.
Win 11 seems fine to me. I do see Copilot appearing everywhere. I don't see ads from MS at all, though - sometimes my vendor driver-management software asks me if I want to extend my warranty. Not Win11's fault, though. Start menu seems fine, phone integration is nice, OS runs very stable (in the very early days of using Linux 20y ago I marveled at how much more stable it was than Win98! That gap is gone now as far as I can tell).
My suspicion: I am paying for M365 (or whatever they call it now) and so they don't advertise it (or anything?) to me. I don't see CandyCrush or other random things added to my machine. All seems OK.
I've read that Win12 will be subscription-based. Maybe I am personally already there. For now, M365 offers me good value- I use MS Office and OneDrive. But if this changes I can see the equation balance shifting and I will then change platforms again.
TMI, I left MacOS because of Gatekeeper and the inability to repair hardware. Before that I left Linux for work interoperability and regressions I saw on my personal mobile hardware. Neither were "bad", really, I have experienced different trade-offs among the three choices I have used. For now, Win 11 is working just fine for me, with no fuss.
> I don't see ads from MS at all
You can only pick one.
I personally run win11 for gaming, android for media consumption and proxmox for homelab and I think all of these systems are fine as is. They serve their purpose well enough.
My prediction is that steamOS (when it is released) will end up being the only mainstream Linux desktop because of its corporate backing. It would be interesting to see desktop Linux mimicking the android ecosystem, where different vendors provide a different skin on top of SteamOS.
Ubuntu, Fedora, SUSE, Pop!, Deepin (and the list goes on) all have corporate backing. Steam is a well-known consumer brand though, so that might make a difference.
I set up a lot of PCs and what has astounded me is how much less work it takes. Unlike with Windows, most of the defaults are fine. I don't have to scour through all the settings after a fresh install. I only need to install half as many apps. I don't have to run powershell scripts to debloat everything. And I don't have to worry about updates undoing all the changes I've made in the future.
---
I had a job interview yesterday, which happened via Google Meet.
Even though I use my desktop Linux workstation and Firefox 99% of the time for everything, my first instinct was to do this interview on a MacBook and Chrome, to avoid surprises and not look unprofessional if something doesn't work, which has happened in the past. Last year, when I was asked to share the screen during a daily, I had to say "um, I'm sorry, Zoom and desktop sharing don't work on my system."
But I thought I'd first do a test on my workstation, just to see if maybe I shouldn't be concerned anymore. I was sceptical.
The ideal scenario was that on my standard GNOME 48 / Wayland / PipeWire desktop I'd be able to use Firefox for this call, and AirPods, a Logitech webcam, and desktop sharing (5K ultrawide scaled at 125%) would just work with no tweaks whatsoever.
And it did!
I've been using Linux on the desktop for over 20 years (on and off, but mostly on) and I know how to hold my Linux systems, but the situation with Bluetooth audio and desktop sharing in previous years has been... spotty. I was less worried about AirPods — I switched to PipeWire ~3 years ago and so I know Linux audio has been rock-solid and pretty much solved already. But desktop sharing used to be hit-or-miss, highly dependent on whether you used X11 or Wayland, further complicated by the use of Flatpaks.
Since my test went well, I did the interview on the desktop machine. It went smoothly, with no surprises.
Therefore, I announce 2025 as the Year of the Linux desktop :)
It had Catalina on it and was completely unusable. Hovering on anything would bring up the spinner which would take a couple of minutes to resolve itself.
I tried reinstalling the OS, which didn't help. The top recommendation was to revert to Mojave.
Finally, after three days of struggle, I gave up and installed Linux Mint.
The difference is absolutely unbelievable. Even heavy applications like LibreOffice and Zoom are snappy.
Apple makes such good hardware. I felt really sad about the state of their software compatibility with older machines.
So, I don't know about the rest of the world, but I know one more person will be using Linux in 2026!
Mac OS X and Aqua weren't very well received at launch either.
A similar thing happened with the flat design of iOS 7.
Apple's pattern is initially going overboard with a new design and then scaling it back slowly like a sculptor.
I think they're happy with this method, even if things miss at first the big changes usually create a lot of hype and excitement for the masses.
The vast majority of users don't care about the finer things, Apple knows that the nerds can sweat it out until they straighten things out at which point everyone is happy in a hero's journey kind of way.
I just hope this pattern stays true and that this isn't an inflection point.
The blocker for Linux for me as someone who wants some level of reliability has always been fiddling with low level config, but now with Claude Code, low level config appeals!
I do heavily configure applications, but all of these are terminal-based nowadays.