Things like:
- If you can't respond to a UI event, wait until you can
- Menus should be tree structures
- Pressing alt should underline the hotkeys you need to access anything clickable
As well as just basic responsiveness and predictability. A 2000-era Windows application may not have been pretty, and may well have had several different styles all imitated from Office, but at least I knew what everything did, and when it was slow, at least it did what I expected.
This meant I could start the computer, log in, potentially start and use several applications, and only then turn on the screen. Nowadays that has no chance of working, because even to log in I need to press enter or click some button (which one depends on how I logged in previously, maybe) before I can even start typing, and doing so eats a random number of keystrokes while the damn login screen loads to do its one damn job.
We've lost design idioms, which is a huge tax on users everywhere. I've been mad about this for years: https://essays.johnloeber.com/p/4-bring-back-idiomatic-desig...
I personally love dense UIs and have no expectation of doing certain kinds of work on a low-powered device like a chromebook, phone, or bottom-barrel laptop. But if you're a company trying to sell products to a broad user base, you want to try to design in a way that works for those kinds of users, because they still might be end-users of your product. And there's a good chance that those platforms may be where someone first evaluates your product (eg from a link shared and accessed on a mobile device), even for the users who do plan on using more powerful desktop devices to do their work.
So instead we get these information-poor, incoherent interfaces (because it turns out proper cross-platform, cross-user design is much more difficult than just getting something that superficially works cross-platform for all users). I guess I'm writing this just to add that web and mobile have complicated things: more than just requiring their own distinct patterns, they each represent a distinct medium that products try to target with the same design. And because they're different mediums, it's like trying to square a circle.
The fact that there are multiple platforms for UIs* is a huge failure of the industry as a whole. Apple, Microsoft and Google could have had a sit down together at any point in the last 20+ years to push some kind of standard, but they decided not to in order to protect their gardens.
*: a standardized UI platform doesn't necessarily mean a standardized platform. Just standardization of UI-related APIs and drawing.
Any system that needs a straightforward UI for kicking things off, stopping them, logging them, and dragging data files into them... WinForms.
Bug-free, hardened by the test of time, works on Windows X, Y and Z.
Everything else is just consumer silver sprinkles, and involves faffing around with multiple config files and obscure layout issues.
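To be concrete, the whole shell fits on a page. A minimal sketch, assuming Windows Forms on any recent .NET (the Start/Stop handlers here are hypothetical stand-ins for real work):

```csharp
// Minimal WinForms shell: start/stop buttons, a log box, file drag-and-drop.
// Sketch only - the handlers just log; wire in real work as needed.
using System;
using System.Windows.Forms;

class MainForm : Form
{
    readonly TextBox log = new TextBox { Multiline = true, ReadOnly = true, Dock = DockStyle.Fill };

    MainForm()
    {
        Text = "Job Runner";
        AllowDrop = true; // accept files dragged onto the window

        var start = new Button { Text = "Start", Dock = DockStyle.Top };
        var stop = new Button { Text = "Stop", Dock = DockStyle.Top };
        start.Click += (s, e) => Log("started");
        stop.Click += (s, e) => Log("stopped");

        DragEnter += (s, e) =>
        {
            if (e.Data.GetDataPresent(DataFormats.FileDrop))
                e.Effect = DragDropEffects.Copy;
        };
        DragDrop += (s, e) =>
        {
            foreach (var file in (string[])e.Data.GetData(DataFormats.FileDrop))
                Log("queued " + file);
        };

        // Added first so the Fill-docked log yields the top strip to the buttons.
        Controls.Add(log);
        Controls.Add(stop);
        Controls.Add(start);
    }

    void Log(string msg) => log.AppendText($"{DateTime.Now:T}  {msg}{Environment.NewLine}");

    [STAThread]
    static void Main() => Application.Run(new MainForm());
}
```

Drag-and-drop, docking layout, and the message loop all come in the box - no config files involved.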
This is one that I hold my devs accountable for. No, I shouldn't have "put it in the spec", because it is the fucking spec.
I would argue that desktop is the platform for power users, and its future depends on them. The keyboard shortcuts, the micro-interactions, the window management -- this stuff is all important when you're using a system for 8+ hours per day.
Yet we risk desktop experiences becoming less useful due to the UI becoming "dumber" as we keep shoehorning websites onto the desktop. Website UI is dumb. It's mouse driven, keyboard is an afterthought. There's no consistency, and you have to re-invent the wheel every time to get the details right (almost never happens).
I think it's more like the OS vendors have stopped being operating system vendors, and are now - instead - vendors of eyeballs to advertisers.
The less the user is GUI'ing, the more they are just watching, placid, whatever else is on their screen.
For native apps to survive, they need to not be platform-specific. That doesn't mean web apps - which require a browser and all its responsibilities - but rather apps that are cross-platform, reliable, and predictable on all platforms, i.e. built on bespoke UI frameworks rather than the native ones.
This is attainable and there are many great examples of apps which are in fact, old wheels not re-invented, which still work for their particular user market.
I have the most respect for apps I can use on MacOS, Windows, and Linux - with the same hotkey/user experience on all platforms, equally - and the least respect for apps which 'only run on one of them', since that is of course nonsense in this day and age.
The cognitive load of doing a web app that can do all the things a native app can do, is equivalent to the load required to build a cross-platform app using native frameworks, so ..
Based on my experience, I would be quite reluctant to rely on any non-native cross-platform desktop UI framework that is not web-based. These tend to be either less performant, look outdated or are bug-ridden.
It is (1) performant (C++-based), (2) doesn't look outdated, and (3) isn't bug-ridden.
On Linux, Qt apps feel a bit off in GNOME, though you can never satisfy everyone as it's the wild west.
I think Qt also suffers from not really being anyone's favourite.
On the one hand, you have web developers who tend to not really appreciate the nuance of the desktop as a platform. They're not going to advocate for Qt, it's not CSS/HTML/JS.
On the other hand, you have native Mac developers who love Apple's toolkits (AppKit, maybe SwiftUI). They're not going to advocate for Qt either.
Lastly, you have native Windows developers who have been burned so many times they don't advocate for anything in life anymore.
Developing UIs without hot reloading is too painful.
- Qt Widgets worked fine, but looked like a piece of software made in 2013;
- QML looks stylish and is a very nice language, but had a lot of weird bugs.
Neither of these are issues I'd run into if I were to make a web app.
No. I want things like keyboard shortcuts to reflect the platform norms of where the app is running (macOS in my case). A shared core is fine, but the UI framework must be native to be acceptable. Ghostty is a "gold standard" there.
This is why most web apps are lowest-common-denominator annoyances that I will not use.
There are plenty of examples of cross-platform UI's surviving the hotkey dance and attaining user satisfaction. There are of course poor examples too, but that's a reflection of care, not effort.
I wonder if they ever stopped to think that power users are the ones that disable telemetry immediately upon install.
Being able to keyboard through menus as standard. Focus being deeply considered and always working as expected.
Compact UI elements -- in the 90s/00s we decided buttons should be about 22px tall. Then suddenly they doubled in size.
Firefox has nothing to differentiate itself from Chrome at this point.
Not only that, but for a time, Firefox seemed to be copying everything Chrome did, maybe as a way to stop the exodus of users. But people who wanted Chrome-y things were already using it, and people who didn't might as well, because Firefox was becoming indistinguishable from it.
God I wish Mozilla would be made great again. It's tragic how mismanaged it is.
Is it mismanaged? Sure, they spend a fair amount on administration. Sure, they spend about 10% on Mozilla Foundation stuff. But they still spend ~2/3 of revenue on software development.
And they're somewhat stuck between a rock and a hard place.
If they try to evolve their current platform, power users bitch. If they don't evolve their current platform, they lose casual users to ad-promoted alternatives (Chrome and Edge).
And they don't really have the money to do a parallel ground-up rewrite.
The most interesting thing I could see on the horizon is building a user-owned browsing agent (in the AI sense), but then they'd get tarred and feathered for chasing AI.
Part of Mozilla's problem is that the browser is already pretty figured out. After tabs and speed and ad blocking, there weren't any killer features.
Almost nobody chose Chrome. Microsoft had to change how defaults were managed because Chrome kept stealing defaults without even a prompt.
People use "the internet", they don't give a fuck about browsers. Firefox only got as high a usage as it did because of an entire decade of no competition, as Internet Explorer 6 sat still and degraded.
Chrome was installed as malware for tens of millions of people. It used processes identical to similar malware. It's insane to me how far out of their way lots of "Tech" people go to rewrite that actual history. I guess it shouldn't be surprising, since about a thousand people here probably helped make those installer bundling deals and wrote the default-browser hijacking code.
It should be a crime what Google did with Chrome. They dropped Chrome onto unsuspecting users who never even noticed when malware did the exact same thing with a skinned Chromium a couple days later. Microsoft was taken to court for far less.
How was Mozilla supposed to compete with millions of free advertising Google gave itself and literal default hijacking?
Power users are less susceptible to suggestion and therefore less profitable. They have largely moved to OSes that do not interfere with their wishes, allowing them to make their own choices about what they can or can't do/run (Eg. Linux).
On macOS, if you use standard NSResponder chain and menu items properly, you get Cmd+Z undo, text field navigation, menu bar keyboard access, and accessibility basically for free. The framework was designed around the assumption that users would become experts.
Web apps actively fight this. Every Electron app I use has broken Cmd+` (window cycling), inconsistent text selection behavior, and that characteristic 50-100ms input lag that you stop noticing until you switch back to a native app and remember what "responsive" feels like.
The sad irony is that making a power-user-friendly desktop app is actually less work if you go native, because the frameworks already handle the hard parts. Going web means you have to manually reimplement every platform convention, and almost nobody does.
*well, that seems to have been their goal in the past; nowadays it just seems like they've been trying to funnel windows users to their other products and forcing copilot into everything.
I think the world changed. "Power users" in the traditional sense use Linux and BSD now. Microsoft and Apple dropped them when they realized how lucrative it would be to dumb things down and make computers more like cable TV.
The web is not consistent itself. Lots of sites, and most web apps, invent their own UI.
Hopefully /s
I mean... well... responsiveness matters to me too, and I am impressed by such inspired productivity, but... I'm also confused. Why not turn on the screen - the monitor, right?
Now thinking about how gui lag might impact the sight-impaired, tangential as that is...
Anyway, the real point is that it's just easier to use something if you don't need constant visual feedback. Being able to use something blind is more than just an accessibility issue; it's just better in general.
As someone who saw what impact WPF had on average users running average hardware in the late 2000s to early 2010s, I disagree.
In 2011, my brother was in seminary, using an average Windows Vista-era laptop that he had been given in 2008. When he was home for Christmas in 2011, we were talking about his laptop, and he told me that the Logos Bible software ran sluggishly on that laptop. He said something about how, for reasons unknown to him, the current version of Logos required advanced graphics capabilities (I forget exactly how he phrased it, but he had learned that the slowness had something to do with graphics). Bear in mind, this is software that basically just displays text, presumably with some editing for adding notes and such. At the time, I just bought him another laptop.
A few years later, I happened to read that Logos version 4 was built on WPF. Then, remembering my brother, I found this Logos forum thread:
https://community.logos.com/discussion/6200
This shows that Logos users were discussing the performance of Logos on machines with different graphics hardware. For a program that was all about displaying and editing text, it shouldn't have mattered. WPF had made a bet on then-advanced graphics hardware for reasonable performance, and that was bad for these users. And that's just the one example I know about.
OTOH, WPF is today a surprisingly strong GUI platform if you just want to get your Windows GUI out there.
It runs really nicely even on low-end hardware. All the nice styling and blending techniques now _just work_ even on the cheapest low-end laptop.
The fact that it's over a decade old means all the LLMs actually know really well how to use it.
So you can just guide your LLM to follow Microsoft best practices on logic development and styling and "just add this button here, this button here, add this styling here" etc.
It's the least annoying GUI development experience I've ever had (as a dev, non-designer).
Of course it's not portable out of the box (Avalonia is then the ticket there).
If you want 3D, you can just plug in OpenTK with OpenGL 3.3. Decades old _but good enough for almost everything_ if you are not writing a high perf game.
Really, WPF plus OpenTK is a really robust and non-surprising development platform that runs from old laptops (e.g. a T14 Gen 2, per my testing) onwards.
I've been doing a sideproject using WPF and OpenTK - .net works really great - here is a sample video of the whole stack (from adashape.com)
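And the barrier to entry really is low. For what it's worth, here's a code-only WPF window, no XAML at all - a sketch, with the button and handler purely illustrative:

```csharp
// Code-only WPF: no XAML, just objects. Needs <UseWPF>true</UseWPF> in the csproj.
using System;
using System.Windows;
using System.Windows.Controls;

static class Program
{
    [STAThread]
    static void Main()
    {
        var button = new Button { Content = "Click me", Margin = new Thickness(8) };
        button.Click += (s, e) => MessageBox.Show("Hello from WPF");

        var window = new Window
        {
            Title = "WPF sketch",
            Width = 320,
            Height = 200,
            Content = new StackPanel { Children = { button } }
        };

        new Application().Run(window); // hardware-accelerated rendering for free
    }
}
```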
I recall wasting a lot of time staring at decompiled .NET bytecode trying to understand how to work around many problems with it, and it was clear from the decompiler output that WPF's architecture was awful...
But if you read books from the 2000s, there was much discussion about the performance overhead of a VM and garbage collected language; something like WinForms was considered the bloated lazy option.
I'm sure in a few years computers will catch up (IMO they did a while ago actually) and Electron will be normal, and some new alternative will be the bloated option - maybe LLMs generating the UI on the fly à la the abomination Google was showing off recently?
FWIW Apple has made a similar transition recently from the relatively efficient AppKit/UIKit to the bloated dog that is SwiftUI.
Can't find the original blog post about it, but here are a couple of mentions of it:
- https://www.edandersen.com/p/evernote-has-no-patience-drops-...
- https://www.reddit.com/r/csharp/comments/x0nu7h/comment/im9k...
Fortunately for me, I had mostly switched to Linux by that time already, where it was at the time relatively easy to just enable grey scale AA with full hinting.
In recent years this has gotten worse again with modern software incorrectly assuming everyone has a High DPI monitor. My trick has been to use bitmap fonts with no AA, but that broke in recent versions of electron, where bitmap fonts are now rendered blurry. So I had to stay on an old version of vscode from last year, and I will be looking to switch to another editor (high time anyway for other reasons).
These were finally improved for WPF 4, since Visual Studio 2010 switched to it and had a near riot in the betas due to the poor rendering in the text editor.
[1] - https://arstechnica.com/gadgets/2008/03/the-vista-capable-de...
Display PostScript did not have GPU acceleration, as far as I know.
https://en.wikipedia.org/wiki/Quartz_Compositor?#Quartz_Extr...
It _still_ is not trivial to render high-quality 2D graphics on the GPU.
[1] Maybe I've just been blindly ignorant for 30 years, but as far as I could tell, 'GPU' seemed to emerge as a more Huffman-efficient encoding for the same thing we were calling a 'video card'
In the context of the discussion, the point is that you don’t need high-powered graphics hardware to achieve a fast GUI for most types of applications that WPF would be used for. WPF being slow was due to architectural or implementation choices.
GPU-accelerated GUI usually refers to using the texture mapping capabilities of a 3D accelerator for "2D" GUI work.
https://wiki.preterhuman.net/Apple_Macintosh_Display_Card_8-...
Displaying text is surprisingly hard, as one can easily find out if one dives into the big rabbit hole of font rendering.
[1] https://faithlife.codes/blog/2019/06/improving-wpf-text-disp...
But they just plain failed to execute well on this idea.
Actually what does "surprisingly smooth" mean? Better than you expected? Or actually smooth?
Apple solved this by treating the design system as the product and letting the framework be invisible. Microsoft has it backwards every time.
I really don't think that's the fundamental issue.
TFA points out, and I agree, that the fundamental issue is political: competing teams across different divisions coming up with different solutions to solve the same problem that are then all released and pushed in a confusing mishmash of messages.
I haven't written a line of code for a Windows desktop app or extension since early 2014, when the picture was already extremely confusing. I have no idea where I'd begin now.
My choice seems to be either a third party option (like Electron, which is an abomination for a "native" Windows app), or something from Microsoft that feels like it's already deprecated (in rhetoric if not in actuality).
It's a million miles from the in-the-box development experience of even the late 2000s, when the correct and current approach was still readily apparent and everything you needed to move forward with development was available from the moment you opened Visual Studio.
There's just so much friction nowadays, starting with the mental load of figuring out the most acceptable/least annoying/most likely still to be supported in 5 - 10 years tech to use for solving the problem.
All of people's modern desktop woes begin and end at the browser. Here's why: the late 2010s' push into the cloud made JavaScript all the rage. A language its creator made in pretty much a weekend coding session.
There naturally are major business incentives powering this. SaaS made delivering software MUCH easier.
Fast forward 15 years and MSFT is all in on TypeScript. It's a disease that starts with MS Office and percolates to the whole OS (same as what's happening with Copilot).
.NET is actually elegant in many ways. You have PowerShell, VB.NET, C#, F#, etc. - languages of many paradigms, all targeting the same bytecode (and supported by the OS).
And this is being replaced by a fun little JavaScript thingy.
Think about it: it transpiles to JavaScript. Even if it's the most elegant language in the world, that doesn't change the fact that it's a world of bloat.
Stacks on stacks on stacks. And yet people are complaining about .Net? Come on. Lol
To further argue your original point: Chrome & Electron are the only reason desktop is still around. Both Microsoft and Apple tried their very hardest to build a walled garden of GUI frameworks, rejecting the very idea of compatibility, good design, and ease of use, until they were surpassed by the web - and particularly Google - showing that delivering functioning applications to a computer does not require gigantic widget libraries, outdated looks, or complicated download & install processes, but is in fact nothing more than a bit of standardization and a couple MBs of text.
All this Electron & web hate is so incredibly misplaced I don't even know where to begin. Have you tried making a cross-platform Mac/Win native app? I have; it's like being catapulted into the stone age, but you're asked to build a skyscraper.
Remember we’re talking about GUIs. Typescript is great for the browser but it should stay there.
Now, JavaScript can be okay - for example, Qt Quick/QML works quite well on the desktop. But that's purpose-built scripting.
What a laugh, do you want the examples on Apple's side?
Basically it's been Objective-C and Cocoa since around 2000, later on Swift and then also SwiftUI. That's not too bad for 25 years.
And in contrast to MS, you didn't get abandoned when you were sticking to the official frameworks. Quite the contrary: you basically got the switches from PowerPC to x86 to ARM almost for free, for example.
Apple is not perfect by any means, but in this regard I truly think they are ahead of Microsoft.
The reboot of the OpenGL-based frameworks with the Metal rewrite.
And many other things I am not bothering to list: all those OS System N releases, the A/UX UI framework, Taligent-based documents, ...
There are just more people encountering them because the developers are concentrated on using one thing.
It's not perfect, but compared to Microsoft, calling Apple out for having bugs is a little rich, isn't it?
I pose to you: if the Microsoft offerings are so compelling, why are the serious players using 3rd-party wrappers like Qt and Avalonia?
It’s because the first party offerings are not compelling. They’re a disaster dumpster fire. And buggy.
My point was: don't throw stones when you have a big glass roof as well.
Apple isn't the perfection you make it out to be; it also has a rich history of failures, and only didn't go bankrupt due to the sheer luck of making the right decision when there were not many left to take.
That is not how the decision making for cross-platform works. You choose those alternatives knowing that they are crap in many respects, yet accept the trade-offs because you want to save money on dev hours.
Option 1: spend double the effort, embrace Apple's UI
Option 2: do it once, ship faster, make more money.
I'm just saying that in my personal opinion and experience, the ones from Apple have the best yay-to-wtf ratio. Your mileage may vary.
This experience put a major dent in my perception of the "Apple has the most intuitive UI" narrative.
Case in point: The YouTube app for Apple TV. Everything (pausing, playing, changing subtitles) has been done opposite to the standard player found in every other app. You cannot use the main button to pause and resume, for example. Recently they broke swiping. Normally, you swipe the remote to navigate between UI elements such as squares in a grid or in lists with a light touch. It's very fluid, like a touch screen. But the YT app has added severe "inertia" to touch gestures, and you now have to "fling" your finger on the remote trackpad to navigate. Everything feels syrupy and stuck.
YouTube and Amazon's Prime TV app are the two worst apps I've ever used on Apple TV. I believe they both use some non-native UI toolkit that doesn't respect native OS conventions and doesn't even try to. Pretty incredible given the size and budgets of these companies.
Android does have a universal back button at the bottom of the phone (or the same swipe gesture if you want), but iPhones do not.
Sometimes it's a swipe (the direction and position are a guessing game), sometimes an X (right or left), and the behavior is inconsistent too (back or close).
There are some guidelines, but more often than not it seems like every app has its own method and you need to get used to it.
You swipe up and remove the application from the stack, and all processes of the application are killed.
Background processing has strict limits, and you need permissions to run longer than that; for some use cases, there is no recourse. The OS swaps you out or freezes the app.
If you want an app to work in the background, don't kill it, period. Push notifications are handled by the OS and are not hindered by this.
Think, for example, Reddit: you open a thread, how do you go back?
You open the reply window - now how do you go back? Maybe close it directly?
In Android this is all handled by the same function, and this is often ranked as the most frustrating design choice in iOS.
They all are very different applications and have very different designs, yet the arrow is there.
To be honest, I was baffled by your question for a second or so, because I never thought about that; the method is so universal that I was not thinking about it at all.
Meanwhile when there's an X button or arrow to the left I always know what it's going to do aside from one or two overly creative Android apps.
Why change what works fine? Maybe that's the definition of being too old, can't be bothered to change to new things.
It's not "getting used to", I feel like that gesture is less practical. It involves or using the "circle" to assist on how to use the gesture (creating a black void on the screen that you need to plan your use of the phone around) or having the swipe that 1) is not as reliable in my opinion and 2) can be triggered accidentally
For me is like claiming that touch screens on cars are the future and people are too old to get used to it.
Swipes are of course nice because they allow for the same interactions without taking any screen real estate. And I have to say it's quite consistent across the iOS apps I use.
When I get handed an iPhone I have no clue how to even open an additional tab in Safari, and my finger gestures do not do the things I expect, nor is there a lick of indication on how to do something. It's all just memorized magical incantations at this point. But hey, you are familiar with them, so it's easy to bash on everyone who is not in your ecosystem.
Meanwhile in MacOS they dumb things down without a fallback.
The only people who appear to make serious attempts at improving the usability of computers are the likes of KDE and other Linux desktop environments. It used to be that Linux was the thing you used despite its shortcomings compared to commercial OSes.
Even though Wine exists, Win32 calls can only be made from Win32 programs, not native Linux programs. So a WinForms app using the latest dotnet would need to run the Windows version of dotnet under Wine, and not use the Linux version of dotnet.
Neither are SwiftUI and AppKit.
But for sure they used WineLib.
This comment was written before Tahoe.
Even if you take away subjective opinions on Liquid Glass, the point is that the core system updates things across the board.
Unless apps have implemented custom drawing, you get a consistent-ish UI (for better or worse) across the system, whereas with Windows you are beholden to whatever hodgepodge of UI frameworks was chosen at the given time.
It's still dependent on the OS it runs on AND the SDK it compiles against (not the OS it was compiled on).
But that is legacy bridging behaviour, and is not compiled into the app. Apple can and do change those with time.
For example, apps that compile against macOS 15 are not opted into Liquid Glass when run on macOS 26, but will be once on macOS 27, according to their transition docs.
That doesn’t really negate the OPs point.
As a consumer I prefer Apple's approach. If I were an industrial customer relying on old software to operate my machines, I would prefer Microsoft's approach.
1) They abandoned their mobile phone, tablet, and wearable strategy. So, today if you develop a native Windows application, it will only work on desktops and laptops. That is it. It is not attractive for a developer to learn a whole new UI framework just to target a single form factor. And I don't know if there is any solution for this at this point, they shouldn't have completely abandoned those markets.
2) They did not back one UI framework for a long time (I mean 10+ years); instead they made significant changes to their UI framework strategy every 3-4 years. It takes a huge amount of time for developers to trust, learn, and develop complex and polished apps in a UI framework. It also takes a long time for a UI framework to mature. If you change your UI strategy every few years, you will never have complex and polished apps written with it.
To be honest I am not sure if Windows will ever be able to recover in the long term and keep its market share. The only reason it seems to be alive is because enterprise runs on Windows and it is hard to change that.
I feel like Apple + Google dominance is more likely in the long term for desktop operating systems. I am not sure if Google will be able to avoid the first mistake I wrote about above, but they are working on bringing Android to desktop. It is a good idea, but it requires at least 10 years of supporting and polishing it despite not getting much traction. But if Google persists, we might all be using MacOS and Android on desktop 20 years from now.
What do I choose with Windows? Who knows. It literally changes every time I look into it.
That's just insane.
It's gotten so bad that probably the right way to do a modern windows desktop app is react native. At least you could predict that it will stay up to date with the ever shifting decisions at MS to create and abandon UI frameworks.
I will boldly claim Windows Forms is more stable than Gtk and Qt. Don't let random teams at Microsoft confuse you because they released yet another unrelated framework that you don't have to use. They are engineer-sirens trying to lure you from the true path. Let them pursue their promotions in peace while we rely on a stable workhorse.
That's the problem.
And what makes matters worse is that, because of all the shifts, the documentation throughout MS is in varying states of outdatedness. For example, this document, which recommends using UWP [1] to handle high-DPI problems. But of course, UWP (which was the right way to do GUI in Windows 10) is now defunct in favor of WinUI.
[1] https://learn.microsoft.com/en-us/windows/win32/hidpi/high-d...
Btw Linux UI is not by any measure stable. It is the furthest thing from stable.
The problem Microsoft has is that instead of making "Win32, but with these extensions, or with these APIs removed" - heck, even as a separate "framework" - what they did was: "You know what's hot right now? XML. So let's make an XML-based UI framework. Actually, it's JavaScript and CSS, so let's do that. Actually, people really like Electron, so let's do that."
That is to say, it is possible and I dare say easy to migrate an application from GTK 3 to GTK 4. It's basically impossible to migrate a WPF app to UWP. You have to rewrite the whole thing.
They had great devices before iOS/Android and then again after. That Lumia phone was awesome. They had one of the best cameras. Their live tiles they had on the phone & desktop OS were really good. Even Windows 8 had a cool CRM app in its infancy that tried to link all your social media & email accounts together.
They killed all of that even with multiple chances to win people over. It seemed they wanted to win the new markets in less than a year.
For as much flak as Google gets for short-lived awesome products, Microsoft is right up there. Which is why, when they've announced new things like Blazor, MAUI, etc., no one expects them to live long enough to trust their apps on them.
I also strongly question their enterprise moat when most kids have been growing up on Apple & Google devices for the past decade. Microsoft seems to lack long-term strategy.
They were way too late to make a dent. Ballmer made the mistake when the iPhone came out to not get their ass in gear to compete. Microsoft's first potential real competitor to the iPhone came with Windows Phone 7 at the very end of 2010. The iPhone was announced in January 2007 and they didn't have anything to compete until almost 4 years later. I'm not sure how they could have recovered from that by the time they gave up on Windows Phone/Mobile in 2017. Anyone who worked in mobile sales at that time knew most people who did buy a Windows Phone ended up returning it when they realized none of their apps were there. They could have had apps if they recognized the iPhone's threat earlier and reacted appropriately.
Also worth mentioning that in their time competing for mobile, they did a fairly hard reset of the platform two more times, for Windows Phone 8 and Windows 10 Mobile. Go find what developers who tried to keep up have to say.
MS ended up where it is because there was basically NO upgrade path between the few different GUI frameworks they had. They broke the whole thing in 2002 when they decided .NET was the way.
You had to basically retool your whole GUI for whatever they were pushing at the time. Then they basically abandoned win32 GUI items and put them in mothballs. Then change their minds every other year.
No sane person is going to pick that model of building an application. So the applications kinda stagnated at whatever GUI level they came into being with. No one wanted to touch it. If I am doing that why am I sticking with windows? I can get the same terrible effect on the web/mobile and have a better reach.
Even their flagship product, Windows, is all over the place. If you click on the right thing you can get GUIs that date back to Windows 95. Or maybe you might get a whitespaced-out latest design. It is all over the place. It has been 10 years at this point. They should have had that dialed in years ago.
I do not think Google will be able to pay attention long enough to have a stable GUI. Apple maybe. As for MS you can see it from the outside there are several different competing groups all failing at it.
MS needs another 'service pack 2' moment, where they focus on cleaning up the mess they have. Clean up the GUI. Fix the speed items. Fix up the out-of-the-box experience (it should not take 4 gigs of used memory just to start up). Clean up the mountain of weird bug quirks.
How did MS actually implement it, though? After a few messages the chat is blocked, because MS did not choose to go the extra mile and maybe compact the context so that their product can actually be usable.
Of course OpenAI, Perplexity and others later implemented that properly, and it's an integral part of modern AI chat, and I actually ditched Google for the most part. Had Microsoft done it, they might have had a shot at replacing Google and maybe becoming the AI chat provider. But no, Microsoft can't have a well-thought-out UI to provide a delightful UX.
IMHO it's a culture thing, and the lack of cohesion is a result of it. I used to be annoyed that Apple doesn't allow shipping its own UI libraries together with the app (so as to support old versions, etc.), but Apple had it right: thanks to the limitations, the UI is coherent.
You can just render on a canvas like Flutter and KMP do. Most end users don't care.
Microsoft keeps footgunning things so hard I think even enterprise might be reluctant to go with them moving forward [0]. I don’t have Netcraft numbers in front of me but I doubt things have notably improved even if they do have a strategy shift to enterprise which includes crapping all over Windows for no good reason.
I’m personally glad FOSS is going strong but that’s a complete aside.
[0] We got burned by Azure as I’m sure many other enterprises have, and they did exactly nothing to remedy/compensate the situation, SLAs be damned. At this point our strategy is to move off of reliance on any Microsoft/windows tech. We moved off of ActiveDirectory not too long ago. Bing/Edge/etc honestly who cares.
Any trade-off that favors the enterprise at the expense of the user actually benefits nobody in the long term.
That's an extreme scenario, but today's politicians are not very keen on redistribution of wealth, or on preventing excessive accumulation of economic power from exceeding the power of the state itself. I see nothing preventing that scenario from happening.
‘I wanted a machine to do the dishes for me so I could concentrate on my art, and what I got was a machine to do the art so now I’m the one doing the dishes’
It's 2026. We're running 8+ cores and 32GB of RAM as standard. We can run super-realistic video games at high frame rates.
Yet on the same machine, resizing a window with rectangles in it is laggy on every platform except macOS (where there's a chance it's not laggy).
Another example is startup time. Time to first frame on screen should be less than 20ms. That doesn't mean time until first content is rendered, but time until _all_ content is rendered (loading dialogs, placeholders, etc are better than nothing but entirely miss the point of being fast).
The second example is why, even though I understand why developers pick tauri/electron/webviews/etc, I can't get over how fucking slow the startup time is for my own work. None of them could show a blank window in under a second the last time I tried.
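If you want to check your own numbers, here's a crude sketch (WinForms here, since it starts fast; the `Shown` event is only an approximation of "first frame", so treat the result as a ballpark):

```csharp
// Rough time-to-first-frame measurement. Shown fires once the form is first
// displayed - not a rigorous frame-level probe, but good enough for a ballpark.
using System;
using System.Diagnostics;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        var clock = Stopwatch.StartNew(); // start as early in Main as possible

        var form = new Form { Text = "Startup timing" };
        form.Shown += (s, e) =>
            form.Text = $"First frame after ~{clock.ElapsedMilliseconds} ms";

        Application.Run(form);
    }
}
```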
They range from old laptops to a Ryzen 7 9800X3D workstation.
Just yesterday a friend's father needed help setting up their second-hand old laptop with an old i5 processor. I slapped KDE on it and there was no lag to be seen.
Bonus point that Windows and some Linux distros have sane, intuitive window management. Whereas with macOS I keep seeing someone suggesting some arcane combination of steps to do some basic things with replies to the effect of "OMG thank you so much, this needs to be known by more people!!!"
On MacOS, meanwhile, Finder refuses to reflect major changes made via CLI operations without a full Finder restart, and the search indexing is currently broken, after prior versions of Ventura were stable functionality-wise. I am however firm that Liquid Glass is a misstep, made more by the Figma crowd than actual UX experts. It is supposed to look good in static screenshots rather than have any UX advantage or purpose compared to e.g. skeuomorphism.
If I may be a bit snarky, I'd advise anyone who does not see the window corner inconsistencies on current MacOS or the appalling lag on Windows 11 to seek an ophthalmologist right away…
KDE and Gnome are the only projects that are still purely UX focused, though preferences can make one far more appealing than the other.
But most nontrivial apps can't re-layout at 60fps (or 30fps even).
They either solve it by (A) allowing the window to resize faster than the content, leaving coloured bars when enlarging [electron], or (B) stuttering or dropping frames when resizing.
A pleasant exception to this I've noticed is GTK4/Adwaita on GNOME. Nautilus, for me at least, resizes at 60fps, even when in a folder of thumbnails.
On the Mac side, AppKit, especially with manual `layoutSubviews` math easily hits 60fps too. Yes it was more complex, but you had to do it and it was FAST.
I am also baffled by the multiple control points. I can log in to mail in 3 places. Settings have 3 with different UIs... it is gross.
I never understood .NET's purpose. What problem exactly did it set out to solve? Did Microsoft want developers to be able to run their applications everywhere too? Absolutely not.
Sidenote - MFC is the ugliest thing you'll see. Yet they didn't mention another piece of work called ATL, the Active Template Library.
WinForms was really decent, and that was enough. Keep the Win32 API and a managed wrapper around it as WinForms, and that would have been more than enough.
Targeting the broadest possible variant of x86-64 limits you to SSE2, which is really not very capable outside of fairly basic float32 linear algebra. Great for video games, but not much else.
Also keep in mind that .NET originated right at the cusp of x86-64, which again is a whole different architecture from its 32-bit predecessor. Most native apps used to ship separate binaries for years.
And of course, I think Microsoft was aware of their intrinsic dependency on other companies, especially Intel. I can see how the promise of independence was and is enticing. They also weren't interested in another dependency on Sun/Oracle/whoever maintains Java at the moment. While Windows on ARM64 is still in a weird spot, things like .NET are useful in that transition.
Lastly, the CLR is different from the JVM in a number of interesting ways. The desktop experience with the JVM is not great, and Java is a very bad language. It makes sense to do your own thing if you're Microsoft or Apple.
Additionally, applications that want to exploit a particular underlying processor's instruction set have no way to do so without detecting CPUID and landing in so-called "unmanaged code", because .NET is all about a very high-level IR that even has object-oriented features.
This can have a huge effect on a wide range of applications, not just those using particular CPU features. For example, each libc implementation typically has a separate implementation of `memcpy()` for each set of CPU features.
https://devblogs.microsoft.com/dotnet/performance-improvemen...
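In fairness to modern .NET, managed code can lean on the JIT for exactly this kind of dispatch: `System.Numerics.Vector<T>` is sized at startup to whatever SIMD width the host CPU offers, no unmanaged code required. A minimal sketch:

```csharp
// Width-agnostic SIMD from managed code: Vector<float> becomes 128/256/512-bit
// depending on what the JIT detects on the host CPU at startup.
using System;
using System.Numerics;

static class SumKernel
{
    public static float Sum(float[] data)
    {
        var acc = Vector<float>.Zero;
        int width = Vector<float>.Count; // lanes per vector, chosen by the JIT
        int i = 0;
        for (; i <= data.Length - width; i += width)
            acc += new Vector<float>(data, i); // one vector load + add per step
        float total = Vector.Dot(acc, Vector<float>.One); // horizontal sum
        for (; i < data.Length; i++) // scalar tail
            total += data[i];
        return total;
    }
}
```

The hardware intrinsics (System.Runtime.Intrinsics.X86 and friends) go further: the JIT treats their IsSupported properties as constants, so per-CPU dispatch compiles down to straight-line code, much like the libc `memcpy()` story above.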
So .. initially it was "Microsoft Java", a managed language with tight integration into the Windows APIs, and non-portable. That was .NET Framework. A while ago they realized that even Microsoft didn't want to be tied to one platform, and moved to the cross-platform ".NET Core". It now occupies a similar role to Java but is IMO nicer.
Java. Java is the problem .NET attempted to solve.
It means "To obsolete a unique feature in third-party software by introducing a similar or identical feature to the OS or a first-party program/app." The term stems from Apple's 2002 release of Sherlock 3, which made a popular third-party app named "Watson" irrelevant.
Ugh that brings back bad memories. I remember it was supposed to be the answer to MFC. I did an internship where my boss wanted me to use it. It was very painful because it had basically no documentation at all.
I know it’s not a popular opinion, and I am sure there were reasons Microsoft abandoned it, but that was a brief few years when I actually enjoyed building GUIs on Windows.
EDIT: just dug out a "memory magic" WinForms app I wrote sometime in the early 2000s and ran it no problem - no weird-looking non-native UI or long Electron startup...