Posted by claxo 9 hours ago
The onboarding funnel: Only a concern if you're trying to grow your user base and make sales.
Conversion: Only a concern if you're charging money.
Adwords: Only a concern if, in his words, you're trying to "trounce my competitors".
Support: If you're selling your software, you kind of have to support it. Minor concern for free and open source.
Piracy: Commercial software concern only.
Analytics and Per-user behavior: Again, only commercial software seems to feel the need to spy on users and use them as A/B testing guinea pigs.
The only point I can agree with him that makes web development better is the shorter development cycles. But I would argue that this is only a "developer convenience" and doesn't really matter to users (in fact, shorter development cycles can be worse for users as their software changes rapidly like quicksand out from under them.) To me, in my open source projects, my "development cycle" ends when I push to git, and that can be done as often as I want.
There are some things that NATURALLY lend themselves to a website - like doctor's appointments, bank balance, etc - but it's still a pain when, on logging in to "quickly check that one thing" that I finally got the muscle memory down for because I don't do it that often, I get a "take a quick tour of our great new overhauled features" where now that one thing I wanted is buried 7 levels deep or something, or just plain unfindable.
For something like Audacity (the audio program), how the heck does it make sense to put that on a website (I'm just giving a random example, I don't think they've actually done this), where you first have to upload your source file (privacy issues), manipulate it in a graphically/widget-limited browser - do they have a powerful enough machine on the backend for your big project? - then download the result? It's WAY, WAY better to be able to run the code on your own machine, etc. AND to be stable, so that once you start a project, it won't break halfway through because they changed/removed that one feature you relied upon (no, not thinking of AI at all, why do you ask? :-)
I understand it was just an example, but you'd be surprised how far browsers have come along with technologies like Web Assembly and WebGL. Forget audio editing, you can even do video editing - without uploading any files to the remote server[1]. All the processing is done locally, within your browser.
And if you thought that was impressive, wait till you find out that you can even boot the whole Linux kernel in your browser using a VM written in WASM[2]!
But I do agree with your points about lack of feature stability. I too prefer native apps just for the record (but for me, the main selling points are low RAM/CPU/disk requirements and keyboard friendliness).
And if this is such a compelling value proposition for full-featured desktop productivity applications, why didn't Java Web Start set the world on fire?
It's the issue of friction. Also, good webapps are often _better_ than native apps, as they can support tabs.
> And if this is such a compelling value proposition for full-featured desktop productivity applications, why didn't Java Web Start set the world on fire?
Because it relied on Java and Swing, which were a disaster for desktop apps.
I grew up reading his writings and learned pretty quickly to read them as "this is what I'm thinking right now in my life" even though they're written more as authoritative and decisive writings from an expert. Over time he's gone from SEO expert to $30K/week consulting expert to desktop app expert to indie SaaS expert to recruiting industry expert to working for Stripe Atlas. It was fun to read his writings at each point, but after so many changes I realized it was better to read it as a blog of ongoing learnings and opinions, not necessarily as retrospective wisdom shared from years of experience on the topic, even if that's what the writing style conveys.
So I agree that the advice in the post should be taken entirely in context of pursuing the specific goals he was pursuing at the time. The less your goals happen to align, the less relevant the advice becomes.
Today, even the minimal steps of creating a desktop app have lost their appeal, but I like showing how I solved a problem, so my "apps" are Jupyter notebooks.
- CAD / ECAD
- Artist / photo software
- Musician software: composing, DAWs, etc.
- Scientific software of all domains: drug design, etc.
- Desktop publishing
- Brokerage apps (some are webapps, but many ship an actual desktop app)
And yet, to me, something changed: I still "install apps locally", but "locally" as in "only on my LAN", and they can be webapps too. I run them in containers (and the containers are in VMs).
I don't care much as to whether something is a desktop app, a GUI or a TUI, a webapp or not...
But what I do care about is being in control.
Say I'm using "I'm Mich" (immich) to view family pictures: it's shipped (it's open source), I run it locally. It'll never be "less good" than it is today: for if it is, I can simply keep running the version I have now.
It's not open to the outside world: it's to use on our LAN only.
So it's a "local" app, even if the interface is through a webapp.
In a way this entire "desktop app vs webapp" is a false dichotomy, especially when you can have a "webapp (really in a browser) that you can self-host on a LAN" and then a "desktop app that's really a webapp (say wrapped in Electron) that only works if there's an Internet connection".
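As a sketch of that "webapp self-hosted on a LAN" setup: a compose file with a pinned image tag captures the "it'll never be less good than it is today" property, because the version you run is the version you chose. The service name, image, port, and paths below are illustrative assumptions on my part, not Immich's actual deployment (which also needs a database and other services):

```yaml
# Hypothetical compose sketch for a LAN-only, self-hosted webapp.
# All names/ports here are illustrative, not the real Immich stack.
services:
  photos:
    image: ghcr.io/example/photos-app:1.94.1  # pinned tag, never :latest
    restart: unless-stopped
    ports:
      # Bind to the host's LAN address only, so nothing is exposed to the internet.
      - "192.168.1.10:2283:2283"
    volumes:
      - ./library:/data/library
```

Because the tag is pinned, an upstream release you don't like changes nothing until you explicitly edit the tag and pull.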
Most things I create in my free time are for my and my family's consumption and typically benefit immensely from the write once run everywhere nature of the web.
You can launch a small toy app on your intranet and run it from everywhere instantly. And typically these things are also much easier to interconnect.
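As a sketch of that "small toy app on your intranet" idea (the handler name, page content, and port are mine, not the commenter's): Python's standard library alone is enough to serve a single-file app that every device on the LAN can open in a browser.

```python
# Minimal single-file "intranet toy app": serves one HTML page to any
# device on the LAN, no framework or separate web server required.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>Hello from the LAN</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the console quiet

# To serve on the LAN, bind to all interfaces (blocks until Ctrl-C):
#   ThreadingHTTPServer(("0.0.0.0", 8000), HelloHandler).serve_forever()
```

Run it, then visit http://&lt;your-lan-ip&gt;:8000/ from any machine on the network — instant "write once, run everywhere" for household-scale software.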
KDE has analytics, they're just disabled by default (and I always turn them on in the hopes of convincing KDE to switch the defaults to the ones I like).
If development ends at a git push and users are left to build/fend for themselves (granted, this is a lot of open source), then yeah, not much difference. But if you're building and packaging it up for users (which you're more likely to be doing if your project is specifically an app), then the difference is massive.
Times have changed quite a bit from nearly 20 years ago.
For some things a desktop app is required (more system access) or offers some competitive UX advantage (although this reason is shrinking all the time). Short of that, users are going to choose web 95% of the time.
Ignoring the fragmentation, of course, although that seems to be getting less and less each year (so long as you ignore Safari).
Counter-counterpoint: Maybe it's time to require professional engineer certification before a software product can be shipped in a way that can be monetized. That would filter out of the industry the devs who look at browsers today and go "Yeah, this is a good universal app engine."
The impacts on people's time, money, and the environment are proportional.
Does it? Have you compared a web app written in a sufficiently low level language with a desktop app?
And if we're talking about simple GUI apps, you can run them in 10 megabytes or maybe even less. It's cheating a bit, as the OS libraries are already loaded, but they're loaded anyway if you use the browser too, so it's not like you can shave that off.
A desktop app may consume more, but it's heavily focused on one thing, so a photo editor doesn't need to bring in a whole sound subsystem and a live programming system.
Remember LiveScript and early web browsers? It was almost cancelled by big tech because Java was supposed to be the cross-platform system. The web and JavaScript just BARELY escaped a big-tech smackdown. They stroked the ego of big tech by renaming it to JavaScript to honor Java. Licked some boots, promised a very mediocre, non-threatening UI experience in the browser, and big tech allowed it to exist. Then the whole world started using the web/JavaScript. It caught fire before big tech could extinguish it. Java itself got labeled a security threat by Apple/Microsoft for threatening the walled gardens, but that's another story.
You may not like browsers, but they are the ONLY thing big tech can't extinguish, due to ubiquity. Achieving ubiquity is not easy, not even possible for new contenders. Pray to GOD every day and thank her for giving us the web browser as a feasible cross-platform GUI.
Web browser UI available on all devices is not a failure, it's a miracle.
To top it all off, HTML/CSS/Javascript is a pretty good system. The box model of CSS is great for a cross platform design. Things need to work on a massive TV or small screen phone. The open text-based nature is great for catering to screen readers to help the visually impaired.
The latest Wizbang GPU powered UI framework probably forgot about the blind. The latest Wizbang is probably stuck in the days of absolute positioning and non-declarative layouts. And with x,y(z) coords. It may be great for the next-gen 4-D video game, but sucks for general purpose use.
It would have been great if browsers remained lightweight html/image/hyperlink displayers, and something separate emerged as an actual cross-platform API, but history is what it is.
You've reminded me of the XKCD comic about standards: https://xkcd.com/927/
Do you really want a universal app engine? If you don't have a good reason for ignoring platform guidelines (as many games do), then don't. The best applications on any platform are the ones that embrace the platform's conventions and quirks.
I get why businesses will settle for mediocre, but for personal projects why would you? Pick the platform you use and make the best application you can. If you must have cross-platform support, then decouple your UI and pick the right language and libraries for each platform (SwiftUI on Mac, GTK for Linux, etc...).
As a user, a properly implemented desktop interface will always beat the web. By properly, I mean obeying the shortcut keys and conventions of the desktop world: alt+letter assignments for boxes and functions, Tab moving between elements, PageUp/PageDown in a chat window's text entry area scrolling the chat history above rather than the text entry area itself (looking at you, SimpleX), etc.
Sorry, not sorry. Web interface is interface-smell, and I avoid it as much as possible. Give me a TUI before a webpage.
Let's also remember that it's infinitely easier to keep a native app operational, since there's no web server to set up or maintain.
And his point about randomly moving buttons to see if people like it better?
No fucking thanks. The last thing I need is an app made of quicksand.
The user interface is your contract with your users: don't break muscle memory! I would ditch FF-derivatives, but I'm held hostage by them because the good privacy browsers are based on FF.
Stop following fads! Be like craigslist: never change, or if you do then think long and hard about not moving things around! Also if you're a web/mobile developer, learn desktopisms! Things don't need to be spaced out like everything is a touch interface. Be dense like IRC and Briar, don't be sparse like default Discord or SimpleX! Also treat your interfaces like a language for interaction, or a sandbox with tools; don't make interfaces that only corral and guide idiots, because a non-idiot may want to use it someday.
I really wish Stallman could be technology czar, with the power to [massively] tax noncompliance to his computing philosophy.
These concerns may not matter to you, the developer, but they absolutely matter to end-users.
If your prospective user can't find the setup.exe they just downloaded, they won't be able to use your software. If your conversion and onboarding sucks, they'll get confused and try the commercial offering instead. If you don't gather analytics and A/B test, you won't even know this is happening. If you're not the first result on Google, they'll try the commercial app first.
Users want apps that work consistently on all their devices and look the same on both desktop and mobile, keep their data when they spill coffee on the laptop, and let them share content on Slack with people who don't have the app installed. Open source doesn't have good answers to these problems, so let's not shoot ourselves in the foot even further.
If a piece of software doesn’t have users and the developers don’t care about the papercuts they are delivering, I would argue what they have created is more of an art project than a utility.
Artworks without popular appeal can become highly treasured by some.
Open source software doesn't have to be ambitious to be worthwhile and useful. It can be artful, utilitarian, or an artifact of play. Commercial standards shouldn't be the only measure of good software.
Good! It's not for them! They can stay paypigs on subscription because they can't git gud!
If your product targets a segment that expects a desktop app, do that. Web app, do that. Phone app, do that.
Something like this would have worked back in the days of the Walmart bargain software shelf, where people could impulse-buy a CD, put it into their computer, have it automatically start and install, then show up on the desktop. Despite that being less common now, it was in a way more streamlined for many users.
Many of those people probably aren't logged into Steam or Windows Store either, so you have to do your own thing. It makes sense that web is the least friction for those people.
1-4. Google, find, read... this is the same for web apps.
2. Click download and wait a few seconds. Not enough time to give up, because native apps are small. Heavy JS web apps might load for longer than that.
3. Click on the executable that the browser pops up in front of you. No closing the browser or looking for your downloads folder. It's right there!
3.5. You probably don't need an installer, and it definitely doesn't need a multi-step wizard. Maybe a big "install" button with a smaller "advanced options".
3.6. Your installer (if you even have it) autostarts the program after finishing.
4. The user uses it and is happy.
5. Some time later, the program prompts the user to pay, potentially taking them directly to the payment form, either in-app or by opening it in a browser.
6. They enter their details and pay.
That's one step more than a web app, but also a much bigger chance the user will come back to pay (you can literally send them a popup, you're a native app!).
I wonder whether Google, in its Don't Be Evil era, ever considered what they should do about software piracy, and what they decided.
I'd guess they would've decided to either discourage piracy, or at least not encourage it.
In the screenshot, the Google search query doesn't say anything about wanting to pirate, yet Google is suggesting piracy, a la entrapment.
(Though other history about that user may suggest a software piracy tendency, but still, Google knows what piracy seeking looks like, and they special-case all sorts of other topics.)
Is the ethics practice to wait to be sued or told by a regulator to stop doing something?
Or maybe they anticipate costs and competition for how they operate, and lobby for the regulation they want, so all they have to do is be compliant with it, and be let off the hook for lawsuits?
It is plundering those who didn't pay you for legal immunity.
In the early days of Google in the public consciousness, this turned into "you can make money without being evil." (From the 2004 S-1.)
Over time, it got shortened to "don't be evil." But this phrase became an obligatory catchphrase for anyone's gripes against Google The Megacorp. Hey, Google, how come there's no dark mode on this page? Whatever happened to "don't be evil"? It didn't serve its purpose anymore, so it was dropped.
Answering your question really depends on your priors. I could see someone honestly believing Google was never in that era, or that it has been in it all along. I strongly believe that the original (and today admittedly stale) sentiment has never changed.
The public already demonstrated that they adopted, misused and weaponized the maxim. Its retirement just sharpened the edge of that weapon. Now instead of "What happened to don't be evil?" it's become "Of course Google is being evil." and everything exists in that lens.
Tech industry culture today is pretty much finance bro culture, plus a couple decades of domain-specific conditioning for abuse.
But at the time Google started, even the newly-arrived gold rush people didn't think like that.
And the more experienced people often had been brought up in altruistic Internet culture: they wanted to bring the goodness to everyone, and were aware of some abuse threats by extrapolating from non-Internet society.
And if it were the altruistic Internet people they hired, the slogan/mantra could be seen as a reminder to check your ego/ambition/enthusiasm, as well as a shorthand for communicating when you were doing that, and that would be respected by everyone because it had been blessed from the top as a Prime Directive.
Today, if a tech company says they aspire not to be evil: (1) they almost certainly don't mean it, in the current culture and investment environment, or they wouldn't have gotten money from VCs (who invest in people motivated like themselves); (2) most of their hires won't believe it, except perhaps new grads who probably haven't thought much about it; and (3) nobody will follow through on it (e.g., witness how almost all OpenAI employees literally signed to enable the big-money finance-bro coup of supposedly a public interest non-profit).
For example, my impression at the time was that people thought that Google would be a responsible steward of Usenet archives:
https://en.wikipedia.org/wiki/Henry_Spencer#Preserving_Usene...
FWIW, it absolutely was believable to me at the time that another Internet person would do a company consistent with what I saw as the dominant (pre-gold-rush) Internet culture.
For example of a personality familiar to more people on HN, one might have trusted that Aaron Swartz was being genuine, if he said he wanted to do a company that wouldn't be evil.
(I had actually proposed a similar corporate rule to a prospective co-founder, at a time when Google might've still been hosted at Stanford. Though the co-founder was new to Internet, and didn't have the same thinking.)
I also remember people citing performance as a reason YouTube switched from Flash to HTML5. Searching those blogs now is giving a lot of 404s. Like I said this should've helped since it's video, but somehow YouTube immediately got slower anyway back then. Back then I installed an extension to force it to use QuickTime Player for that reason.
The proprietary and insecure parts were real problems too. I'm fine with the decisions that were made, but this was a drawback.
Nowadays, it seems to be that mobile apps have the "best metrics" for b2c software. I'd be interested to read a contemporary version of this article.
This reminds me of a past job working for an e-commerce company. This wasn’t a store like Amazon that “everyone” uses weekly, it was a specific pricey fashion brand. They had put out a shitty iOS app, which was just a very bare-bones wrapper around the website. But they raved about how much better the conversion rates were there. Nobody would listen to me about how the customers who bother downloading a specific app for shopping at a particular retailer are obviously just superfans, so of course that self-selected group converts well.
So many people who should be smart based on their job titles and salaries got the causation completely backwards!
Do you have principles on how to tackle this? I feel stuck between the irrationality of anecdata and the irrationality of lying with numbers. As if the only useful statistic is one I collect and calculate myself. And, even then, I could be lying to myself.
Your employer most likely does.
I'd wager there are more people paying for software for their smart phone than any other platform they use.
I'm done making web apps (2026).
seriously, desktop apps kinda own. i just desktop-app'd a pwa, made it do SSO auth at my org, and now it's just part of the self-serve application download kiosk, and we're laughing at all the pain we've endured for so many years writing up proposals and billing to scale up web app infra for internal tooling and stuff.
im kinda enjoying coming back to earth right now with my team and we're just hmmmmmmm'ing a lot of things like this. we've had devops chasing 23498234892% availability with k8s and load balancers and all this stuff and we're now assessing how much of that cruft was completely unnecessary and made everything some amorphous blob of complexity and unpredictable billing & and really gave devops a moat to just say "no" to so many things that came through the pipeline. there's so many things that can just be dragged back to like an actual on premise machine and served up through the internal network. we are... amused at how self-important we made ourselves out to be this past decade.
we're probably like days' worth of goofing away from going to buy a few mac minis, plugging them into some uninterruptible power supplies, and just seeing how un-serious we can get with so much tooling we've built over the years. and for everything else, desktop apps. seriously, desktop apps are like free infrastructure if you build them right.