Posted by meetpateltech 9 hours ago
You'll get a competent UI with little effort but nothing truly unique or mind-blowing.
Impressive technology, but that old skool artisanal weirdness of yore only becomes more valuable and nostalgic.
If I'm building out an internal tool for, say, a hospital lawyer to search through malpractice lawsuits, I want my tool to be the most familiar, obvious, least-surprising UI/UX possible. Just stay out of the way and do what it's supposed to do.
The trick is, of course, that the human is still responsible for knowing when homogenous is fine, or when there's real value in the presentation. If you're making a website for, say, a VST plugin for musicians, your site may need to have a little more "pizzazz" to make your product more attractive to the target audience.
The real world analog is this...
The reason people (especially Americans) stay in Marriott property hotels is because they are homogenous. If all I want to do is travel to Phoenix, AZ for work I want to know that the hotel room has the same mattress, desk, TV, customer service, etc. There is real legitimate value to that. So I'll book the Courtyard in Phoenix because I know exactly what I'm going to get.
On the other hand, when I'm traveling the Amalfi Coast in Italy, I want the Airbnb experience. Sure the bed is stiff, there's no A/C, and the 80 year old door frame is hard to close, but there is something magical about it.
A personal example from a few weeks back. My SO booked a hotel for a weekend as a birthday present. We went there, it had a fantastic spa, dinner was delicious, the room great, clean, and so on. Individually designed, well thought out, friendly staff.
Breakfast came around and the coffee was abysmal. Really truly abysmal. What did we do? While eating breakfast we looked for a McDonald's, because we know for sure that, regardless of where you are, you will at least find an okay, drinkable coffee at McDonald's. It is not a great coffee. And it never will be. But the likelihood is very low that you will find a shit coffee.
Marriott is basically the same for hotels. Or MotelOne in Germany. It is the power of brand - you get a solid 7 out of 10. And to be honest - when I am traveling for work, this is all I want. I want to know that I will have a clean room and a bed that is good to sleep in. And the knowledge that I will likely wake up rested the next day, when I have to be at my best for my clients.
The risk of ending up in a shit-hole got smaller because nowadays people write up their experiences - but on the other hand, having seen how many of my reviews were deleted by Google, Yelp, TripAdvisor and the likes because some lawyer requested it - I don't give a rat's shit for online reviews.
Good pizza in Italy, good ramen in Japan, grilled picanha in Brazil - that's why you go there and want it different/original.
But in software UI this is often overdone. I want the pizzazz in my audio software in what it produces, not in how the UI looks.
Because it turns out, the type who don’t want fun little differences are exactly the types who will gladly go on a business trip to Phoenix Arizona and stay at a Marriott hotel.
I don't want more pieces of flair in my life, thanks
You generally won't get to know someone well enough to appreciate their unique aspects unless you see them in person at least sometimes, unless that person has the habit of letting their freak flag fly in all circumstances, which has its own downsides.
Now I struggle to even define what an "operating system's standard visual appearance" is. Apple's still the best but not what they used to be on that front even so.
In the early days, if you learned the OS, those usage patterns and skills transferred to every app on that OS. They all looked roughly the same, shared the same menus, same shortcuts, same icons, etc. You didn't have to learn how to use apps x, y, and z. You just had to learn Windows (to an extent).
Then marketing got involved, and then the web, and then suddenly every piece of software had to stand out and look and behave as unique as possible, throwing years of HIG research out the window.
Just today I had the disk usage analyzer (baobab) open and was navigating inside directories. I wanted to go up a directory and clicked the "<-" left arrow in the headerbar, which went "back" a screen, discarding all the work done scanning the filesystem.
If this app had a traditional menubar and a toolbar this wouldn't have happened.
This is a common type of experience I have every time I use a Gnome app. It almost feels like someone deliberately researched how to make desktop apps as counter-intuitive as possible and implemented that as the policy for some reason.
Years ago, I remarked to a friend that I'd spent half of my (computing) life post-high speed Internet, yet almost all my happy memories are from before that. It was the same for him, and we both explored why that was.
The homogeneity of interfaces was actually one of the reasons we came up with on why doing work at a computer is a lot less appealing.
I understand your feelings, but it is extremely typical in human history to keep remembering "the good old times."
But:
I would have still said I enjoyed using computers. And I wouldn't have said "Today's interface sucks" (well, other than my HW not being able to keep up with eye candy...)
I simply don't enjoy using the computer these days. And I do think the interface sucks. Pretty much anything that involves using the web browser sucks - be it a local app or a web app.
Standardized interfaces are as exciting as kettle thermal switches or physical knobs in cars. Useful, probably optimal and will be around for decades to come. Also nobody talks about it, treats it with interest, or pays above market rate to work on it.
The value becomes the architecture of the tool, not the interface. There is still value being generated, but the need for a highly paid UX designer evaporates, and is ultimately replaced by the above.
But there is "pride" in making tools people actually use without issue.
why do we build with right angles, straight lines, regular curves, etc? Why not random angles, crooked lines, etc for style and "excitement"?
Why don't we assemble a furniture set from a random assortment of pieces from flea markets? People sense that that is ugly.
Users don't need to think about how to use them; they are ubiquitous and familiar, and therefore intuitive and automatic.
If every set of stairs (or, worse, if every stair in a set) was radically different, every time you approached some stairs you would have to think carefully about how to use them so you don't fall.
Is the pride not in solving the users' problems?
> nobody talks about it, treats it with interest, or pays above market rate to work on it.
Definitely needs a citation for this one. For so many products the user isn't paying for standout design. They're paying for insight, leverage, velocity, convenience, whatever. The market definitely supports this by paying above market salaries.
Good design can be a useful differentiator but it isn't the only way for a tool or product to "spark joy" and often _fancy_ design (not good design) is used as a crutch for a subpar product.
Correct, they are paying for work done by people in other roles, whose title isn't UI or UX designer. It's on the backend person for velocity, it's on business development for leverage, it's on data scientists for insight, it's on logistics for convenience. Those people will be paid for solving those problems, not for tweaking CSS. My team, which falls into this category of more invisible work, has not hired a UI or UX person at all. Which, mathematically speaking, is by default simply below the average rate for that work. Meanwhile Apple will easily pay mid six figures for someone in a more flashy role.
Design is much harder for power user tools compared to consumer. There is far more complexity and the expectation often is users must be trained to even use the tool.
Design only goes so far.
Describe the idea of what you want to do, not the inscrutable steps the application requires to get there.
Why? Since it's so notoriously bad, why have there been no attempts to improve it?
Respectfully disagree.
You should feel pride when you deliver the easiest-to-use system that the hospital lawyer has ever used. When you get them in and out of the system quickly because it's intuitive and has an appropriate architecture.
I disagree completely. The pride should come from the value that is delivered. Specifically, this:
>> Useful, probably optimal and will be around for decades to come.
Is something to be proud of, full stop.
A cold American convenience store may be delivering the fundamental value at American prices, but there's something to be said about that "extra" human or creative element. One might say the same thing about the changing nature of the web over time, less individual CSS chaos and more Facebook aesthetics.
But I really don't need that quirkiness at Home Depot, the DMV or my bank (or Amazon, or government websites, or my banking site). I'm there to purchase some screws, register my car or pick up some checks. I just need a storefront (or a website) that lets me do that as quickly and homogeneously as possible.
99.9% of stores (and UIs) are the latter, not the former.
Apple/SwiftUI has accentColor for example where you can inject a brand colour. This is subtle but effective for UI differentiation - colour is a design primitive that evokes subconscious pattern recognition and can be more effective than a complicated design framework that forces a larger context switch in the user's mind.
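As a rough sketch of what that injection looks like in practice (the view, state, and color values here are made up for illustration; newer SwiftUI versions prefer `.tint(_:)`, but the mechanism is the same):

```swift
import SwiftUI

// A hypothetical settings screen. The only branding applied is a
// single accent color injected at the root of the view tree.
struct SettingsView: View {
    @State private var notificationsOn = true

    var body: some View {
        Form {
            Toggle("Notifications", isOn: $notificationsOn)
            Button("Sign out") { }
        }
        // Standard controls (toggles, buttons, links) pick up this
        // tint automatically; everything else stays stock-platform.
        .accentColor(Color(red: 0.85, green: 0.35, blue: 0.10))
    }
}
```

One modifier gives the whole hierarchy a brand identity while every control keeps its familiar platform behavior.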
Bootstrap was great for this. You got a clean web interface that was simple, yet didn't have to be completely ugly. Basic and functional. A form to submit POs doesn't have to stand out, be glassy, or have animations. It needs to be easy to parse and stay out of the way.
There have been studies showing aesthetics matter quite a bit for UX - users perceive things that are attractive as being easier to use and less frustrating.
Your users will never make it to your no-nonsense backend if your marketing is completely cookie cutter.
Maybe it's true that yellow is just the best, and should be used in 99% of circumstances?
You are right, though. Many products don’t need more than that. But I fear that this will greatly impact design innovation and progress. We might get stuck in the current UI paradigm for a long time.
But I reckon, nobody cares. Just let Claude decide and go with it... Sad state for UX designers / researchers.
Web Components were a bit too slow to take off so the mental model of JSX has stuck with me, even if the ecosystem with hooks and various approaches towards reactive state are in many ways inferior to a problem Smalltalk already solved back in the day.
90+% of attempts at making a truly unique or mind-blowing UI produce a mind-blowingly bad UI. For 0.5 seconds of wow factor, you've added substantial unnecessary friction. Outside of art projects where that wow factor is the point, it really should not be attempted, most certainly not by someone without the appropriate skillset.
The old skool artisanal weirdness was not a purposeful stylistic choice, it was a bunch of people trying to do the best they could with crappy tools. There may be some je ne sais quoi which is lost with the shift to mass adoption, but the reason for the mass adoption of these particular design trends was that they were objectively superior.
And people sometimes overestimate their designs because beauty is subjective, and because all children are beautiful in the eyes of their parent.
Also, there's a reason why the mass-adopted plastic, monobloc, stackable chair design is common worldwide and is studied as a cornerstone of design.
Which is exactly what I want. Do you have any idea how hard it is to get a competent UI?
Why do people celebrate consistency and uniformity in desktop apps, wanting to crucify developers for not following platform idioms and guidelines... and then suddenly want things that are "truly unique" or "mind-blowing" or "artisanal weirdness" when it comes to a web app?
A competent UI with little effort is a godsend.
This is exactly what I want in a UI.
At risk of shifting the goalposts on what I originally said, unique here isn't meant to mean quirky or weird but, simply, something that hasn't been done before, or hasn't been done as effectively.
This is the challenge for B2B startups that are switching to LLM-based development and are trying to offer more than the reselling of cloud compute at a markup with specialised functionality, because AI turns SaaS into a sexy version of MS Access.
The hilarious thing is that I would be willing to bet that in a decade, it's STILL a massive shitshow in enterprise. That's because the problem with enterprise software is not that good design is all that difficult to pull off (it just requires caring!). It's that the people making enterprise software have terrible taste and can't even see (I am convinced) that the thing they built is ugly and hard to use.
Generally the issue with enterprise is that it's designed to appeal to the stakeholders who make the purchasing decision, not the person who is actually going to use it. The people making it may have great taste and know damn well what they could do to make it more usable, but if a clean and easy tool doesn't match the purchaser's preconceived notion of what the tool ought to look like, then it's not going to fly.
Or “2000s aesthetic” for something before Web 2.0 (although you’ll get a generic 2000s aesthetic unless you provide more detail).
I guess post-IPO, after the insiders cash out once the lockup period ends, it's irrelevant.
I can slap something together with Claude over a few evenings to fill a gap on tooling, or I can wrestle with Jira and CI and all that to tie things together with their own integrations.
No thanks, I'll just take the API keys and build on top, to my exact specifications, and the interface will be passable even if it needs a lot of polish. Tailwind has worked wonders for that.
Sure, some prototypes will be spun up more quickly. But if this was a real problem large companies faced it would have been solved in software already.
Good for everybody who isn't a large company then?
So it's competent, for sure, but that is damning it with faint praise.
The shelf-life of unique and mindblowing has reduced to a week (being generous) before it's copied by slop artists looking for a resume booster or funding, and months tops before it's part of training data for everyone. Unless you find it in that small time window everything will seem homogenous.
It could just be a systemic result; unless you deliberately take the lonely road to parts of the internet where other people aren't, you will not see unique and mind blowing things. Which by definition you can't source from a place that has a lot of users, like social media or popular forums.
AI companies: "good news, everyone! We've automated all those steps so they're even easier to generate!"
I think the same thing is happening in physical construction. Ah, I see you've designed a new box with four primary color tones and slightly offset vertical lines to break up the windows.
Obviously a product of its time and laid out similar to how it'd be printed in a magazine (the characters slightly overflowing the borders and such like). Accessibility wasn't a thing back then.
If a different company did that in 2018 you'd be seeing the G-man in corporate memphis, downloading about 500MB of assets, with 178 separate ad trackers in a consent popup, and then you'd be scrolling like mad to get through all sorts of animations that hijack the scrollbar, in order to get to any useful info.
[0] https://www.reddit.com/r/HalfLife/comments/10sx4ve/what_stea...
But does it still exist? Even without AI, everyone is utilizing the same CSS frameworks, same libraries and templates... design is pretty much boring these days. CSS Zen Garden, anyone?
In a direction where the AI model basically serves you everything live. No sites, no front end, just databases and model embodying them.
I mean, why even code anything in the future when it is cheap and fast enough to just come up with everything each time, based on each user's need?
I am not saying it’s good but it’s lazy. And if one thing is for certain is that laziness prevails. Some even mistake it for progress.
But then, is human programming language really the most optimal way for an ai to steer the silicon? Some kind of bare AI OS with kernel, drivers and there in the middle a fat specialised asic ai chip to orchestrate everything.
You might just as well bemoan the homogeneity of Windows 95 apps. All those gray buttons in the bottom right of windows.
I think it's because Steve Jobs killed Flash.
This is most every corporate website.
This comment is just a rehash of the increasingly outdated assertion that LLMs can't possibly exhibit any creativity -- and it's incorrect.
If you're yearning for "old skool artisanal weirdness of yore", look up the trend on Twitter a month or two ago of people asking Claude to make YTPs. They ended up very weird and artisanal in a way distinct from how any human would do it.
Look up in an old city, look at the facades of the buildings. They have quirks, uniqueness, it makes the city almost a living thing. Every time we shave off another edge we lose that. Nevermind the fact that shoehorning everything into the same patterns is actually an antipattern and very good paradigms have been invented after the 90s.
It's not perfect, but I'd rather have a bit of a mess than boring emptiness.
Before these tools, when a client wanted a specific section built, we'd spend hours hunting references across the web. The output always ended up feeling like a mesh of 2-3 sites, never fully unique. Then we'd burn more time explaining the intent to the client's designers and devs, usually with multiple rounds because words don't convey layout well.
Now we throw a quick mockup together in Claude or Lovable and send it. The designer gets the idea in 30 seconds instead of a 45-minute call, then pushes it further with their own taste and the client's branding.
It's not replacing designers. Most clients don't know what they want until they see it. These tools collapse that feedback loop from weeks to minutes, so the designer actually spends their time on the parts that need human taste, not on decoding a vague brief.
Except it is. Plenty of places will say this is all good enough and not hire, or even lay off, the UI/UX person. I've seen this firsthand.
This is just a really cool way of building.
I'm impressed. I tried Google Stitch but it was slow and useless. Sad, because Gemini has a pretty good creative flair, ironically enough.
But jeez, is it buggy, slow and unintuitive at times.
Complete shift from Google's old engineering culture of high quality - they seem to be favoring shipping quickly over stability.
I can get LLMs to write most CSS I need by treating it like a slot machine and pulling the handle till it spits out what I need; this doesn't cause me to learn CSS at all.
This allows me to focus my attention on important learning endeavors, things I actually want to learn and are not forced to simply because a vendor was sloppy and introduced a bug in v3.4.1.3.
LLMs excel when you can give them a lot of relevant context and they behave like an intelligent search function.
The real fun of programming is when it becomes a vector for modeling something, communicating that model to others, and talking about that model with others. That is what programming is, modeling. There's a domain you're operating within. Programming is a language you use to talk about part of it. It's annoying when a distracting and unessential detail derails this conversation.
Pure vibe coding is lazy, but I see no problem with AI assistants. They're not a difference in kind, but of degree. No one argues that we should throw away type checking, because it reduces the cognitive load needed to infer the types of expressions in dynamic languages in your head. The reduction in wasteful cognitive load is precisely the point.
Quoting Aristotle's Politics, "all paid employments [..] absorb and degrade the mind". There's a scale, arguably. There are intellectual activities that are more worthy and better elevate the mind, and there are those that absorb its attention, mold it according to base concerns, drag it into triviality, and take time away from higher pursuits.
> It's annoying when a distracting and unessential detail derails this conversation
There are no such details.
The model (the program) and the simulation (the process) are intrinsically linked, as the latter is what gives the former its semantics. The simulation apparatus may be noisy (when its own model blends into our own), but corrective and transformative models exist (abstraction).
> No one argues that we should throw away type checking,…
That's not a good comparison. Type checking helps with cognitive load in verifying correctness, but it increases it when you're not sure of the final shape of the solution. It's a bit like pen vs pencil in drawing. Pen is more durable and cleaner, while pencil feels more adventurous.
As long as you can pattern match to get a solution, an LLM can help you, but that does require having encountered the pattern before in order to describe it. It can remove tediousness, but any creative usage is problematic, as it has no restraints.
Are you really going to do that though? The whole point of using AI for coding is to crank shit out as fast as possible. If you’re gonna stop and try to “learn” everything, why not take that approach to begin with? You’re fooling yourself if you think “ok, give me the answer first then teach me” is the same as learning and being able to figure out the answer yourself.
It takes a lot of cajoling to get an LLM to produce a result I want to use. It takes no cajoling for me to do it myself.
The only time "AI" helps is in domains that I am unfamiliar with, and even then it's more miss than hit.
Quality is a different issue, sure.
I don’t even bother. Most of my use cases have been when I’m sure I’ve done the same type of work before (tests, crud query,…). I describe the structure of the code and let it replicate the pattern.
For any fundamental alteration, I bring out my vim/emacs-fu. But after a while, you start to have good abstractions, and you spend your time more on thinking than on coding (most solutions are a few lines of codes).
"Generative AI" isn't just an adjective applied to a noun, it's a specific marketing term that's used as the collective category for language models and image/video model -- things which "generate" content.
What I assume you mean is "I think <term> is misleading, and would prefer to make a distinction".
But how you actually phrased it reads as "<term> doesn't mean <accepted definition of the term>, but rather <definition I made up which contains only the subset of the original definition I dislike>. What you mean is <term made up on the spot to distinguish the 'good' subset of the accepted definition>"
I see this all the time in politics, and it muddies the discussion so much because you can't have a coherent conversation. (And AI is very much a political topic these days.) It's the illusion of nuance -- which actually just serves as an excuse to avoid engaging with the nuance that actually exists in the real category. (Research AI is generative AI; they are not cleanly separable categories which you can define without artificial/external distinctions.)
It is a truism that the majority of effort and time a software dev spends is allocated toward boilerplate, plumbing, and other tedious and intellectually uninteresting drudgery. LLMs can alleviate much of that, and if used wisely, function as a tool for aiding the understanding of principles, which is ultimately what knowledge concerns, and not absorbing the mind in ephemeral and essentially arbitrary fluff. In fact, the occupational hazard is that you'll become so absorbed in some bit of minutia, you'll forget the context you were operating in. You'll forget what the point of it all was.
Life is short. While knowing how to calculate mentally and/or with pen and paper is good for mastering principles and basic facility (the same is true of programming, btw), no one is clamoring to go back to the days before the calculator. There's a reason physicists would outsource the numerical bullshit to teams of human computers.
Or it lets folks focus. My coding skills have gotten damn rough over the years. But I still like the math. Using AI to build visualizations while I work on the model math with paper and pen is the best of both worlds. I can rapidly model something I’m working on out algebraically and analytically.
Does that mean my R skills are deteriorating? Absolutely. But I think that’s fine. My total skillset’s power is increasing.
Actually there’s some interesting problems here because a huge part of music marketing is in a visual medium, like a poster or album cover. It is literally impossible to include a clip of your sound.
So you should be really interested in how to capture the “vibe” of your music in a visual medium.
But if you don’t care at all whether ppl actually listen to your music, then yeah you don’t have to deep dive.
The term you are looking for is 'aesthetic'.
And indeed.. music is far more than just a sound or whatever simple thing one tries to boil it down to.
I'm convinced many (especially here) really dislike that - they want it to just be a case of typing a few things into an LLM and bam... there you go. They have zero clue about the nature of the economy, what's really going on in various markets, etc.
When you deploy AI to build something, you wind up doing the work that the AI itself can't do. Holding large amounts of context, maintaining a vision, writing apis and defining interfaces. Alongside like, project management. How much time is spent on features vs refactoring vs testing.
If only all great works could just be an X post!
What if you don’t give a shit about design and it’s a means to an end for a project that involves something different that you do care about?
For example, I think design, as they mean it, could be described as "how to get that thing we care about". The correct amount of design depends on how exacting the outcome and outputs needs to be across different dimensions (how fast, how accurate, how easy to interpret, how easy to utilize as an input for some other system). For generalized things where there's not exacting standards for that, AI works well. For systems with exacting standards along one or more of those aspects, the process of design allows for the needed control and accuracy as the person or people doing the work are in a constant feedback loop and can dial in to what's needed. If you give up control of the inside of that loop, you lose the fine grained control required for even knowing how far you are away from theoretical maximums for those aspects.
Thank you for so succinctly demonstrating the problem with using AI for everything. You used to have to either care enough to do the design yourself or find someone who cared and specialized in that to do it for you. Now you quickly and cheaply fill in the parts you don't personally care about with sawdust, and as this becomes normalized you deprive yourself and others from discovering that they care about the design part. You'll ship your thing now, and it'll be fine. The damage is delayed and externalized.
I won't advocate against use of new technology to make yourself more productive, but it's important to at least understand what you're losing.
Or worse, you gave up because you did not have the time to learn the skill or the money to hire somebody. In this case, your dream just died.
If Grok didn't create the fake nudes users were dreaming about but couldn't create with Photoshop,
would my headstone crumble down?
As "intel" dashboards stay a dream,
the Hollywood wind's a howl
As photos are just still
The Kremlin's falling
As Einstein is not wrong
Radio 4 is static
You think most UI/UX designers, or the artists creating slop for content marketing spam factories for the past decades, cared? Some, maybe. Most probably had higher ambitions, but are doing what actually pays their bills.
It's similar to software developers. Most of those being paid to code couldn't care less, they're in there for the fat paycheck; everyone else mostly complains the work is boring or dumb (or worse), but once you have those skills, it makes no economic sense to switch careers (unless, of course, you're into management, or into playing the entrepreneurship roulette).
The paychecks weren’t great. Everyone was offering to pay designers with “exposure”. If they didn’t innately care about the field they would have done something more lucrative.
The parent's point is that it doesn't work that way. The point is self-reinforcing. Design is not a thing; it's the earned scars from the process. Fine to disagree, but it reinforces the point.
Like, maybe I just want to make an interface to configure my homemade espresso dohickey, do I have to wear a turtleneck and read Christopher Alexander now? I just wanted a couple buttons and some sliders.
We don't all have to be experts in everything, some people just need a means to an end, and that's ok. I won't like the wave of slop that's coming, but the antidote certainly isn't this.
It's true that design theory writing is annoyingly verbose and intangible, but that doesn't make it wrong. Give someone a concrete language spec and they will not really know how it feels to use the language, and even once they do experience its use they will not be able to explain that feeling using the language spec. Invariably the language will tend to become intangible and likely very verbose.
But to answer your question: no, it's of course perfectly serviceable to just copy the interface others have created, and if the needs aren't exactly the same you can just put up with the inevitable discomfort from where the original doesn't translate into the copy.
I'm an engineer who also loves design. I've read a lot of the books (including the one referenced), I know some concepts and terminology, and I understand the general process — but I'll never be a professional designer. My knowledge is limited, and I find most design tools so complex they actually get in the way of problem exploration and creativity.
For people like me, these tools remove the friction that actually prevents me from being more focused on the valuable parts of the design process. I can more easily discover and learn new concepts, and ultimately spend more time being creative and exploring the problem space.
A whiteboard or a wireframing software would be better, because it lets you focus first on the interactive part. And once that’s solved, the visual part is easier.
This speed and variation wins for me. But yes, without a designer's eye, laziness can slide into slop design too.
To me the value of gen AI is as an accelerant (not a slop factory) for ideation and solutions, not a replacement for the human owning the process... but laziness usually wins.
when people wax philosophical/poetical about what is essentially capital production already i'm always so perplexed - do you not realize that you're not doing art/you're not an artisan? your labor is always actively being transformed into a product sold on a market. there are no "marvelous human experiences", there is only production and consumption.
> They’ll be impoverished and confuse output with agency
ironic.
The first time I used Mac OS X, circa 2004-2005, I was blown away by the design and how they managed to expose the power of the underlying Unix-ish kernel without making it hurt for people who didn't want that experience. My SO couldn't have cared less about Terminal.app, but loved the UI. I also loved the UI and appreciated how they took the time to integrate cli tools with it.
I would say it was a marvelous human experience _for me_.
Sure it was the Apple engineers' and designers' labor transformed into a product, but it was a fucking great product and something that I'm sure those teams were very proud of. The same was true with the iPod and the iPhone.
I work on niche products, so I've never done something as widely appreciated as those examples, but on the products I've worked on, I can easily say that I really enjoy making things that other people want to use, even if it's just an internal tool. I also enjoy getting paid for my labor. I've found that this is often a win-win situation.
Work doesn't have to be exploitive. Products don't have to exploit their users.
Viewing everything through the lens of production and consumption is like viewing the whole world as a big constraint optimization problem: (1) you end up torturing the meaning of words to fit your preconceived ideas, and (2) by doing so you miss hearing what other people are saying.
...
> Work doesn't have to be exploitive. Products don't have to exploit their users.
bruh do people have any idea what they're writing as they write it? you're talking about "work doesn't have to be [exploitative]" in the same breath as Apple, the third-largest company in the world by market cap, which is well known for exploiting child labor to produce its products. like, has this comment "jumped the shark"?
> Viewing everything through the lens of production and consumption
i don't view everything through any lens - i view work through the lens of work (and therefore production/consumption). i very clearly delineated between this lens and at least one other lens (art).
Ultimately the exploitative pyramid always terminates in a peak, and the guys working up there can for sure be having a hecking great time doing their jobs.
just repeating the same mistake as op: sadness/happiness is completely outside the scope here. these are aspects of a job - "design" explicitly relates to products, not art. and wondering about the sadness/happiness of a job is like wondering about the marketability of a piece of art - it's completely beside the point!
1. Good design is innovative
2. Good design makes a product useful
3. Good design is aesthetic
4. Good design makes a product understandable
5. Good design is unobtrusive
6. Good design is honest
7. Good design is long-lasting
8. Good design is thorough down to the last detail
9. Good design is environmentally friendly
10. Good design is as little design as possible
Generative AI just tries to predict based on its training data.
a product can be a piece of art, and design can and in practice often does go hand in hand with art; most designers also practice the artistic role, not just the utilitarian one. whether you would want to group art within design is a matter of definitions
of course, but that's well within the scope of the whole paradigm (as opposed to how OP originally phrased it, in relation to a loss of "marvelous human experiences"): if i use a bad tool to solve my customers' problems in an unsatisfactory way, then my customers will no longer be my customers (assuming the all-knowing guiding hand of the free market). so there's no new observation whatsoever in OP.
Anyways, this is 100% a shot at Figma, but also catching Lovable in the crossfire. If anybody from Anthropic is reading this, if you keep developing this with features in Figma and other design tools, you'll have a major hit on your hands.
Figma is targeted towards designers who create thoughtful design systems and cohesive UIs and who don't code, while this is targeted towards vibe coders who can't design. Two different circles that intersect to some level.
But like you said, if Anthropic adds the tools in Figma, only then can they take customers from Figma, IMO.
It probably reduces the tasks which customers might engage an agency using Figma, though. Down the line, creeping onto Figma’s turf absolutely becomes a strategy for Anthropic.
The challenge is that this sets an expectation of what "design" is, de-valuing the former and shifting us culturally towards the latter and a space where "design" is seen as a subjective visual exercise with little intrinsic value.
But for the other 95% of people (like me), being able to just say "ok can you make it look more modern" and have 4 variants in 5 mins means Figma will lose users like me.
But then again I was never a "designer" – more a builder.
Same here. I work in Claude Code all day long on slightly complex b2b apps, and the builder MVP for what I want to do with Claude.ai, to work on ideas is far simpler.
I just want to be able to create a React artifact prototype on claude.ai, then share it privately with a stakeholder (internal or external) - and allow those users to prompt changes, then see their changes in the artifact.
The bespoke design is not what I am really worried about at this phase. For b2b prototype stuff, claude.ai already does an excellent job with just a bit of project-specific prompting.
Why is this not yet doable? This seems "so simple." Yes, maybe some shared artifact specific git to allow version control, but is my ask really that hard, or unique?
The Anthropic video on that page at 0:53 literally shows them clicking a "knobs" button and adjusting the pixel CSS value.
I know it's not exactly the same ... but it has that functionality to a degree.
I've never paid for a figma seat. A couple of employers have so that I can collaborate with designers in the product, but I don't think this changes that.
In an era where it's cheaper and more common to end up at that undifferentiated state, the ability for companies to make their products go above and beyond it is more valuable, not less.
I see this across the board with AI. It lowers the bar to get to passable, but as slop fills the internet we're already seeing people place more value in good products, good writing, good art, thoughtful code architecture, etc. Everyone and their cousin's uber driver is vibe coding a SaaS startup no one's going to pay for right now.
If you are talking about a consumer product, one of these is not like the others.
You also clearly misread what I said. I didn't say I spent 5 minutes prompting an LLM. I said the ability to get FEEDBACK (a revision) in 5 minutes is amazing. And I stand by that. That allows me to do 20 more revisions and do in a couple of hours what would take two weeks.
You seem to be romanticizing the concept of grunt work – that for something to have value or be of good quality, you have to put in some sort of minimum amount of time on it, and it has to be tedious. It's the same concept that nobody can make a good quality piece of furniture unless they used a hand saw and spoke sweet nothings to the tree before it was cut.
There are ways to do things quicker while preserving quality. I had already left a caveat saying that for the 5% of people that really want to push web design forward, totally, go ahead. But for the rest of us (including those of us who have lived and breathed code and engineering principles for decades), these tools are phenomenal for iterating quickly.
Anyway, the term builder is more about separating the goals from a vanilla "programmer". Even though I've programmed my whole life, it's always been in service of an outcome. And the outcome is almost never "good code for the sake of good code"; it has to serve a real outcome in the real world.
By the way, lots of good designers are also using coding agents now, so you can keep romanticizing grunt work while most of the market moves on.
Perhaps this phrasing is what invited the interpretation you seem to be annoyed with.
There is not much to gain by suggesting everyone is simply bad faith.
I think you, like the other person, are assuming that 5 minutes = low quality, instead of thinking "5 mins means you can make 8-10 iterations in an hour" or "5 minutes making the front end look pretty good means I can spend more time on the backend".
There are many good faith ways to interpret this.
No one is assuming the output is strictly low quality, from what I can tell. I am personally evaluating the method you described, which suggested you are championing a sloppy but highly iterative design flow over a seasoned, curated suite for defining design. I don't see any reason to assume the other comment was doing anything otherwise.
You made a broad generalized strong claim and were met with the opposing force, which is actually acting from their own understanding of good faith, believe it or not (see how this analysis is void of meaning?).
this overlap has been widening incredibly quickly. lots of designers are now writing code with the help of cursor, claude code, etc.
even if you believe "real designers" won't ever use this product, it's not hard to see how a low barrier-of-entry tool could affect Figma's bottom line. slowing down Figma's adoption among the new wave of entry-level designers who don't already have muscle memory would not surprise me at all.
Not convinced Figma cares about traditional design craft anymore.
So I helped her look into it, and I was shocked to find that it's just a React slop generator, not a Figma file generator. And extremely limited at that.
Who is Figma targeting with this, exactly? Developers who are interested in React apps will simply use Claude Code, and UX designers don't really care for React apps.
These areas obviously tie into engineering very closely, but the thinking that goes into them happens at the design stage, at a lower cost than starting with engineering. AI models suck at getting every facet of this process right, because designers are achieving a balance between branding, usability, standards, taste, and differentiation -- the exact opposite of a model trained to reach for the most average outputs.
Had they not included support for it, where would they be now? I'd wager a critical mass would be screeching to high heaven for integrations, seeing as a Figma document is effectively a config file that can be translated to real code.
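To make that "config file" framing concrete, here is a toy sketch. The node shape is hypothetical and heavily simplified (loosely inspired by the JSON trees design tools export, NOT the actual Figma schema), but it shows why translation to real code is mechanical once the document is structured data:

```python
# Toy illustration: a HYPOTHETICAL, simplified design-tool node
# (not the real Figma schema) translated into a CSS rule.

def node_to_css(node):
    """Turn one simplified design node into a CSS rule string."""
    box = node["box"]  # {x, y, width, height} in px
    props = [
        f"width: {box['width']}px",
        f"height: {box['height']}px",
        f"background: {node.get('fill', 'transparent')}",
    ]
    if node.get("cornerRadius"):
        props.append(f"border-radius: {node['cornerRadius']}px")
    return f".{node['name']} {{ {'; '.join(props)} }}"

doc = {
    "name": "primary-button",
    "box": {"x": 0, "y": 0, "width": 120, "height": 40},
    "fill": "#1a73e8",
    "cornerRadius": 8,
}
print(node_to_css(doc))
```

A real translator walks a whole node tree and handles layout, text, and constraints, but the principle is the same: the design file already encodes everything a code generator needs.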
> The folks at Wall Street do not understand
Not entirely but I would use this and not Figma. I am passionate about system design not visual design so I don’t want to waste time in figma.
How many such people does the world need? Probably less than 1,000. Not a very big market for Figma.
But for me, I will never use it again.
He should probably go and let someone else take the reins.
https://stitch.withgoogle.com/
I'm now pasting all my Stitch output into Claude Design to see what happens.
edit: First impressions are great. It asked me a ton of really great questions about my design aspirations and direction, which were incredibly relevant and insightful. Waiting to see what it makes.
edit2: It did astonishingly well with the first design pass. Really outstanding. This is probably going to be my primary prototyping tool until the Next Best Thing(tm) drops in a few weeks.
They're down 80% over the last year. Ouch.
Figma actually put the work in to make a great product that performs well and offers anything you could imagine to design just about anything you need, with AI integrations and deep manual editing to sweat the details.
- The best design is original, groundbreaking, and often counterintuitive. An AI model is incapable of that: it's uninspired, it will absolutely converge to the norm and homogeneity (you see it everywhere now; just scroll Show HN and take a look at the UIs), and it will produce the safest design that appeals to its understanding of the ideal user.
- Good designers will reject this; they prefer to be hands-on and draw from multiple sources of inspiration, which is what Figma boards and Canva are good for, along with cross-collaboration. If you've seen how quickly a great design engineer can prototype, you'll know that the "speed" they advertise in this video is not worth the tradeoff.
- Creatives typically have a very very very high aversion to AI.
- Non-designers will not see a purpose for this tool; basic design can already be done through Claude Code and Claude.ai. I fail to see what this could offer unless they leverage a model that is more creative and unique by default (you cannot prompt/context/harness-engineer creativity; believe me, I've tried).
- Design is a lot more than just UI. Tools like this ignore so many other important aspects like: motion, typography, images, weight, whitespace, sound, feel.
Designing a user interface involves thousands of small decisions. When trading off pros/cons for each of these decisions, in 99% of cases the right answer is "optimize for familiarity."
That's why Android and iOS look the same, and why the small differences between them are where contention happens.
If you adopt existing patterns, your users will be instantly familiar with your app, and the design will not get in their way.
HOWEVER, that familiarity is only a virtue because someone, once, deviated hard enough that their deviation became the new familiar. AI can only optimise toward the current snapshot of "familiar"; it cannot produce the next one. If designers outsource all their thinking to a model, even in tactful design, we would never get groundbreaking design concepts like pull-to-refresh or the command palette.
That's not necessarily what happened, though. Apple innovated not out of sheer daring but because they also had the best metaphysical paradigm for GUIs, one that people could just intuitively grasp. There was a structural correctness to their approach, underlying all the things we find visually appealing. In the beginning, Google dared and deviated hard from Apple's design language to establish their own unique identity, but anyone working in the mobile space will have noticed that Android coalesced into roughly the same patterns over time because of that structural correctness.
Which needs to be done intentionally in context, not homogeneously as a rapid output of a generative tool.
If you want to make a GUI, it should be familiar. Extremely familiar. It shouldn't invent new ways to interact most of the time.
It is well-known that "intuitive" in UX almost always means "what I'm used to". If you're regularly "innovating" in UI design, you may be making the product harder to use, maybe much harder to use.
It certainly isn't unheard of for new ways to interact with computers to be better than the old, but they are usually tied to new physical aspects of our tools: Touchscreens needed new ways to interact, and maybe there's still some room for creativity there, but not much. The mouse obviously required innovative ideas for several years. But, also, the odds of your wacky new idea being the right way to change how people interact with computers are pretty low, unless you're working at FAANG and have a UX research team and budget to test it.
You can get creative in how it looks, but you cannot get creative in how it works.
Innovation comes from the ways people differentiate, without straying too far from the tried-and-true patterns. It's the tiny decisions that situate UI elements and yes, reinvent the wheel sometimes, that can tip users over to whatever you're building because you did it better, or in a way "most" (the average) never thought of.
If people aren't creative in how it works, then really they're all just making the same, boring products, without truly competing against anyone in a meaningful way in the problem space. Visual appeal isn't a sole differentiator.
I guess that kind of thinking got us liquid glass - which everyone hates.
"Good designers will reject this."
^ Famous last words.
I will very likely be wrong on the second point.
And no, it doesn't just add ARIA to everything, as is so typical of poor practitioners.
I'm arguing about invention. It is extremely unlikely that AI will be the one to invent the next accessibility paradigm, because that requires deviating from the training distribution, which it CAN'T DO.
I'm also arguing that this homogeneity in design will lead to an atrophy in inventive, unique and original thinking.
What is it about our own architecture that lets us innovate beyond our training distribution?
You’re talking about art, not design.
Not everyone is looking for unique design; 70% of the web is still using WordPress. I would say the majority prefer familiarity and merely appreciate uniqueness.
I have no idea how everything will play out, but this sounds a lot like the people saying "good programmers will reject this" six months ago.
Quite apart from anything else, it ignores the fact that—particularly within large organisations—designers (and programmers) frequently have very little say in the matter.
If you want to talk in absolutes, I'd say the best design is the one that results in the desired behaviour of your audience.
most of those "breakthroughs" were just constraint hacks. no room for a reload button. no room for another menu.
enterprise buyers don't pay for counterintuitive. they pay so the new hire finds save without training.
Until we have embodied AI's with eyes and hands that provide good enough approximations, the aspect of design bottlenecked on human experience will stay bottlenecked.
Overall, after being laid off in January and a 17-year UX research/design/dev career, I'm starting school in my early 50s to change careers.
I think more expressive UIs are the future, but I disagree with this sort of thing being accomplished with a non-deterministic tool such as AI generating UIs; you are throwing stability and consistency, along with familiarity, out the window.
The idea of tools being almost UI-less, composable, and modular has been a "dream" since Xerox PARC; see, for example, the book "The Humane Interface", which, ahead of its time, outlined reasons why such generative interfaces would be a bad idea, especially at such a large scale.
AI can potentially relieve some friction with that paradigm but definitely not in that way or even that extent.
This is for non-designers to crank out slop with less effort. They can still be swayed by all the shiny knobs to feel in control.
Even the most deluded AI bulls don't say that AI is even meant to replace the best that humanity has to offer
While great design breaks the mould, very good design is about surfacing the most expected outcome for any action, which reduces friction and lets people get work done. And this generation of generative tools is very good at identifying the most common/most expected response to a prompt.
I use it all day, every day, with Claude Code. Code aside, I sometimes wonder if this has had the biggest impact on my day-to-day productivity; before, it was either making do with semi-bad-looking reports or having a designer design them (which is slow).
Sort of feel sorry for Figma in a way though, given all the "partnerships" (highlighting their MCPs) and case studies they've done with Anthropic and then they release this. I note there isn't a testimonial from them this time.
I'm surprised how poorly Figma has used "AI" in general, given they were the "gold standard" at taking emerging technologies (WASM etc.) and making an incredible product. The Figma Make thing was incredibly underwhelming; I managed to extract the system prompt, and it's basically just Gemini 3 Pro with a design prompt. Perhaps the original team has left?
They are extremely exposed, imo. While all the UI/UX designers will continue using it for the foreseeable future, I strongly suspect a lot of their (A/M)RR was coming from extra seats for PMs, developers, etc. to view, export, and comment on files, not core designer usage. I think a lot of this just won't happen on Figma as much.
Their seats system has always been brutal: it's extremely easy to have the seats balloon if you're not careful, and if they're yearly there is only a 30-day window each year in which you can cancel them, when the banner to do so appears.
Nope. Figma Make first renders an HTML/React app with your design. Then you could convert to a Figma design file if you have a pro plan. Extremely underwhelming.
There's hardly any difference between using Figma and just designing it with Codex and Claude Code. And now, Claude Design seems to get it right.
* Massive token usage, some small tasks burned through $50 of credits and did not offer $50 of value.
* Terrible at logo work. Comically bad. This is something that is "hard" so it could add great value if it could deliver.
* Repeatedly forgot prior feedback - when iterating it would re-implement prior iterations after being told why we didn't want that result which made for a very frustrating UX.
* Prone to adding visual clutter - kept adding extra elements that look "pretty" but add no value to the user.
* Seems better at "pretty" vs user focused / UX.
* Did not do a good job at using my existing design / UI library
* REALLY wanted to start from scratch. Could not be coaxed into designing part of an application, it wanted to redesign the whole thing.
OK but what we really want to know, what's it like when it comes to drawing pelicans riding on bicycles?
Anyone remember Google's social media platform??? Google Plus?
This is a good era to be in! It's the era of product experimentation.
As long as you realize that 90% of these products will not be supported long term if they don't contribute to bottom-line revenue, just appreciate it for what it is: a bunch of smart people trying to create useful products.
Just don't be surprised if Anthropic goes the Google route, which is shutting down the majority of the products that are too small / not successful enough to impact their revenue.
Not every Google product release used Google search. Some of them were completely outside of Google's domain.
Keeping the hype alive through to IPO is critical now.
There's no reason to believe Anthropic will stop caring about this product--they're not Google [1] after all.
> It really feels like Anthropic's product area is extremely overextended at this point.
I don't think so. They have one core product: the Claude model; they're enabling different ways of accessing it. Claude Code for developers, Cowork for general business tasks, and chat for consumers.
This is their first graphic design product, but it fits nicely because once you create a prototype, you can hand it over to Claude Code to make the website, mobile app, or whatever.
The advantage Anthropic has is their ecosystem. A Claude user will be way more productive using Design because all of their context is with Claude; other AI tools don't "know you" the way Claude does. Claude already knows your style and your preferences; it's much more likely to create designs you'd like.
When you go to an AI you don’t normally use, you essentially have to start from scratch.