Posted by retskrad 2 days ago
What's more, I would love something akin to my current Galaxy A52s 5G but with a display around 5.2-5.5" (I first had an LG G2, then a OnePlus 3, which was already a bit bulky, and now the A52s as a compromise; https://www.gsmarena.com/compare.php3?idPhone1=5543&idPhone2...)...
I do have an iPhone SE (2022); it's the size of the LG G2 and I find it quite handy. Something of that size but with a slightly bigger screen (better screen-to-body ratio). The specs don't have to be super-hyper-premium, and the price should be sane (compact phones usually cost 20-40% more, sic).
When I was ready to buy a new phone, there were no iPhone Mini models for sale. It took more than a year, but I finally found an iPhone 13 Mini in stock on the Apple Refurbished store. Now I'm hoping to keep this phone alive until they finally release another small iPhone.
In my circle of friends, iPhone Minis are the most popular smartphone model; it's even peculiar how pronounced it is.
But they all bought iPhone Minis 2 or 3 years after they came out.
I'd like to see iPhone sales numbers per generation, over multiple years. Like "how did the iPhone 12 models (Mini, normal, Pro…) sell in 2020, 2021, 2022, etc.?"
The iPhone mini was a billion dollar device. Anyone other than Apple would have called that a success.
You think the pace of innovation has stalled and the tech industry is less diverse than it was 20-50 years ago? Really?
Having cut my teeth on the tech of the late-seventies, that’s not a perspective I share. I have long been impressed with how fast new tech makes it into our grubby little hands. From my perspective, if we were stuck at the pace of innovation present during my early days, well, I think it’s not unlikely we’d still be using feature-phones and WAP gateways so we could tinker with that new mobile-internet stuff everyone’s raving about.
Did you know that factories make less big-sized and small-sized shoes than average-sized ones? Because (surprise) buyers size distribution is not uniform.
“We tried to make big shoes many times and it doesn’t sell well enough”. Oh, really. I guess I’ll just cut holes for my fingers then.
Lol
Now that I'm ready to buy a new iPhone, the Mini has been discontinued! I think the Mini would have been and in fact was successful, but it's not successful "enough" to justify a separate model – they must have observed that people "like me" would still buy a flagship iPhone, even though we aren't 100% satisfied with the form factor.
Apple would rather have us buying a higher-margin flagship model and have an NPS of 65+ than a lower-margin mini model with an NPS of 80+.
My friend with similar instincts as me recently got a refurbished 13 Mini instead of the latest flagship. I'll probably get the flagship, because I value the satcom a little bit more than the form factor.
They should sell small phones. Because the whole family will be in on the brand, features, and services. I’ve heard many family members and friends say that they won’t give up their older small phones because Apple no longer makes new ones.
The idea that they don’t sell well is not a good enough reason when you are trying to capture the whole market for not just hardware but the lock in for services, apps, games, music, etc.
Still, I would like to see a smaller regularly updated phone. Bonus points if there is a high-end version, because small shouldn't mean budget (like with iPhone SE).
Other than that and the camera, the only functional difference I can find is that the SE line is still missing the UWB antenna.
I'd happily upgrade to a newer small iPhone if they made one. As it is, it looks like the only option is repeatedly repairing my 13 mini (and dealing with the hilariously bad 5G battery drain forever) or downgrading to a newer SE3.
I know there are a lot of people in this boat. I predict they'll produce another small phone in a few years. It'll sell well due to pent up demand, and someone will be declared a genius for selling 100M's of extra phones that year.
I wish for this too. I'm afraid that Apple over the next few years will become more risk-averse than ever before. Also, old execs are leaving and retiring, so there are fewer people with hands-on experience of how to start new products vs. keeping the lights on.
I also would love a smaller flagship spec'd phone.
I know it won't be popular, but I still want it.
There's a congregation of people on HN who do; I suspect they're also the type of people who'll run their phones for 5+ years, while their bigger-phone-buying brethren replace theirs every year or two at most.
The market for the "small" smartphone just isn't profitable enough to bother with for most manufacturers.
I would love the new features (especially the camera and sat comm) but I’m not willing to get a bigger form factor device.
I literally was in the Apple store yesterday for the same purpose (with a 12 mini). I'd also love the new features and hardware, but after trying all the available sizes in the store, they're all too big.
My wife, on the other hand(s), loves to have a phone she needs to hold with two hands to even be able to use, so obviously she has the Pro Max. I don't understand how people are OK with that, but to each their own...
Part and kit: https://www.ifixit.com/products/iphone-12-mini-replacement-b...
Steps: https://www.ifixit.com/Guide/iPhone+12+mini+Battery+Replacem...
I've read online, and heard from an Apple Store representative, that the iPhone 12 (all models) has a tendency to crack the screen when the phone is opened for repair or battery replacement, and that in that case the Apple Store would replace the whole phone (which is why it was a multi-day repair process). So I would rather pay $90 to Apple, which guarantees that I'll get a replacement phone if the screen breaks during the battery replacement. Without the phone I would still be able to answer calls from my Apple Watch, and with the iPad over WiFi.
They didn't try to upsell me at all, but I ended up getting an SE3 anyway (I didn't realize there even was a newer SE).
Most likely it took them a week to get a new battery for replacement shipped.
Some of the things coming in iOS, like notification summaries and similar features, are big examples. It's clearly LLM-based, but it's not the needless shoving of AI into everything that we're seeing now, and it provides a true improvement given the notification overload we have right now.
You can see the wisdom of how Apple is approaching this. In particular, to be on device whenever possible so as not to be dependent on network bandwidth, and to tie features to new hardware (to drive sales).
Normies hate the AI hypetrain bullshit.
Or maybe they don’t, but I somehow see people using those features all the time.
From personal experience, the only thing that changed when replacing the "old" Google Assistant with the Gemini-powered one on my Pixel was that it's no longer able to create reminders.
Literally every thing I used it for got answered via "I cannot do that yet" after it randomly opted me into that. Pure garbage.
I'm not, I pretty much just accepted that Google doesn't care about usability whatsoever and haven't prompted it in a very long time.
To be clear, the only time I've ever used it was via "ok Google" in contexts where I'm unable to interact with the phone directly, i.e. while driving. If it doesn't work, you learn that you can't start driving before queueing up the navigation anymore. The voice assistant was a nice feature, but not important enough to waste my time trying to figure out which feature they opted me into and how to get back out of it.
That said, at some point it started working better, but there was a good 6-12 months where it was a tire fire.
Apple's approach of using current-boom AI to help you navigate and digest your own private trove of multimedia content (photos, videos, apps, notes, structured data, etc) is absolutely useful to people as well, and for some of us, one of the only personal uses of this AI that seems compelling at all.
I'm much more excited to have help finding that goofy picture of my cat by describing what I remember, so I can share it with a friend, than I am to have some chatbot dialog about entry-level Python with a hallucinating parlor trick.
But these features have to work, and work well, and work fast, and be widely known to work, before they'll really win the market. But that's going to take a minute and it might not even happen.
Hasn't Google been doing this forever? I can search random things in my photos (like pictures of an old car I owned).
The new LLM-ish tools promise that users can be more vague and casual in what language they use and more elaborate in how specific they mean to be; and that the queries (and operations) can span more diverse data sources.
https://blog.google/products/photos/ask-photos-google-io-202...
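The "be vague and still find the photo" promise usually boils down to embedding search: both images and the query get mapped into the same vector space, and the closest vectors win. Here's a toy sketch of that mechanism; the vectors and photo names are hand-made stand-ins for what a real image/text encoder (e.g. a CLIP-style model) would produce, not any actual API.

```python
# Toy embedding-based photo search: rank photos by cosine similarity
# between a query vector and per-photo vectors. All values are invented.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Pretend embeddings along made-up axes: [cat-ness, outdoors-ness, person-ness]
photo_index = {
    "IMG_001.jpg": [0.9, 0.1, 0.0],   # cat on the couch
    "IMG_002.jpg": [0.1, 0.9, 0.2],   # hiking trail
    "IMG_003.jpg": [0.8, 0.7, 0.1],   # cat in the garden
}

def search(query_vec, index, top_k=1):
    """Return the top_k photo names closest to the query vector."""
    ranked = sorted(index, key=lambda name: cosine(query_vec, index[name]),
                    reverse=True)
    return ranked[:top_k]

# "goofy picture of my cat outside" → high cat-ness and outdoors-ness
print(search([0.8, 0.8, 0.0], photo_index))  # → ['IMG_003.jpg']
```

In a real system the hard part is the encoder, not the ranking; but the ranking step really is this simple, which is why the feature scales to millions of photos.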
I have my sister's dogs named in my Google Photos library. Every time I take a picture of either dog, they are automatically tagged and added to a shared album I set up for my sister.
I have nieces and nephews with photos from newborn age to 10+ years old, and it has managed to organize them across their growth and ages. It’s incredible. I can search for “<niece name> <vacation area>” and get every photo of her on a certain vacation to make a family scrap book.
Apple photos search and tagging is pitiful in comparison.
But that whole "find me something in the multiple gigabytes of storage on my device/account" use case definitely seems like the mass appeal.
This is such a mundane use of AI, but unsurprising Apple would sell it as revolutionary.
That is in contrast to a lot of fancy AI demos which are a great party trick, but fall apart in actual usage, with their reliability being “maybe it will work this time, maybe it won’t, just keep retrying :)”.
Apple is pretty well-recognized for usually being a bit late to the party, but at least delivering stuff that’s polished.
Just look at this thread of people sharing how Gemini broke all their commands and automations. The Apple Intelligence Siri on the other hand works just fine (even if new features are arriving slowly).
Whether someone has 4 or 5 fingers, whether they're holding a gun versus a random object, or whether they're mixed-race or Caucasian: these all seem like pretty important things. Likewise, car number plates, signage in the photo that helps identify where it is, and the metadata of the image itself (often more useful than the image) are all incredibly important. All of those are things that AI is absolutely terrible at, lmao.
I'm not sure how a bunch of probably-correct keywords that miss a lot of the important aforementioned details is more useful than the image itself, or its metadata, both of which would be lost. My point applies equally to image classification, too.
Imagine an AI that popped up when you are reading something and warned you that information is false.
For instance, imagine something like https://theconversation.com/can-ai-talk-us-out-of-conspiracy... helping people discern about news and propaganda.
Try asking ChatGPT to only give you true and accurate answers and not make anything up.
Now, more seriously, it'd need to put together a coherent argument and back it up with reputable sources, as just citing sources is very ineffective. The article I cited gives more details on possible approaches to that.
With your argument, the problem happens when a given person goes to Fox News in the first place. The selection of the source, with its biases, has already been made. There's not much you can do or expect after that point.
Also, who curates the curator? Again, an age-old problem with no real, long-term working solution in sight. No, you should not expect some statistical model to hold your hand through the vast internet while you give up any form of critical thinking, reasoning, or, I guess, any cerebral process altogether. Ultimate laziness. Since we know how much money there is in the diet-fad business, it's safe to say the above will find its non-tiny, desperate crowd.
There are no sides in objective reality. You might offer competing hypotheses and evidence for those, but we don’t need to do that to know that climate change is real, that it’s caused by humans, and that vaccines work.
The less fraught one is warning users that they’re being scammed: https://arstechnica.com/gadgets/2024/05/google-wants-ai-to-l...
That's even without considering the efforts that would be made to undermine any system that showed any effectiveness. A lot of misinformation, probably most of the stuff that gets traction, isn't random, it's serving a purpose and being pushed for that reason.
It's an unfortunate fact that the people most sick will refuse treatment until it's too late. Seen from the inside, insanity looks like lucidity.
Some personal examples of how I find it more useful and fun to use (I'd love to hear yours)...
- Wanted to go on a hike an hour away from where both my friend (who lives two hours west of me) and I live. Asked GPT for some good hikes an hour's drive away from both of us to meet and hike. With Google I'd have to do many searches, where GPT just provides the answer right away.
- I count calories and eat out every day. GPT knows the calories of everything I eat, as I eat at chains mostly (Cava, Panera, Starbucks, Chipotle). I tell it via voice what I just ate for my first meal, it calculates my calorie count, and later I'll tell it what I'm having for my second meal. It can also recall my calorie count from days ago. It does all this quickly, vs. Google where I'd have to do oodles of searches.
Usually I use GPT the most when driving, via voice, and unlike Siri, GPT understands me and I can have whole conversations with it to get things done while driving.
I can't speak to your personal entertainment experience, but AI chatbots are generally a slower, less accurate way of getting information than a google search. (Though Google polluting search results with a big, often inaccurate, AI result at the top narrows this a bit.)
These blanket statements lead to flame wars.
To say it is faster to get information from Google than the latest update to Sonnet is simply absurd to me.
I might have even agreed a few months ago but certainly not now.
For the two research examples I gave, the information is accurate. My friend and I have driven an hour away (for both of us) a few times (to different spots) and hiked. The same goes for calories: GPT has well-known chains in its knowledge base. If it weren't a chain restaurant, GPT might not have it in its knowledge base, or might have it wrong.
Wake me up when I can say things like…
Hey, Google, are my custom license plates ready for pick up at the tax office?
Hey, Siri, ask my doctor to refill this medicine.
Hey, Alexa, how many charging stations are broken at the gas station on 16th street?
Hey, Google, why is this plant dying?
Hey, Siri, why are there so many people in my neighborhood today?
Hey, Alexa, did anything ever get done about that story in the newspaper from a couple of years ago about the Chinese slave labor being used to grow pot on illegal farms on the Navajo reservation?
"AI" just doesn't have access to the information required to do anything interesting or useful. And because so much of its information comes from the web, which is already so polluted on certain subjects (gardening, travel) as to be useless, the AI becomes useless.
Since that seems to be an increasingly niche desire (at least as far as the product managers are concerned), I've been looking more and more seriously at setting up my own local voice assistant. My main barrier has been hardware—the mic arrays in the Home devices are surprisingly good and hard to beat with cheap off-the-shelf components, and you need a good mic for good STT.
The hardware is really well done but the software is either over or under-engineered to a stupid degree.
[1]: https://www.home-assistant.io/integrations/openai_conversati...
It’s neat that you can intermix general chatting with HA commands but you’re probably going to find that the old assist is more reliable for commands. What I do like is that you can use a template as your system prompt so you can provide the state of a number of entities and then ask for them with natural language. That works well.
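The "template as your system prompt" trick described above is worth spelling out: you render the current entity states into the prompt text, so the model can answer state questions directly. A minimal sketch, with invented entity names and a plain Python function standing in for what would be a Jinja template in Home Assistant:

```python
# Build a system prompt that embeds current smart-home entity states,
# so an LLM can answer "is the garage open?" in natural language.
# Entity IDs and values here are made up for illustration.
entity_states = {
    "cover.garage_door": "open",
    "light.living_room": "off",
    "sensor.outdoor_temp": "18.5",
}

def render_system_prompt(states):
    """Render a prompt listing each entity and its current state."""
    lines = ["You are a home assistant. Current entity states:"]
    for entity, state in sorted(states.items()):
        lines.append(f"- {entity}: {state}")
    lines.append("Answer questions about these states in plain language.")
    return "\n".join(lines)

print(render_system_prompt(entity_states))
```

The point is that the model never queries anything; the state is snapshotted into the prompt at request time, which is why this works well for reads but not for commands.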
I have an Alexa/Echo voice announcement system set up and have recently tied that into assist so I can do automations like if the garage opens I prompt for “what is the state of the garage?” and announce the result. Makes it feel more humane than the same plain announcements all the time.
I mean honestly, how is it possible Amazon, Apple, Google and Microsoft[0] all keep screwing this up for over a decade now? I literally spent 15 minutes hooking up GPT-4 to the Home Assistant integration, and I was able to semi-reliably[1] control actual devices[2] like air conditioners and smart lights, in a completely natural and ad-hoc way, by talking to my smartwatch on the go, or to a phone, whatever was more convenient at the moment.
It's a really magical experience, a step closer to Star Trek reality. And what makes it possible is not just LLMs being able to deal with natural language, but more importantly, "bring your own API key" model allowing to cut away all the bullshit that FAANG assistants are stuck in.
--
[0] - Ever since they dropped the MS Speech API in Windows and did the Cortana thing. Some 15 years have passed at this point, and I'd still prefer to work with the Speech API than touch any of the FAANGs' voice assistants; it worked, and it worked offline!
[1] - Works ~90% of the time; some 5% of the time the voice model (from Home Assistant Cloud) misunderstands me, and 5% of the time the LLM gets confused. It's still worth it, because I can actually talk to it like to a person, without thinking of style or grammar or magic keywords.
[2] - Which, given the level of integration of the Home Assistant companion app with the phone, can easily be turned into an equivalent of an on-phone voice assistant that can do more than the one I got from Google. Critically, there are ways to couple the Home Assistant app and Tasker, so it's not hard to make it do arbitrary things on your phone. And, if you don't like the low-ish-code Tasker experience, you can trivially shell out from Tasker to Termux, at which point the sky is the limit. Point being, an enthusiastic non-developer with minimal tech aptitude can beat Google and Apple at the voice assistant game today.
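The "magical" part of the LLM-to-smart-home setup described above is mostly a thin dispatch shim: the model emits a structured tool call, and a few lines of glue execute it against real devices. Here's a hedged sketch of that shim; the device IDs and the JSON call format are invented, and a real setup would go through something like the OpenAI function-calling API or Home Assistant's conversation agent rather than raw JSON strings.

```python
# Minimal dispatch layer between an LLM's structured output and devices.
# The LLM is assumed to emit JSON like:
#   {"device": "ac.bedroom", "attribute": "power", "value": "on"}
import json

# Fake device registry standing in for a real smart-home API.
devices = {
    "ac.bedroom": {"power": "off"},
    "light.kitchen": {"power": "off"},
}

def execute(tool_call_json):
    """Parse a tool call and apply it to the device registry."""
    call = json.loads(tool_call_json)
    device = devices[call["device"]]          # KeyError on unknown device
    device[call["attribute"]] = call["value"]
    return f'{call["device"]} {call["attribute"]} set to {call["value"]}'

# What the LLM might emit for "turn on the bedroom AC":
print(execute('{"device": "ac.bedroom", "attribute": "power", "value": "on"}'))
```

Notably, this is also where the ~5% "LLM gets confused" failures land: if the emitted JSON names a nonexistent device or attribute, the shim should reject it rather than guess.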
Teach a man to fish!
Also, being able to talk to my mailbox asking questions about subjects mentioned in my e-mails would be a huge time saver.
Imagine a purely local Microsoft Recall-like thing that could answer questions about things you saw, or that read the news articles you went over quickly and answer complicated questions about them much later, at a time you just started to regret not having bookmarked it for future reading.
https://www.theverge.com/2024/8/14/24220178/halide-camera-ap...
I'm having a horrifying realization that all of my pictures are "fake" in the sense that they don't match what I saw/experienced. Maybe it's time to get back into Polaroids.
I'd caution that judicious/proper post-processing is actually needed if you want that result, because of the differences between the sensors.
Your human experience comes from many small pictures taken by a set of lenses panning across multiple points in a scene with constantly adjusting exposure times and focal lengths, all biologically composited into what feels like a single moment.
Trying to fully replicate that with a single artificial picture is going to be deficient in certain ways.
---
Separately, a pet peeve of mine: Too many people have been subtly brainwashed into conflating the "like I was really there" with " like a Hollywood film camera was really there." Then the next thing you know your medieval fantasy game has lens flares in it for no good reason.
But knowing that the phone does a lot of software tweaking to get a picture to look close to what a proper camera produces made me want to switch. I think this was around the time of that article about Samsung basically replacing photos of the moon.
But those pics will probably be further from perceived reality than the ones enhanced by software (let's not get carried away here and brand every kind of data processing as 'AI'). Distortions, and not only of the barrel type: waning brightness toward the edges, moiré, heavy vignetting, tons of noise, over/underexposure, maybe some dead pixels... that's not how I see my days go by.
If you actually peek under the hood, they just pass through weighted selectors, no different than a switch statement.
Nah.
Smartphone cameras stopped being shitty a while ago, long before AI and computational photography hacks.
What you mean to say is that without AI, you'd know sooner that the smartphone maker put a cheap, shitty camera in your "premium" phone.
>People do care about that. AI isn't just LLM chatbots.
Yeah, it's also fake image generation featuring humans with a funny number of fingers.
What AI isn't is a camera.
This assumes that the capabilities and use cases are unchanged. Yes, for the AI features available today, I suppose ChatGPT can do much of it -- I wouldn't know because it's not interesting or useful to me, so I don't use it.
But: If I'm deciding whether AI features are important to me in making a decision to spend money on a future phone, it's those future AI features that I will be assessing.
95% of my everyday needs for an external intelligence (besides my own) are covered by e-mail, text, and phone calls with other humans, with a trivial portion covered by nascent AI features. As this changes, and AI gets more capable of replacing human intelligence in these interactions (TBD if this happens in the next smartphone generation, or the next human generation, or further in the future), then I will /very much/ care that the electronic device that I use most often day-to-day has access to these capabilities, and will very much use access to those capabilities as part of deciding where to spend my money.
True, but it is an external service with all the privacy concerns that entails. I appreciate eg. Apple pushing local AI but at the same time I don’t think it needs to ship with the OS. Just provide an AI API apps can hook into then I can decide which models I want and where they run.
Weird! Mine can still create reminders, as well as set timers and alarms.
Seems to me the problems are (1) the "assistants" aren't anywhere near good enough to be trusted to make the right decisions, and (2) a trustworthy assistant isn't compatible with the adtech business model, so it's unlikely facebook or google would produce such a thing.
The CEO would trust the personal assistant to do this if they had deep trust in the assistant's competence. They would also need to know the assistant has a deep enough understanding of their preferences not to do something they don't like. AI can mirror that.
More importantly though there will be consequences if the human assistant makes a big mistake and books the wrong flight. They would have to take responsibility for the mistake.
The LLM is always just going to write in text it is sorry if it makes a mistake. That is never going to be good enough for anything of consequence. The LLM would practically have to be omniscient in a way that is not going to be possible in a world filled with uncertainty.
So much of human activity is built around the network of trust that another human takes the blame if something goes wrong. So much activity involves coin flips and that someone takes the blame when the coin lands on heads but we bet on tails.
Yes, why wouldn’t I? I could also give them parameters like “if I’m more than 20 minutes late please re-schedule this” or “if my flight is delayed please let everyone know it’s delayed”
Why wouldn’t I do that? Presumably the person hired is competent to make determinations within parameters specified.
I could also let them know when it’s inappropriate to do this. Again, they should be competent enough to discern the differences between when it is and isn’t appropriate.
This could honestly be done by an algorithm if you give it the correct inputs and outputs, and it could be fed updates. The only real limit is that some of this isn't exposed via an API, either in a timely fashion or at all.
That years of training is what we are missing. I don't think modern AIs can be trained in the way the assistants of old could be, at least not yet.
Being able to collate the requisite inputs from outside sources is the real problem. If you can’t do that reliably it’s simply hard to build an algorithm around it. Flights for example would require your calendar program to reliably pull data from an API regarding the flight information that is current and effectively real time. That’s the actual hard part, and this expands across services.
For all the advances we have made with computers, and smartphones in particular, they suck at meaningfully exposing a way to collate data sources and reliably create actions around them.
Having one running locally helps but it's still necessarily storing information that you might not want to have stored where someone could potentially retrieve it, either via some sort of exploit or by forcibly compelling you to give it up.
Today, this is nearly available. Nearly. It's probably something only Google/Apple can realistically offer. Apple "Intelligence" has started to read your notifications and rewrite them for you, so it shouldn't be a big leap to listen for a United app notification and decide it's urgent enough to take action. It should be "trivial" for Google to do as well, and they could even run it server-side to help without a phone present.
It's still pretty terrible though
AI is anything automated it seems, and now they’re being subcategorized into niches as to what they do, e.g. “Agentic AI”, “LLM backed AI systems” etc.
If it’s not real intelligence then it isn’t really AI, and I wish the world at large would call it out.
LLM, Machine Learning, Neural Networks etc are all great but none of them have true spontaneous intelligence or learning ability.
Please, someone point out how any of these systems has organic, spontaneous learning ability for a subject it was not pre-seeded with data on. This is a generally accepted measure of higher-level sentience, as far as I'm aware.
Hence the predicate "artificial", and hence the downvotes you are currently receiving.
Your message is largely, if not entirely, a strawman.
There has been considerable success in programming computers to draw inferences, for example, but not actual reasoning. You can mimic some forms of reasoning, but you can't take one ML model, like one that recognizes photos with mountains, and expect it to correctly identify a similar geographical element: a hill. It can't do that. It may correctly identify that it's not a mountain, but that isn't the same thing as actually learning that it's similar to a mountain yet not the same, which would be a rudimentary definition of a hill that an intelligent entity could conceivably arrive at if it knew what a mountain was but not a hill.
Machine learning was always a more honest place to have this discourse. I am indeed pushing back on the idea that we should be calling ChatGPT or anything like it intelligence.
It's machine learning, clever algorithms, large language models, among other things, trained to mimic certain aspects of intelligence, but it does not actually possess any real intelligence. Look at the LLM hallucination problem, for example. It can't be self-corrected because it's not an intelligent system.
Moving the goal post on what AI means (and pushing AGI as some new goalpost) is disingenuous, and relatively recent.
I’d care not if it wasn’t for the fact there is so much misinformation around capabilities and the future of AI, that it’s already negatively crept into policy making for example.
2. How does it know which contacts to contact? Does that acquaintance you talked to for some professional reason need to know your flight got rescheduled? What about that travel agency you talked to last night to confirm the flight?
If they had an app then an AI assistant should be able to tie things together. Where things seem to be going is apps provide an intent-based API wrapper plus UI widgets to interact with it. That way assistants can operate them too.
I use reminders often so I suppose it is a low failure rate.
But, when they first made the change to Gemini I had to switch back for a few weeks/months before it could set reminders properly.
ETA: The joke is I don't even have Google Tasks installed, so I am not sure what effect those Tasks items would have. They might only surface on the side pane of Gmail.
Like when a timed alarm is making the phone buzz in my pocket while I'm driving, I'm telling it to silence the alarm, and the response-voice is regretfully informing me that there are no alarms going off right now.
They are already doing planned obsolescence, with hard-to-replace batteries, limited software support combined with closed systems, etc... But they can't abuse it, as regulatory agencies are already after them and customers are starting to notice. They had a go with cameras, as those are a significant differentiator, but nowadays most smartphone cameras are as good as they can be for what they are used for.
And it turns out that AI is the big hype right now, so of course they are using it as a selling point.
The funny part is that "AI" (machine learning techniques) has been running in smartphones for a while (I'd say a decade), hidden in camera software. How do you think these tiny cameras can take pictures that look so good? But they mostly kept quiet about it, as it had implications of making "fakes". And yeah, stuff like Siri works mostly on servers; you don't need a phone with "AI capabilities" for that, just internet access, but they are certainly going to put it in their ads.
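One of the oldest of those hidden camera tricks is frame stacking: the phone silently captures a burst of exposures and averages them, cutting sensor noise roughly by a factor of sqrt(N) for N frames. A toy simulation of that effect, with fabricated pixel values standing in for real sensor data:

```python
# Simulate multi-frame noise averaging (frame stacking), a basic
# computational-photography technique. All values are synthetic.
import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0   # the "real" brightness of one pixel
NOISE = 10.0         # per-exposure sensor noise (std dev)
N_FRAMES = 16        # frames in the burst
N_PIXELS = 1000      # number of pixels we simulate

def frame_reading():
    """One noisy sensor reading of the same true pixel value."""
    return TRUE_VALUE + random.gauss(0, NOISE)

# Error of a single exposure vs. an average ("stack") of 16 exposures.
single = [frame_reading() for _ in range(N_PIXELS)]
stacked = [statistics.fmean(frame_reading() for _ in range(N_FRAMES))
           for _ in range(N_PIXELS)]

err_single = statistics.pstdev(single)
err_stacked = statistics.pstdev(stacked)
print(round(err_single / err_stacked, 1))  # roughly sqrt(16) = 4
```

This is why tiny sensors can look so good: the physics deficit is paid back in software, at the cost of the result being a composite rather than a single captured moment.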
Those LLMs we have around just aren't that.
I wish there was a middle ground where I could have my phone be dumb enough to keep me from playing on it all the time, but secure enough that it makes sense for me.
[1] https://www.okta.com/blog/2020/10/sms-authentication/ I'm not affiliated with them, just the first article I found on the topic
(opinion my own)
She can't do or understand the most astoundingly basic stuff. I guess maybe it "feels" worse now because most LLM's are pretty good at understanding your meaning/intention, but my god, it's so bad that if I were in charge of that product I'd rip it out entirely. There's no way anyone finds any real use out of it.
another fun problem is siri not recognising my speech despite me not having a particularly strong accent, speaking slowly, and enunciating. i've gotten into the habit of putting on a valley girl or bbc news anchor voice while using siri since that usually works.
whoever is in charge of siri needs a reality check, the feature is borderline unusable.
Otherwise why don’t they let older devices use their server-side private LLM setup?
> A quarter of smartphone owners (25%) don't find AI features helpful,
So does that mean 75% _do_ find AI Feature helpful?
> 45% are reluctant to pay a monthly subscription fee for AI capabilities
Are 55% happy to pay a monthly fee?
> 34% have privacy concerns.
66% have no privacy concerns?
> So does that mean 75% _do_ find AI features helpful?
14% find it helpful
> Are 55% happy to pay a monthly fee?
6% are willing to pay
> 66% have no privacy concerns?
no stat on this but I think we can assume based on the others that it is not split evenly because that was not the methodology
> no stat on this but...
I think we could presume an answer: if the respondents first received a full accounting of how their phones track and record their lives, along with a full list of who is getting that data.
Ok, my next comeback: people are treating AI tools like a musical instrument.
It’s like picking up a guitar for the first time, twanging a few strings and saying ‘Nah, this sounds shit’. Guitars are useless. I’d never pay for a guitar.
Even ChatGPT has a learning curve. I save myself hours every day using it for all sorts of things. Anyone who says they can't find a use for AI is just being lazy and hasn't tried hard.
There are lots of people who could benefit massively from using AI. Who have no moral objection to it. But they just dismiss it so quickly because it doesn’t instantly, magically make beautiful things for them.
Anyone who cannot get something useful done with AI (and who does want to) is lazy.
It's like saying in the 19th century: oh, this electricity isn't really useful for anything much beyond lights. What's the point?
This might be the central point of the conflict. I believe there are a few people who can benefit a great deal. And I think lots of people could probably benefit a little. But I don't think lots of people could benefit massively. It just doesn't work that well.
25% checked the box saying they don't find AI tools helpful, but only 14% said they do. Which means 61% checked neither box.
45% are reluctant to pay, but only 6% said they were willing to. So again, 49% simply didn't say one way or the other.
12% straight-up didn't check any box at all.
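The "neither box" arithmetic above is easy to verify: each pair of reported percentages leaves an unreported remainder.

```python
# Reconstructing the unreported "neither" shares from the survey numbers.
not_helpful, helpful = 25, 14   # find AI features unhelpful vs. helpful
reluctant, willing = 45, 6      # reluctant vs. willing to pay monthly

no_answer_helpful = 100 - not_helpful - helpful
no_answer_pay = 100 - reluctant - willing
print(no_answer_helpful, no_answer_pay)  # → 61 49
```

Which is exactly why headline claims built on these numbers are shaky: the silent majority on each question dwarfs both reported camps.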
But why include people who didn’t pick anything? Surely they should be ignored and it ends up being 64/36.
It's just a bad poll.
Yet they do provide sufficient information for the headline. AI integrations came in 7th place for considerations when upgrading a smartphone.
45% unwilling to pay a monthly fee, 6% willing to pay a monthly fee.
You’ll say I can do most of that in a browser, but then you’ve just moved complexity around.