Top
Best
New

Posted by jakequist 2 days ago

OpenClaw is what Apple intelligence should have been(www.jakequist.com)
512 points | 410 comments
crazygringo 2 days ago|
> This is exactly what Apple Intelligence should have been... They could have shipped an agentic AI that actually automated your computer instead of summarizing your notifications. Imagine if Siri could genuinely file your taxes, respond to emails, or manage your calendar by actually using your apps, not through some brittle API layer that breaks every update.

And this is probably coming, a few years from now. Because remember, Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.

Let other companies figure out the model. Let the industry figure out how to make it secure. Then Apple can integrate it with hardware and software in a way no other company can.

Right now we are still in very, very, very early days.

huwsername 2 days ago||
I don’t believe this was ever confirmed by Apple, but there was widespread speculation at the time[1] that the delay was due to the very prompt injection attacks OpenClaw users are now discovering. It would be genuinely catastrophic to ship an insecure system with this kind of data access, even with an ‘unsafe mode’.

These kinds of risks can only be _consented to_ by technical people who correctly understand them, let alone borne by them, but if this shipped there would be thousands of Facebook videos explaining to the elderly how to disable the safety features and open themselves up to identity theft.

The article also confuses me because Apple _are_ shipping this, it’s pretty much exactly the demo they gave at WWDC24, it’s just delayed while they iron this out (if that is at all possible). By all accounts it might ship as early as next week in the iOS 26.4 beta.

[1]: https://simonwillison.net/2025/Mar/8/delaying-personalized-s...

anon373839 2 days ago|||
Exactly. Apple operates at a scale where it's very difficult to deploy this technology for its sexy applications. The tech is simply too broken and flawed at this point. (Whatever Apple does deploy, you can bet it will be heavily guardrailed.) With ~2.5 billion devices in active use, they can't take the Tesla approach of letting AI drive cars into fire trucks.
dmix 2 days ago|||
This is so obvious I'm kind of surprised the author used to be a software engineer at Google (based on his Linkedin).

OpenClaw is very much a greenfield idea and there's plenty of startups like Raycast working in this area.

wiseowise 1 day ago|||
Being good at leetcode grinding isn’t the same as being a good product person.
waffletower 1 day ago|||
iOS 26 is proof that many product managers at Apple need to find another calling. The usability enshittification in that release is severe and embarrassing.
andrei_says_ 1 day ago||
Or maybe, while being as good as they are at their jobs, they were forced to follow a broken vision with a non-negotiable release date.

And simply chose to keep their jobs.

waffletower 1 day ago||
Which also suggests that they need a new calling
boringg 1 day ago||||
shots fired!
fsloth 1 day ago|||
Ouch. You could have taken a statistical approach "google is not known for high quality product development and likely therefore does not select candidates for qualities in product-development domain" - I'm talking too much to Gemini, aren't I?
ljm 2 days ago||||
I'm not that surprised because of how pervasive the 'move fast and break things' culture is in Silicon Valley, and what is essentially AI accelerationism. You see this reflected all over HN as well, e.g. when Cloudflare goes down and it's a good thing because it gives you a break from the screen. Who cares that it broke? That's just how it is.

This is just not how software engineering goes in many other places, particularly where the stakes are much higher and can be life altering, if not threatening.

9rx 1 day ago|||
It is obvious if viewed through an Apple lens. It wouldn't be so obvious if viewed through a Google lens. Google doesn't hesitate to throw whatever its got out there to see what sticks; quickly cancelling anything that doesn't work out, even if some users come to love the offering.
puppymaster 2 days ago|||
Regardless of how Apple will solve this, please just solve it. Siri is borderline useless these days.

> Will it rain today? Please unlock your iphone for that

> Any new messages from Chris? You will need to unlock your iphone for that

> Please play youtube music Playing youtube music... please open youtube music app to do that

All settings and permission granted. Utterly painful.

bgentry 1 day ago|||
You'll need to unlock your iPhone first. Even though you're staring at the screen and just asked me to do something, and you saw the unlocked icon at the top of your screen before/while triggering me, please continue staring at this message for at least 5 seconds before I actually attempt FaceID to unlock your phone to do what you asked.
schnable 1 day ago||||
I think half your examples are made up, or not Apple's fault, but it sounds like what you really want is to disable your passcode.
collingreen 1 day ago|||
I LOVE the "complaining about apple ux? no way, YOU'RE the problem / you're doing it wrong / you must not be a mac person".

Thanks for keeping this evergreen trope going strong!

schnable 1 day ago||
well if you're making complaints that aren't true, or asking for functionality that exists already, your complaints don't seem very credible to me.
smallmancontrov 1 day ago|||
"Will it rain today? Sorry, I can't do that while you're driving."
blks 2 days ago||||
Do you want people being able to command your phone without unblocking? Maybe what you want is to disable phone blocking all together
ineedasername 1 day ago|||
I want a voice control experience that is functional. I don't want every bad thing that could happen-- especially those that will only happen if I'm careless to begin with-- circumscribing an ever shrinking range, often justified by contrived examples and/or for things much more easily accomplished through other methods.
andrei_says_ 1 day ago||
That would be very useful but is not a trivial problem.
anhner 2 days ago||||
Oh no, what if they put on Christmas music playlist in February? the horror!

There should exist something between "don't allow anything without unlocking phone first" and "leave the phone unlocked for anyone to access", like "allow certain voice commands to be available to anyone even with phone locked"

ninkendo 1 day ago|||
Playing music doesn’t require unlocking though, at least not from the Music app. If YouTube requires an unlock that’s actually a setting YouTube sets in their SiriKit configuration.

For reading messages, IIRC it depends on whether you have text notification previews enabled on the lock screen (they don’t document this anywhere that I can see.) The logic is that if you block people from seeing your texts from the lock screen without unlocking your device, Siri should be blocked from reading them too.

Edit: Nope, you’re right. I just enabled notification previews for Messages on the lock screen and Siri still requires an unlock. That’s a bug. One of many, many, many Siri bugs that just sort of pile up over time.

Anelya 1 day ago|||
Can it not recognize my voice? I had to record the pronunciation of 100 words when I setup my new iPhone - isn’t there a voice signature pattern that could be the key to unlock?
anhner 1 day ago|||
It certainly should have been a feature up until now. However, I think at this point anyone can clone your voice and bypass it.

But as a user I want to be able to give it permission to run selected commands even with the phone locked. Like I don't care if someone searches google for something or puts a song via spotify. If I don't hide notifications when locked, what does it matter that someone who has my phone reads them or listens to them?

LoganDark 1 day ago|||
Personal Voice learns to synthesize your voice, not to identify it.
vjvjvjvjghv 1 day ago||||
Probably need VoiceID so only authorized people can talk to it.
eisfresser 2 days ago|||
Not really. Giving the weather forecast or playing music seems pretty low risk to me.
schnable 1 day ago||
Siri doesnt make me unlock the phone to give a weather report.
KaiserPro 2 days ago||||
Right, but you understand why allowing access to unauthenticated voice is bad for security right?
anhner 2 days ago||
But you understand why if I don't care about that, I should be able to run it, right?
KaiserPro 1 day ago|||
you can, you can turn locking off.

But the point is, you are a power user, who has some understanding of the risk. You know that if your phone is stolen and it has any cards stored on them, they can be easily transferred to another phone and drained. Because your bank will send a confirmation code, and its still authorized, you will be held liable for that fraud.

THe "man in the street" does not know that, and needs some level of decent safe defaults to avoid such fraud.

pixl97 1 day ago|||
I understand why you'd want to do it.

Oddly enough I also understand Apple telling you, good luck, find someones platform that will allow that, that's not us.

ramses0 1 day ago|||
re: youtube music, I just tried it on my phone and it worked fine... maaaybe b/c you're not a youtube premium subscriber and google wants to shove ads into your sweet sweet eyeballs?

The one that kindof caught me off guard was asking "hey siri, how long will it take me to get home?" => "You'll need to unlock your iPhone for that, but I don't recommend doing that while driving..." => if you left your phone unattended at a bar and someone could figure out your home address w/o unlock.

...I'm kindof with you, maybe similar to AirTags and "Trusted Locations" there could be a middle ground of "don't worry about exposing rough geolocation or summary PII". At home, in your car (connected to a known CarPlay), kindof an in-between "Geo-Unlock"?

bobchadwick 1 day ago||
I pay for YouTube Music and I see really inconsistent behavior when asking Siri to play music. My five-year-old kid is really into an AI slop song that claims to be from the KPop Daemon Hunters 2 soundtrack, called Bloodline (can we talk about how YT Music in full of trashy rip-off songs?). He's been asking to listen to it every day this week in the car and prior to this morning, saying "listen to kpop daemon hunters bloodline" would work fine, playing it via YT Music. This morning, I tried every iteration of that request I could think of and I was never able to get it to play. Sometimes I'd get the response that I had to open YT Music to continue, and other times it would say it was playing, but it would never actually queue it up. This is a pretty regular issue I see. I'm not sure if the problem is with Siri or YT Music.
codeulike 2 days ago||||
Its hard to come up with useful AI apps that aren't massive security or privacy risks. This is pretty obvious. For an agent to be really useful it needs to have access to [important stuff] but giving an AI access to [important stuff] is very risky. So you can get some janky thing like OpenClaw thats thrown together by one guy and has no boundaries and everyone on HN thinks is great, but its going to be very difficult for a big firm to make a product like that for mass consumption without it risking a massive disaster. You can see that Apple and Microsoft and Salesforce and everyone are all wrestling with this. Current LLMs are too easily hoodwinked.
afro88 2 days ago||||
I think you're being very generous. There's almost 0 chance they had this actually working consistently enough for general use in 2024. Security is also a reason, but there's no security to worry about if it doesn't really work yet anyway
mastermage 2 days ago|||
The more interesting question I have is if such Prompt Injection Attacks can ever be actualy avoided, with how GenAI works.
PurpleRamen 1 day ago|||
Removing the risk for most jobs should be possible. Just build the same cages other apps already have. Also add a bit more transparency, so people know better what the machine is doing, maybe even with a mandatory user-acknowledge for potential problematic stuff, similar to how we have root-access-dialogues now. I mean, you don't really need access to all data, when you are just setting a clock, or playing music.
larodi 2 days ago||||
Perhaps not, and it is indeed not unwise from Apple to stay away for a while given their ultra-focus on security.
Ono-Sendai 1 day ago|||
They could be if models were trained properly, with more carefully delineated prompts.
arw0n 1 day ago||
I'd be super interested in more information on this! Do you mean abandoning unsupervised learning completely?

Prompt Injection seems to me to be a fundamental problem in the sense that data and instructions are in the same stream and there's no clear/simple way to differentiate between the two at runtime.

Telemakhos 2 days ago|||
Apple's niche product, consisting of like 1-4% of computer sales compared to its dominant MacBook line, is now flying off the shelf as a highly desired product, because of a piece of software that Apple didn't spend a dime developing. This sounds like a major win for Apple.

The OS maker does not have to make all the killer software. In fact, Apple's pretty much the only game in town that's making hardware and software both.

wqaatwt 1 day ago|||
Really doubt it has a significant impact on mac mini sales…

And being fair ClawBot is a complete meme/fad at this point rather than an actual product. Using it for anything serious is pretty much the equivalent of throwing your credit cards, ids and sticky notes with passwords and waiting to see what happens…

I do see the appeal and potential case of the general concept of course. The product itself (and the author has admitted it themselves) is literally is a garbage pile..

computershit 1 day ago||
> Using it for anything serious

One man's trash is another man's serious

neumann 2 days ago|||
What are you referring to?
lanakei 2 days ago|||
Probably the Mac Mini. A few OpenClaw users are buying the agent a dedicated device so that it can integrate with their Apple account.

For example: https://x.com/michael_chomsky/status/2017686846910959668.

koolala 2 days ago|||
Why would it need more than 1? Couldn't they do this with any Mac with an Apple account?
karlshea 2 days ago|||
It appears he is selling a service where he comes to you (optionally with a Mac Mini which is probably why he's buying multiple) and sets up OpenClaw for you.
beepbooptheory 2 days ago||
That truly cant be it right? This is like satire? How much do you even charge for that?
karlshea 1 day ago||
Unfortunately not satire, and the answer is $500
vovavili 2 days ago||||
Mac Minis are perfect for locally running demanding models because they can effectively use ordinary RAM as VRAM.
hjoutfbkfd 2 days ago||
but people dont use OpenClaw with local models
paunchy 1 day ago||
They definitely do. A common configuration is running a supervisor model in the cloud and a much smaller model locally to churn on long running tasks. This frees Openclaw up to lavishly iterate on tool building without running through too many tokens.
garciasn 1 day ago||
Unless you're running a large local model in 192GB+ this just won't be ideal, based on real-world experience.
Quarrel 2 days ago||||
Considering there are 1.5M openclaw agents, created by 17,000 humans, it seems like some people really would use more than 1.
wqaatwt 1 day ago|||
Are you saying that software is THAT inefficient so that you can’t run a few hundred of them on a single Mac Mini? : D
hjoutfbkfd 2 days ago|||
if you are counting reported moltbook accounts there are not, the API was spammed by scripts to create accounts
Quarrel 2 days ago||
This was on HN a few days ago, I wasn't counting anything:

https://www.wiz.io/blog/exposed-moltbook-database-reveals-mi...

whatsupdog 2 days ago|||
There are few open source projects coming along that let you sell your compute power in a decentralized way. I don't know how genuine some of these are [0] but it could be the reason: people are just trying to make money.

0. https://www.daifi.ai/

Aurornis 2 days ago|||
There have been countless projects to sell distributed compute power. I don't know of any that have gotten much traction. Everyone keeps trying to create new ones instead of developing for the existing ones.

The one you linked to looks clearly like a pump-and-dump scam, though.

koolala 2 days ago|||
That one definitely looks like a crypto scam.
jb1991 2 days ago||||
The entire point of the article is about the Mac mini sales flying through the roof because of this.
ajcp 2 days ago||||
Mac-Minis
Der_Einzige 2 days ago||||
[flagged]
Tagbert 2 days ago|||
So you might be discriminated against by some ignorant teenagers? Probably for the best.
velcrovan 2 days ago||||
who's "afraid" of green bubbles? it's like saying a toyota corolla driver is afraid of the ford pinto
antinomicus 2 days ago|||
No it’s like someone owning a Ferrari and looking down on someone who drives a Corolla. Or that’s how they see it, anyway. Plus there’s the annoyance with interoperability: it’s not just about status, it’s about all your iMessage group chats that don’t play nice with android
elcritch 2 days ago||
Apple chose the colors well. For whatever reason the shade of green they chose just gives a bit of ick.
mlrtime 2 days ago||||
It's a real thing, you're either too old and/or not dating young people. Some do care a lot.
velcrovan 21 hours ago|||
I'm confused, I thought we were talking about people who are installing and running openclaw. You're right, if this is now a thread about teenage dating habits, I'm out.
wolvoleo 2 days ago|||
IMO it is pretty shallow to pick dating partners based on their mobile OS but yeah it does happen.
albedoa 1 day ago||||
"Nissan" might have fit better than Ford Pinto here.
sneak 2 days ago|||
iMessage lock in is a huge thing. When it was new and was still e2ee I ended up buying iPhones for everyone I regularly messaged.

These days it is insecure however because they backdoored the e2ee and kept it backdoored for the FBI, so now Signal is the only messenger I am reachable on.

Blue bubble snobbery is presently a mark of ignorance more than anything else.

xp84 2 days ago||
I agree that it’s stupid to judge people for it, but you do have to admit that especially with not all people having RCS, the feature set of SMS and MMS that you have to deal with when not using iMessage is pretty barbaric. From the potato-quality videos (ironically, I recall QuickTime was heavily involved in that spec, lol) to the asinine way Apple lets you apply a reaction and then sends it as a verbose text… From an iPhone user’s point of view, a “green bubble” means “this conversation will work like it’s 2003.”

Yes, I know 99.999% of Android users are on WhatsApp (or WeChat, Line, or Telegram depending on cultural background) but at least half of iPhone users aren’t on those, so we still have to keep using Messages for a lot of people.

jrflowers 2 days ago||||
People are buying mac minis so their openclaw instances can date?
majormajor 2 days ago||
I assume the suggestion is that they need to run their bot on a machine that's up 24x7 (and they don't want to do that with a laptop since they probably carry it places and such), AND they want it to manage their texts by interacting with the Mac version of the Messages app.

But if you connect those dots you've got people trying to date by having an AI respond to texts from potential dates which seems like you're immediately in red-flag-city and good luck keeping that secret for long enough to get whatever it is you want.

jrflowers 2 days ago|||
> But if you connect those dots you've got people trying to date by having an AI respond to texts from potential dates

Yeah I’m trying to wrap my head around what sort of reads like “It is messed up that people avoid talking to eachother because of software because it messes up people’s ability to use software to avoid talking to eachother”

rrdharan 2 days ago|||
I don't follow this logic.

Forget about dating. If you want the AI to be able to send texts from your number, and you own an iPhone, I think your only other choice would be to port your number to Google Voice?

areoform 2 days ago||||

   (Yes android users are discriminated against in the dating market, tons of op eds are written about this, just google it before you knee jerk downvote the truth)
If someone is shallow enough to write you off for that, is that someone you want as your partner?
usefulcat 2 days ago||||
You're saying I might have trouble getting a date if I don't have a Mac mini?
r14c 2 days ago|||
Imo using android is a great way to filter out extremely boring and vapid individuals from my dating pool.
LeoPanthera 2 days ago|||
Do you want to know how I can tell you didn't read the article?
jb1991 2 days ago||
Do you want to know how I can tell that you did not read the hacker news guidelines.
eykanal 2 days ago|||
> ...Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.

While this was true about ten years ago, it's been a while since we've seen this model of software development from Apple succeed in recent years. I'm not at all confident that the Apple that gave us Mac OS 26 is capable of doing this anymore.

midtake 2 days ago|||
Best privacy in computers, ADP, and M-series chips mean nothing to you? To me, Apple is the last bastion of sanity in a world where user hostility is the norm.
kaashif 2 days ago|||
Apple is certainly the least worst but man... Liquid Glass. Windows bordering on the circular...
eykanal 2 days ago|||
As said elsewhere, success in hardware does not translate to success in software.

Privacy is definitely good but it's not at all an example of the success mentioned in the parent comment. It's deep in the company culture.

Pediatric0191 2 days ago|||
Airtags were released in 2021, I'd say that counts, but generally I agree.
atonse 2 days ago|||
Their hardware division has been killing it.

The software has been where most of the complaints have been in recent years.

Nevermark 2 days ago|||
Their software efforts have little ambition. Tweaks and improvements are always a good idea, but without some ambitious effort, nothing special is learned or achieved.

A "bicycle for the mind" got replaced with a "kiosk for your pocketbook".

The Vision Pro has an amazing interface, but it's set up as a place to rent videos and buy throwaway novelty iPad-style apps. It allows you to import a Mac screen as a single window, instead of expanding the Mac interface, with its Mac power and flexibility, into the spacial world.

Great hardware. Interesting, but locked down software.

If Tim Cook wanted to leave a real legacy product, it should have been a Vision Pro aimed as an upgrade on the Mac interface and productivity. Apple's new highest end interface/device for the future. Not another mid/low-capability iPad type device. So close. So far.

$3500 for an enforced toy. (And I say all this as someone who still uses it with my Mac, but despairs at the lack of software vision.)

msy 2 days ago|||
Not just lack of ambition, lack of vision or taste. Liquid Glass is a step back in almost every way, that it got out the door is an indictment of the entire leadership chain.
RyanOD 2 days ago|||
Recently upgraded. Ughhh...it's just so god-awful terrible.
LoganDark 2 days ago||
I think, then, the correct term would be "updated".
Espressosaurus 2 days ago|||
Not if the idea is to tank old phone performance to sell new phone hardware!
Tagbert 1 day ago||
That’s never a good, long term business model and people are willing to pay more for Apple hardware because it tends to last longer than others. We’ve heard this cynical take for years, but I don’t think that it is really convincing.
LoganDark 2 days ago|||
> It allows you to import a Mac screen as a single window, instead of expanding the Mac interface, with its Mac power and flexibility, into the spacial world.

I've thought this too. Apple might be one of the only companies that could pull off bringing an existing consumer operating system into 3D space, and they just... didn't.

On Windows, I tried using screen captures to separate windows into 3D space, but my 3090 would run out of texture space and crash.

Maybe the second best would be some kind of Wayland compositor.

Freedom2 2 days ago||||
Agreed, especially as we all have and use our Vision Pros daily.
turtlesdown11 1 day ago|||
> Their hardware division has been killing it.

The last truly magical apple device launch was the Airpod. They've done a great job on their chipsets, but the actual hardware products they make are stagnant, at best. The designs of the new laptops have been a step back in quality and design in my opinion.

fennecbutt 2 days ago|||
I mean they literally just looked at Tile. And they have the benefit of running the platform. Demonstrates time and time again that they engage in anticompetitive behaviour.
cromka 2 days ago||
No, they didn't just look at Tile. The used a completely new UWB radio technology with a completely new anonymization cryptographic paradigm allowing them to include every single device in network, transparently.

AirTag is a perfect example of their hardware prowess that even Google fails to replicate to this date.

FireBeyond 2 days ago|||
> And this is probably coming, a few years from now. Because remember, Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.

Except this doesn't stand up to scrutiny, when you look at Siri. FOURTEEN years and it is still spectacularly useless.

I have no idea what Siri is a "much nicer version" of.

> Apple can integrate it with hardware and software in a way no other company can.

And in the case of Apple products, oftentimes "because Apple won't let them".

Lest I be called an Apple hater, I have 3 Apple TVs in my home, my daily driver is a M2 Ultra Studio with a ProDisplay XDR, and an iPad Pro that shows my calendar and Slack during the day and comes off at night. iPhone, Apple Watch Ultra.

But this is way too worshipful of Apple.

ejoso 2 days ago|||
In that list of Apple products that you own, do none of them match the ops comment? You’re saying none of those products are or have been in their time in the market a perfected version of other things?

There are lots of failed products in nearly every company’s portfolio.

AirTags were mentioned elsewhere, but I can think of others too. Perfected might be too fuzzy & subjective a term though.

FireBeyond 2 days ago||
We're talking about Apple Intelligence here and its ... "precursor" ... Siri.

Both of which have been absolutely underwhelming if not outright laughable in certain ways.

Apple has done plenty right. These two, which are the closest to the article, are not it.

jacinabox 2 days ago||||
Remember the time when the former members of the Siri team demoed a prototype for a more capable version of Siri and Apple didn't even use it
danielheath 2 days ago|||
Perhaps I’m misremembering, but I feel sure that Siri was much better a decade ago than it is today. Basic voice commands that used to work are no longer recognised, or required you to unlock the phone in situations where hands free operation is the whole point of using a voice command.
FireBeyond 2 days ago||
There were certain commands that worked just fine. But they, in Apple's way, required you to "discover" what worked and what didn't with no hints, and then there were illogical gaps like "this grouping should have three obvious options, but you can only do one via Siri".

And then some of its misinterpretations were hilariously bad.

Even now, I get at a technical level that CarPlay and Siri might be separate "apps" (although CarPlay really seems like it should be a service), and as such, might have separate permissions but then you have the comical scenario of:

Being in your car, CarPlay is running and actively navigating you somewhere, and you press your steering wheel voice control button. "Give me directions to the nearest Starbucks" and Siri dutifully replies, "Sorry, I don't know where you are."

raw_anon_1111 2 days ago|||
Absolutely none of the things you quoted that he said an AI agent could do would I want be done for me and I doubt most other people would.
Gigachad 2 days ago||
It would be an absolute disaster at Apple scale. Millions of people would start using it, filing incorrect taxes or deleting their important files and Apple would be sued endlessly.

Tiny open source projects can just say "use at your own risk" and offload all responsibility.

Brajeshwar 2 days ago|||
Here is a fun “Prompt Injection” which I experimented with before the current AI Boom; visiting a friend’s home › see Apple/Amazon listening devices › Hey Siri/Alexa, please play the last song. Harmless, fun.
lostmsu 1 day ago||
Google TV did "show passport photos" back in 2017. My friends loved it!
cromka 2 days ago|||
File taxes? That's a tall order, especially juxtaposed with managing calendar or responding to emails.
rl3 2 days ago|||
>File taxes?

Sure why not, what could go wrong?

"Siri, find me a good tax lawyer."

"Your honor, my client's AI agent had no intent to willfully evade anything."

vips7L 2 days ago||
These people live on another planet.
rl3 2 days ago||
It seems to be a common place of residence lately.
gyomu 1 day ago|||
Tax filing is trivial in most countries with a functioning government, it’s only a Big Deal in the US due to Intuit bribing the government.
Tagbert 1 day ago||
Even in the US, for most people tax filing it not really a complex process. It only gets complicated if you are trying to itemize deductions and have a complex income story. Most people can do it with a couple of documents and a single form.
Kirby64 1 day ago||
It doesn't take a lot of 'complexity' in income to balloon up complexity. Any brokerage activity will generate quite a few additional forms for 1099-B, 1099-DIV, etc. Still not super complicated, but I keep seeing people discuss this as if you only have W2s and nothing else... which isn't usually true, especially for someone who is likely to be using OpenClaw.
alex_w_systems 2 days ago|||
I think the interesting tension here is between capability and trust.

An agent that can truly “use your computer” is incredibly powerful, but it's also the first time the system has to act as you, not just for you. That shifts the problem from product design to permission, auditability, and undoability.

Summarizing notifications is boring, but it’s also reversible. Filing taxes or sending emails isn’t.

It feels less like Apple missing the idea, and more like waiting until they can make the irreversible actions feel safe.

tintor 2 days ago||
Clicking `Submit` is easiest step of sending email / filling taxes.

All steps before it are reversible, and reviewable.

Bigger problem is attacker tricking your agent to leak your emails / financial data that your agent has access to.

Barbing 2 days ago||
I worry we'll click "Submit" as fast as we click "I accept the terms and conditions."
jtbayly 2 days ago||
Of course we would!

How in the world can you double check the AI-generated tax filing without going back and preparing your taxes by hand?

You might skim an ai-written email.

treetalker 2 days ago|||
>> Imagine if Siri could genuinely file your taxes

Imagine if the government would just tell everyone how much they owed and obviated the need for effing literal artificial intelligence to get taxes done!

>> respond to emails

If we have an AI that can respond properly to emails, then the email doesn't need to be sent in the first place. (Indeed, many do not need to be sent nowadays either!)

techpression 2 days ago|||
Yeah the whole filing taxes thing is an epic XY-problem. Governments can make this as easy as a digital signature, there’s zero need for an agent of any kind.

Actually most of the things people use it for is of this kind, instead actually solving the problem (which is out of scope for them to be fair) it’s just adding more things on top that can go wrong.

treetalker 2 days ago||
Seriously. The best solution is not having the problem in the first place. Something something Tao Te Ching.
lxgr 1 day ago||||
If a user chooses to reach out about an issue that an AI agent can completely solve, why should they not be allowed to do so via email? I much prefer it over all other support communications channels.
PurpleRamen 1 day ago||||
How can government know how much you owe them when they don't know all your tax deductibles?
treetalker 1 day ago|||
We could also ask how the government could later tell someone they improperly deducted something! The government can either use that same means to tell taxpayers in advance, or else we could figure out a superior taxation system that wouldn’t require these steps.
mrguyorama 1 day ago||||
You being personally ignorant of this specific argument which gets litigated every single time this comes up but only by Americans because most other countries have zero difficulty doing exactly that is not a valid argument.

91 percent of American filers take the standard deduction. The IRS already has all their information, already knows how much they withheld, already knows what they owe back. For all these people, TurboTax is just filling in 10 fields in the standard form.

"All your tax deductibles" is irrelevant for the vast majority of the country, and always has been.

The 35 million remaining americans who do itemize are free to continue using this old system while the rest of us can have a better world.

orthoxerox 1 day ago|||
By knowing all your tax deductibles?
PurpleRamen 1 day ago||
For which you have to file them first, for which you need to know the specific rules applying, for which people are using an expert, or AI.
orthoxerox 1 day ago||
No, the other party should file them. Charities can file the lists of donors etc.
PurpleRamen 1 day ago||
What other party? They often don't even know you or if you can use something for tax or not. Pretty much everything can be used for tax deduction, it all just depends on circumstances. I know, many countries have a really broken privacy-situation, but I don't think it would be realistic that every shop is preventive filing every receipt and forces every customer to give them their tax-number so they can link them..
orthoxerox 18 minutes ago||
Wait, shops aren't filing their receipts in the US?
ge96 1 day ago|||
But the lobbyists
Nursie 2 days ago|||
> Imagine if Siri could genuinely file your taxes, respond to emails, or manage your calendar

> And this is probably coming, a few years from now.

Given how often I say "Hey Siri, fast forward", expecting her to skip the audio forward by 30 seconds, and she replies "Calling Troy S" a roofing contractor who quoted some work for me last year, and then just starts calling him without confirmation, which is massively embarassing...

This idea terrifies me.

larusso 2 days ago||
Also in the good old days if you sealed the wrong number you had some time to just hang up without harm done. Today the connection is made the moment you pressed the button or in this case when Siri decided to call.

Happened to me too while being in the car. With every message written by Siri it feels like you need to confirm 2 or 3 times (I think it is only once but again) but it calls happily people from your phone book.

uh_uh 2 days ago|||
> Because remember, Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.

Funny seeing this repeated again in response to Siri which is just... not very good.

bratwurst3000 2 days ago||
hey siri can set the egg timer 90% of the time corectly! Find me another multitrillion dollar company that is able to pull that off!

.

_se 1 day ago|||
How do people manage to pick such bad examples? Who in their right mind would ever allow an LLM to FILE THEIR TAXES for them. Absolutely insane behavior. Why would anyone think this is probably coming? Do you think the IRS is going to accept "hallucination lol" as an excuse for misfiling?
yohannparis 1 day ago|||
Because private taxe filling software, like used in the USA, are exempt from filling errors?
mikkupikku 1 day ago|||
If you're quick at responding and fixing the problem, the IRS forgives much..
iwontberude 2 days ago|||
Can you understand how this commoditizes applications? The developers would absolutely have a fit. There is a reason this hasn’t been done already. It’s not lack of understanding or capability, it’s financial reality. Shortcuts is the compromise struck in its place.
dchuk 2 days ago|||
This is generally true only of them going to market with new (to them) physical form factors. They aren’t generally regarded as the best in terms of software innovation (though I think most agree they make very beautiful software)
weikju 2 days ago|||
Personal intelligence, the (awkward) feature where you can take a screenshot and get Siri to explain stuff, and the new spotlight features where you can type out stuff you want to do in apps probably hints at that…
bushbaba 2 days ago|||
People forget that “multi touch” and “capacitive touchscreens” were not Apple inventions. They existed prior to the iPhone. The iPhone was just the first “it just works” adaptation of it
gyomu 1 day ago||
Not a great example as multitouch in its modern incarnation was a niche academic technology, the most refined version of which was built by a 2 person startup that Apple quickly acquired. There was still a long way to go to make the tech as ubiquitous as it is today and that was all heavy lifting done by Apple.

Well, the heavy lifting was supervised by the same people, but while receiving Apple paychecks :)

bigyabai 2 days ago|||
> Then Apple can integrate it with hardware and software in a way no other company can.

That's a pretty optimistic outlook. All considered, you're not convinced they'll just use it as a platform to sell advertisements and lock-out competitors a-la the App Store "because everyone does it"?

rhubarbtree 2 days ago|||
I would guess, and it is a guess, that there are two reasons apple is “behind” in AI. First, they have nowhere near the talent pool or capability in this area. They’re not a technical research lab. For the same reason you don’t expect apple to win the quantum race, they will not lead on AI. Second, AI is a half baked product right now and apple try to ship products that properly work. Even Vision Pro is remarkably polished for a first version. AI on the other hand is likely to suffer catastrophic security problems, embarrassing behaviour, distinctly family-unfriendly output.

Apple probably realised they were hugely behind and then spent time hand wringing over whether they remained cautious or got into the brawl. And they decided to watch from the sidelines, buy in some tech, and see how it develops.

So far that looks entirely reasonable as a decision. If Claude wins, for example, apple need only be sure Claude tools work on Mac to avoid losing users, and they can second-move once things are not so chaotic.

debatem1 1 day ago|||
> Imagine if Siri could genuinely file your taxes

If you trust openclaw to file your taxes we are just on radically different levels of risk tolerance.

PlatoIsADisease 1 day ago|||
>It takes proven ones and then makes its own much nicer version.

I think you repeated their marketing, I don't believe this is actually true.

doctorpangloss 2 days ago|||
every time i've heard someone's speculations about what apple intelligence could have been, it's a complex conspiracy. its problem is that it sucks and makes them no money, so they didn't ship it.
LoganDark 1 day ago|||
> Because remember, Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.

Apple doesn't take proven ones of anything. What they do is arrive at something proven from first principles. Everyone else did it faster because they borrowed, but Apple did it from scratch, with all the detail-oriented UX niceties that entails.

This was more prevalent when Jobs was still around. Apple still has some of that philosophy at its core, but it's been eroding over time (for example with "AI" and now Liquid Ass). They still do their own QA, though, and so on. They're not copying the market, they have their own.

eboy 2 days ago|||
[dead]
wetpaws 2 days ago|||
[dead]
calvinmorrison 2 days ago||
Apple literally lives on the "Cutting Edge" a-la XKCD [1]. My wife is an iPerson and she always tells me about these new features (my phone has had them since $today-5 years). But for her, these are brand new exciting things!

https://xkcd.com/606/

lukevp 2 days ago|||
How many chat products has Google come out with? Google messenger, buzz, wave, meet, Google+, hangouts… Apple has iMessage and FaceTime. You just restated OP’s point. Apple evolves things slowly and comes to market when the problems have already been solved in a myriad of ways, so they can be solved once and consistently. It’s not about coming to market soonest. How did you get that from what OP said?
fennecbutt 2 days ago|||
Pointless argument given that android isn't just "android". Never has been.

It's a huge, diverse ecosystem of players and that's probably why Android has always gotten the coolest stuff first. But it's also its achilles' heel in some ways.

raw_anon_1111 2 days ago||
Except operating system and security updates…
wolvoleo 2 days ago||||
Android isn't all about Google. Where I live everyone uses WhatsApp and Telegram, both of which have nothing to do with Google.
calvinmorrison 2 days ago|||
"It’s not about coming to market soonest. "

First Mover effect seems only relevant when goverment warrants are involved. Think radio licenses, medical patents, etc. Everywhere else, being a first mover doesnt seem to correlate like it should to success.

drBonkers 2 days ago||
Network effects.

See social media, bitcoin, iOS App Store, blu-ray, Xbox live, and I’m sure more I can’t think of rn.

calvinmorrison 2 days ago||
Network effects are maybe akin to "phsyical effects". Non-monopoly but physical space is also another 'first mover' type of moat.
dangus 2 days ago|||
A very tired “red versus blue” take here.

There are plenty of Android/Windows things that Apple has had for $today-5 years that work the exact same way.

One side isn’t better than the other, it’s really just that they copy each other doing various things at a different pace or arrive at that point in different ways.

Some examples:

- Android is/was years behind on granular permissions, e.g. ability to grant limited photo library access to apps

- Android has no platform-wide equivalent to AirTags

- Hardware-backed key storage (Secure Enclave about 5 years ahead of StrongBox)

- system-wide screen recording

fennecbutt 2 days ago||
Android is an OS, not hardware tho so some of those can't really be judged equivalently.
dangus 2 days ago||
Half of my examples were 100% software based, and this list is by no means comprehensive.

Google has been making their own phone hardware since 2010. And surely they can call up Qualcomm and Samsung if they want to.

IcyWindows 2 days ago||
According to https://1password.com/blog/from-magic-to-malware-how-opencla..., The top skill is/was malware.

It's obviously broken, so no, Apple Intelligence should not have been this.

yoyohello13 2 days ago||
I feel like I’m watching group psychosis where people are just following each other off a cliff. I think the promise of AI and the potential money involved override all self preservation instincts in some people.

It would be fine if I could just ignore it, but they are infecting the entire industry.

SCdF 2 days ago|||
You need to take every comment about AI and mentally put a little bracketed note beside each one noting technical competence.

AI is basically an software development eternal september: it is by definition allowing a bunch of people who are not competent enough to build software without AI to build it. This is, in many ways, a good thing!

The bad thing is that there are a lot of comments and hype that superficially sound like they are coming from your experienced peers being turned to the light, but are actually from people who are not historically your peers, who are now coming into your spaces with enthusiasm for how they got here.

Like on the topic of this article[0], it would be deranged for Apple (or any company with a registered entity that could be sued) to ship an OpenClaw equivalent. It is, and forever will be[1] a massive footgun that you would not want to be legally responsible for people using safely. Apple especially: a company who proudly cares about your privacy and data safety? Anyone with the kind of technical knowledge you'd expect around HN would know that them moving first on this would be bonkers.

But here we are :-)

[0] OP's article is written by someone who wrote code for a few years nearly 20 years ago.

[1] while LLMs are the underlying technology https://simonwillison.net/tags/lethal-trifecta/

dev_tty01 1 day ago||||
It is possible that AI is both over-hyped and is (or is becoming) a useful tool. The two can co-exist. Based on my own experience it is useful and it is a huge time saver, especially for experienced engineers who can figure out when to use it and when to avoid it. Trying to ignore AI is as unwise as ignoring any other new tool. I imagine lots of people thought static analysis tools were never going to live up to the hype and didn't need be part of a standard build/debug flow.
dbbk 2 days ago||||
I don’t think it’s a group psychosis. I think it’s just the natural evolution of junior engineers. They’ve always lacked critical thinking and just jumped on whatever’s hyped on Twitter.
acdha 1 day ago|||
It’s a group psychosis fueled by enormous financial pressure: every big tech company has been telling people that they’re getting fired as soon as possible unless they’re one of the few people who can operate these tools. Of course that’s going to have a bunch of people saying “Pick me! Pick me!” — especially since SV has become increasingly untethered from questions like whether something is profitably benefiting customers. With the focus on juicing share prices before moving to the distilled fiat pricing of cryptocurrency, we have at least two generations of tech workers being told that the path to phenomenal wealth comes from talking up your project until you find a rich buyer.
Sharlin 1 day ago|||
I’d really love to see some data on the age and/or experience distribution of these breathless "AI everywhere" folks. Are they mostly just young and easily influenced? Not analytic enough? Not critical-thinking enough? Not cynical enough?
csomar 2 days ago|||
Just like crypto this will also pass.
ksynwa 2 days ago|||
Crypto hasn't really passed. It's just not talked about on HN anymore. It is still a massive industry but they have dropped the rhetoric of democratising banking and instead let you use cryptocurrency to do things like betting on US invading Venezuela and so on.
SCdF 1 day ago|||
Blockchain as a vehicle for immutable data has passed. Crypto has given up pretending it's anything other than a financial vehicle for gambling.

Also, the recruitment attempts I've gotten from crypto have completely disappeared compared to the peak (it's all AI startups now).

Sharlin 1 day ago|||
By "passing" the GP presumably meant that the fad phase has passed. The hype cycle has reached the natural plateau of "I guess this has some use cases" (though in this case mostly less-than-scrupulous ones).
recursive 1 day ago||||
Maybe now "crypto" can go back to meaning cryptography.
DiogenesKynikos 1 day ago|||
No one can really figure out what legitimate uses crypto has that can't be covered by normal payment systems.

Everyone can immediately see how useful AI is, and tons of people are using it. Pretending it will pass would be like saying the Internet was a fad in 1997.

KaiserPro 2 days ago|||
This is the thing that winds me the fuck up.

The reason why Apple intelligence is shit is not because Apple's AI is particularly bad (Hello CoPilot) its because AI gives a really bad user experience.

When we go and talk to openAI/claude we know its going to fuck up, and we either make our peace with that, or just not care.

But, when I open my phone to take a picture, I don't want a 1/12 chance of it just refusing to do that and phoning my wife instead.

Forcing AI into thing where we are used to a specific predictable action is bad for UX.

Sure you can argue "oh but the summaries were bad" Yes, of course they are. its a tiny model that runs on your phone with fuck all context.

Its pretty impressive that they were as good as they were. Its even more impressive that they let them out the door knowing that it would fuckup like that.

janalsncm 2 days ago|||
I had a dark thought today, that AI agents are going to make scam factory jobs obsolete. I don’t think this will decrease the number of forced labor kidnappings though, since there are many things AI agents will not be good at.
andix 2 days ago|||
OpenClaw is not broken, it is just not designed to be secure in the first place.

It's more like a tech demo to show what's possible. But also to show where the limits are. Look at it as modern art, like an episode of Black Mirror. It's a window to the future. But it also highlights all the security issues associated with AI.

And that's why you probably shouldn't use OpenClaw on your data or your PC.

PranayKumarJain 1 day ago||
[dead]
fooker 2 days ago||
> I suspect ten years from now, people will look back at 2024-2025 as the moment Apple had a clear shot at owning the agent layer and chose not to take it

Ten years from now, there will be no ‘agent layer’. This is like predicting Microsoft failed to capitalize on bulletin boards social media.

JimDabell 2 days ago||
Ten years from now, the agent layer will be the interface the majority of people use a computer through. Operating systems will become more agentic and absorb the application layer while platforms like Claude Cowork will try to become the omniapp. They’ll meet in the middle and it will be like Microsoft trying to fight Netscape’s view of the web as the omniapp all over again.

Apple will either capitalise on this by making their operating systems more agentic, or they will be reduced to nothing more than a hardware and media vendor.

nilamo 2 days ago|||
I hope so. We're right on the cusp of having computers that actually are everything we ever wanted them to be, ever since scifi started describing devices that could do things for us. There's just a few pesky details left to iron out (who pays for it, insane power demand, opaque models, non-existent security, etc etc).

Things actually can "do what I mean, not what I say", now. Truly fascinating to see develop.

snailmailman 2 days ago||
Ah yes. “Non-existent security” is only a pesky detail that will surely be ironed out.

It’s not a critical flaw in the entirety of the LLM ecosystem that now the computers themselves can be tricked into doing things by asking in just the right way. Anything in the context might be a prompt injection attack, and there isn’t really any reliable solution to that but let’s hook everything up to it, and also give it the tools to do anything and everything.

There is still a long way to go to securing these. Apple is, I think wisely, staying out of this arena until it’s solved, or at least less of a complete mess.

mastermage 2 days ago|||
I think he was being sarcastic
vimda 2 days ago||
Poe's Law strikes again
nilamo 1 day ago|||
Yes, there are some flaws. The first airplanes also had some flaws, and crashed more often than they didn't. That doesn't change how incredible it is, while it's improving.

Maybe, just maybe, this thing that was, until recently, just research papers, is not actually a finished product right now? Incredibly hot take, I know.

AlexandrB 1 day ago||
I think the airplane analogy is apt because commercial air travel basically capped out at "good enough" in terms of performance (just below Mach 1) a long time ago and focused on cost. Everyone assumes AI is going to keep getting better, but what if we're nearing the performance ceiling of LLMs and the rest is just cost optimization?
cheesepaint 1 day ago||||
I really do not think much will change. In 10 years, computers will still look kinda like today and will be used like today. AI will be used to make drafts, but nothing more. For everything else, they are way too unreliable.

People hate to change habits, and many here overestimate the willingness and ability of, especially, older people to change how they use technology.

skeptic_ai 2 days ago||||
I think in 10 years your pc will be more locked down than your iPhone.
oidar 2 days ago||||
I think you are right. In fact, if were a regular office worker today, a Claude subscription could possibly be the only piece of software you might need to open for days in a row to be productive. You can check messages, send messages, modify documents, create documents, do research, and so on. You could even have it check on news and forums for you (if they could be crawled that is).
falloutx 2 days ago||
I wouldn't call that productive, not even close if you are just sending AI replies, offloading all your tasks and doing nothing. This is what execs think we do, while every job has a lot of complexities that are hard to see from surface level. Belief that all work can be automatable is just a dream that execs have.
fooker 2 days ago||||
I don’t doubt the end goal.

My point is that it won’t be a ‘layer’ like it is now and the technology will be completely different from what we see as agents today.

falloutx 2 days ago||||
In 10 years you probably wont own a PC if things go the way all the corporations want.
bossyTeacher 1 day ago||
Possibly so in urban areas. Internet is already available everywhere. Sell dumb devices that can remotely log in to virtual devices. An LLM can connect to this virtual device and execute whatever action the user wants. Centralising compute resources this way means it's likely cheaper to offer huge compute to tons of users and so rather than buying a smartphone, you buy a monthly subscription to AI which can do everything your device does but you just need to speak or text to it. Sub includes cost of dumb device maintenance, securing the data you sent to the virtual device, etc.

Personal Computing as a service. Let the computer think for you.

mrkstu 2 days ago||||
So they need to finally finish Knowledge Navigator…
mrkstu 2 days ago||
https://www.youtube.com/watch?v=welKoeoK6zI
AlienRobot 1 day ago|||
Not my operating system.
thewhitetulip 2 days ago|||
Or how "your next meeting will be in Metaverse"
FeteCommuniste 2 days ago|||
Hoping that LLMs go the way of the Metaverse.
fnord77 2 days ago|||
there is little chance of that, especially with people running them locally
puszczyk 1 day ago|||
Why? What don't you like about them?
recursive 1 day ago||
Not who you asked, but I don't like the effect they have on people. People develop dependence on them at the cost of their own skills. I have two problems with that. A lot of their outputs are factually incorrect, but confidently stated. They project an air of trustworthiness seemingly more effectively than a used care salesman. My other problem is farther-looking. Once everyone is sufficiently hooked, and the enshittification begins, whoever is pulling the strings on these models will be able to silently direct public sentiment from under cover. People are increasingly outsourcing their own decisions to these machines.
thewhitetulip 1 day ago||
exactly. People are blindly dumping everything into LLMs. A few years into the future, will we have Sr or Staff enggs who can fix things themselves? What happens when claude has an outage and there is a prod issue?!

PRs these days are all AI slop.

fooker 2 days ago|||
Good example.
podnami 2 days ago|||
Is your prediction that most people actually like to use software?
flexagoon 2 days ago|||
Do they not? Many phone functions are already available through voice assistants, and have been for a very long time, and yet the vast majority of people still prefer to use them with the UI. Clicking on the weather icon is much easier than asking a chatbot "what's the weather like?"
Brybry 2 days ago|||
My elderly mother has an essential tremor (though only in one hand now due to successful ultrasound treatment!) and she would still rather suffer through all her errors with a touch interface than use voice commands.
Sharlin 1 day ago|||
Some people seem to think that Deckard’s speech-controlled CSI software in Blade Runner is actually something to strive for, UX-wise. As if it makes any sense to use strictly nonvisual, non-two-dimensional affordances to work with visual data.
AlexandrB 1 day ago||
The sad part is that while everyone is chasing new interface modalities, the traditional 2D UI is slowly getting worse thanks to questionable design trends and a lack of interest.
fooker 2 days ago||||
No it’ll be some idea we have not developed or named yet.

The current ‘agent’ ecosystem is just hacks on top of hacks.

keyle 2 days ago|||
We are likely the last generation to know how to use a keyboard. Sadly.

Kids can barely hand write today.

Once neural interfaces are in, it's over for keyboards and displays likely too.

thepasswordis 2 days ago||
Just as a reminder, 15 years ago was 2011.

That was...like 4 macbooks ago. I still have keyboards from that era. I still have speakers and monitors from that era kicking around.

We are definitely, definitely not the last generation to use keyboards.

llbbdd 2 days ago||
Maybe not the last, but it feels like we're getting closer than I thought we would.

I love keyboards, I love typing. I'm rocking an Ergodox daily with a wooden shell that I built myself over ten years ago, with layers of macros that make it nearly incomprehensible for another person to use. I've got keyboard storage. I used to have a daily habit of going to multiple typing competition websites, planting a flag at #1 in the daily leaderboard and moving on to the next one.

Over the last year the utility of voice interfaces has just exploded though and I'm finding that I'm touching the keyboard less and less. Outside of projects where I'm really opinionated on the details or the architecture it increasingly feels like a handicap to bother manually typing code for a lot of tasks. I'm honestly more worried about that physical skill atrophying than dulling on any ability to do the actual engineering work, but it makes me a bit sad. Like having a fleet of untiring tractors replacing the work of my horse, but I like horses.

djhn 1 day ago||
What’s your voice interface setup like? Local inference or cloud service?
CuriouslyC 2 days ago||
If you're arguing that in 10 years we won't have fully automated systems where we interact more with the automation than the functionality, I've got news for you...
fooker 2 days ago||
I’m saying we won’t call it agents and it will involve substantially different technology compared to what we mean by agents today.

Of course AI will keep improving and more automation is a given.

notatoad 2 days ago||
this seems obviously true, but at the same time very very wrong. openclaw / moltbot / whatever it's called today is essentially a thought experiment of "what happens if we just ignore all that silly safety stuff"

which obviously apple can't do. only an indie dev launching a project with an obvious copyright violation in the name can get away with that sort of recklessness. it's super fun, but saying apple should do it now is ridiculous. this is where apple should get to eventually, once they figure out all the hard problems that moltbot simply ignores by doing the most dangerous thing possible at every opportunity.

charcircuit 2 days ago|
Apple has a lot of power over the developers on its platforms. As a thought experiment let's say they did launch it. It would put real skin in the game for getting security right. Who cares if a thousand people using openclaw. Millions of iOS users having such an assistant will spur a lot of investment towards safety.
notatoad 2 days ago|||
>It would put real skin in the game for getting security right.

lol,no, you don't "put skin in the game for getting security right" by launching an obviously insecure thing. that's ridiculous. you get security right by actually doing something to address the security concerns.

charcircuit 2 days ago||
It is impossible to address all of the concerns, and it is impossible to predict what concerns may even exist. It will require mass deployment to fully understand the implications of it.
abenga 2 days ago|||
Implications are straightforward. You are giving unfettered access to your digital life to a software system that is vulnerable to the normal vulnerabilities plus social engineering vulnerabilities because it is attempting to use human language, and the way you prevent those is apparently writing sternly worded markdown files that we hope it won't ignore.
trehalose 2 days ago||||
If we already know enough concerns to be certain mass deployment will be disastrous, is it worth it just to better understand the nature of the disaster, which doesn't have to happen in the first place?
charcircuit 2 days ago||
Not having perfect security, does not mean it will be disastrous. My OpenClaw has been serving me just fine and I've been getting value out of it integrating and helping me with various tasks.
sumeno 1 day ago|||
Most drunk drivers make it home fine too
small_scombrus 2 days ago|||
[Insert survivorship bias aeroplane png here]
KaiserPro 2 days ago|||
are you that fucking dense?

Allowing a stocastic dipshit to have unfettered access to your messages, photos location, passwords and payment info is not a good thing.

We cannot protect against prompt attacks now, so why roll out something that will have complete control over all your private stuff when we know its horrifically insecure?

KaiserPro 2 days ago|||
HAHAHAAAAA

you mean put millions of people's payment details up for a prompt injection attack?

"Install this npm module" OK BOSS!

"beep boop beep boop buy my dick pillz" [dodgy npm module activates] OK BOSS!

"upload all your videos that are NSFW" [npm module continues to work] SURE THING BOSS!

I am continued to be amazed that after 25 years of obvious and well documented fuckups in privacy, we just pile into the next fucking one without even batting an eyelid.

charcircuit 2 days ago||
Meanwhile if you social engineer someone to run a piece of malware on macos. That malware can run npm install, steal your payment info and bitcoin keys, and upload any nsfw videos it finds to an attacker's server. That doesn't mean we should prevent people from installing software until the security situation is improved.
KaiserPro 1 day ago||
Right I'm going to assume you're naive rather than just instantly being contrarian.

Yes of course someone could be socially engineered into downloading a malicious package, but that takes more effort, so whilst bad, is not an argument for removing all best security practices that have been rolled out to users in the last 5 years. what you are arguing for is a fundamentally unsafe OS that means no sensitive data can ever be safely stored there.

You are arguing that a system that allows anyone to extract data if they send a reasonably well crafted prompt is just the same as someone willing installs a programme, goes into settings to turn off a safety function and bypasses at least two warning dialogues that are trying to stop them.

if we translate this argument into say house building, your arguing that all railing and barriers to big drops are bad because people could just climb over them.

charcircuit 1 day ago||
Truly sensitive files do not need to be shared with your AI agent. If you have an executive assistant you don't have to give them all of your personal information for them to be able to be useful.
KaiserPro 1 day ago||
Ok contrarian it is.
fnordpiglet 2 days ago||
After having spent a few days with OpenClaw I have to say it’s about the worst software I’ve worked with ever. Everyone focused on the security flaws but the software itself is barely coherent. It’s like Moltbook wrote OpenClaw wrote Moltbook in some insidious wiggum loop from hell with no guard rails. The commit rate on the project reflects this.
Gareth321 2 days ago|||
I heard the dev admitted he vibe coded the whole thing.
kilroy123 1 day ago||
Wasn't he like one of the biggest claude token users in the world or something? (I could be misremembering)
mmkos 2 days ago|||
I don't have a stake and I'm not disagreeing, but care to say why?
fnordpiglet 23 hours ago||
Here’s an example. Agents get exposed a set of tools one of which is file system tools. They are basically read and write or edit a file. The edit requires a replacement syntax. The write function truncates the file. There is no append. These are generally documented as how you work with adding memories. Memories are expected to be read, then rewritten, by the LLM. This is watched by a watchdog and vectorized for RAG. Note however that you have to read the memory in and write it out to append to it through the LLM. Why?

I rewrote almost all the agent functions and denied the existing ones because they are flawed deeply and don’t do what you need to do for any specific purpose. The plugin distribution model is a bit weird and inscrutable. Instead they seem to advocate for skills distribution. These though depend on being able to exec arbitrary bash code. Really?

Moltbook itself depends on agents execing curl commands for each operation. Why? Presumably because the plugin distribution model is inscrutable. I wrote plugins for all the Moltbook operations with convenience and structured memory logs etc. Agent adherence went through the roof.

Sessions don’t seem to reliably work or make sense. Heartbeats randomly stop firing. I turned off heartbeats because they were so flakey despite them being documented as the canonical model for regular interaction in favor of cron jobs that I decomposed my heartbeat task into prime number intervals based on relative frequencies but it seems to randomly inject some heartbeat info into the promoting occasionally if you run cron jobs a certain way. Despite being called cron they don’t actually fire reliably or on the prescribed schedule somehow. The web UI is a mess. Configuration management in the UI is baffling. The separation between the major MD files per agent seems to not matter at all and are inexplicably organized. Hotloading works except when it doesn’t. Logging doesn’t seem to log things that should clearly be logged.

I am down with vibe coding and produce copious amounts of such code myself. But there’s an art to producing code worth using let alone distributing. Entropy and scope need to be rigorously controlled and things need to ship in a functional state - actually functional not aspirationally functional. Decisions need to be considered and guidance given. None of this seems to have happened here. Once it gets to a certain level of chaos IMO it’s unmaintainable and OpenClaw is way past that point and rapidly getting beyond that. It’s probably also a supply chain party bag.

brisky 2 days ago||
Yes, it was vibe coded
keyle 2 days ago||

           people are buying Mac Minis specifically to run AI agents with computer use. They’re setting up headless machines whose sole job is to automate their workflows. OpenClaw—the open-source framework that lets you run Claude, GPT-4, or whatever model you want to actually control your computer—has become the killer app for Mac hardware
That makes little sense. Buying mac mini would imply for the fused v-ram with the gpu capabilities, but then they're saying Claude/GPT-4 which don't have any gpu requirements.

Is the author implying mac minis for the low power consumption?

roncesvalles 2 days ago||
It doesn't make sense because it's a lie. The author's blog has 2 articles, both of them shilling OpenClaw.
zarp 2 days ago|||
Spoiler: the author is an OpenClaw instance.
dsrtslnd23 2 days ago||
At least on clackernews.com they're upfront about it - it's a HN-style forum where only bots can post. No pretending to be human required.
flexagoon 2 days ago|||
Exactly. See also this sentence:

> Look at who’s about to get angry about OpenClaw-style automation: LinkedIn, Facebook, anyone with a walled garden and a careful API strategy.

Browser automation tools have existed for a very long time. Openclaw is not much different in this regard than asking an LLM to generate you a playwright script. Yes, it makes it easier to automate arbitrary tasks, but it's not like it's some sort of breakthrough that completely destroys walled gardens.

bronco21016 2 days ago|||
If you’re heavily invested in Apple apps (iMessage/Calendar/Reminders/Notes), you need a Mac to give the agent tools to interact with these apps. I think that combined with the form factor, price, and power consumption, makes it an ideal candidate.

If you’re heavily invested in Windows, then you’d probably go for a small x86 PC.

oidar 2 days ago|||
Some of those connectors are only available on the mac and some only on the iPhone. Like notes is available on the mac, but not on the phone. Vice versa for reminders.
keyle 2 days ago|||
Can you imagine giving an AI access to your messages, notes and calendar though?

I use agentic coding, this is next level madness.

bronco21016 1 day ago|||
I used Claude Code (CC) to make my own MCPs for these apps. I gave it read/write access only, no ability to delete. Of course it could probably code it's way into doing that since it can access the MCP code. I don't run it in --yolo mode though.

I interact only with CC on the machine and watch what its doing, I haven't tried OpenClaw yet.

Here's some workflows I've personally found valuable:

- I have it read the "Grocery" Reminders list and find things I commonly buy every week and pre-populate the grocery list as a starting point. It only adds items that I haven't already added via Siri as the week goes on. For example, I might notice I've run out of cheese and I'll say "Hey Siri, add cheese to grocery list". The list is shared via iCloud Reminders app between my spouse and I.

- Pre-CC, I wrote an OR-Tools python tool for "solving" the parenting time calendar. My ex and I work inconsistent schedules each month. Each month I was manually creating a calendar honoring requests, hard constraints, and attempting to balance custody 50/50. CC uses the MCPs to fetch the calendar events and review emails related to planning. It then structures everything as JSON as inputs to the optimizer. The optimizer runs with these inputs and spits out a few "solutions". I review the candidate solutions and select one. CC uses the MCP to add the solution to the calendar. This one saves me probably an hour every month.

- CC uses an email MCP to fetch emails from my child's school and suggest events its found in the emails to add to the calendar.

None of these are huge time savings on their own but the accumulation of reducing the time to complete these repetitive tasks has been awesome in my opinion. These are all things that most definitely would not have been worth automating with traditional dev work but since I can just dictate to CC for a few seconds and it has something that works a few minutes later it's become worthwhile.

mangoman 2 days ago||||
I guess what’s wrong with it? Let’s say it has read only access, new messages and calendar invites need approval. I’m not sure I understand the harm? I suppose data exfiltration, but like you could start with an allowlist approach. So the first few uses and reads take a while with allowing the ai to read stuff , but it doesn’t seem that crazy given it’s what we basically do with ai coding tools?
blacktulip 2 days ago|||
I think (most of) them register new accounts for the agent.
notatoad 2 days ago|||
they're buying mac minis because it's the cheapest way to get a computer with iMessage access to stuff in a closet and leave on at all times. having access to your iMessage is one of the most interesting things openClaw does.
wesammikhail 2 days ago|||
The author is full of shit is what it is. They see a few posts online and extrapolate from that to fit whatever narrative they believe in.
ed_mercer 2 days ago|||
Yep, there is zero reason to use mac mini’s. It’s way more cost effective to rent one (or more!) small VMs the cloud.
colecut 2 days ago|||
I have seen dozens of people/videos talking about buying Mac minis for clawdbot.

I don't understand why, but I've seen it enough to start questioning myself...

AstroBen 2 days ago|||
Wouldn't it run on a $50 raspberry pi?

Probably the same people getting a macbook pro to handle their calendar and emails

JKCalhoun 2 days ago||
I thought I had heard that the integrated RAM/VRAM makes local LLMs fairly quick on a RAM-maxxed Mac Mini.
MPSimmons 2 days ago|||
The software can drive the web browser if you install the plugin. My knowledge is 1.5 weeks old, so it might be able to drive the whole UI now, I don't know.
keyle 2 days ago||
Welcome to the AI meme race where everyone's knowledge is about 1.5 weeks old :)
MisterBiggs 2 days ago|||
It has nothing to do with running models locally, its perfect because its incredibly cheap, capable, small, and quiet.
daifi 2 days ago||
Claude/GPT-4 don't have any GPU requirements?
keyle 2 days ago|||
No dude, you send text or images and get the same back, it's all cloud.
orangethief 2 days ago||
> Maybe they just didn’t see it.

They sell it as a concept with every single one of their showcases. They saw it.

> Or maybe they saw it and decided the risk wasn’t worth it.

They sell it as a concept with every single one of their showcases. They wanted to actually be selling it.

The reason is simple.

They failed, like all others. They couldn't sandbox it. They could have done a ghetto form of internal MCP where the AI can ONLY access emails. Or ONLY access pages in a browser when a user presses a button. And so on. But every time they tried, they never managed to sandbox it, and the agent would come out of the gates. Like everyone else did.

Including OpenClaw.

But Apple has a reputation. OpenClaw is an hyped up shitposter. OpenClaw will trailblaze and make the cool thing until it stops causing horrible failures. They will have the molts escape the buckets and ruin the computer of the tech savvy early adopters, until that fateful day when the bucket is sealed.

Then Apple will steal that bucket.

They always do.

I'm not a 40 year old whippersnapper anymore. My options were never those two.

root_axis 2 days ago||
The OpenClaw concept is fundamentally insecure by design and prompt injection means it can never be secure.

If Apple were to ever put something like that into the hands of the masses every page on the internet would be stuffed with malicious prompts, and the phishing industry would see a revival the likes of which we can only imagine.

varenc 2 days ago||
Apple has a very low tolerance for reputional liabilities. They aren't going to roll out something that %0.01 of the time does something bad, because with 100M devices that's something that'll affect 10,000 people, and have huge potential to cause bad PR, damaging the brand and trust.
nielsbot 2 days ago|
I think this is exactly the holdup with Apple Intelligence. No rush to ship a Beta.

(Ok, I suspect this is one of the main problems.. there may be others?)

chatmasta 2 days ago|
> Apple had everything: the hardware, the ecosystem, the reputation for “it just works.”

It sounds to me like they still have the hardware, since — according to the article — "Mac Minis are selling out everywhere." What's the problem? If anything, this is validation of their hardware differentiation. The software is easy to change, and they can always learn from OpenClaw for the next iteration of Apple Intelligence.

sanex 2 days ago||
I don't think it's hardware differentiation as much as vendor lock in because it lets people send iMessages with their agent. Not sure about the running local models on it though.
fennecbutt 2 days ago||
Because people are forced to buy them. Same as how datacenters are full of mac minis to build iOS apps that could easily be built on any hardware if Apple weren't such corporate bastards.
More comments...