
Posted by jakequist 3 days ago

OpenClaw is what Apple Intelligence should have been (www.jakequist.com)
513 points | 411 comments
khalic 2 days ago|
Oh yeah nothing like all my data being sent to a third party and access to all my apps. JFC people…
AlexCoventry 2 days ago||
...And it will be, now that Apple has partnered with OpenAI. The foundation of OpenClaw is capable models.
matt3210 2 days ago||
> Imagine if Siri could genuinely file your taxes

No sane person would let an AI agent file their taxes

alexruf 2 days ago||
Yes, and I am glad OpenClaw built it first, so Apple doesn’t make such a terrible mistake.
semiquaver 3 days ago||
I genuinely don't understand this take. What makes OP think that the company that failed so utterly to deliver even mediocre AI -- Siri is stuck in 2015! -- would be up to the task of delivering something as bonkers as Clawdbot?
chefsweaty 2 days ago|
My thoughts exactly. I can see this coming out in iOS 45, announced as a brand-new, groundbreaking technology.
zombot 2 days ago||
The author must have drunk unhealthy amounts of Kool-Aid.
EGreg 2 days ago||
No. Emphatically NOT. Apple has done a great job safeguarding people's devices and privacy from this crap. And no, AI slop and local automation are scarcely better than giving up your passwords to see pictures of cats, which is an old meme about the gullibility of the general public.

OpenClaw is a symbol of everything that's wrong with AI, the same way that shitty memecoins whose teams rug-pull you, or blockchain-adjacent centralized "give us your money and we pinky swear we are responsible" services, are a symbol of everything wrong with Web3.

Giving everyone GPU compute and open source models to use it is like giving everyone their own Wuhan gain-of-function lab and hoping it'll be fine. Um, the probability of NO ONE developing bad things with AI goes to 0 as more people have it. Here's the problem: with distributed, unstoppable compute, even ONE virus or bacterium escaping will be bad (as we've seen with the coronavirus, smallpox, the black plague, etc.). And here we're talking about far more active and adaptable swarms of viruses that coordinate and can wreak havoc at unlimited scale.

As long as countries operate on the principle of competition instead of cooperation, we will race towards disaster. The horse will be out of the barn very shortly, as open source models running on dark compute begin to power swarms of bots that become unstoppable advanced persistent threats (as I've been warning for years).

Gain-of-function research on viruses is the closest thing I can think of that's as reckless. And at least there, the labs were super isolated and locked down. This is like giving everyone their own lab to make designer viruses, and hoping that we'll have thousands of vaccines out in time to prevent a worldwide catastrophe from thousands of global persistent viruses. We're simply headed towards a nearly 100% likely disaster if we don't stop this.

If I had my way, AI would only run in locked-down environments and we'd just use inert artifacts it produces. This is good enough for just about all the innovations we need, including for medical breakthroughs and much more. We know where the compute is. We can see it from space. Lawmakers still have a brief window to keep it that way before the genie cannot be put back into the bottle.

A decade ago, I really thought AI would be responsibly developed, like this: https://nautil.us/the-last-invention-of-man-236814/ I still remember the quaint time when OpenAI and other companies promised they'd vet models thoroughly before releasing them or letting them use the internet. That was... 2 years ago. It was considered an existential risk. No one is talking about that now. Just recently, MCP was the new hotness.

I wasn't going to get too involved with building AI platforms, but I'm diving in, and a month from now I will release an alternative to OpenClaw that actually shows how things are supposed to go. It involves completely locked-down environments, with reproducible TEE bases and hashes of all models, and even deterministic AI, so we can prove to each other the provenance of each output all the way down to the history of the prompts and input images. I've already filed two provisional patents on both of these, and I'm going to implement them myself (not an NPE).

But even if it does everything as well as OpenClaw, or better, and 100% safely, some people will still want to run local models on general-purpose computing environments. The only way to contain the runaway explosion now is to come together the same way countries came together to ban chemical weapons and CFCs (in the Montreal Protocol) and let the hole in the ozone layer heal. It is still possible...
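To make the provenance idea concrete: a minimal sketch of hash-chaining inference steps, so anyone holding the log can verify the full history of model and prompts behind an output. The `record_step` function, field names, and sample values are my own illustration, not anything from the actual design described above:

```python
import hashlib
import json

def record_step(prev_hash: str, model_hash: str, prompt: str, output: str) -> dict:
    """Append one inference step to a hash-chained provenance log.

    Each entry commits to the previous entry, the model weights,
    and the prompt/output pair, so tampering with any earlier step
    changes every later hash.
    """
    entry = {
        "prev": prev_hash,
        "model": model_hash,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    # Canonical serialization (sorted keys) so the hash is reproducible.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

genesis = "0" * 64
model = hashlib.sha256(b"model-weights-v1").hexdigest()  # stand-in for real weights
step1 = record_step(genesis, model, "What is 2+2?", "4")
step2 = record_step(step1["hash"], model, "Double it", "8")
# A verifier recomputes each entry's hash and checks the chain links.
```

This only gives tamper-evidence, of course; binding the log to a specific execution is where the TEE attestation and deterministic inference would have to come in.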

This is how I feel:

https://www.instagram.com/reels/DIUCiGOTZ8J/

PS: For the last 15 years I've been a huge proponent of open source and an opponent of patents. When it comes to existential threats of proliferation, though, I'm willing to make an exception on both.

throwaway613746 3 days ago||
[flagged]
bee_rider 3 days ago||
The project is absurd enough that nobody really expects it to be secure, right? It is some wild niche thing for people who like to play with new types of programs.

This is not a train that Apple has missed, this is a bunch of people who’ve tied, nailed, tacked, and taped their unicycles and skateboards together. Of course every cool project starts like that, but nobody is selling tickets for that ride.

DrewADesign 2 days ago||
I think a lot of people have been spoiled (beneficially) by using large, professionally run SaaS services where your only serious security concerns were keeping your credentials secret and mitigating the downstream effects of data breaches. I could see someone who has only experienced that having a fundamentally different understanding of security.

What people are talking about doing with OpenClaw I find absolutely insane.

dmix 2 days ago||
> What people are talking about doing with OpenClaw I find absolutely insane.

Based on their homepage, the project is two months old, and the author described it as something he "hacked together" as a weekend project [1] before publishing it on GitHub. So this is very much the Raspberry Pi crowd coming up with crazy ideas; most of them probably don't work well, but the potential excites them enough to dabble in risky areas.

[1] https://openclaw.ai/blog/introducing-openclaw

DrewADesign 2 days ago||
In my feeds, I’ve seen several an-LLM-is-my-tech-lead-level, newly tech-ish people who are just plugging their lives into it and seeing what happens.

If this really was primarily tech savvy people prodding at the ecosystem, the top skill available, as of a few days ago, probably wouldn’t be a malware installer:

https://1password.com/blog/from-magic-to-malware-how-opencla...

elictronic 3 days ago|||
Apple had problems with just the chatbot side of LLMs because they couldn't fully control the messaging. Add in a small helping of losing your customers' entire net worth, and yeah. These other posters have no idea what they're talking about.
joshstrange 3 days ago||
Exactly. Apple is entirely too conservative to shine with LLMs because of their uncontrollability. Apple likes its control and its version of "protecting people" (which I don't fully agree with), which amounts to "We are way too scared to expose our customers to something we can't stop from doing or saying anything bad!" -- and that may end up being prudent. They won't come close to doing something like OpenClaw for at least a few more years, when the tech is (hopefully) safer and/or the Overton window has shifted.
FireBeyond 3 days ago||
And yet they'll push out AI-driven "message summaries" that are horrifically bad and inaccurate, often summarizing the intent of a message as the complete opposite of the full message up to and including "wants to end relationship; will see you later"?
fennecbutt 2 days ago||
Was about to point out the same thing. Apple's desperate rush to market, summarising news headlines badly and sometimes just plain hallucinating stuff, caused many public figures to react when they ended up the targets of such mishaps.
gordonhart 3 days ago||
Clawdbot/Moltbot/OpenClaw is so far from figuring out the “trust” element for agents that it’s baffling the OP even chose to bring it up in his argument.
leric 2 days ago||
[dead]
zombiwoof 3 days ago|
[dead]