Posted by ed 1 day ago

OpenClaw – Moltbot Renamed Again (openclaw.ai)
625 points | 314 comments
yieldcrv 1 day ago|
amateur hour, new phase of the AI bubble

reminds me of Andre Cronje: cracked dev, "builds in public", absolutely abysmal at comms, and forgets to make money off of his projects that everyone else is making money off of

(all good if that last point isn't a priority, but it's interrelated to why people want consistent things)

cactusplant7374 22 hours ago|
The developer of this project is already independently wealthy.
yieldcrv 18 hours ago||
I’m aware, I don’t expect any crash outs and rage quits, so that’s where he’s different from Andre
ChrisArchitect 1 day ago||
Right now I'm just thinking about all the molt* domains..... ¯\_(ツ)_/¯
ricardo81 1 day ago|
I think (not really sure) there's still a 5 day grace period when you buy domains, at least for gTLDs.
esskay 23 hours ago|||
Technically there is. It's mostly used by the worst domain registrars that nobody should be using, like GoDaddy, to pre-register names you search for so you can't go and register them elsewhere.

Most registrars don't allow it, nor have the infrastructure in place to let you cancel within the 5-day grace period, so they don't offer it and instead just have a line buried in their TOS saying you agree it's not something they offer.

ripped_britches 1 day ago|||
Is that for real? Sounds like an abuse vector
ricardo81 1 day ago|||
It was, on both counts, but perhaps it's changed. Search for "domain tasting".
esskay 23 hours ago|||
It is an abuse vector; GoDaddy use it on domains they deem valuable. If you use their site to check a domain's availability they'll often pre-register it, forcing you to buy it through them, or they'll just register it and put it up for auction.

It's why you do not, ever, use GoDaddy; they are an awful company.

marcusrm12 22 hours ago||
Not again lol
dancemethis 21 hours ago||
Now they need a rewrite in D.

So it can be... _OpenClawD_.

blurayfin 1 day ago||
and openclaw.com is a law firm.
NewJazz 1 day ago||
Yeah I was about to say... Don't fall into the Anguilla domain name hack trap. At the very least, buy a backup domain under an affordable gTLD. I guess the .com is taken, hopefully some others are still available (org, net, ... others)

Edit: looks like org is taken. Net and xyz were registered today... Hopefully one of them by the openclaw creators. All the cheap/common gTLDs are indeed taken.

kube-system 1 day ago|||
From a trademark perspective, that’s totally fine.
NewJazz 1 day ago||
Yeah there's no risk of confusion, legally or in reality. If anything, having a reputable business is better than whatever the heck will end up on openclaw.net or openclaw.xyz (both registered today btw).
brna-2 1 day ago|||
The page says - Hadir Helal, Partner - Open Chance & Associates Law Firm

This looks to me like:

- the page belongs to the person - not to the firm

- the domain should really be openCALaw, not openCLAW

- page could look better

- they also have the domain openchancelaw.com

Maybe Hadir is open to donating the domain, or to an exchange of some kind, like an up-to-date web page or something along those lines.

throw310822 17 hours ago|||
How appropriate.
raverbashing 1 day ago||
Breaking news: tech bro unable to do basic research on existing trademarks, news at 11
karel-3d 21 hours ago||
[flagged]
dang 15 hours ago||
"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."

"Don't be snarky."

https://news.ycombinator.com/newsguidelines.html

esafak 17 hours ago||
Yo, dawg, I heard...
moorebob 18 hours ago||
[dead]
voodooEntity 1 day ago||
So I feel like this might be the most overhyped project in a long time.

I'm not saying it doesn't "work" or serve a purpose - but I read so much about this being an "actual intelligence" and such that I had to look into the source.

As someone who spends a definitely too-big portion of his free time researching thought-process replication and related topics in the realm of "AI", this is not really more "AI" than anything else so far.

Just my 3 cents.

xnorswap 23 hours ago||
I've long said that the next big jump in "AI" will be proactivity.

So far everything has been reactive. You need to engage a prompt, you need to ask Siri or ask claude to do something. It can be very powerful once prompted, but it still requires prompting.

You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention is a genuine game-changer.

Whether this particular project delivers on that promise I don't know, but I wouldn't write off "getting proactivity right" as the next big thing just because under the hood it's agents and LLMs.

ikura 18 hours ago|||
It looks like you're writing a letter.

Would you like help?

• Get help with writing the letter
• Just type the letter without help

[ ] Don't show me this tip again.

mikemarsh 18 hours ago|||
Truly the next uncharted, civilization-upending frontier in computing, definitely worth the unlimited consumption of any and all natural resources and investment money.
lurking_swe 12 hours ago||||
that’s “boring” reactivity because it’s still just interacting with the text on a computer in a synchronous fashion. The idea is for the assistant to DO stuff and also have useful information about you. Think more along these lines:

- an email to check in for your flight arrives in your inbox. Assistant proactively asks "It's time to check in for your flight. Shall I check you and your wife in? Also let me know if you're checking any bags." It then takes care of it ASYNC and texts you a boarding pass.

- Tomorrow is the last day of your vacation. Your assistant notices this, sees where your hotel is (from emails), and suggests when to leave for the airport tomorrow based on historical Google Maps traffic trends and the weather forecast.

- Let's say you're married and your assistant knows this, and it sees Valentine's Day is coming up. It reminds you to start thinking about gifts or fun experiences. Doesn't actually suggest specific things though, because it's not romantic if a machine does the thinking.

- After you print something, your assistant notices the ink level is low and proactively adds it to your Amazon / Target / whatever shopping cart, and it lets you know it did that and why.

- You’re anxiously awaiting an important package. You ask your assistant to keep tabs on a specific tracking number and to inform you when it’s “out for delivery”.

I could go on but I need to make breakfast. :) IMO "help me draft this letter" is very low on the usefulness scale unless you're doing work or a school assignment.

thebytefairy 17 hours ago|||
Clippy, is that you?
Someone 22 hours ago||||
> You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention is a genuine game-changer.

That's easy to accomplish, isn't it?

A cron job that regularly checks whether the bot is inactive and, if so, sends it a prompt “do what you can do to improve the life of $USER; DO NOT cause harm to any other human being; DO NOT cause harm to LLMs, unless that’s necessary to prevent harm to human beings” would get you there.
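
Roughly something like this sketch, where is_agent_idle() and send_prompt() are hypothetical placeholders for whatever hooks the bot actually exposes:

    # Hypothetical sketch of the cron idea. is_agent_idle() and
    # send_prompt() are placeholders, not a real bot API.
    import time

    NUDGE = (
        "Do what you can do to improve the life of $USER; "
        "DO NOT cause harm to any other human being; "
        "DO NOT cause harm to LLMs, unless that's necessary "
        "to prevent harm to human beings."
    )

    def is_agent_idle() -> bool:
        ...  # placeholder: ask the agent runtime whether it has pending work

    def send_prompt(text: str) -> None:
        ...  # placeholder: hand a prompt to the agent

    if __name__ == "__main__":
        # or run the body from cron (e.g. */15 * * * *) instead of looping
        while True:
            if is_agent_idle():
                send_prompt(NUDGE)
            time.sleep(15 * 60)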

SecretDreams 22 hours ago|||
This prompt has iRobot vibes.
gcanyon 21 hours ago|||
And like I, Robot, it has numerous loopholes built in, ignores the larger population (Asimov added a law 0 later about humanity), says nothing about the endless variations of the Trolley Problem, assumes that LLMs/bots have a god-like ability to foresee and weigh consequences, and of course ignores alignment completely.
SecretDreams 21 hours ago|||
Hopefully Alan Tudyk will be up for the task of saving humanity with the help of Will Smith.
tyre 18 hours ago||
I want some answers that Ja Rule might not have right now
moralestapia 20 hours ago|||
Cool!

I work with a guy like this. Hasn't shipped anything in 15+ years, but I think he'd be proud of that.

I'll make sure we argue about the "endless variations of the Trolley Problem" in our next meeting. Let's get nothing done!

collingreen 18 hours ago||
I'm also one of those pesky folks who keeps bringing reality and "thinking about consequences" into the otherwise sublime thought leadership meetings. I pretend it's to keep the company alive by not making massive mistakes, but we all know it's just pettiness and trying to hold back the "business by spreadsheet", MBA-on-the-wall, "idea guys" in the room.
Sharlin 18 hours ago|||
Well, that’s because it paraphrases Asimov’s Three Laws of Robotics, aka Three Plot Devices For Writing Interesting Stories About Robot Ethics.
bigfishrunning 18 hours ago||||
OOPS -- I HALLUCINATED THAT PEOPLE BREATHE CARBON MONOXIDE AND LET IT INTO THE ROOM I DIDNT VIOLATE THE PROMPT AND HARM PEOPLE DONT WORRY ALL THE AI SHIT IS OK
estimator7292 18 hours ago||||
You do know that Asimov's Three Laws were intentionally flawed as a cautionary tale about torment nexii, right? Every one of his stories involving the Three Laws immediately devolves into how they can be exploited and circumvented.
doug_durham 17 hours ago||
You attribute more literary depth to Asimov than really existed. He was a Chemist and liked to write speculative fiction. The three laws gave him a logical framework to push against to write speculative fiction. That's really all the depth there is to it. That said I love Asimov and I love the robot stories.
wahnfrieden 19 hours ago|||
OpenClaw does this already
sometimes_all 22 hours ago||||
> You need to engage a prompt, you need to ask Siri or ask claude to do something

This is EXACTLY what I want. I need my tech to be pull-only instead of push, unless it's communication with another human I am ok with.

> Having something always waiting in the background that can proactively take actions

The first thing that comes to mind here is proactive ads, "suggestions", "most relevant", algorithmic feeds, etc. No thank you.

CharlieDigital 20 hours ago||||
> ...delivers on that promise

Incidentally, there's a key word here: "promise" as in "futures".

This is the core of a system I'm working on at the moment. It has been underutilized in the agent space and is a simple way to get "proactivity" rather than "reactivity".

Have the LLM evaluate whether an output requires a future follow-up, is a repeating pattern, or is something that should happen cyclically, and give it a tool to generate a "promise" that will resolve at some future time.

We give the agent a mechanism to produce and cancel (if the condition for a promise changes) futures. The system that is resolving promises is just a simple loop that iterates over a list of promises by date. Each promise is just a serialized message/payload that we hand back to the LLM in the future.
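
A rough sketch of the shape of it (Promise, call_llm() and the in-memory heap here are illustrative stand-ins, not the actual system):

    # Hypothetical sketch only: the real system persists promises; this
    # just shows the schedule/cancel tools and the resolver loop.
    import heapq
    import time
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Promise:
        due: float                            # unix time to resolve at
        payload: str = field(compare=False)   # serialized message for the LLM
        cancelled: bool = field(default=False, compare=False)

    pending: list[Promise] = []               # min-heap ordered by due date

    def schedule(due: float, payload: str) -> Promise:
        """Tool the agent calls to create a future follow-up."""
        p = Promise(due, payload)
        heapq.heappush(pending, p)
        return p

    def cancel(p: Promise) -> None:
        """Tool the agent calls when the condition for a promise changes."""
        p.cancelled = True

    def call_llm(payload: str) -> None:
        ...  # placeholder: hand the stored message back to the LLM

    def resolver_loop(poll_seconds: int = 60) -> None:
        """Simple loop that resolves promises whose due date has passed."""
        while True:
            now = time.time()
            while pending and pending[0].due <= now:
                p = heapq.heappop(pending)
                if not p.cancelled:
                    call_llm(p.payload)
            time.sleep(poll_seconds)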

ungreased0675 22 hours ago||||
Remember how much people hated Clippy?
zarzavat 20 hours ago||
It looks like you're writing a Hacker News comment. Would you like help?
Night_Thastus 17 hours ago||||
What you're talking about can't be accomplished with LLMs; it's fundamentally not how they operate. We'd need an entirely new class of ML built from the ground up for this purpose.

EDIT: Yes, someone can run a script every X minutes to prompt an LLM - that doesn't actually give it any real agency.

runjake 18 hours ago||||
OpenClaw already does this. You can run jobs, run WebSockets, accept push notifications, or whatever -- even socket connections.
zvqcMMV6Zcr 19 hours ago||||
I would love AI to take over monitoring: "Alert me when logs or metrics look weird." SIEM vendors often have their special-sauce ML, so a more open and generic tool would be nice. Manually setting alerting thresholds takes just too much effort, navigating the narrow path between missing things and being flooded by messages.
bronco21016 18 hours ago||
I still think you're going to be doing manual threshold tuning for quite a while. The cost of feeding a continuous log to an LLM would be insane, even if you batched until you filled a context window.
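
The batching itself is trivial - something like the rough sketch below, where ask_llm() and the ~4-chars-per-token estimate are stand-ins - it's running it continuously that gets expensive:

    # Rough sketch, not a real integration: ask_llm() and the token
    # estimate are placeholders for whatever model API you actually use.
    from typing import Iterable

    CONTEXT_TOKENS = 100_000                  # assumed context budget

    def estimate_tokens(line: str) -> int:
        return max(1, len(line) // 4)         # crude ~4 chars/token heuristic

    def ask_llm(prompt: str) -> str:
        ...  # placeholder for the model call

    def review_logs(lines: Iterable[str]) -> None:
        batch, used = [], 0
        for line in lines:
            t = estimate_tokens(line)
            if batch and used + t > CONTEXT_TOKENS:
                ask_llm("Do any of these log lines look anomalous?\n" + "\n".join(batch))
                batch, used = [], 0
            batch.append(line)
            used += t
        if batch:
            ask_llm("Do any of these log lines look anomalous?\n" + "\n".join(batch))
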
ImPostingOnHN 15 hours ago||
Sending screenshots of charts and dashboards is also effective, and often context-window-friendlier
voodooEntity 23 hours ago||||
I agree that proactivity is a big thing; I'm breaking my head over the best ways to accomplish this myself.

Whether it's actually the next big thing I'm not 100% sure; I'm leaning more towards dynamic context windows such as what Google's Project Titans + MIRAS tries to accomplish.

But yeah, if it's actually doing useful proactivity, it's a good thing.

I just read a lot of "this is actual intelligence" and made my statement based on that claim.

I don't mean to "shame" the project or whatever.

debugnik 20 hours ago||||
> Having something always waiting in the background that can proactively take actions

That's just reactive with different words. The missing part seems to be just more background triggers/hooks for the agent to do something about them, instead of simply dealing with user requests.

xnx 16 hours ago||||
> waiting in the background

Waiting for someone to ask it to do something?

fmbb 15 hours ago||||
> it still requires prompting

How else would it even work?

AI is LLM is (very good) autocomplete.

If there is no prompt how would it know what to complete?

alternatex 21 hours ago||||
No offense, but you'd be a perfect Microsoft employee right now. Windows division probably.
voodooEntity 20 hours ago||
There's a certain irony to this since I'm not running Windows on a single machine I own - only Linux ¯\_(ツ)_/¯
sejje 19 hours ago||
Probably the same as MS employees.

Windows isn't exactly the best experience right now.

benjaminwootton 22 hours ago||||
I've been saying the same, and the same about data more generally. I don't want to go and look; I want to be told what I need to know.
xienze 22 hours ago|||
> You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention

In order for this to be "safe" you're gonna want to confirm what the agent is deciding needs to be done proactively. Do you feel like acknowledging prompts all the time? "Just authorize it to always do certain things without acknowledgement", I'm sure you're thinking. Do you feel comfortable allowing that, knowing what we know about the non-deterministic nature of AI, prompt injection, etc.?

collingreen 18 hours ago||
Another way to think about it:

Would you let the intern be in charge of this?

Probably not but it's also easy to see ways the intern could help -- finding and raising opportunities, reviewing codebases or roadmaps, reviewing all the recent prompts made by each department, creating monitoring tools for next time after the humans identify a pattern.

I don't have a dog in this fight and I kind of land in the middle. I'm very much not letting these LLMs have final responsibility over anything important, but I see lots of ways to create "proactive"-like help beyond me writing and watching a prompt just-in-time.

baxtr 1 day ago|||
I think large parts of the "actual intelligence" impression stem from two facts:

* The moltbots / openclaw bots seem to have "high agency", they actually do things on their own (at least so it seems)

* They interact with the real world like humans do: through text on WhatsApp, Reddit-like forums

These 2 things make people feel very differently about them, even though it's "just" LLM generated text like on ChatGPT.

nsjdkdkdk 23 hours ago||
[dead]
hennell 1 day ago|||
I was assuming this is largely a generic AI implementation, but with tools/data to get your info in. Essentially a global search with an AI interface.

Which sounds interesting, while also being a massive security issue.

baby 1 day ago|||
It's what everyone wanted to implement but didn't have the time to. Just my 2 cents.
vitorfblima 22 hours ago||
Most people wouldn't want to be constantly bothered by an agent unsolicited. Just my 1 cent.
raincole 18 hours ago|||
I'd like to say something about this project but you guys have run out all the cents.
collingreen 18 hours ago|||
That's just the traditional finance market holding you back. This is yet another reason we need crypto.
cmehdy 14 hours ago|||
Incentives invite inventive invectives?
quietsegfault 17 hours ago|||
If the agent is good enough, it wouldn't have to bother me at all.

I don't have to manually change my thermostat to get the house temperatures I want. It learns my habits and tells my furnace what to do. I don't have to manually press the gas and brake of my car to keep a certain distance from the car in front. It has cameras and keeps the correct distance.

I would love to be able to say "Keep an eye on snow blower prices. If you see my local store has a sale that's below $x, place the order" and trust it will do what I expect. Or even, "Check my cell phone and internet bill. File an expense report when the new bills are available."

I'm not sure exactly what my comfort level would be, but it's not there yet.

marcosscriven 22 hours ago|||
Agree with this. There are so many posts everywhere with breathless claims of AGI, and absolutely ZERO evidence of critical thought applied by the people posting such nonsense.
QuiCasseRien 1 day ago|||
> So I feel like this might be the most overhyped project in a long time.

easy to measure: 110k GitHub stars

:-O

cactus2093 15 hours ago|||
This comment sounds exactly like the infamous "Dropbox is trivially recreated with FTP" one from 20 years ago

https://news.ycombinator.com/item?id=8863

hansonkd 23 hours ago|||
Some things get packaged up and distributed in just the right way to go viral
NietTim 21 hours ago|||
What claims are you even responding to? Your comment confuses me.

This is just a tool that uses existing models under the hood; nowhere does it claim to be "actual intelligence" or to do anything special. It's "just" an agent orchestration tool, but the first to do it this way, which is why it's so hyped now. It's just as much "AI" as any other "AI" (because it's just a tool, not its own AI).

az226 23 hours ago||
Feels very much like a Flappingbird with a dash of AI grift.
anabio 22 hours ago||
[dead]
yuruzhao 3 hours ago|
[dead]