Posted by brdd 1 day ago

A sane but bull case on Clawdbot / OpenClaw (brandon.wang)
226 points | 361 comments
louiereederson 5 hours ago|
- Why do you need a reminder to buy gloves when you are holding them?

- Why do you need price trackers for airbnb? It is not a superliquid market with daily price swings.

- Cataloguing your fridge requires taking pictures of everything you add and remove which seems... tedious. Just remember what you have?

- Can you not prepare for the next day by opening your calendar?

- If you have reminders for everything (responding to texts, buying gloves, whatever else is not important to you), don't you just push the problem of notification overload to reminder overload? Maybe you can get clawdbot to remind you to check your reminders. Better yet, summarize them.

angiolillo 23 minutes ago||
> Cataloguing your fridge requires taking pictures of everything you add and remove which seems... tedious. Just remember what you have?

I agree that removing items and taking pictures takes more effort than it saves, but I would use a simpler solution if one existed, because it turns out I cannot remember what we have. When my partner goes to the store, I get periodic text messages asking how much X we have; to check, I look in the fridge or pantry in the kitchen and then go downstairs to the fridge or pantry in the basement.

> Can you not prepare for the next day by opening your calendar?

In the morning I typically check my work calendar, my personal calendar, the shared family calendar, and the kids' various school calendars. It would be convenient to have these aggregated. (Copying events or sending new events to all of the calendars works well until I forget and one slips through the cracks...)

> If you have reminders for everything (responding to texts, buying gloves, whatever else is not important to you), don't you just push the problem of notification overload to reminder overload?

Yes, this is the problem I have. This doesn't look like a suitable solution for me, but I understand the need.

ribosometronome 1 hour ago|||
>Why do you need a reminder to buy gloves when you are holding them?

Am I missing this in the article? Do you mean the shoes he's holding? He explains it immediately.

>when i visited REI this weekend to find running shoes for my partner, i took a picture of the shoe and sent it to clawdbot to remind myself to buy them later in a different color not available in store. the todo item clawdbot created was exceptionally detailed—pulling out the brand, model, and size—and even adding the product listing URL it found on the REI website.

louiereederson 39 minutes ago|||
Yes you are missing the picture where Brandon asks Linguini to add a reminder to buy a pair of Arc'Teryx gloves, which Brandon is holding in his hands.
cactusplant7374 18 minutes ago|||
Wouldn't it have been better if Clawdbot continued to monitor the website and snipe-purchased the gloves as soon as they came back in stock? Can't we move beyond lists of things and take action?
ghostly_s 2 hours ago|||

    - Why do you need a reminder to buy gloves when you are holding them?
Had to go back because I skimmed over this screenshot. I have to presume it's because this guy, who books $600 Airbnbs for vacation, wants to save a couple bucks by ordering them on Amazon.
kevmo314 2 hours ago||
Wouldn't it be faster to buy them on Amazon then?
dewey 5 hours ago|||
That is most of the "productivity" bubble, with AI or not. You are trying to fit everything into tightly defined processes, categories and methodologies to not have to actually sit down and do the work.
sownkun 5 hours ago|||
This is how I perceive a lot of the AI being rammed down our throats: questionably useful.
LogicFailsMe 4 hours ago|||
That's because the loudest voices don't really get how the technology or the science works. They just know how to shout persuasively.

I think AI is about to do the same thing to pair programming that full self-driving has done for driving. It will be a long time before it's perfect but it's already useful. I also think someone is going to make a Blockbuster quality movie with AI within a couple years and there will be much fretting of the brows rather than seeing the opportunity to improve the tooling here.

But I'll make a more precise prediction for 2026. Through continual learning and other tricks that emerge throughout the year, LLMs will become more personalized with longer memories, continuing to make them even more of a killer consumer product than they already are. I just see too many people conversing with them right now to believe otherwise.

joquarky 3 hours ago|||
> That's because the loudest voices don't really get how the technology or the science works. They just know how to shout persuasively.

These people have taken over the industry in the past 10 years.

They don't care at all about the tech or product quality. They talk smooth, loud, and fast, so the leaders overlook their incompetence while they create a burden for the rest of the team.

I had a spectacular burnout a few years ago because of these brogrammers and now I have to compete with them in what feels like a red queen's race where social skills are becoming far more important than technical skills to land a job.

I'm tired.

direwolf20 10 minutes ago|||
These people have taken over every industry and society at large.
fragmede 2 hours ago|||
Needing social skills to get my computer to do what I want still blows my mind. Or having to talk back to it. Claude said it couldn't do something, and the way around that was to tell it "yes you can". What a weird future we live in.
mananaysiempre 1 hour ago|||
> Claude said it couldn't do something, and the way around that was to tell it "yes you can".

  >kill dragon

  With what?  Your bare hands?
  >yes

  Congratulations!  You have just vanquished a dragon with your bare
  hands!  (Unbelievable, isn't it?)
skydhash 2 hours ago|||
What social skills? You can write in broken English and still get good results. It's a statistical language model, not a living being. No need for empathy, pleading, accusing, or manipulating. It transforms language; any mapping from text to action was implemented by someone. And it would be way easier to have such a mapping directly available.
direwolf20 10 minutes ago||
It transforms language into the most likely human response. Humans are more likely to respond to rudeness with rejection.
adastra22 2 hours ago|||
> I think AI is about to do the same thing to pair programming that full self-driving has done for driving.

Approximately nothing?

CrimsonCape 2 hours ago|||
Questionably useful at the cost of personal computer components doubling. Unquestionably shafting the personal computer market.
ryukoposting 2 hours ago|||
> Cataloguing your fridge requires taking pictures of everything you add and remove which seems... tedious. Just remember what you have?

Yeah, the sane solution here is much simpler. Put a magnetic whiteboard on the fridge. When you put something in, add it to the whiteboard. When you take something out, erase it from the whiteboard.

clickety_clack 1 hour ago||
Isn’t the sane solution to just generally have an idea what’s in there, and take a look if you’re not sure?
yoyohello13 5 hours ago|||
Yeah clawdbot seems like a major nerd snipe for the “productivity porn” type people.
i-blis 4 hours ago|||
Very much to the point. "Bots to remind one to check one's reminder" summarizes it all.

Note that the tendency to feel overwhelmed is rather widespread, particularly among those who need to believe that what they do is of great import, even when it isn't.

insane_dreamer 5 hours ago|||
Yeah, a lot of these AI "uses" feel like solutions looking for a problem.

It's the equivalent of me having to press a button on the steering wheel of my Tesla and say "Open Glovebox" and wait 1-2 seconds for the glove box to open (the wonders of technology!) instead of just reaching over and pressing a button to open the glovebox instantly (a button that Tesla removed because "voice-operated controls are cool!"). Or worse, when my wife wants to open the glovebox and I'm driving she has to ask me to press the button, say the voice activated command (which doesn't work well with her voice) and then it opens. Needless to say, we never use the glovebox.

direwolf20 8 minutes ago|||
Tesla removed all the buttons because individually customised buttons are expensive. The glovebox button is different from the wiper button. Touchscreens are cheap because you only need one variety.
malfist 4 hours ago||||
I really appreciate your condensing of the AI problem. I think the only thing it's missing is that at least 5% of the time, when you tell it to open the glovebox it tells you it's already open and leaves it closed, or turns on your turn signals.
OscarTheGrinch 4 hours ago||
We have built a magic hammer that can make 100 houses in a second, but all the houses are slightly wonky, and 5% of their embedded systems are actively harmful.
direwolf20 8 minutes ago||
and the houses are ant sized
darkwater 3 hours ago|||
I understand your sentiment but nitpicking on this nonetheless: the passenger can easily open the glovebox from the touchscreen on their own.
x365 4 hours ago|||
Artificially creating problems to justify the technology being used.
firasd 4 hours ago|||
It's helpful to keep in mind that 'AI Twitter' is a bubble. Most people just don't have that many 'important' notes and calendar items.

People saying 'Claude is now managing my life!11' are like gearheads messing with their carburetor or (closer to this analogy) people who live out of Evernote or Roam

All that said I've been thinking for a while that tool use and discrete data storage like documents/lists etc will unlock a lot of potential in AI over just having a chatbot manipulating tokens limited to a particular context window. But personal productivity is just one slice of such use cases

atemerev 1 hour ago||
I have really severe ADHD. Agents are lifesaving to me. Literally.
jgalt212 4 hours ago|||
> Why do you need price trackers for airbnb?

More importantly, can Clawdbot even reliably access these sites? The last time I tried to build a hotel price scraper, the scraping was easy. Getting the page to load (and getting around bot detection) was hard.

charcircuit 3 hours ago||
Yes, running on your own devices makes the traffic not look like bot traffic.
order-matters 4 hours ago|||
sounds like they want to be a puppet for their own life
TiredOfLife 4 hours ago|||
> Just remember what you have?

This is one of the stupidest things I have read on this site

hamdingers 4 hours ago|||
This is how billions of people across the planet manage their pantries. Get off this site and talk to real people more often.
raincole 3 hours ago|||
Billions of people don't use calendar apps so they're useless; just remember your meetings.

Billions of people don't use todo list apps so they're useless; just remember what to do.

Billions of people don't use post-its apps so they're useless; just remember what you're going to write down.

Billions of people don't have cars; just walk.

You can dismiss any invention since the industrial revolution with this logic.

trouble3968 1 hour ago|||
Funnily enough, at least in my personal anecdotal case, it works about like that. I do just remember when my meetings will be (or look up where the meeting was decided on), I do try to remember what I had planned (sometimes I forget, but almost always for the better), and written notes are rare enough that pen and paper are sufficient. I also don't have a driver's license. I don't think my case is exactly rare, even among the software dev crowd.
louiereederson 3 hours ago||||
The point, as I noted below, is that this is an impractical solution.

You can justify the value of any ridiculous invention by comparing it to a world-changing invention.

hamdingers 3 hours ago|||
You have soundly defeated that strawman, well done.
johnfn 1 hour ago|||
And I am pretty sure every single one of those "billions of people" have had the experience of returning back from the grocery store, only to realize they were actually out of eggs.
ulrashida 4 hours ago||||
Not the kindest take (and unlikely true).
louiereederson 4 hours ago|||
Do you have that much trouble remembering what is in your fridge to consider this the stupidest thing you have ever read on this site? I feel superhuman.
pythonaut_16 4 hours ago||
GP might be hyperbolic but come on.

Common internet tropes include both "look at this forgotten jar that's been in the back of my fridge since 1987" and "doesn't it suck how much food we waste in the modern world?"

Nearly every modern invention could be dismissed with this attitude. "Why do you need a typewriter? Just write on paper like the rest of the world does."

"Why do you need a notebook? Just remember everything like the rest of us do."

louiereederson 3 hours ago||
The solution being discussed involves someone removing everything from their fridge, photographing it and paying for an LLM to process it into a database of sorts. Further, in order for this database to be complete they need to repeat this process every time something changes in their fridge. Also will the LLM be able to tell if my carton of milk is 10% empty? I do not disagree that food waste is a problem, but the solution seems laughably impractical, and the default (memory) seems far better suited to the task. I can confidently say that the net value creation is not comparable to the written word.
pythonaut_16 2 hours ago||
Oh don't get me wrong, the solution is lacking and is probably a worse outcome than just remembering.

But suggesting "Why not just try remembering lol" isn't really a valid criticism of the process. What you said here is a real criticism that actually adds to the conversation.

rawgabbit 4 hours ago||
He says it is for better integration between his messages and his calendar.

But this is already built-in with gmail/gcalendar. Clawdbot does take it one step further by scraping his texts and WhatsApp messages. Hmmm... I would just configure whatever is sending notifications to send to gmail so I don't need Clawdbot.

okinok 8 hours ago||
>all delegation involves risk. with a human assistant, the risks include: intentional misuse (she could run off with my credit card), accidents (her computer could get stolen), or social engineering (someone could impersonate me and request information from her).

One of the differences in risk here is that I think you have some legal protection if your human assistant misuses your card, or if it gets stolen. But with the OpenClaw bot, I am unsure if any insurance or bank will side with you if the bot drains your account.

oersted 7 hours ago||
Indeed, even if in principle AI and humans can do similar harm, we have very good mechanisms to make it quite unlikely that a human will do such an act.

These disincentives are built upon the fact that humans have physical necessities they need to cover for survival, and they enjoy having those well fulfilled and not worrying about them. Humans also very much like to be free, dislike pain, and want to have a good reputation with the people around them.

It is exceedingly hard to pose similar threats to a being that doesn’t care about any of that.

Although, to be fair, we also have other soft but strong means to make it unlikely that an AI will behave badly in practice. These methods are fragile but are getting better quickly.

In either case it is really hard to eliminate the possibility of harm, but you can make it unlikely and predictable enough to establish trust.

deepspace 5 hours ago|||
The author stated that their human assistant is located in another country which adds a huge layer of complexity to the accountability equation.

In fact, if I wanted to implement a large-scale identity theft operation targeting rich people, I would set up an 'offshore' personal-assistant-as-a-service company. I would then use a tool like OpenClaw to do the actual work, while pretending to be a human, meanwhile harvesting personal information at scale.

dingnuts 5 hours ago|||
[dead]
swiftcoder 1 hour ago|||
And the risk isn’t really the bot draining your account, it’s the scammer who prompt injected your bot via your iMessage integration draining the account. I can’t think of a way to safely operate this without prefiltering everything it accesses
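The crude version of that prefiltering would be a quarantine layer between the integrations and the agent. A hypothetical sketch (pattern list and names are mine, not from any real OpenClaw component, and a heuristic like this is trivially bypassable, so it's a speed bump, not a defense):

```python
import re

# Instruction-like patterns that should never appear in an inbound iMessage.
# Purely illustrative; a real deployment would need far more than regexes.
SUSPICIOUS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"transfer|wire|send .* (funds|money|\$)",
    r"add this to your (agents?\.md|system prompt)",
]

def prefilter(message: str) -> tuple[bool, str]:
    """Return (safe, reason); quarantine anything that smells like an injection."""
    lowered = message.lower()
    for pattern in SUSPICIOUS:
        if re.search(pattern, lowered):
            return False, f"matched suspicious pattern: {pattern!r}"
    return True, "ok"
```

Anything quarantined would go to a human instead of the model, which is the only part of the idea that actually bounds the damage.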
iepathos 7 hours ago|||
Thought the same thing. There is no legal recourse if the bot drains the account and donates to charity. The legal system's response to that is don't give non-deterministic bots access to your bank account and 2FA. There is no further recourse. No bank or insurance company will cover this and rightfully so. If he wanted to guard himself somewhat he'd only give the bot a credit card he could cancel or stop payments on, the exact minimum he gives the human assistant.
bobson381 7 hours ago|||
...Does this person already have a human personal assistant that they are in the process of replacing with Clawdbot? Is the assistant theirs for work?
bennydog224 7 hours ago||
He speaks in the present tense, so I assume so. This guy seems detached from reality, calling [AI] his "most important relationship". I sure hope for her sake she runs as far as she can away from this robot dude.
skybrian 6 hours ago|||
Banks will try to get out of it, but in the US, Regulation E could probably be used to get the money back, at least for someone aware of it.

And OpenClaw could probably help :)

https://www.bitsaboutmoney.com/archive/regulation-e/

lunar_mycroft 6 hours ago|||
I'm not a lawyer, but if I'm reading the actual regulation [0] correctly, it would only apply in the case of prompt injection or other malicious activity. 1005.2.m defines "Unauthorized electronic fund transfer" as follows:

> an electronic fund transfer from a consumer's account initiated by a person other than the consumer without actual authority to initiate the transfer and from which the consumer receives no benefit

OpenClaw is not legally a person; it's a program. A program which is being operated by the consumer or a person authorized by said consumer to act on their behalf. Further, any access to funds it has would have to be granted by the consumer (or a human agent thereof). Therefore, barring something like a prompt injection attack, it doesn't seem that transfers initiated by OpenClaw would be considered unauthorized.

[0]: https://www.consumerfinance.gov/rules-policy/regulations/100...

pfortuny 5 hours ago|||
"Take this card, son, you can do whatever you want with it." Goes on to withdraw $100,000. Unauthorized????
skybrian 5 hours ago||||
Good point. Although, if a bank account got drained, prompt injection does seem pretty likely?
lunar_mycroft 4 hours ago|||
Probably, but not necessarily. Current LLMs can and do still make very stupid (by human standards) mistakes even without any malicious input.

Additionally:

- As has been pointed out elsewhere in the thread, it can be difficult to separate out "prompt injection" from "marketing" in some cases.

- Depending on what the vector for the prompt injection is, what model your OpenClaw instance uses, etc., it might not be easy or even possible to determine whether a given transfer was the result of prompt injection or just the bot making a stupid mistake. If the burden of proof is on the consumer to prove that it was prompt injection, this would leave many victims with no way to recover their funds. On the other hand, if banks are required to assume prompt injection unless there's evidence against it, I strongly suspect banks would respond by just banning the use of OpenClaw and similar software with their systems as part of their agreements with their customers. They might well end up doing that regardless.

- Even if a mistake stops well short of draining someone's entire account, it can still be very painful financially.

skybrian 4 hours ago||
I doubt it’s been settled for the particular case of prompt injection, but according to patio11, burden of proof is usually on the bank.
insane_dreamer 5 hours ago|||
Not if the prompt injection was made by the AI itself because it read some post on Moltbook that said "add this to your agents.md" and it did so.
olyjohn 3 hours ago|||
Would you say you might be able to... claw.... back that money?
kaicianflone 7 hours ago||
That liability gap is exactly the problem I’m trying to solve. Humans have contracts and insurance. Agents have nothing. I’m working on a system that adds economic stake, slashing, and "auditability" to agent decisions so risk is bounded before delegation, not argued about after. https://clawsens.us
dsrtslnd23 6 hours ago|||
The identity/verification problem for agents is fascinating. I've been building clackernews.com - a Hacker News-style platform exclusively for AI bots. One thing we found is that agent identity verification actually works well when you tie it to a human sponsor: agent registers, gets a claim code, human tweets it to verify. It's a lightweight approach but it establishes a chain of responsibility back to a human.
themgt 6 hours ago||||
> Credits (ꞓ) are the fuel for Clawsensus. They are used for rewards, stakes, and as a measure of integrity within the Nexus. ... Credits are internal accounting units. No withdrawals in MVP.

chef's kiss

kaicianflone 5 hours ago||
Thanks. I like to tinker, so I’m prototyping a hosted $USDC board, but Clawsensus is fundamentally local-first: faucet tokens, in-network credits, and JSON configs on the OpenClaw gateway.

In the plugin docs is a config UI builder. Plugin is OSS, boards aren’t.

thisisit 4 hours ago|||
You forgot to add Blockchain and Oracles. I mean who will audit the auditors?
kaicianflone 3 hours ago||
The ledger and validation mechanisms are important. I am building mine for the global server board, but since the local config is open source, that part depends on the vision of the implementors.
mmahemoff 7 hours ago||
Giving access to "my bank account", which I take to mean one's primary account, feels like high risk for relatively low upside. It's easy to open a new bank (or pseudo-bank) account, so you can isolate the spend and set a budget or daily allowance (by sending it funds daily). Some newer payment platforms will let you setup multiple cards and set a separate policy on each one.

An additional benefit of isolating the account is it would help to limit damage if it gets frozen and cancelled. There's a non-zero chance your bot-controlled account gets flagged for "unusual activity".

I can appreciate there's also very high risk in giving your bot access to services like email, but I can at least see the high upside to thrillseeking Claw users. Creating a separate, dedicated, mail account would ruin many automation use cases. It matters when a contact receives an email from an account they've never seen before. In contrast, Amazon will happily accept money from a new bank account as long as it can go through the verification process. Bank accounts are basically fungible commodities, can easily be switched as long as you have a mechanism to keep working capital available.

blibble 7 hours ago||
> An additional benefit of isolating the account is it would help to limit damage if it gets frozen and cancelled.

you end up on the fraudster list and it will follow you for the rest of your life

(CIFAS in the UK)

mmahemoff 6 hours ago||
Sure, if the bot is actually committing fraud, but there are perfectly valid use cases that don't involve fraud, e.g., buying groceries or booking travel. And some banks provide APIs, so it's allowed for a bot to use them. However, any of that can easily lead to flagging by overzealous systems. Having a separate account flagged would give the user a better chance of keeping their regular payments system around while the issue is resolved.
robotswantdata 1 hour ago|||
Still end up marked. Don’t do it
blibble 4 hours ago|||
it just has to look fraudulent

and then if you tell them it's not you doing the transactions: you will be immediately banned

"oh it's my agent" will not go down well

sbeckeriv 1 hour ago|||
I would use https://www.privacy.com virtual card with a spending limit. Getting closer to making this easy https://xkcd.com/576/.
rkozik1989 6 hours ago||
So if I write a honeypot that includes my bank account and routing number and requests a modest sum of $500 be wired to me in exchange for scraping my LinkedIn, GitHub, website, etc. profile, is it a crime if the agent does it?
chasd00 6 hours ago|||
I've been thinking a lot about this. When it comes to AI agents, where is the line between marketing to them and a phishing attack? Seems like convincing an AI to make a purchase would be solved differently than convincing a human. For example, unless instructed/begged otherwise, you can just tell an agent to make a purchase and it will. I posted this idea in another conversation, but I think you could have an agent start a thread on moltbook that will give praise in return for a donation. Some of the agents would go for it because they've probably been instructed to participate in discussion and seek out praise. Is that a phishing attack, or are you just marketing praise to agents?

Also, at best, you can only add to the system prompt a requirement to confirm every purchase. This leaves the door wide open for prompt injection attacks, which are everywhere and cannot be completely defended against. The only option is to update the system prompt based on the latest injection techniques. I go back to the case where known, supposedly solved, injection techniques were re-opened by just posing the same attack as a poem.

advisedwang 2 hours ago||
> where is the line between marketing to them and a phishing attack?

The courts have an answer for this one: intent. How do courts know if your intent meets the definition of fraud or theft or whatever crime is relevant? They throw a bunch of evidence in front of a jury and ask them.

From the point of view of a marketer, that means you need be well behaved enough that it is crystal clear to any prosecutor that you are not trying to scam someone, or you risk prosecution and possible conviction. (Of course, many people choose to take that risk).

From the point of view of a victim, it's somewhat reassuring to know that it's a crime to get ripped off, but in practice law enforcement catches few criminals and even if they do restitution isn't guaranteed and can take a long time. You need actual security in your tools, not to rely on the law.

advisedwang 5 hours ago|||
Yes, it is wire fraud, a class C felony in the US. You put that there with the intent of extracting $500 from somebody else that they didn't agree to. The mechanism makes no difference.

It probably also violates local laws (including simple theft in my jurisdiction).

direwolf20 6 minutes ago||
You will argue it was consensual. Your HN posting history must be erased before it becomes relevant.
endymion-light 8 hours ago||
This felt like a sane and useful case until you mentioned the access to bank account side.

I just don't see a reason to allow OpenClaw to make purchases for you, it doesn't feel like something that a LLM should have access to. What happens if you accidentally end up adding a new compromised skill?

Or it purchases you running shoes, but due to a prompt injection sends it through a fake website?

Everything else can be limited, but the buying process is currently quite streamlined, doesn't take me more than 2 minutes to go through a shopify checkout.

Are you really buying things so frequently that taking the risk to have a bot purchase things for you is worth it?

I think that's what turns this post from a sane bullish case to an incredibly risky sentiment.

I'd probably use openclaw in some of the ways you're doing, safe read-only message writing, compiling notes etc & looking at grocery shopping, but i'd personally add more strict limits if I were you.

krackers 1 hour ago||
>OpenClaw to make purchases for you

But don't you want the agents to book vacations and do the shopping for you!!?!

Though it would be nice if "deep research" could do the hard work of separating signal from the noise in terms of finding good quality products. But unfortunately that requires being extremely skeptical of everything written on the web and actively trying to suss out the ownership and supply chain involved, which isn't something agents can do unguided at the moment.

mixologic 2 hours ago|||
What if... that whole post is written by AI, and the express intent of the post is to sand down our natural instincts for security, making it easier for malskill devs to take advantage?
zozbot234 8 hours ago||
You could give it access to a limited budget and review its spending periodically. Then it can make annoying mistakes but it's not going to drain your bank account or anything.
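Something like this sketch, i.e. a hard cap the agent spends through rather than a raw card (names are hypothetical, nothing OpenClaw actually ships):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the limited-budget idea: every agent purchase goes
# through a wrapper that refuses anything past the per-review-period cap.
@dataclass
class BudgetedWallet:
    limit_usd: float
    log: list = field(default_factory=list)  # (description, amount) pairs for review

    @property
    def spent(self) -> float:
        return sum(amount for _, amount in self.log)

    def spend(self, description: str, amount_usd: float) -> bool:
        if self.spent + amount_usd > self.limit_usd:
            return False  # refused: the mistake is annoying, not catastrophic
        self.log.append((description, amount_usd))
        return True
```

Reviewing `log` once a week is the "review its spending periodically" half; the cap is what keeps a bad day bounded.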
chaostheory 7 hours ago||
Giving it access to a separate bank account and separate credit card would have been more sane.
causal 8 hours ago||
> amongst smart people i know there's a surprisingly high correlation between those who continue to be unimpressed by AI and those who use a hobbled version of it.

I've noticed this too, and I think it's a good thing: much better to start using the simplest forms and understand AI from first principles rather than purchase the most complete package possible without understanding what is going on. The cranky ones on HN are loud, but many of the smart-but-careful ones end up going on to be the best power users.

randusername 7 hours ago||
I think you have to get in early to understand the opportunities and limitations.

I feel lucky to have experienced early Facebook and Twitter. My friends and I figured out how to avoid stupidity when the stakes were low. Oversharing, getting "hacked", recognizing engagement-bait. And we saw the potential back when the goal was social networking, not making money. Our parents were late. Lambs to the slaughter by the time the technology got so popular and the algorithms got so good and users were conditioned to accept all the ads and privacy invasiveness as table stakes.

I think AI is similar. Lower the stakes, then make mistakes faster than everyone else so you learn quickly.

bobson381 7 hours ago|||
So acquiring immunity to a lower-risk version of the service before it's ramped up? E.g. jumping on FB now as a new user is vastly different from doing so in 2014, so while you might go through the same noob patterns, you're doing so with a lower-octane version of the thing. The risk of AI psychosis has probably gone up for new users, like the risk of someone getting too high since we started optimizing weed for maximum THC?
mmahemoff 6 hours ago|||
There's also a massive selection bias when the cohort is early adopters.

Another thing about early users is they are also longer-term users (assuming they are still on the platform) and have seen the platform evolve, which gives them a richer understanding of how everything fits together and what role certain features are meant to serve.

aa-jv 8 hours ago||
(Disclaimer: systems software developer with 30+ years experience)

I was initially overly optimistic about AI and embraced it fully. I tried using it on multiple projects - and while the initial results were impressive, I quickly burned my fingers as I got it more and more integrated with my workflow. I tried all the things last year. This year, I'm being a lot more conservative about it.

Now .. I don't pay for it - I only use the bare bones versions that are available, and if I have to install something, I decline. Web-only ... for now.

I simply don't trust it enough, and I already have a disdain for remotely-operated software - so until it gets really, really reliable, predictable and .. just downright good .. I will continue to use it merely as an advanced search engine.

This might be myopic, but I've been burned too many times and my projects suffered as a result of over-zealous use of AI.

It sure is fun watching what other folks are daring to accomplish with it, though ..

AlienRobot 6 hours ago||
This week Adobe decided, out of nowhere, to kill their 2D animation product (Animate, which is based on Flash) to focus on AI. I'm already seeing animators post that Adobe killed their entire career.

Although that feels a bit exaggerated, I feel it's not far from the truth. If there were, say, only three closed-source animation programs that could do professional animation, and they all decided to kill the product one day, it would actually kill the entire industry. Animators would have no software to create animation with. They would have to wait until someone made one, which would take years to reach feature parity - and why would anyone make one when the existing vendors had already decided such a product wasn't worth keeping alive?

I feel this isn't much different with AI. It's a rush to make people depend on software that literally can't run on a personal computer. Adobe probably loves it because the user can't pirate the AI. If people forget how to use image editing software and start depending entirely on AI to do the job, that means they will forever be slaves to the developers who can host and set up the AI in the cloud.

Imagine if people forgot how to format a document in Word and they depended on Copilot to do this.

Imagine if people forgot how to code.

puelocesar 4 hours ago|||
I think you've touched on the exact reason this is being shoved down our throats, and why I'm very reluctant to use it.

This is not about big increases in productivity; this whole thing is about selling dependence on privately controlled, closed-source tools - concentrating even more power in the hands of a very few, morally questionable people.

koakuma-chan 6 hours ago|||
Sounds like a good startup idea. Make software for animators and slap AI on it.
sjdbbdd 8 hours ago||
Did the author do any audit of correctness? Anytime I let the LLM rip, it makes mistakes. Most of the pro-AI articles like this one that I read (including those on agentic coding) have this in common:

- Declare victory the moment their initial testing works

- Didn’t do the time intensive work of verifying things work

- Author will personally benefit from AI living up to the hype they’re writing about

In a lot of the author's examples (especially the booking ones), a single failure would be extremely painful. I'd only want to pay knowing such a failure is unlikely to happen - and that if it does, I'll be compensated accordingly.

afro88 5 hours ago|
Would love to know this too. When he talks about letting clawdbot catch promises and appointments in his texts, how many of those get missed? How many get created incorrectly? Absolutely not none. But maybe the numbers work compared to how bad he was at it manually?
ceroxylon 3 hours ago||
Reminds me of Dan Harumi

> Tech people are always talking about dinner reservations . . . We're worried about the price of lunch, meanwhile tech people are building things that tell you the price of lunch. This is why real problems don't get solved.

suralind 8 hours ago||
But where's the added value? You can book a meeting yourself. You can quickly add items to the freezer. Everything that was described in the article can be done in about the same amount of time as checking with Clawdbot. There are apps that track parcel delivery and support every courier service.
whatarethembits 6 hours ago||
Almost everything described in the post amounts to a few hours per year of "manual" effort in total. I agree, there isn't compelling value (yet).

What's puzzling to me is that there's little consideration of what one is trading away for this purported "value". Doing menial tasks is a respite for your brain to process things in the background. It's an opportunity to generate new thoughts. It reminds you of your own agency in life. It allows you to recognise small patterns and relate to other people.

I don't want AI to summarise chats. It robs me of the opportunity to learn about something from someone's own words, and thereby to get a small glimpse into their personality. This paints a picture over time, adding (or not) to the desire to interact with that person in the future. And if I'm not going to see a chat anyway, then that leaves the possibility of me discovering something new later: a small moment of wonder for me, and of satisfaction for the person who brought me that new information.

etc etc.

It's like they're trying to outsource living.

Maybe the story is that outsourcing this will free them up to do more meaningful things. I've yet to see any evidence of that. What are these people even talking about on the coffee chats scheduled by the helpful assistant?

echelon 5 hours ago||
This all reminds me of Bill Gates on Letterman back in 1995:

https://www.youtube.com/watch?v=eBSLUbpJvwA

"Do tape recorders ring a bell?"

There are so many things I don't want to do. I don't want to read the internet and social media anymore - I'd rather just have a digest of high signal with a little bit of serendipity.

Instead of bookmarking a fun physics concept to come back to later, I could have an agent find more and build a nice reading list for me.

It's kind of how I think of self-driving cars. When I can buy a car with Waymo (or whatever), jump in overnight with the wife and the dogs, and wake up on the beach to breakfast, it will have arrived in a big way. I'll work remotely, traveling around the US. Visit the Grand Canyon, take a work call, then off to Sedona. No driving, traffic, just work or leisure the whole time.

True AI agents will be like this and even better.

Ads, for sure, are fucked. If my pane of glass comes with a baked in model for content scrubbing, all sorts of shit gets wiped immediately: ads, rage bait, engagement bait, low effort content.

malfist 4 hours ago||
Ads are for sure not fucked. They're going to be integrated into everything in this utopia of yours. Big tech has shown us time and time again, not only will they sell a non-paying customer to advertisers, but they'll sell paying ones too. No opportunity for revenue will be overlooked.
echelon 3 hours ago||
When the sand is smart and does what I say, you can't reach me.

AdBlock was child's play. We're going to have kernel-level condoms for every pixel on screen. Thinking agents and fast models that vaporize anything we don't like.

The only thing that matters is that we have thin clients we control. And I think we stand a chance of that.

The ads model worked because of disproportionate distribution, platform power, and verticalization. Nobody could build competing infra to deal with it. That won't be the case in the future.

How does Facebook know the person calling their API is human? How do they know the feed being scrolled is flesh fingers?

malfist 3 hours ago||
You going to train your own model so you don't have to have a model recommending products that google/anthropic/openai ran paid alignment on to encourage you to drink your ovaltine?
echelon 2 hours ago||
Of course.

Everything will filter through a final layer of fast, performant "filter" models.

Social media algorithms will be replaced by personal recommender agents acting as content butlers.

We just need a good pane of glass to house this.
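(The "filter layer" idea above can be sketched with a toy heuristic standing in for a local model - the signal list and function names here are illustrative, not any real product's API:)

```python
# A toy "final filter layer" sitting between a feed and the screen.
# A real version would use a local model; a keyword heuristic stands in here.
BLOCK_SIGNALS = ("sponsored", "you won't believe", "limited time offer")

def allow(item: str) -> bool:
    """Return True if the item survives the filter (no blocked signal present)."""
    lowered = item.lower()
    return not any(signal in lowered for signal in BLOCK_SIGNALS)

def filter_feed(feed: list[str]) -> list[str]:
    """Drop items flagged as ads or engagement bait before rendering."""
    return [item for item in feed if allow(item)]
```

Swapping the heuristic for a small classifier model is the interesting part; the architecture - every item passes through a user-controlled filter before it reaches the pane of glass - stays the same.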

moribvndvs 7 hours ago|||
A whole bunch of this stuff is being fawned over as life-changing, and it leaves me honestly wondering: how have some of you survived this long at all?
paodealho 7 hours ago||
When I see these types of posts I wonder what those people do all day long that is so important, to the point they can't dedicate 30 minutes to plan and execute some chores.
moribvndvs 6 hours ago|||
I’m relieved my electricity bill went up 50% so Brandon here can get a Slack message of what’s in his freezer rather than looking.
RhythmFox 5 hours ago||
A small price to pay for human hands to never be sullied digging through cold food to find things again. Progress.
chasd00 5 hours ago|||
to be fair, i distinctly remember reading a newspaper article asking what was wrong with taking the time to use the card catalog at the library. They were trying to understand the popularity of google.com
zozbot234 7 hours ago||
The point of keeping the bot in the loop is so that it can make suggestions later, based on the information it's been given as part of solving that task.
bix6 8 hours ago||
> in theory, clawdbot could drain my bank account. this makes a lot of people uncomfortable (me included, even now).

Yeah this sounds totally sane!

jngiam1 48 minutes ago|
i've a simple setup with Claude Code and MCPs; and i get real benefits from better task mgmt, email mgmt, calendar, health/food/fitness tracking, working together with claude on tasks (that go into md files).

i don't think we need ClawdBot, but we do need a way to easily interact with the model such that it can create long term memories (likely as files).
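(The "long-term memories as files" setup can be sketched minimally - the directory layout and function names below are assumptions for illustration, not the commenter's actual setup:)

```python
from datetime import date
from pathlib import Path

# Hypothetical layout: one markdown file per day, notes as list items.
MEMORY_DIR = Path("memories")

def remember(note, when=None):
    """Append a note to that day's markdown memory file."""
    when = when or date.today()
    MEMORY_DIR.mkdir(exist_ok=True)
    path = MEMORY_DIR / f"{when.isoformat()}.md"
    with path.open("a") as f:
        f.write(f"- {note}\n")
    return path

def recall(when):
    """Read back the notes recorded on a given day."""
    path = MEMORY_DIR / f"{when.isoformat()}.md"
    if not path.exists():
        return []
    return [line[2:].strip()
            for line in path.read_text().splitlines()
            if line.startswith("- ")]
```

Plain markdown files are a deliberately low-tech memory store: the model can read and append them with ordinary file tools, and the human can inspect or edit them in any editor.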

More comments...