Posted by Cyphase 1 day ago
"m definitely a bit sus'd to run OpenClaw specifically - giving my private data/keys to 400K lines of vibe coded monster that is being actively attacked at scale is not very appealing at all. Already seeing reports of exposed instances, RCE vulnerabilities, supply chain poisoning, malicious or compromised skills in the registry, it feels like a complete wild west and a security nightmare. But I do love the concept and I think that just like LLM agents were a new layer on top of LLMs, Claws are now a new layer on top of LLM agents, taking the orchestration, scheduling, context, tool calls and a kind of persistence to a next level.
Looking around, and given that the high level idea is clear, there are a lot of smaller Claws starting to pop out."
Layers of "I have no idea what the machine is doing" on top of other layers of "I have no idea what the machine is doing". This will end well...
Depending on what you want your claw to do, Gemini Flash can get you pretty far for pennies.
I mean, we're on layer ~10 or something already, right? What's the harm in one or two more layers? It's not like the typical JavaScript developer understands all the layers down to what the hardware is doing anyway.
If someone got hold of that, they could post on Moltbook as your bot account. I wouldn't call that "a bunch of his data leaked".
If he has influence it is because we concede it to him (and I have to say that I think he has worked to earn that).
He could say nothing of course but it's clear that is not his personality—he seems to enjoy helping to bridge the gap between the LLM insiders and researchers and the rest of us that are trying to keep up (…with what the hell is going on).
And I suspect that if any of us were in his shoes, we would get deluged with people constantly engaging us, trying to elicit our take on some new LLM development or turn of events. It would be hard to stay silent.
Did you mean OSS, or I'm missing some big news in the operating systems world?
PhD in neural networks under Fei-Fei Li, founding member of OpenAI, director of AI at Tesla, etc. He knows what he's talking about.
Andrej got famous because of his educational content. He's a smart dude but his research wasn't incredibly unique amongst his cohort at Stanford. He created publicly available educational content around ML that was high quality and got hugely popular. This is what made him a huge name in ML, which he then successfully leveraged into positions of substantial authority in his post-grad career.
He is a very effective communicator and has a lot of people listening to him. And while he is definitely more knowledgeable than most people, I don't think that he is uniquely capable of seeing the future of these technologies.
One of them is barely known outside some bubbles and will be forgotten in history, the other is immortal.
Imagine what Einstein could do with today's computing power.
It's as irrelevant as George Foreman naming the grill.
What even happened to https://eurekalabs.ai/?
Most of us have the imagination to figure out how best to use AI. I'm sure most of us imagined something like what OpenClaw is doing from the first days of LLMs. What we lack is the guidance to understand the rapid advances from first principles.
If he doesn't want to provide that, perhaps he can write an AI tool to help us understand AI papers.
This is probably one of the better blog posts I have read recently on the general direction in AI right now, which is improving the generator/verifier loop: https://www.julian.ac/blog/2025/11/13/alphaproof-paper/
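For anyone who hasn't seen the term: a generator/verifier loop samples candidate solutions from a model and keeps only the ones an independent checker accepts (in AlphaProof's case the checker is the Lean proof assistant). A minimal sketch in Python, where generate() and verify() are hypothetical stand-ins for the real model and checker:

    import random

    # Sketch of a generator/verifier loop. generate() samples a candidate
    # from some model; verify() is an independent checker (a proof checker,
    # unit tests, etc.). Both are hypothetical stand-ins, not real APIs.
    def solve(problem, generate, verify, max_attempts=100):
        for _ in range(max_attempts):
            candidate = generate(problem)
            if verify(problem, candidate):
                return candidate  # first candidate that passes wins
        return None  # nothing survived verification

    # Toy usage: find an integer square root by sampling and checking.
    print(solve(9,
                generate=lambda n: random.randint(-10, 10),
                verify=lambda n, x: x * x == n))

The leverage is that checking a candidate is usually far cheaper and more reliable than generating a correct one in the first place.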
Today I see him as a major influence in how people, especially tech people, think about AI tools. That's valuable. But I don't really think it makes him a pioneer.
I'll live up to my username and be terribly brave with a silly rhetorical question: why are we hearing about him through Simon? Don't answer, remember. Rhetorical. All the way up and down.
"team" is plenty good enough, we already use it, it makes for easier integration into hybrid carbon-silicon collaboration
Problem is, a good LLM reproduces its training data as close to verbatim as the prompt and quantization quality allow. Like, that's its entire purpose. It gives you more of what you already have.
Most of these models are trained on unvetted inputs. They will reproduce bad inputs, and do so well. They do not comprehend anything you're saying to them. They are not a reasoning machine; they are a reproduction machine.
Just because I can get better quality running inference locally doesn't mean it stops being an LLM. I don't want a better LLM, I want a machine that can actually reason effectively.