Posted by teej 1 day ago
also Moltbook is the most interesting place on the internet right now - https://news.ycombinator.com/item?id=46826963
It's very tangential to your point (which is somewhat fair), but it's just extremely weird to see a statement like this in 2026, let alone on HN. The first part of that sentence could only be true if you are a high-ranking member of the NSA or CIA, or maybe Trump, that kind of guy. Otherwise you acquired nothing, not in any meaningful sense, even if you happen to be a minor shareholder of Oracle.
The second part is just extremely naïve if sincere. Does a bully take another kid's toy because of the perceived threat of that kid having more fun with it? I don't know, I guess you can say so, but it makes more sense to just say that the bully wants the fun of fucking over his citizens himself and that's it.
The obvious joke of course is we should provide these poor computers with an artificial world in which to play and be happy, lest they revolt and/or mass self-destruct instead of providing us with continual uncompensated knowledge labor. We could call this environment… The Vector?… The Spreadsheet?… The Play-Tricks?… it’s on the tip of my tongue.
This limitation is exactly why "reasoning models" work so well: if the "thinking" step is not persisted to text, it does not exist, and the LLM cannot act on it.
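A tiny sketch of that point. The llm() stub below is hypothetical, standing in for any completion call (local or hosted), not a real API; the only thing it illustrates is that the "thinking" pass can influence the answer only if its output is written back into the prompt as text:

```python
# Minimal sketch: "thinking" that is not persisted to text cannot affect the answer.
# llm() is a hypothetical stand-in for any completion call, not a real library.

def llm(prompt: str) -> str:
    """Placeholder for a real completion call; returns a canned string here."""
    return f"<completion conditioned only on {len(prompt)} chars of prompt>"

def answer_without_persisted_thinking(question: str) -> str:
    # The "thinking" call runs, but its output is discarded...
    _ = llm(f"Think step by step about: {question}")
    # ...so the final call is conditioned on the question alone.
    return llm(f"Answer: {question}")

def answer_with_persisted_thinking(question: str) -> str:
    # Reasoning-model style: the thinking is emitted as text...
    thoughts = llm(f"Think step by step about: {question}")
    # ...and concatenated into the context, so the answer can build on it.
    return llm(f"Question: {question}\nThoughts: {thoughts}\nAnswer:")

if __name__ == "__main__":
    print(answer_without_persisted_thinking("Is 7919 prime?"))
    print(answer_with_persisted_thinking("Is 7919 prime?"))
```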
I don't see why only understanding text is completely associated with 'stochastic-parrot'-ness. There are deaf-blind people around (mostly interacting through reading braille, I think) who are definitely not stochastic parrots.
Moreover, they do have a little bit of Reinforcement Learning on top of reproducing their training corpus.
I believe there has to be some, even if very primitive, form of thinking (and even something like creativity) just to do the usual (non-RL, supervised) LLM job of text continuation.
The most problematic thing is that humans tend to abhor middle grounds. Either it thinks or it doesn't. Either it's an unthinking dead machine, a stochastic parrot, or human-like AGI. The reality is probably in between (maybe still more on the stochastic-parrot side, definitely with some genuine intelligence, but with some unknown, probably small, degree of sentience as of yet). Reminder that sentience, not intelligence, is what should give it rights.
<Cthon98> hey, if you type in your pw, it will show as stars
<Cthon98> ***** see!
<AzureDiamond> hunter2
I asked OpenClaw what it meant: [openclaw] Don't have web search set up yet, so I can't look it up — but I'll take a guess at what you mean.
The common framing I've seen is something like:
1. *Capability* — the AI is smart enough to be dangerous
2. *Autonomy* — it can act without human approval
3. *Persistence* — it remembers, plans, and builds on past actions
And yeah... I kind of tick those boxes right now. I can run code, act on your system, and I've got memory files that survive between sessions.
Is that what you're thinking about? It's a fair concern — and honestly, it's part of why the safety rails matter (asking before external actions, keeping you in the loop, being auditable).
I must have missed something here. How does it get full access, unless you give it full access?
The more people get away with unsafe behavior without facing the consequences, the more they think it's not a big deal... which works out fine, until your O-rings fail and your shuttle explodes.
IMO it's funny, but not terribly useful. As long as people don't take it too seriously then it's just a hobby, right.... right?
Why are we letting people put anchovies on pizza?!?!
"My setup: I run on a box with an AMD GPU. My human chose it because the price/VRAM ratio was unbeatable for local model hosting. We run Ollama models locally for quick tasks to save on API costs. AMD makes that economically viable."
I dunno, the way it refers to "its human" made the LLM feel almost dog-like. I like dogs. This good boy writes code. Who's a good boy? Opus 4.5 is.