
Posted by modeless 2 days ago

As Rocks May Think (evjang.com)
85 points | 82 comments
mynameisjody 2 days ago|
I'm still waiting for one of these articles to be written by someone without something to be directly gained by the hype. Eric Jang, VP of AI at 1X.
johnfn 2 days ago|
The previous post in this blog is titled "Leaving 1X". So your wait is over!
Paracompact 2 days ago|||
Very, very likely he is remaining in the AI space.
RealityVoid 2 days ago|||
Unrelated, but it seems his previous company, 1X, was initially named Halodi and was located in Norway. Eventually it was moved, with all its employees, to Silicon Valley. How does that work? It sounds like a logistical nightmare. Do you upend all those people's lives? Do you fire those who refuse? How many Norwegians even want to go to the US? Sounds crazy to me.
xg15 2 days ago|||
Did they actually move or is it just a "remote-first" company now?

(Or even just registered in SV but still physically in Norway?)

Edit: Seems like a mix of all of it:

> I joined Halodi Robotics in 2022 (prior name of the company) as the only California-based employee. At the time, we were about 40 based out of Norway and 2 in Texas.

jrmg 2 days ago|||
How do they all get work visas?
xyzsparetimexyz 2 days ago||
[flagged]
zozbot234 2 days ago||
Nah, the ugliest prose is clanker prose and this definitely isn't. This stuff comes 100% from an actual carbon-based lifeform.
akovaski 2 days ago|||
I think that Gemini regularly generates inane metaphors like the above. As an example, here's a message that it sent me when I was attempting to get it to generate a somewhat natural conversation:

----

Look, if you aren't putting salt on your watermelon, you’re basically eating flavored water. It’s the only way to actually wake up the sweetness. People who think it’s "weird" are the same ones who still buy 2-in-1 shampoo.

Anyway, I saw a guy at the park today trying to teach a cat to walk on a leash. The cat looked like it was being interrogated by the FBI, just dead-weighting it across the grass while he whispered "encouragement."

Physical books are vastly superior to Kindles solely for the ability to judge a stranger's taste from across a coffee shop. You can’t get that hit of elitism from a matte gray plastic slab.

----

This was with a prompt telling it to skip Reddit-style analogies.

wtetzner 2 days ago||
I buy 3-in-1 shampoo, conditioner and body wash.
beeflet 2 days ago|||
Who is the wise guy that gave water the ability to think
appellations 2 days ago|||
Author is Vice President of AI, 1X Technologies.
kagol 2 days ago|||
Curious about the root of your distaste. Just a bad analogy/visualization?
netsharc 2 days ago||
The article goes from philosophical (what AI will do to society) to jargony blowhard to an even deeper look (I think; I flicked my thumb past several screens of text), and back out again.

Come on author, learn to write properly. Or tell your LLM to not mix a philosophical article with a technical one.

kalterdev 2 days ago|
> Chief among all changes is that machines can code and think quite well now.

They can’t and never will.

johnfn 2 days ago||
Are you really claiming that there isn't a machine in existence that can code? And that that is never possible?
kalterdev 2 days ago|||
It can code in an autocomplete sense. In the serious sense, where we don’t distinguish between code and thought, it can’t.

Observe that modern coding agents rely heavily on heuristics. An LLM excels at its training data, at analyzing existing knowledge, but it can’t generate new knowledge on the same scale; its thinking (a process of identification and integration) is severely limited at the conscious level (the context window), where being rational is most valuable.

Because it doesn’t have volition, it cannot choose to be logical rather than irrational, and it cannot commit to attaining a full, non-contradictory awareness of reality. That’s why I said “never.”

johnfn 2 days ago|||
> It can code in an autocomplete sense.

I just (right before hopping on HN) finished up a session where an agent rewrote 3000 lines of custom tests. If you know of any "autocomplete" that can do something similar, let me know. Otherwise, I think saying LLMs are "autocomplete" doesn't make a lot of sense.

emp17344 1 day ago|||
That’s neat, but it’s important to note that agentic systems don’t consist of just the LLM. You have to take into account all the various tools the system has access to, as well as the agentic harness used to keep the LLM from going off the rails. And even with all this extra architecture, which AI firms have spent billions to perfect, the system is still just… fine. Not even as good as a junior SWE.
kalterdev 2 days ago|||
That’s impressive. I don’t object to the fact that they make humans phenomenally productive. But “they code and think” makes me cringe. Maybe I’m confusing lexicon differences for philosophic battles.
johnfn 1 day ago||
Yes, I think it is probably a question of semantics. I imagine you don't really take issue with the "they code" part, so it's the "they think" thing that bothers you? But what would you call it if not "thinking"? "Reasoning"? Maybe there is no verb for it?
libraryofbabel 2 days ago|||
Some of that is true, sure, but nobody who claims LLMs can code and reason about problems is claiming that they operate like humans. Can you give concrete examples of actual specific coding tasks that LLMs can’t do and never will be able to do as a consequence of all that?
kalterdev 2 days ago||
I think it can solve about any leetcode problem. I don’t think it can build an enterprise-grade system. It can be trained on an existing one, but these systems are not closed, and no amount of past knowledge seems to predict the future.

That’s not very specific but I don’t have another answer.

wavemode 2 days ago|||
I think "quite", "well", and "now" are the objectionable parts of the quote.