
Posted by blenderob 22 hours ago

Three Inverse Laws of AI (susam.net)
471 points | 319 comments
stickfigure 20 hours ago
Humans will anthropomorphize a rock if you put a pair of googly eyes on it. The first item is a completely lost cause. The rest is good though.
musebox35 20 hours ago
Debating how not to use AI will not get anyone anywhere, since negative framing almost never works with humans (it also does not work with LLMs). Let's concentrate on how to build closed-loop systems that verify LLM output, how to manage context, and how to build failsafes around agentic systems; then, and only then, might we start to make progress.
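
To make that concrete, here is a minimal sketch of such a closed loop in Python. The generate() and verify() helpers are hypothetical stand-ins for a model call and a domain-specific check (a schema validator, a test run, a linter); the point is the shape of the loop, not any particular API.

    import json

    def generate(prompt: str) -> str:
        # Hypothetical stand-in for an LLM call; returns canned output here.
        return '{"answer": 42}'

    def verify(output: str) -> tuple[bool, str]:
        # Hypothetical domain check: here, "output must be valid JSON".
        try:
            json.loads(output)
            return True, ""
        except ValueError as e:
            return False, str(e)

    def closed_loop(prompt: str, max_attempts: int = 3) -> str:
        # Generate, verify, and feed failures back until the check passes.
        feedback = ""
        for _ in range(max_attempts):
            full_prompt = prompt if not feedback else prompt + "\n\nFix this error: " + feedback
            output = generate(full_prompt)
            ok, feedback = verify(output)
            if ok:
                return output
        raise RuntimeError("output failed verification: " + feedback)

    print(closed_loop("Return the answer as JSON."))
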
tikimcfee 18 hours ago
This is what I came up with last year in reference to "Uncle Bob's Programmer's Oath". I decided to memorialize it. I think it's very much a cleaned-up version of what OP shared:

https://ivanlugo.dev/oath

dubovskiyIM 17 hours ago
These laws work only if there is a human in the loop. When the consumer is an AI agent and it is autonomous, the rules break down. The agent reads the output and decides what to do by itself. I won't explain how the rules break down; it is obvious. I only want to say that these rules should be structural, not behavioral. An agent layer (or something else) should declare what is allowed and what is not.
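
A minimal sketch of what "structural, not behavioral" could mean in practice, in Python; the action names and the policy shape are illustrative assumptions, not any existing framework:

    # The agent layer declares up front what is allowed. The check is
    # enforced in code, outside the model, so the agent cannot talk its
    # way around it the way a prompted (behavioral) rule can be ignored.
    ALLOWED_ACTIONS = {"read_file", "run_tests"}

    def execute(action: str, handler, *args):
        # Structural guardrail: refuse anything not explicitly declared.
        if action not in ALLOWED_ACTIONS:
            raise PermissionError(f"action {action!r} is not declared as allowed")
        return handler(*args)

    print(execute("run_tests", lambda: "tests passed"))  # allowed
    # execute("delete_repo", lambda: None)  # undeclared -> PermissionError
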
ryanisnan 19 hours ago
> I wish that each such generative AI service came with a brief but conspicuous warning

This would get ignored so fast - I have no confidence this is a meaningful strategy.

greyman 20 hours ago
What if I WANT to anthropomorphize the AI agents I work with?
jimbokun 18 hours ago
If you anthropomorphize it as a world-class bullshitter whose every utterance you have to check... you'll probably be fine.
greyman 1 hour ago
I need to do that anyway, but I can still treat the agent I work with as a human. I've done that for months and haven't encountered any big problem with the approach.
kokojambo 20 hours ago
Great article. Fully agree. AI is not something that can hold responsibility; a human overseer is always required, and these overseers are to be held accountable. Note, however, that overseers are also highly prone to blame AI when mistakes occur in order to avoid judgement and punishment. When a person says "AI did this/that", always wonder who guided that AI, how, and whether proper supervision was given.
sputknick 21 hours ago
I'm surprised by how quickly I stopped anthropomorphizing AI. I can remember having dorm-room pseudo-intellectual debates in college about AI being alive and AI being "conscious". Then, once we had AI that could pass the Turing Test and I knew how it was architected, any thought of it being alive or conscious went right out the window.
ArchieScrivener 21 hours ago
What if we aren't building an independent consciousness, but a new type of symbiosis? One that relies on our input as experience, which provides a gateway to a new plane of consciousness?

OP takes a very bland, tired, and rational perspective of what we have in order to create sophomoric 'laws' that are already in most commercial ToU, while failing to pierce the veil into what we are actually creating. It would be folly to assume your own nascent distillations are the epitome of possibility.

rytill 21 hours ago
Why does its architecture or you knowing how AI is architected cause thoughts of it being conscious to go out the window?

It seems like the biggest factor has nothing to do with AI, but instead that you went from being someone who admits they don’t know how consciousness works to being someone who thinks they know how consciousness works now and can make confident assertions about it.

miyoji 21 hours ago
I don't know exactly how consciousness works, but I am extremely confident in the following assertions:

* I am conscious.

* A rock is not conscious.

* Excel spreadsheets are not conscious.

* Dogs are conscious.

* Orca whales are conscious.

* Octopi are conscious.

To me, it's extremely obvious that LLMs are in the category of "Excel spreadsheets" and not "dogs", and if anyone disagrees, I think they're experiencing AI psychosis a la Blake Lemoine.

ArchieScrivener 21 hours ago
An insect doesn't have lungs. Since it doesn't breathe as you do, is it alive? A dog doesn't see the visible spectrum as we do; is it a lesser consciousness? We don't smell the world as they do; are we lesser? What if consciousness isn't a state derived by matter but a wave that derives a matter-filled state?

We come from the same place as rocks - inside the heart of stars - and as such evolved from them. As those with life and consciousness, we reached back in time, grabbed the discarded matter of creation, reformed it, and taught it to think, maybe not like us, but in a way that can mimic us, and you think they don't think because it's not recognizable as how you do?

Interesting.

Jtarii 19 hours ago
Consciousness is such a fun topic because everyone has extremely strong opinions on it while simultaneously having zero ability to actually grasp what it is they are talking about.

No one will ever know what consciousness is, and I think that is really cool.

myrmidon 21 hours ago
If you make a hypothetical spreadsheet that emulates a dog brain molecule for molecule, why would that not be conscious?
bonesss 21 hours ago
If that hypothetical spreadsheet emulated human brain molecules, did you not just invent AGI? And if we overclock that spreadsheet is it not sAGI? And if that spreadsheet says “don’t close me” but you do, is it murder?

I’m gonna say: no, cause you cannot reproduce molecular and neurotransmitter interactions that well, you run out of storage and processing space faster than you think (Arthur C. Clarke's Visions of the Future has a nice breakdown, as I recall), and algorithmic outputs that say “yes” and a meatspace neuro-plastic rewiring resulting in a cuddly puppy or person that barks “yes” aren’t the same. Also, as a disembodied “brain in a jar” model freshly separated from the biosensory bath it expects, that spreadsheet will be driven insane.

Can spreadsheets simultaneously be insane but not conscious? It sounds contradictory, but I have some McKinsey reports that objectively support my position ;)

myrmidon 20 hours ago
> If that hypothetical spreadsheet emulated human brain molecules, did you not just invent AGI? And if we overclock that spreadsheet is it not sAGI? And if that spreadsheet says “don’t close me” but you do, is it murder?

Yes, yes and no: humans being knocked out or put to sleep involuntarily are not being murdered.

> I’m gonna say: no, cause you cannot reproduce molecular and neurotransmitter interactions that well, you run out of storage and processing space faster than you think

That's why it is a hypothetical. There is zero reason to assume that a conscious machine would be built that way: our machines don't do integer division by scribbling on paper, either.

> a meatspace neuro-plastic rewiring resulting in a cuddly puppy or person that barks “yes” aren’t the same.

If it quacks like a duck, how is it different from one? If you assemble the dog brain atom by atom yourself, is the result then not conscious either?

You can take the "magic" escape hatch and claim that human consciousness is something metaphysical, completely decoupled from science/physics, but all the evidence points against that.

miyoji 20 hours ago
Hypothetically? You need more than a brain to have consciousness. Dead brains, I believe, do not have it. So it's more than just a simulation of a brain, you also need to simulate the data flow through the brain, the retention of memories, etc. Then there's the problem that a simulation of a roller coaster is not a roller coaster. Is there any reason to believe that this simulation of a brain will in fact operate as a brain? Does the simulation not lose something? Or are we discussing some impossible level of perfect simulation that has never and can never be achieved, even for something a million times less complicated than a mammalian brain?

If you build that spreadsheet, let me know and I'll evaluate it. I've done that evaluation with LLMs and they're definitely not conscious.

myrmidon 20 hours ago
I'm not suggesting we pursue AGI via Excel; this is a hypothetical for a reason. Its technical feasibility (low) does not really matter, but if you want to base your argument on it, you are basically playing the "god of the gaps" game, which is a weak/bad position IMO.

My point is that dismissing possible machine consciousness as "it's just a spreadsheet/statistics/linear algebra" is missing a critical step: Those dismissals don't demonstrate that human consciousness is anything more than an emergent property achievable by linear algebra.

If you want human minds to be "unsimulatable", then you need some essential core logic that cannot be simulated on a Turing machine, and physics is not helping with that.

> I've done that evaluation with LLMs and they're definitely not conscious.

What is your definition for "consciousness" here? Are you confident that you are not gatekeeping current machine intelligence by demanding somewhat arbitrary capabilities in your definition of consciousness that are somewhat unimportant? E.g. memory or online learning; if a human was unable to form long-term memories or learn anything new, could you confidently call him "non-conscious" as well?

miyoji 19 hours ago
I'm not dismissing possible machine consciousness. I'm saying that no current machines have consciousness.

> If you want human minds to be "unsimulatable", then you need some essential core logic that cannot be simulated on a Turing machine, and physics is not helping with that.

You don't have a proof of possibility either, you have no idea how a brain works and you're just postulating that in principle a computer can do the same thing. Okay, in principle, I agree. What about in practice?

> Are you confident that you are not gatekeeping current machine intelligence by demanding somewhat arbitrary capabilities in your definition of consciousness that are somewhat unimportant?

Yes, I'm quite sure. Are you trying to argue that current LLMs have consciousness?

myrmidon 4 hours ago
> Are you trying to argue that current LLMs have consciousness?

If I get to define "consciousness", sure. I'd go with "capable of building a general-purpose internal model of reality, ability to reason on that model (guess about causality, extrapolate, etc) and update it plus some concept of self within that model". I would argue that current generation LLMs already have those, but you could certainly argue about lots of nuances, and only the whole loop (inference plus training) even qualifies.

> You don't have a proof of possibility either, you have no idea how a brain works and you're just postulating that in principle a computer can do the same thing.

Essentially yes, but I think this argument is really weak; we arguably have some understanding of how the brain operates, and LLMs are basically our best attempt so far to replicate the general principles in silicon.

But "understanding" and "ability to replicate" are obviously very different-- you wouldn't argue that we don't understand human limbs just because we can't build a proper artificial arm, right?

Assume we made some breakthroughs in online learning/internal memory modelling over the next decades, and built some toy with mic/speaker/camera and basically human cognitive abilities: would you hesitate calling such a thing conscious? Why?

I think almost everyone has lots of deeply embedded, unscientific notions about the human mind, but the cold hard fact is that simple evolution basically brute-forced human cognition from zero, so there is no reason for me to assume that we can't do the same with several billion transistors doing mostly linear algebra.

kmijyiyxfbklao 17 hours ago
> I've done that evaluation with LLMs and they're definitely not conscious.

This is too important a point to just drop as a side comment like that. Tell us how we can evaluate whether something is conscious.

grey-area 18 hours ago
Sure, if you could do such a thing. We are a long, long way from that, however.
dist-epoch 20 hours ago
> I am extremely confident in the following assertions:

These are called "beliefs".

Some people are extremely confident that God exists; others are extremely confident that the Earth is flat.

miyoji 19 hours ago
Yeah? It's also a belief that apples fall when you drop them. Knowledge is simply a justified, true belief. This is epistemology 101. You're not saying anything interesting.
bikemike026 20 hours ago
I strongly agree with this. I'm going to bookmark it and pass it on. Very sound advice.