Posted by blenderob 1 day ago

Three Inverse Laws of AI(susam.net)
489 points | 326 comments
the_af 23 hours ago|
I like the suggestion to emphasize the robotic/nonhuman nature of AI. Instead of making it sound friendlier and more human, it should by default behave very mechanistic and detached, to remind us it's not in fact a human or a companion, but a tool. A hammer doesn't cry "yelp" every time you use it to hit a nail, nor does it congratulate you on how good your hammering is going and that maybe you should do it some more 'cause you're acing it!
mplanchard 23 hours ago|
Something that bothers me about the intentional anthropomorphization of the LLM interface is that it asks me to conflate a tool with a sentient being.

The firm expectations and lack of patience I have for any failings in most of my tools would be totally inappropriate to apply to another human being, and yet here I am asked to interact with this tool as though it were a person. The only options are either to treat the tool in a way that feels "wrong," or to be "kind" to the tool, and I think you see people going both ways.

I worry that, if I get used to being impatient and short with the AI, some of that will bleed into my textual interactions with other people.

empath75 22 hours ago||
It inherently imitates people. Even when you ask it to be more robotic, it does it in a way that a human would if you asked them to be more robotic.
sn0n 22 hours ago||
Don’t tell me how to live my life!! LoL
spankibalt 22 hours ago||
> "Humans must not anthropomorphise AI systems."

Not gonna work; people want their fuckbots (or tamagotchis).

ButlerianJihad 17 hours ago||
Firstly, I am no philosopher. How many HN commenters are philosophers, or theologians or qualified to dispute the philosophical realm of A.I.?

One of my teachers called me and my friend "the philosophers" but I'm obviously a rank amateur. I've read no Kant or Nietzsche or Aurelius. I delved into Aquinas only to find that his brain is ten times bigger, and he was using familiar words with unfamiliar connotations.

So I think we here at HN are poorly equipped to philosophize and dispute about the nature of consciousness, sentience, intelligence, and other "soul-like" attributes that may arise from silicon-based life forms.

However, there is good news. There really are theologians and philosophers working on these thorny issues. Despite being Roman Catholic, I find myself adhering to some form of "transhumanism" [the tradition of Humanism having started with Catholicism] and I grapple mightily to reconcile the cyber-tech-future with morality and tradition and actual human socialization.

Pope Leo has taken on the wars and strife in the world head-on, and he's also touted as the "A.I. Pope" because of his concern with this tech. I think all world religions should give serious philosophical/theological thought to these new life-forms, these quasi-sentient things, these "non-existent beings", as defined by a Vatican astronomer.

I don't think atheists will find religion in A.I. but I don't think that Christians or any other person of faith will need to shove God aside in order to accommodate A.I. and electronic life into our society. But we need to come to terms with the reality: these are weighty, powerful things we play with. We harnessed lightning and fire; we changed the courses of mighty rivers; we've flown up through the clouds and shaped mountains in the landscape. A.I. is not a mere bridge or pyramid, it is ensouled somehow; it is animated; it is dynamic.

Now, pardon me while I check out the 6th small aircraft crash in my city this year...

akavel 23 hours ago||
"due to their inherent stochastic nature, there would still be a small likelihood of producing output that contains errors"

This is the part that I find challenging when trying to help my friends build a correct intuition. Notably, the probabilistic behavior here is counter-intuitive: based on human experience, if you meet a random person, they may indeed tell you bullshit; but once you've successfully fact-checked them a few times, you can start trusting that they'll generally keep being trustworthy. It's not so with "AIs", and I find it challenging to give them a real-world example of a situation that would be a better analogy for "AI" problems.

In my family, what worked (due to their personal experiences) was the example of asking a tourist guide: even if the guide doesn't know an answer, there's a high chance they'll invent something on the spot, and it'll be very plausible and convincing, and you'll never know. I'm not sure if that example would work for other listeners, though.

I also tried to ask them to imagine that they're asking each subsequent question not to the same person as before, but every time to a new random person taken from the street / a church / a queue in a shop / whatever crowded place. I thought this was a really cool and technically accurate example, but sadly it seemed to get blank stares from them. (Hm, now I think I could have tried asking why.)

Yet another example I tried was to imagine a country where it's dishonorable, when asked for directions in a city, to say that you don't know how to get somewhere. (I remember we read and shared a laugh at such an anecdote in some book in the past.) Thus, again, you'll always get an answer, and it'll sound convincing, even if the answerer doesn't know. But again, this one didn't seem to work as well as the travel guide one; for now I'm still keeping it to try with others in the future if needed.

PS. Ah, ok, yet another one I tried was to ask them to think of the "game" of russian roulette. You spin the cylinder, you pull the trigger, nothing happens. After a few lucky tries, you may develop a dangerous, false feeling of safety. But eventually you will hit the loaded chamber.
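The math behind the roulette analogy is worth spelling out: even a small per-answer error rate makes an eventual error nearly certain over many questions. A toy sketch (the 2% per-answer error rate and independence assumption are my own illustrative choices, not measured numbers):

```python
def prob_at_least_one_error(p: float, n: int) -> float:
    """P(at least one wrong answer in n independent answers,
    where each answer is wrong with probability p)."""
    return 1.0 - (1.0 - p) ** n

if __name__ == "__main__":
    p = 0.02  # assumed 2% chance any single answer is wrong (illustrative)
    for n in (1, 10, 50, 100):
        print(f"after {n:3d} answers: {prob_at_least_one_error(p, n):.1%}")
```

With p = 2%, the chance of having been misled at least once passes 60% by the fiftieth answer, which is exactly why "I fact-checked it a few times and it was fine" is a false feeling of safety.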

I also tried to describe "AIs" (i.e. LLMs) as taking a shelf of books, passing them through a blender, then putting the shreds in some random order. The result may sound plausible, and even scientific (e.g. if you got medical books, or physics textbooks). The less you know the domain the books were about, the more convincing it may sound, and the harder it is to catch bullshit.

The last two pictures may have gotten some reception, but I'm not super sure, and there was still arguing especially around the books; and again, they were less of a hit than the tourist guide story.

I'm super curious whether you have some analogies of your own that you're trying to use with friends and family. I'd love to steal some and see if they might work with my friends!
