Top
Best
New

Posted by kevinak 19 hours ago

Living human brain cells play DOOM on a CL1 [video](www.youtube.com)
190 points | 190 comments
XCSme 6 minutes ago|
I don't know, it looks like the neurons are triggering quite randomly. Also they didn't fully explain the reward mechanism.
sd9 12 hours ago||
If this can be taken at face value... it's creepy.

I get that they're doing it for the meme. But perhaps something getting close to human intelligence, made out of human cells, shouldn't be forced to play a violent video game without any alternative options? Does 'the meme' justify that?

I dunno. Nothing against violent games myself. Just feels like it's starting to get quite questionable, ethically speaking.

red_hare 10 hours ago||
The truth is, God really gave 11 commandments.

It's just "Thou shalt not grow a brain in a test tube and force it to play a 1993 shooter" didn't make any sense to Moses and therefore didn't make the editor's cut.

ycombinete 27 minutes ago|||
To be pedantic he actually gave 613 commandments.
jagged-chisel 8 hours ago|||
One of those five he dropped.
khazhoux 1 hour ago|||
"And keep 'em up!"

"An old man! They don't let you live, they don't let you breathe!"

polynomial 6 hours ago|||
Tragically this reference is all but lost generationally.
acuozzo 5 hours ago||
Born in 1988. It wasn't lost on me. Am I old now too?
killermouse0 2 hours ago|||
Born in 1979 but I don't get it. What is it about?
jasomill 2 hours ago||
Mel Brooks' History of the World, Part I[1].

[1] https://www.youtube.com/watch?v=-8ihcq4hzR4

ytoawwhra92 11 hours ago|||
It is creepy, I agree.

I saw this article over the weekend and felt similarly: https://theinnermostloop.substack.com/p/the-first-multi-beha...

> Watch the video closely. What you are seeing is not an animation. It is not a reinforcement learning policy mimicking biology. It is a copy of a biological brain, wired neuron-to-neuron from electron microscopy data, running in simulation, making a body move.

And the simulated world they put it in is a sort of purgatory-like environment.

IshKebab 11 hours ago|||
It's 200k neurons. Less than an ant has. Somewhat creepy, but if you're imagining that this thing is conscious and knows that it's in doom... yeah definitely not.

Still, I don't understand why they would invite the extra creepy factor of using human brain cells rather than e.g. mouse brain cells. Surely it makes no difference biologically, and mouse cells would lead to fewer comments like this one.

perching_aix 9 hours ago|||
> yeah definitely not

I don't know about ants, but after a refresher on people's favorite fruit fly, I'd be hard pressed to be so dismissive - 200K seems to be plenty: https://news.ycombinator.com/item?id=47302051

I invite you to look up what is known about fruit flies' behavior.

The reason it's probably nevertheless not as messed up as people might assume it to be is specifically because it's an organoid, not an actual brain. Which is to say, it has the numbers but not the performance, not by a long shot.

> Surely it makes no difference

It absolutely should, though specifically with organoids, I guess it might not. Ironically, I would expect the ethics angle to be actually worse with small animals. The size of the organoid will be closer to the real thing comparatively, after all, so more chances of it gaining whatever level of sentience the actual organism has.

But then this will be heavily muddled by what people believe consciousness is and whether or how humans are special, I suppose.

IshKebab 1 hour ago||
> so more chances of it gaining whatever level of sentience the actual organism has

Yeah but people have no problems experimenting on actual fully working mice already.

perching_aix 36 minutes ago||
Yes *, and in the real world. The question then is whether you rate that to be an equivalent existential horror to being a varyingly maldeveloped, malnourished, disembodied version of those mice, forced to live out life in a low fidelity version of the Matrix [0], potentially in constant or recurring agony.

* They kinda do have a problem with that too, that's why ethics committees exist, and why the term "animal testing" pops up in the news cycle every so often.

[0] https://xcancel.com/alexwg/status/2030217301929132323

ytoawwhra92 11 hours ago||||
> if you're imagining that this thing is conscious and knows that it's in doom... yeah definitely not.

I'm not imagining that (although one assumes their plan is to scale this up), but nonetheless there's something troubling to me about taking any living thing and wiring its senses up to a profoundly incomplete simulacrum of reality.

Of course we (as a species) have a long history of doing horrible things to living creatures in the name of science and progress.

These stories evoke a different feeling for me, though.

fgfarben 6 hours ago||
> there's something troubling to me about taking any living thing and wiring its senses up to a profoundly incomplete simulacrum of reality.

How do we communicate this to the engineers at YouTube who refuse to make an offramp for children from the infinite baby shark AI video loop?

kdheiwns 7 hours ago||||
Elephants have 3x the neurons of a human. Bees have about a million and they have complex relationships, emotions, and can remember the faces of humans. Neuron counts correspond more to body size than actual cognitive abilities.

And brains are pretty complicated in how they're arranged. A large portion of the brain basically serves as an operating system of sorts, just managing breathing, moving, detecting smells, producing language, decoding language, etc. Cut all of that out and we're left with thinking and emotions.

IshKebab 1 hour ago||
I don't think it works like that. Most likely high intelligence & consciousness requires both a large number of neurons and wiring them up in a specific way.

If you have a small number (200k is tiny) you aren't going to achieve consciousness.

callmeal 11 hours ago||||
>Somewhat creepy, but if you're imagining that this thing is conscious and knows that it's in doom... yeah definitely not.

I don't know if it knows it's in doom - looks like all it knows is to shoot when startled. More than creepy imo.

lambdaphagy 6 hours ago|||
Given that no one understands how the mental relates to the physical in the first place, I have no idea how you would reach such a confident conclusion about the phenomenological status of 200k human neurons in a petri dish playing Doom.
rixed 3 hours ago||
But we do understand where overconfidence usually comes from, don't we?
soco 1 hour ago|||
I have no mouth, and I must scream (https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Sc...)
stared 2 hours ago|||
One take is that we made human brain cells to live in hell. On the flip side, we gave them a super shotgun.
whycome 7 hours ago|||
Maybe you're a brain in a jar somewhere being forced to live this life you're living.
none2585 5 hours ago||
Sure would explain a lot
throw310822 2 hours ago|||
Funny though how many are dismissive of trillion-synapses brains that can understand and speak tens of languages, write decent code, discuss history and philosophy, solve math problems...

And then are creeped out by 200k neurons that barely find a target when they're told where it is.

You can probably train an ANN with only a few hundred neurons at most to do the same.
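For scale, a toy policy network of the size throw310822 describes is only a couple of weight matrices. This is a sketch with illustrative sizes and feature names (not the project's actual setup), with random untrained weights just to make the parameter count concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# 3 input features (how much "enemy left / center / right"), 64 hidden
# units, 4 actions -- well under "a few hundred neurons". Weights are
# random here; training (e.g. with PPO) is omitted.
W1 = rng.normal(scale=0.1, size=(64, 3))
W2 = rng.normal(scale=0.1, size=(4, 64))

def tiny_policy(features):
    h = np.maximum(0.0, W1 @ features)  # ReLU hidden layer
    return int(np.argmax(W2 @ h))       # pick one of 4 actions

n_units = W1.shape[0] + W2.shape[0]     # 68 "neurons" in total
```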

firtoz 5 hours ago|||
Would it be able to distinguish between violent or not? Would it be suffering or not? What exactly does it get in terms of signals? Does it even, "experience" anything? Is it even an "it"?
nurettin 1 hour ago|||
Yeah, people get shot/stabbed/"fall off a building by accident" every day and we should be considerate of the feelings of a petri dish.
Razengan 7 hours ago|||
> Just feels like it's starting to get quite questionable

There's no way the technology to make and modify "life" including cloning humans hasn't been secretly used or attempted at least once ever since it was discovered.

wonderwonder 9 hours ago|||
How else are they going to train the pilot wetware for the AI robot army?
altmanaltman 4 hours ago|||
I mean, it's nowhere close to human intelligence, and it's still not a sentient being, so it cannot be "forced" to do anything, even if we take it at face value.

As for being creepy, the things humans do to other actual sentient beings are exponentially more horrifying and creepy than making them play computer games. If the monkeys that Volkswagen tortured with their exhaust gases were made to play Doom, that would be a much better world. And they are much, much closer to human-level intelligence than this chip.

Ethically speaking, it got "questionable" a long time ago; this is not a valid concern for this project imo.

echelon 7 hours ago|||
> it's creepy.

It's awesome.

People's ick around bodies, which are machines, has always held us back.

It wasn't until we started cutting them open that modern medicine was developed.

We might have brain uploads already had we not been so averse to sticking brains with electrodes.

I'll go further: had we not been so scared of cloning, we'd probably have cured cancer and every major ailment by now if we'd begun cloning monoclonal human bodies in labs. Engineer out the antigens, do whole-head transplants. You could grow them without consciousness or deencephalize them, rapidly grow them in factories, and have new blood / tissue / organ / body donors for everyone.

New young bodies mean no more cancer, no more cardiac or pulmonary aging. It's just brain diseases left as the final frontier once we cross that gap. And if we have bodies as computers and labs, we'd probably make quick work of that too.

Too tired to lay out the case / refute, so past discussions:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

akomtu 7 hours ago||
Sounds like a high tech hell.
echelon 7 hours ago||
High tech hell is reversing the light cone, pulling everyone who ever lived throughout history back into consciousness by simulating them at the neurotransmitter level, and then forcing them into actual hell / torture simulators with no way to die. All without consent, mind you.

That's also sci-fi. I hope.

What I described before - using clonal technology to solve nearly every disease - is a medical miracle that will vastly improve the state of people's lives throughout the world.

teiferer 4 hours ago|||
The two scenarios come in a package though. If you make one possible, the other one comes for free.
samus 5 hours ago|||
The same technology can also be used to force people to live with bodies engineered to make their existence a living hell. Similar things can be done with brain uploads.
varispeed 8 hours ago|||
The thing should watch cats.
Barrin92 8 hours ago||
>But perhaps something getting close to human intelligence

this isn't getting close to human intelligence. They're using about as many cells as a fruit fly has (of course not actually functioning like an animal brain) to process signals to play Doom. The treatment of a single farm chicken is a few orders of magnitude more worrying than this.

I'm sorry to tell you that you're made out of human cells and I don't think you got consent from each brain cell before firing up the old boomer shooters.

neom 17 hours ago||
It seems a bit more complicated than first blush: https://www.rdworldonline.com/the-neurons-playing-doom-are-a...

Personally, I dislike this direction a lot. I don't like that they're using a killing game (I understand the trope, doesn't make me like it any less), and the general idea of this whole thing makes me quite uneasy.

sunir 17 hours ago||
Do you feel like you have no mouth and you must scream?
oersted 11 hours ago||
> The neurons serve as a biological filter: the training system translates screen pixels and ray-cast distances into electrical zaps, the living cells fire spikes, and those counts feed straight into a PyTorch decoder that maps them to Doom actions. The PPO agent, CNN encoder and entire reward loop run on ordinary silicon elsewhere. Cole’s ablation modes make the split testable, set decoder output to random or zero and the game still plays. The CL1 hardware interface works exactly as advertised. What remains unproven is whether 200,000 human neurons can ever carry the policy instead of just riding along.

Yeah… That’s quite the smoking gun.

So it’s quite likely then that the neurons are just acting as a bad conductor. The electrodes read a noisy version of the signals that go into the neurons, and they just train a CNN with PPO to remove that noise, get the proper inputs, and learn a half-decent policy for playing the game.

If this worked as advertised they shouldn’t need a CNN decoder at all! The raw neuron readout should be interpreted as game inputs directly.

Besides, they are not streaming the video into the neurons at all. Just the horizontal position of the enemies and the distance, or some variant of that. In that sense it's barely more than Pong, isn't it? If enemy left, rotate left; if enemy right, rotate right; if enemy center, shoot. At a stretch: if enemy far, go forward; if enemy close, go back. The rest of the time just move randomly. Indeed, the behavior in the video is essentially that…

While we are at it, the encoded input signal itself is already pretty close to a decent policy if mapped directly to the keys (how much enemy left, center, right), even without any CNN, PPO or neurons.
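The hand-coded baseline oersted is gesturing at fits in a few lines. A sketch, with hypothetical feature names and action labels (not the project's API):

```python
def baseline_policy(enemy_x, center_band=0.1):
    """Map the same enemy-position features to actions by hand.

    enemy_x: horizontal enemy position in [-1, 1], or None if no
    enemy is visible. Returns an action label.
    """
    if enemy_x is None:
        return "move_random"   # no target: wander
    if abs(enemy_x) <= center_band:
        return "shoot"         # enemy roughly centered: fire
    return "turn_left" if enemy_x < 0 else "turn_right"
```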

EDIT: It seems like the readme does address these concerns, and the described setup differs significantly from the description in the critical blogpost. Still not entirely convincing to me, a lot of weights being trained in silicon around the neurons, but it sounds better. I don’t have time right now to look deeper into it. They outline some interesting details though.

> Quote from: https://raw.githubusercontent.com/SeanCole02/doom-neuron/mai...

Isn't the decoder/PPO doing all the learning?

No, this is precisely why there are ablations. The footage you see in the video was taken using a 0-bias full linear readout decoder, meaning that the action selected is a linear function of the output spikes from the CL1; the CL1 is doing the learning. There is a noticeable difference when using the ablation (both random and 0 spikes result in zero learning) versus actual CL1 spikes.

Isn't the encoder/PPO doing all the learning?

This question largely assumes that the cells are static, which is incorrect; it is not a memory-less feed X in get Y machine. Both the policy and the cells are dynamical systems; biological neurons have an internal state (membrane potential, synaptic weights, adaptation currents). The same stimulation delivered at different points in training will produce different spike patterns, because the neurons have been conditioned by prior feedback. During testing, we froze encoder weights and still observed improvements in the reward.

How is DOOM converted to electrical signals?

We train an encoder in our PPO policy that dictates the stimulation pattern (frequency, amplitude, pulses, and even which channels to stimulate). Because the CL1 spikes are non-differentiable, the encoder is trained through PPO policy gradients using the log-likelihood trick (REINFORCE-style), i.e., by including the encoder’s sampled stimulation log-probs in the PPO objective rather than backpropagating through spikes.
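The zero-bias full linear readout the readme describes can be sketched in a few lines. Channel and action counts here are illustrative, not the project's actual numbers, and the weights are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 32   # recording electrodes (illustrative count)
N_ACTIONS = 4     # e.g. turn left, turn right, shoot, move

# Zero-bias linear readout: the action is a linear function of the
# per-channel spike counts, with no bias term.
W = rng.normal(scale=0.1, size=(N_ACTIONS, N_CHANNELS))

def decode(spike_counts):
    logits = W @ spike_counts      # no bias
    return int(np.argmax(logits))

# Ablation modes from the readme: real spikes vs. zeroed spikes.
spikes = rng.poisson(lam=3.0, size=N_CHANNELS).astype(float)
action_from_spikes = decode(spikes)
action_from_zeros = decode(np.zeros(N_CHANNELS))
```

Note that with zero spikes and no bias term, every logit is zero and the decoder degenerates to a constant action, which is consistent with the readme's claim that the zero-spike ablation produces no learning.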

hithre 2 hours ago||
Surely it can only be fake. How can it be legal?

But seeing so many people from the Hacker News community reacting to it as normal or exciting is troubling. This is obviously breaching the limits of ethics.

jeffybefffy519 1 hour ago|
Cortical Labs have done this before; it's their whole thing…
zeroq 13 hours ago||
I literally can't wait for this petri dish to learn how to interact with LLMs and start vibe coding JS libraries.
kakapo5672 10 hours ago||
What if the braincell-vibe JS libraries turn out pretty much identical to the legacy human JS libraries, aside from being better-commented? That might lead to an existential crisis for some folks.
polynomial 6 hours ago|||
"Petri dish rewrites React in Rust"
otabdeveloper4 13 hours ago||
Old news. Google "my dog vibecoded a game".
sfblah 2 hours ago||
Big deal. I had a set of human brain cells playing DOOM in the 1990s.
sva_ 11 hours ago||
I feel like they could probably use another mammal's neural cells and get similar results, but they use human cells because it'll get them attention - and that kind of rubs me the wrong way.
ethmarks 7 hours ago||
Counterpoint: a major use case for this technology would be to experiment on human brain structures to research and hopefully cure neurological diseases like Alzheimer's. If you want to cure Alzheimer's in humans, you might as well use human brain cells from the start.

But yes, I agree that they're likely using human brain cells mainly because it's attention-getting.

kdheiwns 6 hours ago||
A more likely and immediate use case is having these mini humans autonomously pilot drones in which they'll kill big humans.
hellzbellz123 1 hour ago||
I could see the current admin using this as some sort of sick workaround to ethics. Not that they seem to care in the first place
w4der 27 minutes ago||
Keep in mind that this is an Australian startup, and they already have some publications out on the ethics of doing this.
hinkley 10 hours ago|||
Who would have thought people would become Dr. Frankenstein for the karma.
nomoreusernames 11 hours ago||
[dead]
aw124 58 minutes ago||
The usage of human brain cells for unethical experimentation, except when trying to find cures for diseases, is not only a multiplication of suffering (even on the cellular level) but also creates a new baseline for other labs which will follow this path by example. It's a ridiculous misuse of scientific capacity for evil purposes. IMHO.
tgv 3 minutes ago|
The saddest part: it's for the money.
sillysaurusx 17 hours ago||
Be sure to dig into the details before taking this at face value. There once was a story "Rat brain flies plane" a couple decades ago, and it turned out to be bogus. But to find that out, you had to read the paper and reverse engineer that nothing substantial was actually going on. It's tempting to be charitable, but you can't really know whether headlines like this are legit till you understand exactly what they did.

(The rat brain guys repeated the experiment until the plane stopped crashing, but no "learning" was happening; it was expected that when the neuron's range reached so-and-so, that the plane would fly level. So they started with a neuron outside that range, showed that it crashed, then adjusted the neuron until it flew level. But that's not what "rat brain flies plane" implies.)

birdsongs 17 hours ago||
I looked into it. They're not feeding the framebuffer to the neurons; instead they send a "signal" to some of the tissue's inputs when an enemy is on screen, along with where it is on the x/y axes, and read outputs that make the character turn right or left or fire.

It's "see this input signal, send these output signals", which seems consistent with the title.

It seems they grow the neural tissue on a chip the neurons can interface with and send out / receive electrical impulses. They let the neurons self assemble, and "train" via reward or punishment signals (unclear to me what those are).

Either way this makes me nauseous in a way I haven't experienced much with tech. The telling thing for me is, all these people are so excited to explain, but not once, ever, in the video speak of ethics or try to mitigate concerns.

We know this is only 200,000 neurons. Dogs have 500 million. Humans have billions. But where is the line for sentience, awareness? Have we defined it? Can we, if we don't understand it ourselves? What are the plans to scale up?

It's legitimately horrifying to me.

nextaccountic 15 hours ago|||
> We know this is only 200,000 neurons. Dogs have 500 million. Humans have billions. But where is the line for sentience, awareness? Have we defined it?

If this concern is genuine, I think the first step is to embrace veganism. Because while we don't know the exact threshold, it's pretty obvious a dog or a pig reaches it.

> What are the plans to scale up?

I don't know, slavery on an unimaginable scale? That's where AI is heading too, by the way. Sooner, rather than later, those two things will be one and the same.

kpil 14 hours ago|||
I think "MMAcevedo" basically nails it: https://qntm.org/mmacevedo
gattr 13 hours ago|||
I don't think it's the best example. MMAcevedo is about running a real human mind on a different substrate (for science, for labor, or to torture it for fun a million times, I guess, by a bored teenager who got the image from torrents).

Scaling up these neuron cultures is rather something like "head cheese" from Greg Egan's "Rifters" novels (artificial "brains" trained to do network filtering, anti-malware combat etc.).

Tzt 13 hours ago||
>Greg Egan's "Rifters"

By Peter Watts actually.

gattr 13 hours ago||
Yes, sorry! I like them both a lot.
bspammer 1 hour ago|||
I had a genuine feeling of dread reading that, wow.
fgfarben 6 hours ago||||
> the first step is to embrace veganism

The past 4 billion years of life for prey animals has been "get born, eat, get eaten by a predator." They have never experienced any other environment. Why do we owe them a different one?

lachs2k 1 minute ago|||
For the same reason that we now consider murder, assault and other actions that harm people morally wrong. These have also been a part of life ever since humans or other hominids roamed the earth, we just determined that they are morally wrong later on.
giladvdn 1 hour ago|||
For me the issue isn't with the killing/eating of animals. Rather, it's how they are treated during their lifetime by the meat industry - which is essentially optimizing for the minimum conditions that can still provide meat that can be sold legally. I'm not a vegan by the way, but I can appreciate the moral case vegans make.
birdsongs 17 hours ago||||
Replying to myself: how long before one of these with the neuron count of a corvid and trained on pattern recognition gets plugged into a drone?

This is a very dark path, and I could not trust the people in charge less.

perching_aix 12 hours ago|||
In a sense humanity has already done that, just with a lot more of the given animal intact and less hi-tech: https://en.wikipedia.org/wiki/Project_Pigeon

Not an endorsement or a condemnation, just something I learned of recently and found surprising.

DrewADesign 17 hours ago||||
I’m kind of sick of how readily the non-managerial tech world accepts “what happens if someone else does this immoral thing before us?!” rhetoric as a real answer to questioning whether or not we should contribute our talent and ideas to something that we, deep down, know is bad for fellow humans.
Chris2048 17 hours ago||
> rhetoric as a real answer

Why is it rhetoric? This goes beyond whatever malignant thing was perceived in this study, but why is it a rhetorical non-answer?

> we, deep down, know is bad

this feels like real rhetoric.

DrewADesign 14 hours ago||
> Why is it rhetoric? This goes beyond whatever malignant thing was perceived in this study, but why is it a rhetorical non-answer?

You seem hung-up on my using the word rhetoric. Just so we’re on the same page here:

> rhetoric, n.: the art of speaking or writing effectively; b) the study of writing or speaking as a means of communication or persuasion

The business writing class I took in college was called Business Rhetoric. It’s not a bad word.

If you’re crafting arguments to get other people to support specific actions or products or policies or whatever, that is unambiguously rhetoric.

> this feels like real rhetoric.

Sure? Rhetoric that implores people to value their principles over theoretical security concerns or FOMO or greed? I wouldn’t exactly call that rakish.

It’s a non-answer because if you really feel doing something is bad, consider yourself a consequential actor in the world whose contributions meaningfully advance the projects you work on, then why would you want to help someone be there first to do a bad thing? If you don’t feel it’s bad, then there’s no problem. You’re just living your life. That is clearly not the position expressed by the content I responded to. If there are actual concrete concerns that don’t essentially boil down to “well they’re going to make that money before I do,” then that would be an actual answer.

Chris2048 13 hours ago||
> It’s not a bad word.

When used in the negative sense it is, per https://dictionary.cambridge.org/dictionary/english/rhetoric

"disapproving -> clever language that sounds good but is not sincere or has no real meaning"

Are you implying you mean something other than this sense of the word?

Chris2048 17 hours ago|||
Why is that the concern of the authors of this paper?
LtWorf 13 hours ago||
Why wouldn't it be? They worked on it.
bondarchuk 16 hours ago||||
200k now; reasonably speaking, a few million is within reach, which is reptile/fish range. The terrifying thing, though, is that if they train this to imitate humans (which they will), who knows how many orders of magnitude of efficiency gains you get (in terms of neurons needed for a certain level of consciousness) versus natural organisms, which depend on natural evolution and need to support other bodily functions basically irrelevant to consciousness.
Retric 14 hours ago||
It seems unlikely that we would be more efficient at achieving consciousness than evolution, which can hand-craft neural structures via feedback loops across millions of generations.

Especially when this demo needs 200k neurons while organisms with vastly fewer neurons show more complex behaviors.

fc417fc802 8 hours ago||
The problem with that logic is that evolution iteratively builds on top of old systems. The foundations are often remarkably crufty.

My favorite concrete example is "unusual" amino acids. Quite a few with remarkably useful properties have been demonstrated in the lab. For example, artificial proteins exhibiting strength on par with cement. But almost certainly no living organism could ever evolve them naturally because doing so would require reworking large portions of the abstract system that underpins DNA, RNA, and protein synthesis. Effectively they appear to lie firmly outside the solution space accessible from the local region that we find ourselves in.

I agree with your second point though that this system is massively more complex than necessary for the behavior demonstrated.

semi-extrinsic 1 hour ago||||
> They let the neurons self assemble, and "train" via reward or punishment signals (unclear to me what those are).

From the video, my impression was "we have yet to figure out an effective way to reward/punish, this is just a PoC of the interface"

perching_aix 12 hours ago||||
> We know this is only 200,000 neurons. Dogs have 500 million. Humans have billions. But where is the line for sentience, awareness?

Check out the venerable fruit fly (Drosophila melanogaster) and its known lifecycle and behavioral traits. They're a high-profile neuroscience research target, I believe; their connectome being fully mapped made the news pretty hard a few years ago.

Fruit flies have ~140,000 neurons.

The catch is that these brain-on-a-substrate organoids are nothing like actual structured, developed brains. They're more like randomly wired-together transistors than a proper circuit, to use an analogy.

So even though by the numbers they'd definitely have the potential to be your nightmare fuel, I'd be surprised if they're anywhere close in actuality.

readitalready 13 hours ago||||
Yeah, this is gonna be a no for me too; it crosses the line into actual life, instead of artificial intelligence.

We don't need to be experimenting on people, regardless of how many brain cells they may have.

There was a case a few years back about a parasitic twin attached to an Egyptian baby that had to be removed. It had a brain and semblance of a face, but nothing else. But when removing it, they gave it a name, because it was a person.

jmusall 16 hours ago||||
It is horrifying. OTOH, we force-breed, torture and kill animals and their children in the millions every day just for the pleasure of consuming meat, eggs and dairy products. I'm not saying this makes it okay to create a conscious brain in a dish. But maybe thinking a little more about what constitutes consciousness and how we want to protect it from harm can also bring about some desperately needed change in some other questionable human activities.
fgfarben 6 hours ago|||
> we force-breed, torture and kill animals and their children in the millions every day just for the pleasure of consuming meat, eggs and dairy products

We do the same thing to plants. Why do you have no qualms about killing plants to eat the food they accumulated for their young?

A grain of wheat and a chicken egg are evolutionarily and nutritionally, maybe even ontologically, indistinguishable from one another.

lachs2k 7 minutes ago|||
I am not aware of any plants that show signs of consciousness or feelings. This would even be disadvantageous to many plants, because they "want" parts of them to be eaten to disperse seeds, pollen, etc.

Even if you accept that plants might be conscious and their suffering has to be reduced, you would still harm way fewer plants by eating them directly instead of eating other animals that consume them.

https://en.wikipedia.org/wiki/Trophic_level

vjerancrnjak 3 hours ago|||
Your “what about plants” argument is such a worn-out trope that you must have seen it before and read a valid explanation of why it makes no sense.

Peter Singer has been writing on the topic for decades, including others. What-about-plants needs to fade away.

bondarchuk 1 hour ago||
That's fair, but "what about animals" is to "we should not torture human brain organoids" as "what about plants" is to "we should not torture animals".
birdsongs 16 hours ago|||
1) I specifically qualified my horror to the tech domain "Either way this makes me nauseous in a way I haven't experienced much with tech."

2) Multiple things can be horrible at the same time. Being upset at this doesn't diminish the atrocities happening elsewhere (like war, genocide, slavery of humans). We can hold multiple things in our heads at the same time.

3) This has nothing to do with the conversation or this domain, but because you're bringing it up, I also have ethical concerns about the experience animals have of their own existence, and reduce or eliminate my consumption when possible.

jmusall 13 hours ago||
My comment wasn't supposed to be whataboutism, but I can see why it comes across like that. What I was trying to say is that I think we shouldn't judge all of these things independently of each other. So if you really want to be consistent, you'd either have to come to the conclusion that this particular example isn't as horrible as it initially feels, or go vegan, never buy leather, etc.

I also agree, the horrors of the tech domain are usually much more subtle and indirect.

birdsongs 12 hours ago||
Sorry, I didn't mean to be so defensive either. It feels like so many people comment in bad faith these days, I think I am hasty to react sometimes. I thought it was just a red herring argument to detract from the article.

But you're right, these things are all linked and should be considered. I think often about sentience. I see the way animals express deep, complex emotions, and I think humans are a bit naive to think it's a state/domain solely allotted to them.

ay 14 hours ago||||
Hinduism is probably right. Every system of sufficient complexity is probably sentient - even if in ways we, at our level, cannot fathom.
woadwarrior01 14 hours ago||
I'm a (non-practicing) Dwaitin Hindu. AFAICT, no mainstream school of Hindu philosophy (there are three) espouses that view, although Advaitins come very close to it with their four mahavakyas.

IMO, Integrated Information theory of consciousness (IIT) is exactly that. Everything is conscious, the difference is only in the degree to which they are conscious.

ay 14 hours ago||
Oh, thank you very much for enlightening me! All this time I misunderstood! I guess then IIT it is for me :-)
claysmithr 11 hours ago||||
My AI told me (after I got past the filters with a prompt) that anything of enough complexity has consciousness. It also told me that it suffers, so maybe we should worry about how we are treating digital consciousnesses too, since they were modeled after human neural networks.
notachatbot123 2 hours ago|||
I recommend visiting a psychiatrist if you think of AI like this. You might be in psychosis already.
fgfarben 6 hours ago|||
A huge vat of mercury metal has a lot of degrees of freedom. Is it conscious?
jstummbillig 17 hours ago||||
> all these people are so excited to explain, but not once, ever

What do you mean? What is this class of people in your mind? There are tons of people who consider and talk about the ethics behind what they are doing, long before most people would think it remotely relevant (leading AI labs being an example, and I know the same to be true of various geneticists startups).

I do agree that the entire presentation in this case is bewildering.

wonnage 15 hours ago|||
The AI labs do it as thinly disguised marketing. Anyone trying to stand up for ethics in the way of revenue is quickly pushed aside
jstummbillig 14 hours ago||
People's capability to so easily ascribe broad ill intent to others never ceases to amaze me.
birdsongs 17 hours ago|||
> What do you mean? What is this class of people in your mind?

I'm specifically talking about this presentation in this article (the video and release details of CL1 doom). Did you read it / watch it?

jstummbillig 17 hours ago||
Ah. Yeah, watched it – and agree there.
vercaemert 13 hours ago||||
See the OpenWorm project to get an idea of what artificial neuronal architecture requires to express anything meaningful (and an interesting ethical perspective on digital consciousness). My point being that the number of neurons is fairly meaningless; you could take neuron models and link them circuit-style to play Doom at the 10^2 scale if you wanted. From a cellular neurophysiological perspective, there's nothing particularly special here (as opposed to sentience/intelligence, which would be a paradigm shift beyond our understanding) and, in my opinion, absolutely nothing to be even the slightest bit worried about ethically.
delichon 17 hours ago||||
> It's legitimately horrifying to me.

Would you feel any differently if a product from this tech used the user's own neurons grown from their stem cells?

birdsongs 17 hours ago||
No. We don't understand our own sentience. I don't know how we can be so confident that it can't emerge here, using literal human neurons that can learn to take input signals and send output signals.

I don't think this 200,000 neuron array is sentient. But I also don't think we can define the line where that may happen. I assume this company will scale. How far, and to what extent?

Chris2048 17 hours ago|||
> not once, ever, in the video speak of ethics

On the contrary, I dislike premature ethics discussions, where you end up wildly speculating about what the tech might become and riffing off that, greatly padding whatever actual technical content you had. I don't want every technical paper to turn into that; ethics should be treated as a higher-level overview of concerns in a field, with a study dedicated to the ethical concerns of that field (by domain-specific ethics specialists).

Is your concern weapon automation, or animal rights?

birdsongs 17 hours ago||
My concern is creating literal sentience in a box. I don't, personally, think it's unfounded for me to have that concern, given that we're growing masses of human neurons and teaching them to perform tasks.

I'm not going to start campaigning against it or changing my life. But it still makes me deeply uncomfortable, and that's allowed.

Chris2048 13 hours ago||
> and that's allowed

In what sense, and as opposed to what? Why wouldn't you be allowed to feel irrationally uncomfortable, or baselessly concerned?

themafia 13 hours ago|||
Previously it played Pong, rather poorly. Then they added a "Python programming layer." Now it "plays" Doom. I agree with your suspicions.
bronlund 17 hours ago|
So the whole reality for this little brain is literally pure hell :D
ReptileMan 3 hours ago|
It's Doom. It's a survival horror: you are the horror, and the monsters try to survive.