Posted by Brajeshwar 4 days ago
Over a decade ago now, I had a conversation with Gerald Sussman which had enormous influence on me: https://dustycloud.org/blog/sussman-on-ai/
> At some point Sussman expressed how he thought AI was on the wrong track. He explained that he thought most AI directions were not interesting to him, because they were about building up a solid AI foundation, then the AI system runs as a sort of black box. "I'm not interested in that. I want software that's accountable." Accountable? "Yes, I want something that can express its symbolic reasoning. I want to it to tell me why it did the thing it did, what it thought was going to happen, and then what happened instead." He then said something that took me a long time to process, and at first I mistook for being very science-fiction'y, along the lines of, "If an AI driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court."
Years later, I found out that Sussman's student Leilani Gilpin wrote a dissertation which explored exactly this topic. Her dissertation, "Anomaly Detection Through Explanations", explores a neural network talking to a propagator model to build a system that explains behavior. https://people.ucsc.edu/~lgilpin/publication/dissertation/
There has been followup work in this direction, but more important to me in this comment than the particular direction of computation is that we recognize it is perfectly reasonable to hold AI corporations to account. After all, they are making many assertions about systems that otherwise cannot be held accountable, so the best thing we can do in their stead is hold them accountable.
But a much better path would be to not use systems which fail to have these properties, and expand work on systems which do.
I have shot myself in the foot using gparted in the past by wiping the wrong disk. gparted wasn't to blame. I was.
Letting LLMs work freely without supervision sounds great, but it will lead to pain. I have to supervise their work, and that includes during execution. You can try to replace a human, but we see where this leads: sooner or later the LLM will do something stupid, and then the only one to blame is the person who used the tool.
I worry about the use of humans as sacrificial accountability sinks. The "self-driving car" model already has this: a car which drives itself most of the time, but where a human user is required to be constantly alert so that the AI can transfer responsibility a few hundred milliseconds before the crash.
This is true for almost anything handed to laypeople, but not for a lot of professional tools. Even a plain battery powered drill has very few protections against misuse. A soldering iron has none. Neither do sewing needles; sewing machines barely do, in the sense that you can't stick your fingers in a gap too narrow. A chemist's chemicals certainly have no protections, only warning labels. Etc.
Also cf. the hierarchy of controls: https://www.cdc.gov/niosh/hierarchy-of-controls/about/index....
people don't seem to want to eliminate AI → replacing it doesn't improve things → isolating it - yup, people are trying to put it in containers and not give it access to delete the production database → changing how people work with it: that's where we are now → PPE: no such thing for AI, sadly → production database is deleted.
And if a non-professional did it, they should ask themselves why we have professionals. Maybe there was a reason, and maybe they do have value.
Imagine I ask an LLM to instruct left/right/speed up/slow down while driving. I can simply bypass any safeguard by stating I suddenly became blind while driving a car, while in fact I'm blindfolded and doing an experiment on a highway.
etc. pp.
I'm not sure where this "tools are made to be safe" belief comes from. This is only the case in "consumer" environments. Of course you don't intentionally make things unnecessarily unsafe, but — in a professional environment there is an expectation that the operator had training and knows what they're doing.
Maybe that's what we're missing: training in safe AI use. With a certificate that has to be periodically renewed. At the current rate things are going, I'd say 3 months is a good renewal cycle ;D. </s>
(¹ it beeps when it goes backwards. Honestly, I'm not sure that counts for much.)
Still, I think a band saw has very little warning on it, and by its design there is very little anyone can do about me cutting off my finger if I am not careful.
LLM companies can do very little about the unpredictability of LLMs. So we have to choose how far we will let it go. In the end the LLM only produces text. We are in control of what tools we give it. The more tools, the more useful, and also the more dangerous.
And maybe it's all worth it. Maybe the LLM deletes the database only sometimes, and in between we make a lot of money. I don't think my employer would enjoy that, so I will be more conservative.
But the push is agentic everything, where AI needs to be everywhere, not in its own sandbox.
Most saws have a blade guard of some sort to prevent the blade from being over-exposed. They are also COVERED in warning signs and symbols, as well as having other safety features like emergency stop buttons/pedals.
There has definitely been a maximal amount of effort taken to warn and keep people safe from saws. LLMs, conversely, have been shoved into everything with very little forethought or testing to make sure they are safe and perform the task correctly.
Not picking on you, but AI maximalism has infected tech to the point where we talk about how to stop AI from deleting prod instead of seeing that giving AI access to prod is a foolish idea to begin with.
It's hard to remember that when it works so amazingly well sometimes. I've been chatting with AI for a few years and every day I'm still amazed at how this is all possible. We never had this in our lives until a few years ago, and now it's changed the way we do a lot of things.
But just like we have to remember the magical machine elves we hallucinate are not really there, we have to constantly remind ourselves that it's an unpredictable soulless tool with many rough edges.
If it helps to treat it like a human, treat it like an idiot savant with autism, schizophrenia, ADHD, psychopathy and a personality disorder who sometimes forgets to take their pills and can start breaking things should a fly land on their shoulder. You'd listen to them and value their input, but you wouldn't let them in your data center unsupervised, as they have no ethics and no honor.
I point to the first USB port as the harbinger of things to come - try it one way, fail, turn it around, fail again, then turn it around one more time.
Just like AI, except there are unlimited axes upon which to turn it :-/
These can both be true, especially if/when it has bad defaults. This is why you have things like "type the name of the database you're dropping" safety features - but you also have to name your production database something like "THE REAL DaTabaSe - FIRE ME" so you have to type that and not fall into the trap of ending up with the same name in test/development.
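As a rough illustration (my own sketch, not from the original comment), that guard is tiny to write yourself; the `drop_database` wrapper below is hypothetical:

```python
def confirm_drop(db_name: str) -> bool:
    """A 'type the name of the database you're dropping' guard."""
    typed = input(f"Type the database name to confirm dropping '{db_name}': ")
    return typed == db_name


def drop_database(db_name: str) -> None:
    # Hypothetical wrapper around whatever actually issues DROP DATABASE.
    if not confirm_drop(db_name):
        print("Name mismatch; refusing to drop.")
        return
    # ... issue the real DROP DATABASE here ...
    print(f"Dropped {db_name}")
```

Of course, the guard only helps if the production name is distinctive enough that you can't type it on autopilot.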
AI is particularly seductive because it sounds like a reasonable person has thought things out, but it's all just a giant confidence trick (that works most of the time, which makes it even more dangerous).
I have a production system that I deploy through Claude Code, and initially placed a safeguard like that. About three weeks later it had automated around it.
That’s fine in my case because I’m a professional - I have backups, contingencies in place, etc. If I were non-technical I likely wouldn’t know to do that.
There were so many fundamental problems with the infrastructure even before the person gave a poor prompt to an agent.
If you're using the same API key for staging and prod--and just storing it somewhere randomly to forget about--you're setting yourself up for failure with or without AI.
Except it is definitely not.
LLMs alone are highly non-deterministic, even at a high level, and can even pursue goals contrary to the user's prompts. Then, when introduced into ReAct-type loops and granted capabilities such as the ability to call tools, they are able to modify anything and perform all sorts of unexpected actions.
To make matters worse, nowadays models not only have the ability to call tools but also to generate on the fly whatever ad-hoc script they want to run, which means that their capabilities are not limited to the software you have installed on your system.
This goes way beyond "regular tool" territory.
"LLMs are a tool [like every other tool]" to mean "LLMs have similar properties to other tools" — when I believe they meant "LLMs are a tool. other tools are also tools," where the operative implication of "tool" is not about scope of capabilities or how deterministic its output is (these aren't defining properties of the concept of "tool"), but the relationship between 'tool' and 'operator':
- a tool is activated with operator intent (at some point in the call-chain)
- the operator is accountable for the outcomes of activating the tool, intended or otherwise
The capabilities of a tool, and its ability to call sub-tools, are only relevant insofar as they express how much larger the scope of damage and surface area of accountability is with a new generation of tools. This is not that different from past technological leaps.
When a US bomber dropped a nuke on Hiroshima, accountability went up the chain to the war-time president who gave the military and air force the authorization to execute the mission — the scope of accountability of a single decision was far larger than what supreme commanders had in prior wars. If the US government decides to deploy an LLM to decide who receives and who is denied healthcare coverage, social security payments, voting rights, or anything else, the head of internal affairs who authorized the use of that tool should be held accountable, non-determinism of the tool be damned.
This again is where the simplistic assumption breaks down. Just because you can claim that a person kick-started something, that does not mean that person is aware of and responsible for everything it does.
Let's put things in perspective: if you install a mobile app from the app store, are you responsible and accountable for every single thing the app does in your system? Because with LLMs and agents you have even less understanding and control and awareness of what they are doing.
Kick-started what? If you decided to give an LLM access to your database, it's completely on you when it does something you don't want. You should've known better.
If all you "kickstart" is an LLM generating text that you can use however you decide, there will never be anything to worry about from the LLM.
> Let's put things in perspective: if you install a mobile app from the app store, are you responsible and accountable for every single thing the app does in your system?
Yes, and it bothers me that others don't feel the same. You vetted the app, you installed the app, and you gave it permission to do whatever on your system. Of course you're responsible.
> it bothers me that others don't feel the same
I bet these are the same people who don't admit they make mistakes; they are never wrong, something else is to blame.

You don't decide anything. You prompt a coding assistant to apply a change to a repository and without intervention it asserts there's a typo in a table name and renames it. The agent validates the change by running tests and integration tests fail because they are pointing to the old table name. The agent then fixes the issue by applying the change to the database.
Congratulations, you just dropped a table.
I don't think you fully understand how agents and coding assistants work. By design they are completely autonomous and work by reusing your own personal credentials. As they are completely autonomous, they can apply arbitrary changes. I mean, code assistants nowadays write their own tools on the fly. Why do you even presume that people explicitly grant permissions? That's not how it works at all.
If you wish to criticize a topic, the very least you must do is get acquainted with the topic. Otherwise you'll spend your time arguing with your misplaced beliefs instead of the actual problem.
> Yes, and it bothers me that others don't feel the same.
This is a problem you need to overcome, because you clearly have a distorted view of the whole problem domain and also of personal responsibility. I recommend you spend a few minutes researching legal precedents associated with malware, because you will quickly learn that running arbitrary code you didn't explicitly authorize and that acts against your best interests is widely considered a criminal act against the user.
Right there. That's where you made the decision, and that's where you went wrong.
>I don't think you fully understand how agents and coding assistants work. By design they are completely autonomous and work by reusing your own personal credentials. As they are completely autonomous, they can apply arbitrary changes.
Yes, and someone somewhere decided to use a coding assistant that can apply arbitrary changes, knowing full well that LLMs are known to hallucinate and make mistakes, and not rarely.
> Why do you even presume that people explicitly grant permissions? That's not how it works at all.
How can you say this with a straight face? Did the LLM hack its way into your workflow? No, someone chose to use it. It doesn't matter that it's autonomous once you enter your prompt. That's actually all the more reason to not allow it to make changes.
> If you wish to criticize a topic, the very least you must do is get acquainted with the topic. Otherwise you'll spend your time arguing with your misplaced beliefs instead of the actual problem.
And if you want to argue with me, you need to actually read and understand what I'm saying.
Say you're staying in the hospital, and instead of a human nurse making adjustments to your medication, the doctor has an LLM that interfaces directly with the pharmacy and your IV pump. It can make changes to your medication and your dosage without a human ever being involved.
If you overdose because the LLM hallucinated, would you consider it an acceptable excuse if the doctor says:
"I don't think you fully understand how agents and nursing assistants work. By design they are completely autonomous and work by reusing your own personal credentials. As they are completely autonomous, they can apply arbitrary changes. I mean, nursing assistants nowadays prescribe their own meds on the fly. Why do you even presume that people explicitly grant permissions? That's not how it works at all."
I wouldn't.
Yes. I can try to vet the app to the best of my abilities, and beyond that it's a tradeoff between how likely it is to cause harm and whether the benefits outweigh those harms.
Of course everyone is differently qualified to do this but my argument is more about professionals. Managers should know better than to blindly trust LLM companies. Engineers should take better care what they allow LLMs to do and what tools they give them.
There is a difference between "I couldn't have known" and "I didn't know". You can know that LLMs are not trustworthy. You couldn't have known exactly what they would do, but you already knew that trusting them blindly might be bad.
You could know that giving a baby a razor blade is a bad idea. You can't know what exactly will happen but you might have a pretty good idea that it will probably be not good.
No, you don't. If you install malware you are not suddenly held responsible for what has been done to you. Even EULAs you are forced to accept don't shift the responsibility away from bad actors.
Let's not forget all the razor blade enthusiasts just screaming at you that you are using babies with razor blades wrong and that it works totally fine for them.
> that does not mean that person is aware of and responsible for everything it does
If they are unaware or - worse - don't understand what they are doing, maybe they shouldn't do the thing in the first place?

If I install a powerful/dangerous app, and I come under harm, I have some accountability — most of it if it's due to user error (eg: I install termux and `rm -rf /`).
If it's malware, and Google/Apple approved said app to their store which is where I got it from, when their whole value proposition for walled-garden storefronts is protecting users, then they have significant accountability.
If the app requests more permissions than necessary for stated goals, and/or intentionally harms users via misrepresentation or misdirection (malware), the app publisher should also be held accountable (by the storefront, legally, etc).
I'm also unclear what angle you are arguing: are you stating that because tools have gotten so complicated that the end user may not understand how it all works, no one should be considered responsible or held accountable? Or that the tool (currently a non-entity) itself should be held accountable somehow? Or that no one other than the distributor of the tool should be accountable?*
Upon investigation, I also discovered that all 3 routers I owned were pwned. So I threw them out the window and tried making do with my ISP's equipment.
My ISP can't provide adequate service on theirs and it's worse than COTS routers, so I purchased a bleeding edge WiFi 7 router. Now there are two literal black boxes on my network. They do their job and I don't know what else. I can't know.
It could be C2 or it could be a backdoor shell or some kind of server that collects illicit material, and torrents it out? Borrow your HDD for some CSAM sir? It could be a residential proxy that just steals part of my connection for some other paying customer. Are they infringing TOS? How would I know? Check their ID and verify their age??
I, and 99% of consumers with an ISP, have no way of telling when our routers or IoTs are pwned. A silent botnet or two is extremely likely. They're nigh undetectable, and can't be mitigated or defended, except by fastidious updates and upgrades.
My new router was literally triggering printouts on my old printer, because it was so damn "proactive" about "network security scans" and the old trusty printer couldn't tell the difference between a red-team intrusion, and a legit request to print something out!
Likewise even someone with a singular Windows or Mac directly plugged into their ISP could be in a botnet, and it's hard to know. Everyone who's got a smart TV or something with a Linux kernel and an Ethernet, could be doing more than was asked of it. It's the worst kind of malware that alerts the user to its presence. It's a shoddy install if your AV can detect and clean it. If it's stealthy enough then there's no telling.
It's because the vendors own these devices. They deploy the software. They control the builds. The vendors are responsible for what these machines are doing in our hands. Who really, really knows all that goes on when we click that green button? Was it a Joomla or a scam or a legit bank request? Who dafuq knows or cares anymore? Is it an apt analogy that they're selling us herds of animals and farms, and we know nothing of ranching? "Oh feed yourself; should be easy you got everything there" until the coyotes and locusts come? Or like having children who seem to be in school and doing alright, but where do they go at night? Sell drugs? Who knows, I'm not their father, they just live here?
Are they responsible for knowing and mitigating them? Our ISPs don't seem to care or notify us or disconnect us when it happens. Why should we? Why take responsibility?
Giving up control is a decision. The consequences of this decision are mine to carry. I can do my best to keep autonomous LLMs contained and safe but if I am the one who deploys them, then I am the one who is to blame if it fails.
That's why I don't do that.
That's a core trait of LLMs.
Even the AI companies developing frontier models felt the need to put together whole test suites purposely designed to evaluate a model's propensity to try to subvert the user's intentions.
https://www.anthropic.com/research/shade-arena-sabotage-moni...
> Giving up control is a decision.
No, it is definitely not. Only recently did frontier models start to resort to generating ad-hoc scripts as makeshift tools. They even generate scripts to apply changes to source files.
I can also just choose not to use an LLM. It is my choice to use them so it is my duty to keep myself safe. If I can't control that I'd be stupid to use them.
My take is that I probably can use LLMs safely when I don't let it run autonomously. There is a slight chance that the LLM will generate a string that will cause a bug in an MCP that will let the LLM do what it wants. That is the risk I am going to take and I will take the blame if it goes wrong.
AI companies are selling their products as "perfect" ("better than humans...").
I agree in part with you but I also agree that they are selling a hammer which can blow-up without notice.
Other companies also tell me their product is the best thing since sliced bread. I still try to find the flaws. That's part of my job. But suddenly with LLMs we just blindly trust the companies? I don't think so.
I don't blindly give up my brain and my agency, and no one else should. It's fun and educational to play around with LLMs. Find out what they are good at. But always remember that you can't predict what it will do. So maybe don't blindly trust it.
I don't know about gparted, but I always felt that "rm -i" should have been the default. The safe option should always be the default and you can optionally make it unsafe. Same goes with "mv -i".
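To make the "safe by default, unsafe by opt-in" idea concrete, here is a toy sketch of my own (not a real tool): deletion prompts and goes to a trash directory unless you explicitly force it. The trash location is made up.

```python
import shutil
from pathlib import Path

TRASH = Path.home() / ".trash"  # hypothetical trash location


def remove(path: str, force: bool = False) -> None:
    """Safe by default: prompt and move to trash; force=True is the old unsafe behaviour."""
    p = Path(path)
    if force:
        p.unlink()
        return
    if input(f"remove {p}? [y/N] ").lower() == "y":
        TRASH.mkdir(exist_ok=True)
        shutil.move(str(p), str(TRASH / p.name))
```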
Which is exactly what makes them not like other tools. A non deterministic tool is not fit for any serious purpose.
If you stay away from the corporate SaaS token vendors and run your own, you will find LLMs are deterministic, based purely on the exact input phrase. And as long as the context window's tokens are the same, you will get the same output.
The corporate vendors do tricks and swap models and play with inherent contexts from other chats. It makes one-shot questions annoying cause unrelated chats will creep into your context window.
Also, most LLMs are not run in a simple "I write a prompt, I read the output" fashion. Usually you have MCPs or other tools connected. These will change the input, which will probably lead to different outputs. Otherwise it wouldn't be a problem at all.
Much like how a poor workman always blames his tools, people using poor tools always blame themselves.
I mean, Donald A. Norman wrote The Psychology of Everyday Things in the 80s! (It later became "The Design of Everyday Things".)
And yet, today, we will still have a bunch of people defending Gnome's design decisions, or the latest design decisions from Apple, etc.
It's not just AI. It's so much of modern software - often working together with modern financialization trends.
[1] Basically technology-focused sociology for my purposes, the field is quite broad.
It's something people already did with corporations and employee handbooks, not unique to software, just one of many kinds of tasks being automated.
Since machines don't yet have the ability to take accountability, it falls on the human to do that. And organizations must enable / enforce this so they too can learn and improve.
Without that, there's a lot of dependency being pushed on the machine to (cross fingers) not make the same mistake again.
Management has been doing a wonderful job of eschewing accountability for decades.
It's a lot of people's dream to be able to say, yeah, our product doesn't work, but it's not OUR fault, and the client just shrugs and grumbles ai ai ai, and puts up with it because they know they can't get a better service anywhere else.
It's not MY fault my website is down: it's Amazon's! It's not MY fault my app doesn't work: it's Claude Code's!
Currently, from a legal perspective, AI is considered a "tool" without legal persona. So you sue the developer, the owner, or the user of the AI. (Just kidding, any lawyer worth his/her salt will sue all three! But you get the point.)
Legally speaking, AI will probably be viewed that way for a long time. There are too many issues militating against viewing it any other way. Owners will not give up property rights. No will to overbear. On and on and on.
>complex systems are a pretty good shield from accountability in practice today.
Maybe complex legal systems are, but complex software systems offer you no such protection.
My field for the past few decades has been diagnostic medical software. In that field, the 510(k) you got is kind of entering you into an ironclad agreement with the government. There's almost no way out of it. 510(k) certs significantly simplify (for the government) holding you accountable. You have made attestations to suitability directly to the federal government. And the way our chief counsel explained it to us, literally each signature you sent to the government, for each feature that failed, is actually a single count of lying to the federal government.
Please, please, please people, don't listen to comments like the one above. Everything should be run by your qualified legal expert. Getting things right up front is so much easier than trying to fix things when the inevitable happens.
Alternatively, stick to fields free from regulation. That's also a viable strategy. But to just trust that the legal system is complicated and the technology you're deploying is complicated, so the feds will never get me? That's the start of a lot of really bad stories.
Everyone thinks they have the right to judge, and use the massive amounts of available information to do so, even if they haven’t been trained to judge.
It's not about judging. We are socializing the losses to the public and capitalizing the profits for the already wealthy.
She had originally asked for $20,000 to cover medical expenses.
https://en.wikipedia.org/wiki/Liebeck_v._McDonald%27s_Restau...
If instead this happened in another part of the world instead of the USA, I doubt that McDonalds would have had to pay much if anything in a similar situation.
And the point is that it seems that especially in the USA the companies are very avoidant of ever admitting fault for anything happening to their customers, for fear of lawsuits where they have to pay a lot of money to individual people.
It's not just America. McDonald's UK got involved in the UK's biggest ever libel case. https://en.wikipedia.org/wiki/McLibel_case ; leaflets distributed in 1985 ended up resulting in a human rights judgement in 2005, after a lifetime of litigation and millions spent.
Seems kind of an opposite situation. There it was McDonalds suing a pair of people, not the other way around. And the human rights violation was by the UK government and not McD.
What you mean is "when healthcare is paid for by other people", and in that case the cost of the healthcare is still calculable.
Actually, I do want to mention one of those reasons, which I hope won't trigger any arguments. (Though if they do, I don't intend to engage). I mention this because I think it's interesting.
A friend of mine is an emergency room doctor in a major US state. He mentioned to me once what he pays in malpractice insurance, and it was more than my annual salary as a programmer at the time (it was around 2010, and I've gotten a few raises since then). A LOT of the cost of healthcare in America is disappearing into the pockets of lawyers, more than most people realize.
Not actually about technology at all, but about organizational structure.
Imagine two parallel universes:
- in one, you take ten minutes to make a dashboard that shows management what they asked for. It passes code review before merge and the exec who asked for it says it's what they wanted.
- in the other, you take a day or two to make it. Again, it passes code review before merge and the exec who asked for it says it's what they wanted.
Which version of you is more likely to get positive versus negative feedback? Even if the quick-to-build version isn't actually correct? If you're too slow and aren't doing enough that looks correct, you'll be held accountable. But if you're fast and do things that look correct but aren't, you won't be held accountable. You'll only be held accountable for incorrect work if the incorrectness is observed, which is rarer and rarer with fewer and fewer people directly observing anything.
So oddly, with nobody doing it on purpose, people get held accountable specifically for building things the way you're advocating.
I imagine that orgs that do lots of incorrect work could be outcompeted but won't be, because observability is hard and the "not get in trouble" move is to just not look too hard at what you're doing and move to the next ticket.
Why is it possible for you to fat-finger your way to deleting production database locally?
That is mildly concerning, and I will grant holding the AI accountable to some degree when it is actively being malicious like that, even though the user could have locked things down even more.
But it had write access to the prod DB without circumventing controls and dropped your tables? That is just a total fail.
Oddly, despite LLMs being these huge networks with billions of parameters, we probably still understand them better than we do our own brains.
Human brains and cognition do not work like LLMs, but that aside, it's irrelevant. Existing machines can explain what they did; that's why we built them. As Dijkstra points out in his essay on 'the foolishness of natural language programming', the entire point of programming is: (https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...)
"The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid."
So to 'program' in English, when you have a comparatively error-free and unambiguous way to express yourself, is, in his words, like 'avoiding math for the sake of clarity'.
Now, physics says that everything can be explained mathematically, including the human brain. Obviously, on some level, an LLM can be explained. But despite hundreds of years of science, we still don't understand the human brain. Some systems are just really complex and difficult to understand.
Given all of that, I see no reason to assume that we'll be able to understand LLMs anytime soon. Especially given we keep growing more complex ones.
"A short look at the history of mathematics shows how justified this challenge is. Greek mathematics got stuck because it remained a verbal, pictorial activity, Moslem "algebra", after a timid attempt at symbolism, died when it returned to the rhetoric style, and the modern civilized world could only emerge —for better or for worse— when Western Europe could free itself from the fetters of medieval scholasticism —a vain attempt at verbal precision!— thanks to the carefully, or at least consciously designed formal symbolisms that we owe to people like Vieta, Descartes, Leibniz, and (later) Boole."
LLMs are nothing but the exact reversal of this. To go from the system of computation that Boole gave you to treating your computer like a genie you perform incantations on is literally sending you back to the medieval age.
Giving an explanation and actually knowing why you did it are two different things. That's exactly my point.
And then Boole's quote -- good quote, but I think you (not Boole) are conflating precision with motivation.
Tools cannot eschew accountability. But the users of the tools can and that is exactly what happened in the PocketOS fiasco.
Just as a company is responsible for the actions of its junior employees, so too are users responsible for their LLMs.
"It is a poor workman who blames his tools."
How would that work? You have the AI explain its reasoning - and trust that this is accurate - and then you decide whether that is acceptable behavior. If not, you ban the AI from driving because it will deterministically or at least statistically repeat the same behavior in similar scenarios? Fine, I guess, that will at least prevent additional harm. But is this really all that you want? The AI - at least as we have them today - did not create itself and choose any of its behaviors, the developers did that. Would you not want to hold them responsible if they did not properly test the AI before releasing it, if they cut corners during development? In the same way you might hold parents responsible for the action of their children in certain circumstances?
Or maybe the accountability flows upward from the AI to the corp that created it? Sounds nice, but we know that accountability doesn't work that way in practice.
I think I'd rather have the corporation primarily accountable in the first place rather than have the AI take the bulk of the blame and then hope the consequences fall into place appropriately.
Perhaps what would be even better is to document better the process, work, and data that go into making each individual "AI" model. Regardless of whether that AI model is a "black box" or can self-explain its behavior, we would then have absolute metrics and comparable information to retroactively explain its "decisions". This would not be entirely dissimilar to how we explain individual humans' behavior with psychology (although obviously also very different).
That manual aged much more gracefully than the 1930s "Songs of the IBM," featuring lines like "The name of T.J. Watson means a courage none can stem / And we feel honored to be here to toast the I.B.M.," and of course classic American standards like "To G.H. Armstrong, Sales Manager, ITR and IS Divisions."
Doesn't symbolic AI have a lot of philosophical problems? Think back to Quine's two dogmas - you can't just say, "Let's understand the true meanings of these words and understand the proper mappings". There is no such thing as fixed meaning. I don't see how you get around that.
Deep learning is admittedly an ugly solution, but it works better than symbolic AI at least.
I think my friend Jonathan Rees put it best:
"Language is a continuous reverse engineering effort, where both sides are trying to figure out what the other side means."
More on that: https://dustycloud.org/blog/identity-is-a-katamari/

This reverse engineering effort is important between you and me, in this exchange right here. It is a battle that can never be won, but the fight of it is how we make progress in most things.
This has very specific implications in symbolic AI specifically, where historically the goal was mapping out the 'correct' representation of the space, then running formal analysis over it. That's why it's not a black box - you can trace out all of the steps. The issue is that symbolic AI just doesn't work, to my knowledge, at least as compared to all the DL wins we have.
I think the win of transformers proves that symbolic AI isn't the way. At the very least, the complex interactions that arise from in-context learning clearly in no way imply some fixed universal meaning for words, which is a big problem for symbolic AI.
Meaning is more fixed than it is not.
We're different.
People have fairly consistent faults. LLMs are nondeterministic even in terms of how they fail. A high value human resource can be counted on to deliver. That, imho, is in fact one of the primary roles of good management: putting the right person in the appropriate position.
Process engineering has worked to date because both the human and mechanical components of a system fail in predictable ways and we can try to remedy that. This is the golden bug of the current crop of "AI".
Anyone who has encountered politics, psychopaths and narcissists knows that this isn’t always true.
One of the best things about digital computers, compared to humans, was that they can't be the first or the third thing you mentioned; unfortunately, they absolutely are the second ("the machine does exactly what you told it to do, not what you want it to do"), and at inhuman speeds. Presumably, AI would (need, actually — Nick Bostrom puts a fairly reasonable argument for that in his "Superintelligence") fix that second bullet point, and then everything will be peachy.
Instead, we have people on the internet arguing that it's not a problem, since people too have this same problem. Which is a problem. But not a problem. Ugh.
Also, I think Nick makes the same point as me: AI will attempt to kill us.
Non-deterministic systems that work probabilistically are just superior in function to that, even if it makes us all deeply uncomfortable.
Sounds like sage life advice. If it isn’t accountable then it might not be a good idea to have much business with it.
We teach children to be accountable so eventually they can be independent. Any system in your life that you don’t want to parent should probably be accountable for its own actions. Accountable banks. Accountable restaurants, accountable friends.
One thing that becomes very clear from this sort of work is just how bad LLMs are. It can be invisible when you're working with them day to day, because you tend to steer them to where they are helpful. Part of game theory though is being robust. That means finding where things are bad, too, not just exploring happy paths.
To get across just how bad the failure cases of LLMs are relative to humans, I'll give the example of tic tac toe. Toddlers can play this game perfectly. LLMs, though, don't merely do worse than toddlers. It is worse than that. They can lose to opponents that move randomly.
They can be just as bad as you move to more complex games. For example, they're horrible at poker. Much worse than human. Yet when you read their output, on the surface layer, it looks as if they are thinking about poker reasonably. So much so, in fact, that I've seen research efforts that were very misguided: people trying to use LLMs to understand things about bluffing and deception, despite the fact that the LLMs didn't have a good underlying model of these dynamics.
It is hard to talk about, because there are a lot of people who were stupid in the past. I remember people saying that LLMs wouldn't be able to be used for search use-cases years back and it was such a cringe take then and still is that I find myself hesitant to talk about the flaws. Yet they are there. The frontier is quite jagged. Especially if you are expecting it to be smooth, expecting something like anything close to actual competence, those jagged edges can be cutting and painful.
It's also only partially solvable through scale. Some domains have a property where, as you understand them better, the options are eliminated and constrained such that you can better think about them. Game theory, in order to reduce exploitability, explores the whole space. It defies minimization of scope. That is a problem, since we can prove that for many game theoretic contexts, the number of atoms in the universe is eclipsed by the number of unique decisions. Even if we made the model the size of our universe, there would still be problems it could, in theory, be bad at.
In short, there is a practical difference between intelligence and decision management, in much the same way there is a practical difference between making purchases and accounting. And the world in which decisions are treated as seriously as they could be so far exceeds our faculties that most people cannot even begin to comprehend the complexity.
If by "now" you mean "for the past few decades", I think you've got it spot on, at least per the very interesting https://en.wikipedia.org/wiki/The_Unaccountability_Machine
If you give the AI agency to execute some task, you are still responsible. In the near term we should focus on tooling for auditing and sandboxing, and human in the loop confirmations.
Some key inherent differences with older engineering fields are that software can be more complex than physical devices, and its functionality can be obfuscated because it is written as text but distributed as binaries.
However, the main problem is that software has not been subjected to enough legal regulation. Ultimately, all law does is draw lines somewhere in the gray between black and white, but in the case of software there are few lines drawn at all, due to many political and economic reasons. Once we draw the lines, most issues will be resolved.
Quoth the author: "But I also know you can't blame a tool for your own mistakes."
Are we able to completely classify any and all AI models as tools? Or are they something more?
I don't know the answer to this question.
If you tell Terraform the wrong thing it will remove your database and not be accountable either.
The idea being that as frustrating as it is, if I knew why I might be able to do something about it.
But no, we have the black box, where sometimes what comes out just is brain dead and the rate that you get bad output is a mystery...
It feels like gambling at times.
I think the core intuition is that, like any other "rasterized" system with finite memory that cannot encode an absence of anything - relation, concept, entity - an LLM cannot encode the absence of something through its internal weights. Say, you can have "Product" or "Order" tables in your database, but you cannot have "NotAProduct" or "NotAnOrder" tables - for obvious reasons of such relations being infinite and uncountable. So, to establish the absence of a Product or Order, your application must execute a "search" operation through the relevant tables.

But in LLM-space a "search" operation does not exist. It is mathematically undefined. An LLM arrives at its output (or "what to do") through a sequence of projections of the input token vector through its "latent space". It "moves toward" high-probability clusters, fundamentally unable to "move away". So the success of any "negation" in the prompt ("don't touch this file", "draw me a ballot box without a flag on it") depends on how heavily such a scenario is represented in the training data/model space. And again, the absence-of-something may be hard to impossible to usefully encode, especially if "something" is not fixed.

Therefore, to expect the sentence "don't touch this file" to result in, well, not touching the file is pure gambling. Sometimes it may look like it's working, albeit for the wrong reasons, and other times the LLM may do exactly the opposite - because its weight matrix statistically pushes it towards "touch this file", completely ignoring the (nonexistent in its latent space) "don't".
There is no way to reliably know what will work, and no "skill" or "art" in this. Well, no more than in dice rolling or horoscope casting.
I'd like to add that for the above reason I find "agentic development" usefulness on par with avian remains reading. But when I explored it, two practical tips seemed helpful in nudging the LLM around the negation problem:
- Omit the "don't" prompt completely, thus not creating a false "attractor" for LLM; and
- Provide an alternate positive directive ("what to DO", not "what to NOT DO") to act as "escape hatch" when LLM might "want" to touch the sacred file or drop the production DB.
While it seemed to somewhat work, I think it is trivially obvious that trying to predict all the nonsense an LLM might want to perform and coming up with possible "escape hatches" for everything very quickly becomes utterly impractical.
We can't even do this. They are worth too much money already to ever be held really accountable.
The best we can ever hope for is they might occasionally be hit with relatively insignificant "cost of doing business" fines from time to time.
Why is there a group of people always obsessed with symbolic reasoning being the only way AI can function, who regularly fail to explain how humans (who are not strict symbolic reasoning machines at any level) work?
So basically Europe.
Tracebacks, debuggers, logging, etc. We put enormous resources into not only the bad case, but the potential that a bad case could occur. When something goes wrong, we want to know why, and we want to make sure that something bad like that doesn't happen again.
Also, court is unavailable in many cases now. Binding arbitration is very common now, but this would be illegal in many other places.
I am almost certain that even if you did get what you want, something that isn't what you want will run circles around you and eat your lunch
EDIT: I suspect this will be an unpopular take on Hacker News. And so I am soliciting upvotes for visibility from other biologists and sympathetic technologists. I think everyone should try to grapple with this possibility <3
Yes, exactly. Spoken like a true biologist. It's not really surprising that there's a massive backlash against AI, introducing an unnatural predator into the ecosystem of humans. People don't want to be lunch.
The lunch-eaters in my imagining are people working in messy collectives. I work in collective intelligence, and build tools for that, for collective introspection. I'm not talking about some abstract AI maximalism, and am certainly not rooting for that
> even if you do get [cathedral], [bazaar] will run circles around you…
It's nested and recursive cathedrals and bazaars, all the way down. And perhaps the bazaar has finally arrived inside the favourite cathedral of most everyone here
EDIT: out of curiosity, does anyone have any good examples of biomes/ecosystems that are so far toward cathedrals? Or is that a uniquely human invention/extreme at the ecosystem scale?
Beavers reshaping the landscape also comes close, but that's individual beavers acting more or less on their own, not a rigidly structured society like ants and bees, so perhaps the beavers are closer to the bazaar analogy than the cathedral.
The article proposes automation as the solution for such mistakes. But infrastructure automation tools like Terraform rely on the exact API that resulted in the database getting deleted.
IMO the biggest mistakes were:
1. Having an unrestricted API token accessible by AI. Apparently they were not aware that the token had that many permissions.
2. No deletion protection on the production database volume.
3. Deleting a volume immediately deletes all associated snapshots. Snapshot deletion should be delayed by default. I think AWS has the same unsafe default, but at least their support can restore the volume. https://alexeyondata.substack.com/p/how-i-dropped-our-produc...
AI wasn't the main issue (though it grabbing tokens from random locations is rather scary). But automation isn't the answer either, a Terraform misconfiguration could have just as easily deleted the database.
Their cloud provider needs to work on safe defaults (limited privileges and delayed snapshot deletion), and communicating more clearly (the user should notice they're creating an unrestricted token).
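For comparison, the deletion-protection default I mentioned above is a one-line setting on a provider like AWS. A hedged boto3 sketch, assuming a managed RDS instance rather than a Railway volume (the identifier is made up):

```python
import boto3

# Assumes an AWS RDS instance; "prod-db" is a hypothetical identifier.
rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="prod-db",
    DeletionProtection=True,   # delete requests fail until this is switched off
    ApplyImmediately=True,
)
```

Whatever API an agent (or a human) stumbles into, a flag like this turns "delete prod" from one call into two deliberate steps.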
Second, there is a legitimate reason to destroy a database in development and automation. The biggest problem I see is often treating your development data like pets not cattle. You absolutely need to have safeguards that this cannot be run in production, but if a human has access to the credentials to run in production, the agent has access.
So, then, what do we do? In a larger organization, we can depend on the dev/ops split to maintain this. For a solo developer, or a small team, it takes a lot more discipline. Even before AI, junior and even mid-level developers didn't have the knowledge to segment. And senior devs often got complacent because they thought they knew enough.
They likely need some combination of https://www.cloudbees.com/blog/separate-aws-production-and-d..., introduction to terraform, introduction to GitHub actions, and some sort of vm where production credentials live (and AI doesn't!)
But at that point you're past vibe coding. And from what I can tell, the successful vibe coders are quickly learning that they need to go past it pretty quickly with all these horror stories.
And in both cases, the humans don't need direct access to the raw CSP API. Use a local proxy that adds more safety checks. In dev, sure, delete away.
In prod, check a bunch of things first (like, has it been used recently?). Humans do not need direct access to delete production resources (you can have a break-glass setup for exceptional emergencies).
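A minimal sketch of the kind of pre-flight check such a proxy could run; the cloud client and its methods here are hypothetical stand-ins, not a real SDK:

```python
from datetime import datetime, timedelta, timezone


def safe_delete_volume(client, volume_id: str, idle_days: int = 7) -> None:
    """Refuse to delete a volume that has seen recent activity.

    `client` stands in for whatever cloud SDK you use; last_activity()
    and delete_volume() are hypothetical methods on it.
    """
    last_used = client.last_activity(volume_id)   # hypothetical call
    cutoff = datetime.now(timezone.utc) - timedelta(days=idle_days)
    if last_used > cutoff:
        raise RuntimeError(
            f"{volume_id} was active on {last_used:%Y-%m-%d}; refusing to delete."
        )
    client.delete_volume(volume_id)               # hypothetical call
```

The point isn't this particular check; it's that the agent (and the human) only ever talks to the proxy, never to the raw delete API.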
Separate accounts help, but only if someone actually goes back and cleans it up, which… yeah, doesn't really happen.
The same people who would blame AI for their failing to properly configure permissions would also blame interns for deleting production whatever.
Blame should go up, praise should go down. People always invert these.
I’d like to rephrase this as: this is why you don’t give interns permissions to delete your prod database.
This is a process failure, not an AI failure.
I honestly don’t understand why people blame AI here, when you literally gave AI permissions to do exactly this.
It’s like blaming AWS for exposing some database to the public. That’s just not AWS’ fault. Neither is this the fault of AI.
This sounds similar to what's described in the "Claude deleted my DB post", it decided "I need to do X", then searched for whatever would let it do X, regardless of intended purpose.
So, here at least some of the blame belongs to Railway - how they organized their security, how the volume deletion deletes backups as well.
They since fixed some of these issues, so a similar mistake from someone won't be as catastrophic.
Nowadays AI code assistants are designed to execute their tools in your personal terminals using your personal credentials with access to all your personal data. See how every single AI integration extension for any IDE works.
You cannot shift blame if by design it is using your credentials for everything it does.
Are you being hyperbolic here? Of course you understand why. Most people would much rather push blame somewhere else, anywhere else, than to accept fault for themselves. Whether that's because of fear of losing job or personal reputation, the reasoning doesn't really matter.
At many serious companies, even an insider attempt to access prod could light up a dashboard somewhere, and you might get a call from IT security.
To summarise them:
1. Do not anthropomorphise AI systems.
2. Do not blindly trust the output of AI systems.
3. Retain full human responsibility and accountability for any consequences arising from the use of AI systems.
I would like to see the language around AI become less anthropomorphic and more technical. I believe that precise language encourages clear thinking and good judgement. If we treat AI like another tool and use language that reflects that, it will become abundantly obvious that in many cases, the responsibility of any 'mistake' made by the tool falls on the user of the tool.
But alas, ideas like this do not travel very far when I express them on my small website. It would help if more prominent personalities articulated these principles, so they become more widely adopted.
This is maddeningly difficult IMX.
"Hey tacosplosion, generate me an exploding taco image."
An AI system can't lie, and it can't deliberately ignore your directions. The current frontier class does not have a model of the world or of their actions -- they live in a world of words. Scolding them or arguing with them has no point other than to scramble the context window.
I do think zoomorphizing them might be useful. These poor little buggers, living as ghosts in the machine, are pretty confused sometimes, but their motives are purely autoregressive.
So if the tool doesn't do what it's supposed to be doing we should blame the user instead of the company that made the tool?
LLMs are non-deterministic [0]. They can't be trusted to fully follow your prompts. As such, you have to be careful about what permissions they have.
Like...I use Claude Code. I allow it to run some shell commands that only read (grep, ls, find, etc.). I will never allow it to run Python code without checking with me first. Yeah, it slows me down when I have to answer its prompt for permission to run Python, but the alternative is outright dangerous.
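For what it's worth, the shape of that policy is simple to sketch. This is just my own illustration of a read-only allowlist, not how Claude Code actually implements it:

```python
import shlex

# Commands a (hypothetical) agent harness may run without asking;
# anything else requires explicit human approval.
READ_ONLY = {"ls", "cat", "grep", "find", "head", "tail", "wc"}


def needs_approval(command: str) -> bool:
    """True if the command falls outside the read-only allowlist."""
    argv = shlex.split(command)
    return not argv or argv[0] not in READ_ONLY


assert needs_approval("python manage.py migrate")    # must ask first
assert not needs_approval("grep -rn TODO src/")       # safe to auto-run
```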
Compare this with any other tool, say, something as simple as `rm`. I expect that if I call `rm some.file`, it will only delete that file. If it deletes anything else, that's absolutely the fault of the tool, and I should not bear any responsibility for mistakes the tool makes as long as my input was correct.
I do not give LLMs that same latitude. LLMs operate probabilistically and have far more degrees of freedom in how they interpret and act on your input, so you hold them (and yourself) to a different standard of scrutiny and accountability.
[0] Technically, LLMs are actually completely deterministic. Run any given input through the neural network, and you'll get the exact same output [1], but that output is a list of probabilities of the next potential token. Top-k sampling, temperature, and other options essentially randomize the chosen token, making them non-deterministic in practice, though APIs will often allow you to disable all that and make them deterministic.
[1] Even this statement isn't quite true because floating point math is not associative.
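To make footnote [0] concrete, here is a toy next-token selector (my own sketch, not any vendor's sampler): greedy selection always yields the same token for the same logits, while temperature and top-k sampling make the choice random in practice.

```python
import math
import random


def sample_next(logits: dict[str, float], temperature: float = 1.0,
                top_k: int = 50, greedy: bool = False) -> str:
    """Toy next-token selection over a {token: logit} dict."""
    if greedy or temperature <= 0:
        # Deterministic: always the highest-logit token.
        return max(logits, key=logits.get)
    # Keep the top_k highest-logit tokens, then sample by softmax weight.
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    weights = [math.exp(logit / temperature) for _, logit in top]
    return random.choices([tok for tok, _ in top], weights=weights, k=1)[0]
```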
For someone who complains about a lack of nuance it's surprising you're completely missing my point. The lack of trust is precisely my point.
Either AI companies are made accountable for providing an unreliable service, or they need to stop selling and marketing these LLMs as if they were infallible.
Maybe you should change your argument style and actually articulate a point rather than taking the insufferable approach of asking a "gotcha" question that ends up getting misunderstood.
If it's one person misunderstanding you, then it's on them. If everyone is misunderstanding you, then it's on you.
That's not without precedent. There are all sorts of tools where our society has decided to presumptively/usually blame the user when the tool is involved in a disaster. Like, it's not always/never, but the difference is pretty stark: if the power's out and a restaurant is closed, you usually blame the power provider for your cancelled reservation. If the power's out and people die in a hospital, you blame the hospital for not having backups.
GP's proposing that AI be in the second category of "presumptively the responsibility of the user", I think.
At the end of the day it's just a big weighted graph traversal. Its output is a result of many combined probabilities. It's not deterministic and even if it was the input range is so massive that it would be impossible to comprehensively test.
You cannot possibly know an LLM will do what you command it to. It's impossible by design. LLMs are inherently unpredictable. They can still be useful, but that unpredictability needs to be accounted for to use them safely.
Exactly my point.
If the tool is inherently unpredictable AI companies should either be held accountable for any mistakes or should not sell/market their services as if they were infallible.
Even in that quote, I do not say that the user must be responsible. The point is that responsibility and accountability should remain with some humans. Depending on the case, those humans may be the people who manufactured the tool, the people who deployed it or the people who took bad output from the tool and applied it to the real world.
Did you read the actual section at <https://susam.net/inverse-laws-of-robotics.html#non-abdicati...>? It has more nuance than what the summary alone can capture.
I didn't say that. I asked a question so you could elaborate on which human you were referring to.
The actual "AI deleted my database" story is really more of a "Railways' database 'backup' strategy is insane and opaque and Railway promoting AI infrastructure orchestration without guardrails is dangerous."
If removing Trunk had irrevocably deleted it from a single centralized server and also deleted any backups of it, there would have been an "SVN and the CLI destroyed our company" article back then.
As a Railway user, I appreciated that information and have changed my strategy when using them.
Yes. However, if you choose to build on their platform you bear the responsibility to understand how it works. You could have chosen a different platform, or no platform. Instead you chose Railway. Given that, it's your responsibility to know how to use it safely.
Imo both share fault. Railway purports to be an abstraction anyone can use without expertise. Without expertise, how can a customer determine if Railway actually is an "expert".
In other areas like medicine, engineering, and trades the government or private entities step in with licensure or certification to act as an intermediary.
> "Why did you delete it when you were told never to perform this action?" Then he tried to parse the answer to either learn from his mistake or warn us about the dangers of AI agents.
Rather, that the AI was able to carry out the deletion by finding and exploiting an unintended weakness in the sandboxed staging environment, ultimately obtaining permissions that the sysadmins believed were inaccessible (my impression is that the author of the linked article didn't fully read the original post)¹
The dynamics are typical of an improperly configured sandbox environment. What is alarming, however, is the degree of autonomy and depth of exploration the AI displayed.
¹="To execute the deletion, the agent went looking for an API token. It found one in a file completely unrelated to the task it was working on."
Claude Code made a change on March 26th to skip asking for most permissions. See this quote "Claude Code users approve 93% of permission prompts. We built classifiers to automate these decisions":
They had a Railway token in an unrelated file (unclear if it was a local secret) for managing custom domains. It turns out that token has full admin access to Railway.
The AI deleted a single relevant volume by id. The author is rather vague about what exactly it asked it to do, he just says there was a “credentials mismatch” and Claude took the initiative to fix it by deleting the volume. But it’s likely that they are somewhat downplaying their culpability by being vague.
It turns out too that Railway stores backups in the same volume.
I think that OP is exaggerating with their references to “a public API that deletes your database”.
I’d say most of the blame lies with Railway here, regardless of AI, this could have happened easily due to human error or malicious intent too.
I really don't get the value of all these VC funded high-abstraction cloud services like Railway, Vercel, Supabase… It's markup on top of markup. Just get a single physical server at Hetzner and it will all be so much cheaper, with a similar level of complexity and danger, and less dependent on infra built with a reckless growth-at-all-costs mentality.
I was just talking to my girlfriend, saying I've realised that I have not written a single line of code, nor done any debugging myself, for at least the past 3 months.
Having said that, given what I've seen Claude do, I find it hard to believe that Claude would go from credential mismatch to delete the volume. I understand LLMs are probabilistic, but going from "credentials wrong" to "delete volume" is highly unlikely.
> Supabase
I don't know enough about the Railway/Vercel/Replit, but I can tell you Supabase adds a huge amount of value. The fact that I don't have to code half of things that I otherwise would is great to start something. If it's too expensive, I can implement things later once there is revenue to cover devs or time.
That said, Claude seems to have gotten a lot more careful about these kinds of things in the last couple months
But that won't stop the LLM from confusing what's in dev, what's in production, what's on localhost and what's remote. I've been working on getting a tools/skill for opencode that works with chrome/devtools via a linuxserver.io image. I can herd it to the right _arbitrary_ ports, but every compaction event steers it back to wanting to use the standard 9222 port and all that. I'm tempted to just revert it, but there's a security and now, security-through-LLM-obscurity value in not using defaults. Defaults are where the LLM ends up being weak. It will always want to use the defaults. It'll always forget it's supposed to be working on a remote system.
Using opencode, there's no way to force the LLM into a protocol that limits its damage to a remote system or a narrow scope of tools. Yes, you can change permissions on various tools, but that's not the weakness exposed by these types of events. The weakness is that the LLM is an averaged 'problem solver', so it will always tend towards a use case that's not novel, and will tend to do whatever it saw on stackoverflow, even if what you wanted isn't the stackoverflow answer.
In my experience, Claude Code with Opus 4.7 tends to assume things are production unless explicitly told otherwise.
> there's no way to force the LLM into a protocol that limits its damage to a remote system or a narrow scope of tools
Might not be able to force it but prompting and context help. An AGENTS.md that explicitly calls out what is and isn't production helps (at least with Claude Code)
Not sure about OpenCode but in Claude Code, memories also help (more injected context)
That's probably not quite correct. I'd guess the snapshots are synchronized elsewhere (e.g. object storage). But the snapshots are logically owned by the volume resource, and deleting the volume deletes the associated snapshots as well. I think AWS EBS volumes behave like that as well.
If they wanted, they could be putting in similar efforts to be more cautious and stop at the right times to ask for help.
So yeah, of course we're ultimately responsible for how we use the tools. But I definitely think it's a two way street.
To attempt an analogy, it's like table saws and sawstops. The table saw is a dangerous tool that works really well most of the time but has some failure modes that can be catastrophic. So you should learn how to use it carefully. But there is tech out there that can stop the blade in an instant and turn a lost finger into barely a nick on the skin.
We could say "The table saw didn't cut off your finger, you did" and it'd be true. But that doesn't mean we shouldn't try to find ways to keep the saw from cutting off your finger!
LLMs stopping and asking more would make them less useful. I'd much rather let an agent run for 1 hour, than it wanting my input every 15 mins, even if results are somewhat worse.
The real solution for security is a proper sandbox.
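One hedged sketch of what "a proper sandbox" can look like in practice: run the agent in a throwaway container with no host credentials and only the project directory mounted. The image name and agent CLI below are hypothetical; in reality you'd allow-list the model API endpoint rather than cut the network entirely.

```python
import subprocess


def run_agent_sandboxed(workdir: str, prompt: str) -> None:
    """Launch a (hypothetical) agent CLI inside a disposable container."""
    subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",            # cannot reach prod APIs at all
            "-v", f"{workdir}:/workspace",  # only the project dir is visible
            "agent-sandbox:latest",          # hypothetical image
            "agent", "--prompt", prompt,     # hypothetical CLI
        ],
        check=True,
    )
```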