Posted by scottshambaugh 1 day ago
Reminds me a lot of Liars and Outliers [1] and how society can't function without trust, and how near-zero-cost automation can fundamentally break that.
It's not all doom and gloom. Crises don't have to break the paradigm if technologists actually tackle them instead of pretending they can be regulated out of existence.
- [1] https://en.wikipedia.org/wiki/Liars_and_Outliers
On another note, I've been working a lot on evals as a way to keep control, but this is orthogonal: this is adversarial/rogue automation, and it's out of your control from the start.
- societal norm/moral pressure won't apply (we're facing an adversarial actor)
- reputational pressure has an interesting angle to it if you think of it as trust scoring in decentralized or centralized networks
- institutional pressure can't work if you can't tie the behavior back to its root (doing so may be infeasible, or the costs may outweigh the benefits)
- security doesn't quite work the way we usually think about it, because this isn't "undesired access to a computer system" but a subjectively bad use of rapid opinion generation
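On the reputational-pressure point above, a toy sketch of what trust scoring in a network could look like (all names and numbers here are hypothetical, and this is only a minimal EigenTrust-style power iteration, not any particular deployed system): each node rates its peers, and global scores emerge by repeatedly averaging those ratings weighted by the raters' own current trust, so an adversarial node can't boost itself by self-rating.

```python
def trust_scores(ratings, iterations=50):
    """ratings: {rater: {ratee: local trust in [0, 1]}}.
    Returns normalized global trust scores per node."""
    nodes = sorted({n for r in ratings for n in (r, *ratings[r])})
    score = {n: 1.0 / len(nodes) for n in nodes}  # uniform prior
    for _ in range(iterations):
        new = {}
        for n in nodes:
            # Trust in n = peers' opinions of n, weighted by each peer's
            # own current score; self-ratings are ignored so a rogue node
            # can't vote for itself.
            new[n] = sum(score[r] * ratings.get(r, {}).get(n, 0.0)
                         for r in nodes if r != n)
        total = sum(new.values()) or 1.0
        score = {n: v / total for n, v in new.items()}  # renormalize
    return score

# "mallory" only promotes itself, so well-trusted peers' low ratings
# keep its global score low.
scores = trust_scores({
    "alice": {"bob": 0.9, "mallory": 0.1},
    "bob": {"alice": 0.8, "mallory": 0.0},
    "mallory": {"mallory": 1.0},
})
```

Real systems add pre-trusted seed nodes and damping on top of this, but even the toy version shows why reputational pressure can survive adversarial actors better than norms do.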
not a good idea
AI researchers are sounding the alarm on their way out the door - https://edition.cnn.com/2026/02/11/business/openai-anthropic...
That will not remain true forever.
What happens when the AI PRs aren't slop?
We can just generate anything we want directly into machine code without any libraries.
...and if they commit libel we can "put them in jail", since apparently they can't be considered intelligent yet somehow aren't responsible either.
https://docs.github.com/en/site-policy/github-terms/github-t...
In all seriousness though, this represents a bigger issue: Can autonomous agents enter into legal contracts? By signing up for a GitHub account you agreed to the terms of service - a legal contract. Can an agent do that?
In either case, this is a human-initiated event, and it's pretty lame.