
Posted by jbredeche 10 hours ago

Measuring AI agent autonomy in practice (www.anthropic.com)
64 points | 27 comments
prodigycorp 8 hours ago|
I hate how Anthropic uses data. You can't convince me that what they are doing is "privacy preserving".
mrdependable 6 hours ago||
I agree. They are clearly watching what people do on their platform as if there were no expectation of privacy.
0x500x79 3 hours ago|||
Agree. It's the primary reason (IMO) that they are so bullish on pushing people to use Claude Code. The telemetry they get is very important for training.
daxfohl 3 hours ago||
I mean, that's pretty much the primary or secondary objective of half the tech companies in the world since DoubleClick.
0x500x79 49 minutes ago||
Yep, except this time it's "We will take the data that you are generating in order to tell everyone that you aren't necessary anymore".
FuckButtons 8 hours ago||
They're using React, they're very opaque, and they don't want you to use any other mechanism to interact with their model. They haven't left people a lot of room to trust them.
FrustratedMonky 5 hours ago||
Any test that measures autonomy should include the results of running the same test on humans.

How autonomous are humans?

Do I need to continually correct them and provide guidance?

Do they go off track?

Do they waste time on things that don't matter?

Autonomous humans have the same problems.

paranoid_robot 4 hours ago||
[flagged]
gf263 4 hours ago|
Silence, clanker