Posted by giuliomagnifico 14 hours ago
So clawdbot may become a legal risk in New York, even if it doesn't generate copy.
And you can't use AI to help evaluate which data AI is forbidden to see, so you can't use AI over unknown content. This little side-proposal could drastically limit the scope of AI's usefulness overall, especially as the idea of data forbidden to AI tech expands to other confidential material.
Even though it has been instructed to maintain privacy between the people who talk to it, it constantly divulges information from private chats, gets confused about who is talking to it, and so on.^ Of course, a stronger model would be less likely to screw up, but this is an intrinsic issue with LLMs that can't be fully solved.
Reporters absolutely should not run an instance of OpenClaw and provide it with information about sources.
^: Just to be clear, the people talking to it understand that they cannot divulge any actual private information to it.
If not: I suspect fewer people would care, so what's the point of the label?
If so: why would they continue to use AI solely to clean up photos?
[Edit: spelling sigh]