
Posted by giuliomagnifico 14 hours ago

A new bill in New York would require disclaimers on AI-generated news content(www.niemanlab.org)
483 points | 201 comments
wateralien 13 hours ago|
They need to enforce this with very large fines.
delichon 11 hours ago||
> In addition, the bill contains language that requires news organizations to create safeguards that protect confidential material — mainly, information about sources — from being accessed by AI technologies.

So clawdbot may become a legal risk in New York, even if it doesn't generate copy.

And you can't use AI to help evaluate which data AI is forbidden to see, so you can't use AI over unknown content at all. This little side-proposal could drastically limit the scope of AI usefulness overall, especially as the idea of data forbidden to AI tech expands to other kinds of confidential material.

InsideOutSanta 11 hours ago|
This seems like common sense. I'm running OpenClaw with GLM-4.6V as an experiment. I'm allowing my friends to talk to it using WhatsApp.

Even though it has been instructed to maintain privacy between people who talk to it, it constantly divulges information from private chats, gets confused about who is talking to it, and so on.^ Of course, a stronger model would be less likely to screw up, but this is an intrinsic issue with LLMs that can't be fully solved.

Reporters absolutely should not run an instance of OpenClaw and provide it with information about sources.

^: Just to be clear, the people talking to it understand that they cannot divulge any actual private information to it.

nilslindemann 5 hours ago||
I support this for the same reason I want scripted reality TV shows to be labeled as such. Anything that claims to be reality but isn't should be clearly marked as such, unless it's obvious from the context.
ameliaquining 6 hours ago||
I'm not convinced that this law, if it passed, would survive a court challenge on First Amendment grounds. U.S. constitutional law generally doesn't look kindly on attempts to regulate journalism.
saghm 6 hours ago|
OTOH any challenge that makes it all the way to the Supreme Court right now has as much chance of being a decision that completely ignores constitutional law as it does being decided based on that.
rektlessness 11 hours ago||
Broad, ambiguous language like 'substantially composed by AI' will trigger overcompliance rendering disclosures meaningless, but maybe that was the plan.
nomercy400 12 hours ago||
You might as well place it next to the © 2026, at the bottom of every page.
rasjani 13 hours ago||
The Finnish public broadcasting company YLE has the same rule. Even if they only do cleanups of still images, they need to mark that the article contains AI-generated content.
hellojesus 10 hours ago|
Do they find that fewer people read articles that were written by humans but have that label slapped on for the photo, compared with a baseline?

If not: I suspect few people care, so what's the point of the label?

If so: why would they continue to use AI solely to clean up photos?

talkingtab 7 hours ago||
I'm beginning to suspect HN also needs such a bill. Maybe it is not AI content, but so many prominent posts on HN feel like advertising. Perhaps the good thing about AI is that it decreases the trust level. Or is that really a good thing?

[Edit: spelling sigh]

chrisjj 10 hours ago||
Why limit this to news? Equally deserving of protection is e.g. opinion.
bluebxrry 10 hours ago|
Instead of calling Claude a clanker again, which he can't control, how about we give everyone a fair shot this time with a bill that requires the news to not suck in the first place?