Posted by mustaphah 7 days ago
The opposite of "if you build it they will come".
(The difference being that the AIs in the book were incredibly needy, wanting so badly to please the customer that it became an annoyance, which contrasts sharply with the current reality of the AI working to appease the parent organisation.)
Most of the controversial stuff he has done is being whitewashed from the internet and is now hard to find.
Must sell it somehow. Likely but have not seen evidence.
It's not a find. It's an allegation.
HN is supposed to be better than that.
In essence, there is a general consensus on the conduct expected of trusted advisors: they should act in the interest of their client. Privacy protections exist so that individuals can give their advisors the context required for good advice without fear of disclosure to others.
I think AI needs recognition as a similarly protected class.
An AI's actions should be considered to be on behalf of a Client (or some other specifically defined term denoting who it is advising). Any information the Client shares with the AI should be considered privileged. If the Client shares that information with others, the privilege is lost.
It should be illegal to configure an AI to deliberately act against the interests of its Client. It should be illegal to configure an AI to claim that its Client is someone other than who it is (it may refuse to disclose, but it may not misrepresent). Any information shared with an AI that misrepresents itself as the representative of the Client must have protections against disclosure or evidential use. There should be no penalty for refusing to provide information to an AI that does not disclose who its Client is.
I have a bunch of other principles floating around in my head around AI but those are the ones regarding privacy and being able to communicate candidly with an AI.
Some of the others are along the lines of:
It should be disclosed (think nutritional-information-style disclosure) when an AI makes a determination regarding a person. There should be a set of circumstances in which, if an AI makes a determination regarding a person, that person is provided with a means to contest the determination.
A lot of these ideas would be good practice even beyond AI, but they are more necessary in the case of AI because of the potential for mass deployment without oversight.
Maybe it could be good to have some integrations between this data and law enforcement to help prevent tragedy? Maybe start not with crime but with suicide - I think a search result telling you to call a number if you are feeling bad saves far fewer lives than a feed into social workers potentially could.
Just a thought, and this isn't having a computer sentence someone to prison, but providing data to people so they can ultimately make informed decisions to try to prevent tragedy. Privacy is important to a degree, but treating it as absolute seems to waste the potential to save lives.