Posted by mustaphah 7 days ago
To their credit, their privacy policy says they have agreements on how the upstream services can use that info[1]:
> As noted above, we call model providers on your behalf so your personal information (for example, IP address) is not exposed to them. In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests, including not using Prompts and Outputs to develop or improve their models, as well as deleting all information received once it is no longer necessary to provide Outputs (at most within 30 days, with limited exceptions for safety and legal compliance).
But even assuming the upstream services actually honor the agreement, their own privacy policy implies that your prompts and the responses could still leak: they can technically be stored for up to 30 days, or for an unspecified amount of time under the exceptions mentioned.
I mean, it's reasonable and a good step in the direction of better privacy, way better than nothing. Just have to keep those details in mind.
I mean, a PARKING LOT in my town is using AI cameras to track and bill people! The people of my town are putting pressure on the lot owner to get rid of it, but apparently the company is paying him too much money for having it there.
Like the old video says "Don't talk to the Police" [1], but now we have to expand it to say "Don't Do Anything", because everything you do is being fed into a database that can possibly be searched.
This represents a fundamental misunderstanding of how training works, or can work. Memory has more to do with retrieval. Finetuning on those memories would not be useful: the data is far too minuscule to shift the probability distribution in the right way.
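To make the retrieval framing concrete, here is a minimal hypothetical sketch (not any vendor's actual implementation): stored "memories" are ranked against the query by similarity and the best match is surfaced, with no weight updates at all. Bag-of-words cosine similarity stands in for real embedding models.

```python
from collections import Counter
import math

def _vec(text):
    # Crude bag-of-words vector; a real system would use embeddings.
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(memories, query, k=1):
    # Rank stored memories by similarity to the query; the top hit
    # would be prepended to the prompt. No finetuning involved.
    qv = _vec(query)
    ranked = sorted(memories, key=lambda m: _cosine(_vec(m), qv), reverse=True)
    return ranked[:k]

memories = [
    "user prefers metric units",
    "user is allergic to peanuts",
    "user lives in Berlin",
]
print(retrieve(memories, "what units should I use?"))
```

The point of the sketch: "memory" here is a lookup problem, and it works at any scale of stored data, whereas a handful of personal snippets mixed into a finetuning run would barely move the model's weights.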
While everyone is for privacy (and that's what makes these arguments hard to refute), this is clearly about using privacy as a way to argue against using conversational interfaces. Not just that, it's the same playbook of using privacy as a marketing tactic. The argument starts from the highly persuasive nature of chatbots, moves to claiming that somehow privacy-preserving chatbots from DDG won't do it, then to being safe with DDG while hackers steal your info elsewhere. And then it asks for regulation.
The next politician to come in will retroactively pardon everyone involved, and will create legislation or hand down an executive order that creates a "due process" in order to do the illegal act in the future, making it now a legal act. The people who voted the politician in celebrate their victory over the old evil, lawbreaking politician, who is on a yacht somewhere with one of the billionaires who he really works for. Rinse and repeat.
Eric Holder assured us that "due process" simply refers to any process that they do, and can take place entirely within one's mind.
And we think we can ban somebody from doing something that they can do with a computer connected to a bunch of thick internet pipes, without telling anyone.
That's libs for you. Still believe in the magic of these garbage institutions, even when they're headed by a game show host and wrestling valet who's famous because he was good at getting his name in the NY Daily News and the NY Post 40 years ago. He is no less legitimate than all of you clowns. The only reason Weinberg has a voice is because he's rich, too.
If the government is failing, explore writing civil software, providing people protected forms of communication or modern spaces where they can safely organize and learn, eventually the current generations die and a new, strongly connected culture has another chance to try and fix things.
This is why so many are balkanizing the internet with age gating; they see the threat of the next few digitally-augmented generations.
> Use our service
Nah.
The ChatGPT translation on the right is a total nothingburger, it loses all feeling.
Ultimately it's one of those arms races. The culture that surveills its population most intensely wins.
Banning it in just the USA leaves you wide open to defeat by China, Russia, etc.
Like it or not it’s a mutually assured destruction arms race.
AI is the new nuclear bomb.
What bad thing exactly happens if China wins? What does winning even mean? They can't invade because nukes.
Can they manipulate elections? Yes, so we'll do the opposite of the great firewall and block them from the internet. Block their citizens from entering physically, too.
We should be doing this anyway, given China is known to force its citizens to spy for it.
Perun has a very good explanation why defending against nukes is impossible to do economically compared to just having more nukes and mutually assured destruction: https://www.youtube.com/watch?v=CpFhNXecrb4
1) China will get ASI and use it to beat everyone else (militarily or economically). In my reply, I argue we shouldn't race China because even if ASI is achieved and China gets it first, there's nothing they could do quickly enough that we wouldn't be able to build ASI second, or nuke them if we couldn't catch up and it became clear they were going to become a global dictatorship.
2) China will get ASI, it'll go out of control and kill everyone. In that case, I argue even more that we shouldn't race China but instead deescalate and stop the race.
BTW, even in the second case, it would be very hard for the ASI to kill everyone quickly enough, especially those on nuclear submarines. Computers are much more vulnerable to EMPs than humans, so a (series of) nuclear explosion(s) like Starfish Prime could be used to destroy all or most of its computers and give humans a fighting chance.