Posted by shaman1 19 hours ago
There's generally protections in many jurisdictions against having to honor contracts that are based on obvious errors that should have been obvious to the other party however ("too good to be true"), and other protections against various kinds of fraud - which may also apply here, since this was clearly not done in good faith.
If you have an AI chatbot on your website, I highly recommend communicating to the user clearly that nothing it says constitutes an offer, contract, etc, whatever it may say after. As a company you could be in a legally binding contracts merely if someone could reasonably believe they entered into a contract with you. Claiming that it was a mistake or that your employee/chatbot messed up may not help. Do not bury the disclaimer in some fine-print either.
Or just remove the chatbot. Generally they mainly piss people off rather than being useful.
A disclaimer is, in my opinion, not enough.
Will the company go out of their way to do right by customers who were led to disadvantageous positions due to the chat bot?
Almost certainly not. So the disclaimer basically ends up becoming a one way get out of jail for free card, which is not what disclaimers are supposed to be.
Replacing the employee with a rental robot doesn’t change that: the business is expected to handle training and recover losses due to not following that training under their rental contract. If the robot can’t be trained and the manufacturer won’t indemnify the user for losses, then it’s simply not fit for purpose.
This is the fundamental problem blocking adoption of LLMs in many areas: they can’t reason and prompt injection is an unsolved problem. Until there are some theoretical breakthroughs, they’re unsafe to put into adversarial contexts where their output isn’t closely reviewed by a human who can be held accountable. Companies might be able to avoid paying damages in court if a chatbot is very clearly labeled as not not to be trusted, but that’s most of the market because companies want to lay off customer service reps. There’s very little demand for purely entertainment chatbots, especially since even there you have reputational risks if someone can get it to make a racist joke or something similarly offensive.
If that "difference" is so obvious to you (and you expect it will break at some point), why don't you demand the company to notice that problem as well? And simply.. not put bogus mechanism in place, at all.
Edit: to be clear. I think company should just cancel and apologize. And then take down that bot, or put better safeguards (good luck with that).
If you walk into any retail store in the US, the price on the shelf is legally binding. If you forgot to update the shelf tag, too bad, you are now obligated to sell at the old price.
If you advertise a price or discount, you are required to honor such. Advertising fictitious prices or discounts is an illegal scam.
Likewise, if you have some text generator on your site that gives out prices and promo codes, that's your problem. A customer insisting you honor that is not a scammer, they are exercising their legal right to demand you honor your own obligations to sell products at the price you advertised.
So, this is a scammy business trying to get out of their legal obligations to a customer who is completely in the right.
Lesson: don't put random text machines in your marketing pipeline in a way that they can write checks your ass can't cash.
Yeah, this should be properly communicated.