Posted by JumpCrisscross 2 hours ago
But the real danger with these IMO is that they're turning casual conversations into a permanent record, and one that will be completely discoverable in court, should the company get into trouble later.
I’ve been saying it since the mid-10s, but it’s worth repeating: data isn’t gold, it’s more like oxygen in a room — the higher the concentration, the more likely it is to poison the inhabitants or explode at an errant spark (a lawsuit).
Collect only what’s needed to perform the function, and store it only as long as necessary for compliance. Anything else is going to spook counsel.
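The "store it only as long as necessary" policy above can be sketched in a few lines. This is a hypothetical illustration, not anyone's real retention system: the category names and windows are made up, and the default is deny (no retention policy means the record is purged):

```python
from datetime import datetime, timedelta, timezone

# Illustrative per-category retention windows (values are assumptions).
RETENTION = {
    "invoices": timedelta(days=7 * 365),   # e.g. kept long for tax compliance
    "meeting_notes": timedelta(days=30),   # short-lived by internal policy
}

def purge_expired(records, now=None):
    """Keep only records still inside their category's retention window.

    Records whose category has no policy are dropped — default deny,
    in the spirit of "collect only what's needed".
    """
    now = now or datetime.now(timezone.utc)
    kept = []
    for rec in records:
        window = RETENTION.get(rec["category"])
        if window is not None and now - rec["created"] <= window:
            kept.append(rec)
    return kept
```

Running the purge on a scheduled job, before anything reaches long-term or backup storage, is what keeps the "oxygen concentration" down: expired notes simply never become discoverable records.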
The problems start with conference-room audio, or when someone is on their laptop mic. If they miss a word, they never mark it [unintelligible] — they just start playing Mad Libs based on the rest of the sentence.
We just went through a round of 100+ (non-sensitive) VoC interviews and they really cut down the workload of compiling all of the feedback. If the audio was a little shaky though, we pretty much had to throw away the transcripts and do them from scratch like we used to.
Imo this is the single biggest flaw of LLMs. They're great at a lot of things, but not knowing when they're wrong (or don't have enough information to work with) is a critical weakness.
IMO there's no structural reason they shouldn't be able to spot this and correct themselves — I suspect it's a training issue. But presumably bots that infer context and fill in the gaps rank better on what people like... at the cost of accuracy.
I have a friend who works at a large-ish company that imports and manufactures things (in one of the clerical/quantitative professions). A few years back, they had the IT department go on a kind of "inquisition", wherein they forced employees to disable the summarization function that came with MS Teams, and threatened to fire them if they did not. The resistance to this demand was surprising -- most people are clueless about the cost of their own convenience. Worst of all, people would zone out of meetings, because the AI was producing summaries, which they would then never read.
The effect of the technology was that it made meetings infinitely more expensive, because the supposed benefit of meetings was nullified by complacency, _and_ it made the meetings a liability (incorrectly summarized meetings, that could be used in the discovery process, sure, but could also be sold by MSFT as a kind of market-research-data to competitors in the space).
Nothing illegal has to happen in these meetings at all, for this tech to cause an infinity of problems for the corporation. Every employee that uses these is effectively an unwitting spy. And if that is the case, then the meetings might as well be recorded and uploaded to YouTube (or whatever people watch these days)[1].
[1]: Maybe this is the future. Which I am okay with, but only if the entire planet has to do it, and the penalties for not doing it are irrecoverably severe.
Modernized. Industrial AI scale.
Do these systems not share data with the AI servers? Or are they all local (on-site, not on-computer)?
I am totally baffled by the trust people put on these systems, sharing with them the most obviously private data.
- we promise not to share PII (defined as narrowly as possible)
- we promise not to share payment information except with our payment system
- if you pay us, we promise not to train LLMs on your data
- you agree that everything else can be used for any business purpose, including marketing, intelligence gathering, and "sharing with our 1735 trusted partners".
The average person doesn't care about online privacy.
"2028 – A Dystopian Story By Jack Ganssle":
http://www.ganssle.com/articles/2028adystopianstory.htm
Known as "The Rule of 26", this is sometimes given as a reason NOT to keep engineering notebooks etc. Under Federal Rule of Civil Procedure 26 you must disclose relevant records — including any backups — without waiting for a discovery request, and you can face sanctions if you don't.
From Cornell Law:
LII Federal Rules of Civil Procedure Rule 26. Duty to Disclose; General Provisions Governing Discovery
(a) Required Disclosures.
(1) Initial Disclosure.
(A) In General. Except as exempted by Rule 26(a)(1)(B) or as otherwise stipulated or ordered by the court, a party must, without awaiting a discovery request, provide to the other parties:
(i) the name and, if known, the address and telephone number of each individual likely to have discoverable information—along with the subjects of that information—that the disclosing party may use to support its claims or defenses, unless the use would be solely for impeachment;
(ii) a copy—or a description by category and location—of all documents, electronically stored information, and tangible things that the disclosing party has in its possession, custody, or control and may use to support its claims or defenses, unless the use would be solely for impeachment; …
Inaccuracy in meeting minutes?
Leaking private info, given the (in)security of the notes?
I have never used them (don't trust them to accurately capture what is important in a meeting vs just noting what's mentioned), but the concept seems very useful to me.
> A trendy productivity hack, A.I. note takers are capturing every joke and offhand comment in many meetings. They could also potentially waive attorney-client privilege.
By now everyone knows that AI notes that aren't curated by a human will catch every silly thing that was said in the meeting while omitting the context of tone or body language. Something as simple as "yeah, right" has vastly different meanings depending on how it was said. In a different context it's already been established that using AI breaks attorney-client privilege [0], and this concern has been raised before by law firms [1][2] and the American Bar Association [3] (you can just hit escape before the paywall loads to see the full content). A judge will have to weigh in on this one too.
I don't know what's with the wave of paywalled articles that keep making it to the front page without any workaround included in the submission. Even when you coax the text out of the page source, they're not very insightful to begin with.
[0] https://perkinscoie.com/insights/update/federal-court-rules-...
[1] https://www.smithlaw.com/newsroom/publications/the-silent-gu...
[2] https://natlawreview.com/article/when-ai-takes-notes-protect...
[3] https://www.americanbar.org/groups/gpsolo/resources/ereport/...
To be fair, the attorney-client privilege should be completely technology/medium agnostic. If the intention is to have that info stay between client and attorney, nothing should change this.
Total oversimplification. The fact is the privilege is a rule entirely in the hands of the court. Every time a new communications technology comes up, someone shouts about privilege, but the courts still accept it. (Telephones, cell phones, emails, IMs, Zoom court — each has had its day in the A-C privilege debate and been accepted.) What matters is that the parties intended and expected the communications to be privileged.
As an example: I had a crim law prof who had been a NYC public defender in the 70s/80s. She regularly interviewed clients at Rikers Island. All interviews were listened to by guards, and she said you could even pay to get a copy of the recording. But those interviews were still covered by attorney-client privilege — no court would allow such evidence, though that doesn't mean the prison couldn't use it for jail safety. Why does this matter? Because the presence of a third party doesn't mean anything. This isn't magic; an eavesdropper does not nullify the spell. Whether something is or is not privileged depends on the rules of the local jurisdiction, and no jurisdiction has ever followed a simplistic "presence of a third party" rule.
Until someone demonstrates an example of an AI actually leaking privileged information, courts are going to chalk it up as just another electronic tool for recording communications.