Monday, July 28, 2025

What You Share With ChatGPT Can Be Used Against You

   Forget the risk of AI going rogue; here's the real risk of AI: the privacy of what you share with it.

  AI is supposed to work for you, but what about the confidentiality of the information an AI learns from you? How can you be certain it will not be leaked to a competitor? That the AI will not somehow remember it for other purposes, or worse, that it won't be used against you?

  In reality, you have no guarantee whatsoever about anything. As a company, you would have to be suicidal to share trade secrets or any other confidential information with an AI. The same goes for a lawyer, a doctor, a politician... In fact, this is true for almost anybody: sharing information with an AI may eventually be dangerous.

  That risk alone may very quickly balloon into a major problem and an impediment to further AI adoption in many sensitive cases. Just wait for a few high-profile cases...

Authored by Martin Young via CoinTelegraph.com,

OpenAI could be legally required to produce sensitive information and documents shared with its artificial intelligence chatbot ChatGPT, warns OpenAI CEO Sam Altman.

Altman highlighted the privacy gap as a “huge issue” during an interview with podcaster Theo Von last week, revealing that, unlike conversations with therapists, lawyers, or doctors with legal privilege protections, conversations with ChatGPT currently have no such protections.

“And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s like legal privilege for it... And we haven’t figured that out yet for when you talk to ChatGPT.”

He added that if you talk to ChatGPT about “your most sensitive stuff” and then there is a lawsuit, “we could be required to produce that.”

Altman’s comments come against a backdrop of increased use of AI for psychological support and for medical and financial advice.

“I think that’s very screwed up,” Altman said, adding that “we should have like the same concept of privacy for your conversations with AI that we do with a therapist or whatever.”

Sam Altman on This Past Weekend podcast. Source: YouTube

Lack of a legal framework for AI

Altman also expressed the need for a legal policy framework for AI, saying that this is a “huge issue.” 

“That’s one of the reasons I get scared sometimes to use certain AI stuff because I don’t know how much personal information I want to put in, because I don’t know who’s going to have it.”

He believes there should be the same concept of privacy for AI conversations as exists with therapists or doctors, and policymakers he has spoken with agree this needs to be resolved and requires quick action. 

Broader surveillance concerns 

Altman also expressed concerns about more surveillance coming from the accelerated adoption of AI globally.

“I am worried that the more AI in the world we have, the more surveillance the world is going to want,” he said, as governments will want to make sure people are not using the technology for terrorism or nefarious purposes. 

He said that for this reason, privacy did not have to be absolute, and he was “totally willing to compromise some privacy for collective safety,” but there was a caveat. 

“History is that the government takes that way too far, and I’m really nervous about that.”

