Today, data privacy provider Private AI announced the launch of PrivateGPT, a “privacy layer” for large language models (LLMs) such as OpenAI’s ChatGPT. The new tool is designed to automatically redact sensitive information and personally identifiable information (PII) from user prompts.
Private AI uses its proprietary AI system to redact more than 50 types of PII from user prompts before they’re submitted to ChatGPT, replacing the PII with placeholder data so users can query the LLM without exposing sensitive data to OpenAI.
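The general flow described above — detect PII, swap in placeholders, send the redacted prompt, then re-substitute the originals in the response — can be sketched roughly as follows. This is an illustrative sketch only: Private AI’s actual system uses a proprietary ML model covering 50+ PII types, whereas this example uses two simple regex patterns for demonstration.

```python
import re

# Hypothetical illustration of the redaction idea, NOT Private AI's
# actual implementation: detect a few PII patterns, swap each for a
# numbered placeholder, and keep a mapping so the placeholders can be
# re-substituted into the model's response afterward.

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str):
    """Replace detected PII with placeholders; return the redacted
    prompt and the placeholder -> original-value mapping."""
    mapping = {}
    counter = 0
    for label, pattern in PII_PATTERNS.items():
        def _sub(match):
            nonlocal counter
            counter += 1
            placeholder = f"[{label}_{counter}]"
            mapping[placeholder] = match.group(0)
            return placeholder
        prompt = pattern.sub(_sub, prompt)
    return prompt, mapping

def restore(text: str, mapping: dict) -> str:
    """Re-substitute the original values into the model's response."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

redacted, mapping = redact("Email jane@example.com or call 555-123-4567.")
# The redacted prompt, not the original, is what would be sent to the
# LLM provider's API; `restore` is applied to the response locally.
```

The key design point is that the mapping never leaves the user’s environment, so the third-party LLM provider only ever sees placeholder tokens.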
Scrutiny of ChatGPT increasing
The announcement comes as scrutiny of OpenAI’s data protection practices begins to rise, with Italy temporarily banning ChatGPT over privacy concerns, and Canada’s federal privacy commissioner launching a separate investigation into the organization after receiving a complaint alleging “the collection, use and disclosure of personal information without consent.”
“Generative AI will only have a space within our organizations and societies if the right tools exist to make it safe to use,” Patricia Thaine, cofounder and CEO of Private AI, said in the announcement press release.
“ChatGPT is not excluded from data protection laws like the GDPR, HIPAA, PCI DSS, or the CPPA. The GDPR, for example, requires companies to get consent for all uses of their users’ personal data and also comply with requests to be forgotten,” Thaine said. “By sharing personal information with third-party organizations, they lose control over how that data is stored and used, putting themselves at serious risk of compliance violations.”
Data anonymization techniques essential
However, Private AI isn’t the only organization that’s designed a solution to harden OpenAI’s data protection capabilities. At the end of March, cloud security provider Cado Security announced the release of Masked-AI, an open-source tool designed to mask sensitive data submitted to GPT-4.
Like PrivateGPT, Masked-AI masks sensitive data such as names, credit card numbers, email addresses, phone numbers, web links and IP addresses and replaces them with placeholders before sending a redacted request to the OpenAI API.
Together, Private AI’s and Cado Security’s attempts to bolt additional privacy capabilities onto established LLMs highlight that data anonymization techniques will be essential for organizations looking to leverage solutions like ChatGPT while minimizing their exposure to third parties.