The launch of the ChatGPT iOS app has amplified the ongoing discourse on privacy, underscoring the demand for government regulation of AI development.
Privacy advocates have been issuing warnings to consumers about the potential privacy risks associated with generative AI apps ever since ChatGPT was introduced by OpenAI. The recent availability of the ChatGPT app on the Apple App Store has sparked a renewed wave of caution.
In an article on Tech Radar, Muskaan Saxena urged users to think twice before fully engaging with the bot, lest they compromise their privacy. She noted that the iOS app comes with a clear tradeoff users should be mindful of, stated explicitly as: “Anonymized chats may be reviewed by our AI trainer to improve our systems.”
Anonymization alone, however, does not guarantee privacy. Anonymized chats have personally identifying details stripped out so they cannot be directly linked to specific users. Yet, as Joey Stanford, vice president of privacy and security at Platform.sh, a Paris-based developer platform for cloud-based services, told TechNewsWorld, anonymization may not be sufficient to protect consumer privacy, since re-identification remains possible by combining anonymized data with other sources of information.
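The re-identification risk Stanford describes is usually a linkage attack: "anonymized" records still carry quasi-identifiers (location, age, employer, and the like) that can be joined against a public auxiliary dataset to recover a name. A minimal sketch of the idea, using entirely hypothetical records and field names:

```python
# Hypothetical "anonymized" chat metadata: names removed, but
# quasi-identifiers (zip, age, employer) remain.
anonymized_chats = [
    {"chat_id": "c1", "zip": "94103", "age": 34, "employer": "Acme Corp"},
    {"chat_id": "c2", "zip": "10001", "age": 51, "employer": "Widget LLC"},
]

# Hypothetical public auxiliary data (e.g., a professional directory)
# containing the same quasi-identifiers alongside real names.
public_directory = [
    {"name": "A. Example", "zip": "94103", "age": 34, "employer": "Acme Corp"},
    {"name": "B. Sample", "zip": "02139", "age": 28, "employer": "Foo Inc"},
]

def reidentify(chats, directory, keys=("zip", "age", "employer")):
    """Map chat_id -> candidate names whose quasi-identifiers match exactly."""
    matches = {}
    for chat in chats:
        fingerprint = tuple(chat[k] for k in keys)
        candidates = [person["name"] for person in directory
                      if tuple(person[k] for k in keys) == fingerprint]
        if candidates:
            matches[chat["chat_id"]] = candidates
    return matches

print(reidentify(anonymized_chats, public_directory))
# A unique combination of quasi-identifiers re-links chat "c1" to a named person.
```

When a quasi-identifier combination is rare or unique in the auxiliary data, a single exact-match join is enough to undo the anonymization, which is why removing names alone is not considered sufficient protection.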
Taking Privacy Seriously
According to Caleb Withers, a research assistant at the Center for a New American Security, a think tank focused on national security and defense in Washington, D.C., if a user enters their name, workplace, or other personal information into a ChatGPT query, that data will not be anonymized.
In an interview with TechNewsWorld, Withers emphasized the importance of considering whether the information shared with ChatGPT is something one would feel comfortable sharing with an OpenAI employee.
OpenAI maintains that it prioritizes user privacy and implements measures to protect user data, noted Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif.
“However, it is always advisable to review the specific privacy policies and practices of any service you utilize in order to understand how your data is handled and what safeguards are in place,” Vena told TechNewsWorld.
Built-In Protections
McQuiggan highlighted the concerning trend of users including sensitive information like birthdays, phone numbers, and addresses in their queries when using generative AI apps. He warned that if these AI systems are not adequately secured, unauthorized parties may gain access to the data and exploit it for malicious purposes such as identity theft or targeted advertising.
Furthermore, McQuiggan noted that generative AI applications have the potential to inadvertently disclose sensitive user information through the content they generate. Therefore, he emphasized the importance of users being aware of the privacy risks associated with using such applications and taking necessary measures to safeguard their personal information.
As with any application installed on a smartphone, McQuiggan acknowledged, mobile phones offer inherent security features that can help mitigate privacy breaches caused by running apps, but relying solely on measures like application permissions and privacy settings may not provide comprehensive protection against every privacy threat.