AI can take on tasks both simple and complex, from numerical computation to disease diagnosis. But for AI to be truly beneficial over the long term, it has to be developed responsibly. That is why the discussion of generative AI and privacy matters, and why we want to contribute to it, drawing on our experience in the trenches and our conversations with regulators and other industry stakeholders.
In a new policy working paper, "Generative AI and Privacy," we make the case that privacy and safety should be designed into AI from the outset, and we propose policy solutions that respond to privacy risks while enabling AI's opportunities.
As our studies and other research show, AI can bring real benefits to people and society, but it also risks worsening existing problems and creating new ones.
Privacy is no exception. Generative AI needs safeguards that provide visibility and control over specific areas of risk, such as the unintentional disclosure of personal data. That can only be achieved with a sound development-to-deployment framework anchored in proven principles, and any organization developing AI solutions needs a clear privacy posture.
At Google, that framework builds on long-standing data-protection best practices, our Privacy & Security Principles, our Responsible AI practices, and our AI Principles. In practice, this means we apply rigorous privacy protections, limit the collection and processing of user data to what is necessary, inform users about how their data is handled, and give them tools to control it.
Applying widely accepted privacy principles to generative AI, however, raises important questions at two distinct levels:
- Training and development
- User-facing applications
At the training level, including some personal data in models can be genuinely beneficial, for instance, teaching a model how names are interpreted across different cultures improves accuracy and performance. Privacy risks such as personal data leakage are most apparent at the application level, and that is also where the most effective protection mechanisms can be built.
Prioritizing such safeguards at the application level is not only the most practical approach but, in our view, the best one. Much of today's discussion of AI privacy centers on managing risks, and rightly so, given the task of establishing trust in AI. But these technologies can also meaningfully strengthen user privacy, and we should take advantage of those important opportunities.
Some generative AI applications are already being used to analyze privacy feedback across many users and flag potential privacy compliance problems, and AI has significantly improved cybersecurity defenses.
Privacy-preserving technologies such as synthetic data and differential privacy demonstrate how we can deliver even more value to society without compromising personal data. Laws and industry practices should encourage such beneficial applications.
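To make the differential privacy idea concrete, here is a minimal sketch, not taken from the paper, of the classic Laplace mechanism: calibrated noise is added to an aggregate statistic so that the released number reveals almost nothing about any single person's record. The function name `dp_count`, the sample data, and the epsilon value are illustrative assumptions.

```python
import numpy as np

def dp_count(records: list[bool], epsilon: float) -> float:
    """Release an approximate count under epsilon-differential privacy.

    A counting query has sensitivity 1: adding or removing one
    person's record changes the true count by at most 1, so Laplace
    noise with scale 1/epsilon suffices for epsilon-DP.
    """
    true_count = sum(records)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical usage: count feature opt-ins without exposing anyone.
opt_ins = [True, False, True, True, False, True, True]
print(dp_count(opt_ins, epsilon=0.5))  # noisy value near the true 5
```

The epsilon parameter governs the trade-off: smaller values add more noise, giving a stronger privacy guarantee at the cost of accuracy.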
The same balance applies across AI today: stakeholders should work to protect privacy robustly while still allowing other rights and societal objectives to be met.