10 Ways AI Puts Your Privacy at Risk
Why Is AI Raising Privacy Concerns?
The public's exploding interest in generative AI tools such as ChatGPT and DALL-E 2 has rightfully raised privacy concerns. These technologies use AI algorithms to generate text, images, and videos, often trained on large datasets of existing content. There is a risk that generative AI systems could be used to produce fake news, deepfakes, or other forms of manipulated content, compromising individuals' privacy and safety. And because these systems typically require access to large amounts of personal data, it is prudent to be concerned about data privacy and security.
10 Specific Examples Where AI Creates Privacy Concerns
1. Health Data Analysis
AI can analyze vast amounts of health data to reveal sensitive information about individuals' health and predispositions to certain diseases. This raises concerns about the privacy of such information and how it may be used or shared without individuals' consent. Imagine your health and life insurance companies using your family's detailed, multi-generational health history to determine what they charge you.
2. Employment Decisions
AI algorithms used in employment decisions can discriminate against certain groups based on factors such as gender or race. An algorithm trained on a biased dataset will reproduce that bias in its decisions. In one famous example, Amazon's AI hiring tool penalized female candidates: trained on years of historical data in which women rarely held engineering roles, the model concluded that women must not make good engineers. The model inadvertently perpetuated an existing problem.
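How a model inherits bias from its training data can be seen in a toy sketch. Everything below, the feature, the data, and the naive "hire-rate" scoring rule, is invented for illustration and is not Amazon's actual system:

```python
# Toy sketch of bias inherited from training data (all data invented;
# this is NOT Amazon's actual model). Each row is a past hiring decision:
# (resume_mentions_womens_org, was_hired)
historical = [
    (1, 0), (1, 0), (1, 1),                   # resumes with the feature: mostly rejected
    (0, 1), (0, 1), (0, 1), (0, 0), (0, 1),   # resumes without it: mostly hired
]

def hire_rate(feature_value):
    """Naive 'model': score a feature by the hire rate among resumes having it."""
    outcomes = [hired for feat, hired in historical if feat == feature_value]
    return sum(outcomes) / len(outcomes)

score_with = hire_rate(1)     # low: the model "learns" to penalize the feature
score_without = hire_rate(0)  # high: the feature's absence looks like merit
```

The model never sees gender directly; it simply rediscovers the bias baked into past decisions, which is exactly the failure mode described above.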
3. Smart Home Devices
Smart home devices collect and analyze detailed data about users' daily lives and routines, raising concerns about privacy and surveillance. For example, smart home devices can monitor when people are home, what appliances they use, and even their conversations, which can compromise their privacy and create a risk of surveillance. Now, imagine that the government BUYS access to your private in-home conversations. This is already happening.
4. Social Media Algorithms
Social media algorithms often reinforce existing biases and preferences, creating filter bubbles and limiting exposure to diverse viewpoints. As a society, we are already experiencing extreme philosophical division and distrust of government, fed in part by this poor application of AI algorithms. These algorithms can also manipulate user behavior through targeted ads or misinformation, leading individuals to make decisions based on false or misleading information or to encounter harmful content they would not otherwise have seen. Google (owner of YouTube) came under Supreme Court scrutiny over its algorithm's recommendation of ISIS videos; a ruling in that case, which turns on Section 230, could upend the internet.
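The feedback loop behind a filter bubble can be sketched in a few lines. The topics and the engagement-maximizing rule below are invented for illustration; real recommender systems are far more complex, but the loop is the same:

```python
from collections import Counter

# Toy filter-bubble sketch (topics and rule invented for illustration):
# a recommender that always serves the topic the user engaged with most.
topics = ["politics_left", "politics_right", "sports", "science"]

def recommend(history):
    # Engagement-maximizing rule: show more of whatever was clicked most.
    return Counter(history).most_common(1)[0][0]

history = ["politics_left"]        # a single initial click...
for _ in range(10):
    history.append(recommend(history))

# Despite four available topics, every recommendation was the same one.
unique_topics_served = set(history[1:])
```

One click seeds the loop, and each recommendation reinforces the next; the user's feed collapses to a single topic without anyone deciding it should.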
5. Autonomous Vehicles
This is not just about fully automated vehicles; it also includes the driver-assistance systems built into many of today's late-model cars. Autonomous vehicles collect sensitive information about passengers' locations, travel patterns, and biometric data. Self-driving cars are vulnerable to various cybersecurity threats, including ransomware attacks that could render the vehicle inoperable until a ransom is paid, leaving you unable to enter, start, or exit your own car. Hackers could also disable a car's networks, range sensors, and cameras, causing collisions and other safety risks, or even re-route your car to where criminals await you. Another potential threat is the hacking of an autonomous vehicle's operating system, which could expose personal information stored on connected devices: hackers could potentially access everything on your phone.
6. Facial Recognition Technology
Facial recognition technology has the potential to compromise individuals' privacy and anonymity in public spaces, because it can be used to track individuals' movements and activities without their consent. For a reason to lie awake at night, look at what Clearview AI does with the 20+ billion images it scraped from the internet, including from social media applications.
7. Biometric Data
Collecting sensitive data used to uniquely identify individuals and track their movements and activities carries enormous privacy risks. Examples of biometric data include fingerprints, facial recognition scans, and retinal scans, all of which can be used to identify individuals without their consent. Have you tried the Lensa app? Do you use your face to unlock your phone? Have you traveled through an airport or across a country border? These are just a few of the points at which your facial data has been captured and stored.
8. Workplace Monitoring
Monitoring employees' activities, including personal data such as health and emotional states, raises significant privacy concerns. Largely driven by the pandemic-era shift to working from home, employer monitoring is now commonplace: employers know when you are logged in and what you are typing, and some are even analyzing your facial expressions.
9. Financial Services
The use of sensitive personal data such as income, spending habits, home address, and financial history raises concerns about data privacy and security. In 2022, U.S. Bank employees were found to have unlawfully accessed customers' credit reports and sensitive personal data to apply for and open unauthorized accounts. Although the bank was ordered to pay $37.5 million, the harm done to its customers lingers. AI could worsen situations like this by giving financial institutions even more sophisticated tools for collecting, analyzing, and using sensitive personal data. For example, AI algorithms could be used to predict customers' creditworthiness based on personal data such as who they associate with. Imagine being turned down for a home loan because an algorithm determined that your relatives or acquaintances have too much influence over you.
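The loan-denial scenario can be illustrated with a toy scoring function. The formula, the weighting, and the numbers are all invented; no real lender is known to use this exact model:

```python
# Toy sketch of association-based credit scoring (formula and numbers
# invented for illustration; not any real lender's model).
def association_score(applicant_score, associate_scores, weight=0.5):
    """Blend an applicant's own score with the average score of their network."""
    network_avg = sum(associate_scores) / len(associate_scores)
    return (1 - weight) * applicant_score + weight * network_avg

alone = association_score(750, [750])                   # judged on own merits
with_network = association_score(750, [520, 480, 610])  # penalized for associates
```

The same strong applicant scores over 100 points lower purely because of who they know, which is the privacy harm the paragraph warns about.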
10. Education Technology
The extensive collection of personal data about students, beginning in pre-K and continuing through college, creates a detailed profile that can compromise their privacy, autonomy, mental health, access to credit, and future employability. Education technology in classrooms captures deeply personal student data, including individual strengths and weaknesses, behaviors, and home situations. Imagine these students as adults whose creditworthiness is lowered because they had behavioral challenges when they were 5 to 10 years old.
Student data is highly sought after by bad actors. Identity thieves target students because credit checks are rarely run on children, so the crime can remain undetected for decades. In one reported case, a Minnesota school district's data was hacked. In addition to basic demographic and individual data, the compromised files included highly sensitive records related to sexual violence allegations, student discipline, special education, civil rights investigations, student maltreatment, and sex offender notifications.
What happens when we layer AI capability on top of this treasure trove of data about our kids? AI algorithms create detailed profiles of students' academic progress, behavior, and personal lives. These profiles are used to make decisions about students' futures, such as college admissions or job opportunities, perpetuating bias and discrimination. Additionally, when this data falls into the wrong hands, it is often used for cyberbullying and ransom purposes.
Should AI Use Be Regulated to Protect Privacy?
Privacy is a daunting issue that AI exacerbates for everyone living in an advanced society, even though most people are unaware of it. AI is everywhere in our daily lives, from Siri on our phones to AI systems that make decisions about our homes, jobs, and health. Unfortunately, no comprehensive U.S. laws or regulations protect us from AI discrimination and privacy intrusions. There is a patchwork of federal executive orders, national initiatives, and use-case-specific regulations, along with disparate laws in some states, but privacy still looms as a huge concern. Expert-informed, comprehensive legislation is needed to ensure that data and AI controls exist while still encouraging innovation.
Closing
As an AI proponent, I see tremendous potential for AI to solve some of the world's biggest problems: cancer, economic inequality, geopolitical tensions, and more. But pragmatism about the dangers it poses is equally important, and one of the most pressing dangers is privacy. Generative AI has brought the realities of this issue to the public's attention. Despite AI's enormous potential to improve our lives, we must approach its development and deployment with caution and with consideration for the protection of privacy rights.