Generative AI Fakes: Executives Are at Risk of Being Cloned

Dr. Lisa Palmer · February 28, 2023 · 4 min read

Have you seen FakeCatcher, the breakthrough detector that uses facial blood flow to separate deepfakes from reality with 96% accuracy? As increasingly high-quality Generative AI fakes are used in complex schemes, these fake images, videos, and audio clips, in which a person is replaced with someone else's likeness, desperately need detection.

Deepfakes are THE go-to tool for driving the spread of misinformation.

The Evolving Deepfake Landscape

The continually improving deepfake landscape now includes interactive and compositional deepfakes. Interactive deepfakes offer the illusion of talking to a real person. Compositional deepfakes are made by bad actors who create many deepfakes to compile a synthetic history: terrorist attacks that never happened, fictional scandals, or fake proof supporting a conspiracy theory.

Deepfakes of business leaders and celebrities, including Elon Musk, Tom Cruise, and Leonardo DiCaprio, have shown up in advertisements, often without their permission.

In 2022, people all around the world saw the Ukrainian President ask his soldiers to surrender in a deepfake video.

Deepfakes in Politics

Deepfake videos of political candidates in ads are illegal in California and Texas, but most states have no such law. North Carolina fought this problem repeatedly during the 2022 races:

  • Mail ads showing legislators with "defund the police" shirts that they did not wear
  • A candidate shown in front of a police lineup wall, even though he was never arrested
  • A TV ad featuring a deepfake video of an opposing candidate saying something that he never said

How Does FakeCatcher Work?

It is based on photoplethysmography (PPG), a method for measuring changes in blood flow in human tissue. When a real person is on screen, their skin changes color ever so slightly as blood is pumped through their veins, a change too subtle for the human eye but detectable in video pixels. Deepfakes cannot replicate this change in complexion (at least not yet).
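FakeCatcher's internals are proprietary, but the core PPG idea can be sketched: average the green channel of a face region frame by frame, then check for a periodic component in the human heart-rate band (roughly 0.7–3 Hz, or 42–180 bpm). Below is a minimal, hypothetical illustration on synthetic frames; the `ppg_signal` helper and every parameter are invented for this sketch and are not Intel's implementation.

```python
import numpy as np

def ppg_signal(frames):
    """Mean green-channel intensity per frame (frames are H x W x 3 RGB arrays)."""
    return np.array([f[:, :, 1].mean() for f in frames])

# Simulate 10 seconds of 30 fps video of a "real" face: skin brightness
# oscillates faintly at 1.2 Hz (a 72 bpm pulse) on top of pixel noise.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
rng = np.random.default_rng(0)
frames = [
    np.full((64, 64, 3), 128.0)
    + 0.5 * np.sin(2 * np.pi * 1.2 * ti)
    + rng.normal(0, 0.1, size=(64, 64, 3))
    for ti in t
]

signal = ppg_signal(frames)
signal = signal - signal.mean()  # remove the DC component

# A strong spectral peak in the heart-rate band suggests a live pulse;
# a deepfake's skin tone would show no such consistent periodicity.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fps)
peak = freqs[spectrum.argmax()]
print(f"dominant frequency: {peak:.2f} Hz")
```

On this synthetic clip the dominant frequency lands at the simulated pulse rate; a real detector would also need face tracking, illumination compensation, and spatial consistency checks across face regions, which is where the hard engineering lives.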

Positive Uses for High-Quality Deepfakes

Tools like Synthesia allow professionals to create a virtual twin of themselves. With it, professionals can create engaging content that both looks and sounds like them, for use in employee training, education, ecommerce, and more. This technology is used by some of the most trustworthy companies in the world.

How Do We Identify Reality from Fiction?

  • Many deepfakes are so good that tech like the FakeCatcher is the only way to tell the difference. Computers can catch discrepancies that humans cannot.
  • Context is critical. Is there something to be gained by smearing someone's reputation? Was the content posted by an opposing party? If so, then assume that it could be fake.
  • For lower-quality fake videos, look for glitches, absent or unnaturally fast blinking, eyebrows that do not move, and anything that looks "off" to you.

Dangers of These Digital Forgeries

  • Mob reactions to fictional events
  • Fooling photo-identification systems: imagine someone using a deepfake photo of you to open an online banking account or to access your real account
  • Deepfake audio can make someone else's voice sound like yours. Are you in customer service and think you are speaking with one of your biggest customers? It may be someone posing as them.
  • A deepfake video of you on social media announcing personal information that is entirely fabricated
  • Your company's reputation and financial valuation at risk if your executives are seen making wildly racist comments in fabricated footage

Key Takeaways

  • Detection technology must keep advancing to ensure that good outpaces bad.
  • Do not automatically trust what you see or hear.
  • Build a PR response plan NOW so that you can quickly respond if your staff or executives are targeted.
  • Strengthen online security for both your personal and business accounts; protecting yourself protects your business.

Dr. Lisa Palmer

CEO & Co-Founder

Lisa wrote the book on AI adoption, literally. Her Wiley-published research, the largest qualitative study of enterprise AI adoption, shapes the frameworks neurocollective uses to help organizations move past AI ambition into measurable outcomes.

Research, AI Leadership