Preparing for the Threat of Deep Fakes

In 2018, hobbyists experimenting with machine learning and artificial intelligence reached a remarkable technological achievement with deeply problematic ramifications: the first widely circulated deep fakes. In a short amount of time, deep fakes have become a matter of public knowledge. They have generated widespread fear for their breach of personal privacy and potential for disinformation. Now, as the technology becomes more accessible and effective, deep fakes present a distinct threat to businesses.

What are Deep Fakes?

Deep fakes are synthetic video or audio clips created by feeding many samples of a person’s image or voice into a machine learning program. All of this data teaches the program patterns of movement or speech and gives it enough reference to create convincing fake media.

In a sense, deep fakes hijack an individual’s likeness and control it to serve a new purpose. From the moment deep fakes reached the public, the technology raised fears of being manipulated—either as the subject of a deep fake or as a victim of a deep fake’s deception. Perhaps one of the most startling implications of deep fakes is the idea that our most trusted senses – sight and sound – may no longer be trustworthy.

At first the threat of deep fakes seemed largely hypothetical, as the technology was still in its infancy and early deep fakes were expensive, labor intensive, and rough around the edges. Growing interest and popularity, however, have yielded dramatic improvements in both the quality and the availability of deep fakes.

Protecting Against Deep Fakes

Expert cybersecurity forecasts have called deep fakes “the next big threat to businesses.” Deep fakes are poised to become a major component of phishing campaigns. Something as simple as a faked voicemail from an executive can lend credibility to a phishing attempt that would otherwise raise suspicion. The following steps can help organizations avoid falling victim to a deep fake scam.

  • Because the technology and its capabilities are so new, protecting against deep fakes begins with educating staff and alerting them to the possibility of deep fake attacks.
  • Implement multi-channel verification for money transfers. For example, require confirmation by phone or email for any transfer request that could have been delivered by a deep fake.
  • Train staff to remain especially skeptical during remote work.
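The multi-channel verification step above can be sketched as a simple policy check. This is a minimal illustration, not a prescribed implementation; the function and channel names are hypothetical, and a real workflow would also track who confirmed and when.

```python
# Hypothetical sketch of a multi-channel verification policy.
# A transfer is approved only when confirmations span at least
# two distinct channels, including at least one channel different
# from the one the request arrived on. All names are illustrative.

APPROVAL_THRESHOLD = 2  # distinct channels required overall


def approve_transfer(request_channel, confirmation_channels):
    """Return True if the request plus its confirmations cover at
    least APPROVAL_THRESHOLD distinct channels, with at least one
    confirmation on a channel other than the request's own."""
    all_channels = {request_channel} | set(confirmation_channels)
    independent = set(confirmation_channels) - {request_channel}
    return len(all_channels) >= APPROVAL_THRESHOLD and len(independent) >= 1


# A voicemail request confirmed only by another voicemail fails the
# check, because a deep fake could produce both; the same request
# confirmed over email passes.
print(approve_transfer("voicemail", ["voicemail"]))  # False
print(approve_transfer("voicemail", ["email"]))      # True
```

The point of the check is that a deep fake compromises one channel at a time, so requiring agreement across independent channels raises the cost of a successful scam.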

As convincing deep fakes fall into scammers’ hands, SDC CPAs advises businesses to act with caution and plan for the possibility of losses to deep fakes—especially as we have yet to see the full capacity of deep fake scams.
