
Facebook Researchers Say They Can Detect Deepfakes And Where They Came From

caption: This image made from video of a fake video featuring former President Barack Obama shows elements of facial mapping used for deepfakes that lets anyone make videos of real people appearing to say things they've never said.
AP

Facebook researchers say they've developed artificial intelligence that can identify so-called "deepfakes" and track their origin by using reverse engineering.

Deepfakes are photos and videos altered with artificial intelligence to look like the real thing. They've become increasingly realistic in recent years, making it harder to tell real from fake with the naked eye.


The technological advances in deepfake production have concerned experts, who warn that these fake images can be used by malicious actors to spread misinformation.

Deepfake videos using the likenesses of Tom Cruise, former President Barack Obama, and House Speaker Nancy Pelosi have gone viral, showing how far the technology has developed over time.

"Our method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with," Facebook research scientists Xi Yin and Tal Hassner wrote Wednesday.

The work was done in conjunction with Michigan State University.

Facebook's new software runs deepfake images through its network. The AI program looks for cracks left behind in the manufacturing process that alter an image's digital "fingerprint."

"In digital photography, fingerprints are used to identify the digital camera used to produce an image," the researchers explained. Those fingerprints are also unique patterns "that can equally be used to identify the generative model that the image came from."
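Facebook has not published the model behind this work, but the "fingerprint" idea the researchers describe resembles a standard technique in image forensics: subtract a smoothed copy of an image from the original to isolate a high-frequency noise residual, then average residuals from many images of the same source so the content cancels out and the source's characteristic pattern remains. The sketch below is a minimal illustration of that general approach, not Facebook's actual method; the function names and the simple 3x3 box blur are assumptions chosen for brevity.

```python
import numpy as np

def noise_residual(img):
    """Estimate a high-frequency 'fingerprint' residual by subtracting
    a 3x3 box-blurred copy of the image from the original.
    (Illustrative only -- real forensic pipelines use stronger denoisers.)"""
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    # Average the nine shifted copies of the padded image: a 3x3 box blur.
    blurred = sum(
        padded[dy:dy + h, dx:dx + w]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return img - blurred

def average_fingerprint(images):
    """Average residuals over many images from one source; the image
    content averages toward zero, leaving the source's pattern."""
    return np.mean([noise_residual(im) for im in images], axis=0)
```

A perfectly flat image has no high-frequency detail, so its residual is zero; averaging residuals across a batch of images from one camera (or one generative model) is what yields a usable fingerprint.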

The researchers see this program as having real-world applications. Their work will give others "tools to better investigate incidents of coordinated disinformation using deepfakes, as well as open up new directions for future research." [Copyright 2021 NPR]
