Technologists got us into this mess…
Deep fakes have got the world creeped out. Photo manipulation techniques joined with AI are enabling people to create very convincing, very fake images and videos. The authenticity of audio is also up for grabs, and in the paper “Text-based Editing of Talking-head Video,” researchers from Adobe Research, the Max Planck Institute for Informatics, Stanford University, and Princeton University demonstrate a technique for changing what a speaker says in a video simply by editing the transcript, with no further access to the original speaker required. The authors highlight the dangers of what is clearly a powerful capability for video editors, and they suggest guidelines for its use, such as disclosing that the tool was used and getting permission from the original speaker. But they know those safeguards will be ignored, even by well-meaning but busy editors, so they call on the development community to start building defensive tools:
“Finally, it is important that we as a community continue to develop forensics, fingerprinting and verification techniques (digital and non-digital) to identify manipulated video. Such safeguarding measures would reduce the potential for misuse while allowing creative uses of video editing technologies like ours.”
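The paper doesn’t prescribe a specific safeguard, but as one purely illustrative example of what a lightweight digital “fingerprinting and verification” scheme could look like, consider keying a hash to each frame at capture time so that any later pixel edit breaks verification. The `CAPTURE_KEY` and frame handling below are hypothetical; a real system would also need signed metadata, key management, and tolerance for legitimate transcoding.

```python
# Illustrative sketch only: fingerprint raw frames at capture time with a
# keyed hash, then verify them later. Any pixel change breaks verification.
import hmac
import hashlib

CAPTURE_KEY = b"device-secret"  # hypothetical per-device signing key


def fingerprint(frames):
    """Compute a keyed fingerprint for each raw frame at capture time."""
    return [hmac.new(CAPTURE_KEY, f, hashlib.sha256).hexdigest() for f in frames]


def verify(frames, fingerprints):
    """True only if every frame still matches its original fingerprint."""
    return len(frames) == len(fingerprints) and all(
        hmac.compare_digest(fp, hmac.new(CAPTURE_KEY, f, hashlib.sha256).hexdigest())
        for f, fp in zip(frames, fingerprints)
    )


if __name__ == "__main__":
    original = [b"frame-0-pixels", b"frame-1-pixels"]
    fps = fingerprint(original)
    print(verify(original, fps))                          # True
    print(verify([b"frame-0-pixels", b"edited!!"], fps))  # False: edit detected
```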
Researchers are fighting back to give people the ability to check the validity of photos, videos, and soundtracks; the press has strengthened its restrictions on editing. But, while the scientists work on technological cures, a public awareness campaign is also necessary so that people look at the information right in front of their eyes and learn to question it.
A major catalyst for awareness was the appearance of the “drunk” Nancy Pelosi video: footage of Pelosi that was slowed down, with the pitch altered to disguise the manipulation. The video was shared, liked, and talked about, leaving an indelible impression on the people who were inclined to think badly of Pelosi anyway.
That particular fake is significant because it broke out of the usual hiding places for malicious content and was widely seen and commented on by the press and interested people. As a result, the potential for faked content became obvious to many more people, and the work being done by the technology community is also getting some publicity.
Fixing faces
Adobe has teamed with UC Berkeley to develop a methodology for identifying fake images, and, appropriately enough, they’ve built their approach around Photoshop’s Face Aware Liquify feature, since that’s the tool people are using to subtly change facial expressions. The work was sponsored by the DARPA MediFor (media forensics) program, which has been targeting fake media since 2015.
Researchers Richard Zhang and Oliver Wang from Adobe, along with Sheng-Yu Wang, Dr. Andrew Owens, and Professor Alexei A. Efros from UC Berkeley, set themselves three goals:
- Can you create a tool that can identify manipulated faces more reliably than humans?
- Can that tool decode the specific changes made to the image?
- Can you then undo those changes to see the original?
In a blog post from the Adobe communications team, the company explains why it has taken on the challenge:
“While we are proud of the impact that Photoshop and Adobe’s other creative tools have made on the world, we also recognize the ethical implications of our technology. Trust in what we see is increasingly important in a world where image editing has become ubiquitous—fake content is a serious and increasingly pressing issue. Adobe is firmly committed to finding the most useful and responsible ways to bring new technologies to life—continually exploring using new technologies, such as artificial intelligence (AI), to increase trust and authority in digital media.”
It’s a start: researchers at Adobe and Berkeley wrote a script that uses Face Aware Liquify to alter thousands of faces found on the Internet. They then trained a CNN to identify the changes and to suggest edits that undo their handiwork. In the accompanying figure, the image on the left is the happier, manipulated version of the woman shown on the right; the image second from the right is the machine’s suggested undo.
They were pretty successful, though it helped that they knew exactly how the software worked and what they had done to change the faces. “The idea of a magic universal ‘undo’ button to revert image edits is still far from reality,” says Richard Zhang in Adobe’s blog entry, but, he says, the work has to be done in a world where it’s becoming harder to trust the digital information we consume.
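To make the approach concrete, here is a minimal sketch (not the researchers’ released code) of the kind of model their paper describes: a CNN that both classifies a face crop as original or warped and predicts a per-pixel displacement field that can be used to approximately reverse the warp. The architecture, layer sizes, and the `unwarp` helper are illustrative assumptions.

```python
# Minimal sketch of a warp detector with two heads: a real-vs-manipulated
# score and a dense flow field for undoing the edit. Sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WarpDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional encoder over a 256x256 RGB face crop.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Head 1: global real-vs-manipulated score.
        self.classifier = nn.Linear(128, 1)
        # Head 2: dense 2-channel flow field (dx, dy) at encoder resolution.
        self.flow_head = nn.Conv2d(128, 2, kernel_size=3, padding=1)

    def forward(self, x):
        feats = self.encoder(x)                          # (B, 128, H/8, W/8)
        logit = self.classifier(feats.mean(dim=(2, 3)))  # global average pool
        flow = self.flow_head(feats)                     # predicted warp field
        # Upsample the flow back to input resolution so it can be applied
        # with grid_sample to approximately reverse the edit.
        flow = F.interpolate(flow, size=x.shape[2:], mode="bilinear",
                             align_corners=False)
        return logit, flow


def unwarp(image, flow):
    """Apply the predicted displacement field to approximate the original image."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    base_grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = base_grid + flow.permute(0, 2, 3, 1)  # offset the sampling positions
    return F.grid_sample(image, grid, align_corners=False)


if __name__ == "__main__":
    model = WarpDetector()
    face = torch.rand(1, 3, 256, 256)  # stand-in for a face crop
    logit, flow = model(face)
    restored = unwarp(face, flow)
    print(torch.sigmoid(logit).item(), restored.shape)
```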
Addressing videos
Shruti Agarwal, a computer science graduate student at UC Berkeley, had noticed that Barack Obama is very consistent in the way he talks and moves his head. She realized it might be possible to build a profile of a person’s manner of speech, gestures, and facial movements, and then check a possibly fake video against that identifying profile.
Agarwal and her advisor Hany Farid have been searching for digital forensic tools that can help identify video fakes, and they presented their work at the Computer Vision and Pattern Recognition Conference held in Long Beach earlier this year. They noted that fake videos are becoming an increasingly common tool for spreading false information, and one that can be more convincing than fake news stories.
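As a rough illustration of the idea, here is a minimal sketch of how such a behavioral “fingerprint” might be built, assuming you already have per-frame facial-movement signals for each clip (for example, action-unit intensities and head rotations from a face-tracking toolkit). The correlation-based signature and the one-class SVM below are illustrative choices inspired by this line of work, not Agarwal and Farid’s exact pipeline.

```python
# Illustrative sketch: summarize each clip by how its facial-movement signals
# correlate with one another, then flag clips that don't fit the speaker.
import numpy as np
from sklearn.svm import OneClassSVM


def clip_signature(tracks):
    """tracks: (n_frames, n_signals) array of facial/head-motion signals.
    Returns the upper triangle of the pairwise correlation matrix, which
    captures how this speaker's movements tend to co-occur."""
    corr = np.corrcoef(tracks, rowvar=False)
    iu = np.triu_indices_from(corr, k=1)
    return np.nan_to_num(corr[iu])


def fit_speaker_model(real_clips):
    """Fit a one-class model on signatures from authentic clips of one person."""
    X = np.stack([clip_signature(c) for c in real_clips])
    return OneClassSVM(nu=0.1, gamma="scale").fit(X)


def looks_authentic(model, clip):
    """True if the clip's movement signature is consistent with the speaker."""
    return model.predict(clip_signature(clip).reshape(1, -1))[0] == 1


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in data: 20 authentic clips of 300 frames, 10 tracked signals each,
    # sharing one mixing matrix to mimic a speaker's consistent "style".
    mix = rng.normal(size=(10, 10))
    real = [rng.normal(size=(300, 10)) @ mix for _ in range(20)]
    model = fit_speaker_model(real)
    print(looks_authentic(model, real[0]))
```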
Press associations and major news outlets have already been policing photographs for suspicious content. The Associated Press has a strict set of rules for photos submitted with stories. Here is a summary of the rules:
The content of a photograph must not be altered in Photoshop or by any other means. No element should be digitally added to or subtracted from any photograph. The faces or identities of individuals must not be obscured by Photoshop or any other editing tool. Only retouching or the use of the cloning tool to eliminate dust and scratches is acceptable.
Minor adjustments in Photoshop are acceptable. These include cropping, dodging and burning, conversion into grayscale, and normal toning and color adjustments that should be limited to those minimally necessary for clear and accurate reproduction (analogous to the burning and dodging often used in darkroom processing of images) and that restore the authentic nature of the photograph. Changes in density, contrast, color, and saturation levels that substantially alter the original scene are not acceptable. Backgrounds should not be digitally blurred or eliminated by burning down or by aggressive toning.
We need these rules. Researchers have found that most people are not good at spotting faked photos, even when they are asked to look for them and the fakes aren’t very sophisticated. In a 2018 report, Sophie J. Nightingale of the University of Warwick in Coventry found that most people are pretty bad at identifying faked photos; in her studies, Nightingale used passport pictures to see whether subjects could recognize image morphs, as well as manipulated photos of real-world scenes. And that’s when we have a reason to look. Most of the time we simply accept images and videos as real. It’s not surprising: we sort of believe in Ewoks and Spider-Man.
These studies come as the world heads into the 2020 US elections and just after a particularly feisty EU election period in Europe. More than ever, we’re going to need tools to sort the fake from the real.
What do we think?
It’s interesting to see what has happened in media over the last 20 years. After journalism crashed in the aughts, the election apocalypse of 2016 made people recognize the need for responsible journalism with traceable sources. As the printed word seems to be losing its power, the visual is going to have to conform to controls as well.
Adobe has stepped up to try to provide defenses even as it develops the tools that fool us. In fact, it seems as if Adobe is almost developing on two tracks: one to make great effects and another to detect when those effects have been used.
It’d be nice to hear from Google, Facebook, Amazon, AMD, Nvidia, and Intel about the work they’re doing and funding to combat fraud via photos and videos.
In the long run, it’s up to us. Spotting fake digital content is another form of literacy, a muscle we’re going to have to learn how to locate and how to use. Most of us have learned how to spot misleading news reports and slanted reporting, thanks to generations of manipulation by the press and governments and “enthusiasts” on all sides of any issue. That education has gotten supercharged by social media and the Internet, which is showering us with information.
Now, we’re going to have to learn to spot what’s “off” about images and videos. Luckily we can build on the training we’ve already had.
There’s not much hope for people who willingly sacrifice their skepticism in favor of spreading content that supports their views, or for people who are just intellectually lazy. Snopes has been around for decades, but we all still get forwarded emails sharing some outrage from people who should know better.
The very least we can ask is that those who are enabling the technology are also willing to fight its misuse.