Adobe's New AI Tool Can Identify Photoshopped Faces

The Internet cannot be trusted: between doctored photos and deepfaked videos, there's just no telling what is fact and what is fiction.

In an effort to regulate the digital Wild West it helped usher in 30 years ago, Photoshop maker Adobe has developed a new tool for identifying altered images. Researchers Richard Zhang and Oliver Wang, along with UC Berkeley collaborators Sheng-Yu Wang, Andrew Owens, and Alexei Efros, created a method for detecting edits made with Photoshop's Face Aware Liquify filter. That feature automatically distinguishes facial features, making it easy to adjust eye size, nose height, smile width, and face shape.

Popular with photographers who didn't quite capture the expression they wanted, the feature's delicate effects "made it an intriguing test case for detecting both drastic and subtle alterations to faces," an Adobe blog post said.

"While we are proud of the impact that Photoshop and Adobe's other creative tools have made on the world, we also recognize the ethical implications of our technology," the company wrote. "Trust in what we see is increasingly important in a world where image editing has become ubiquitous," it continued. "Fake content is a serious and increasingly pressing issue."

With that in mind, Adobe partnered with the University of California, Berkeley, as part of a broader effort to better expose image, video, audio, and document manipulations. Using pictures scraped from the Internet, as well as some modified by a human artist, the team trained a convolutional neural network (CNN) to recognize altered images of faces.

"We started by showing image pairs (an original and an alteration) to people who knew that one of the faces was altered," Oliver Wang said in a statement. "For this approach to be useful, it should be able to perform significantly better than the human eye at identifying edited faces."

Spoiler alert: it does. Flesh-and-blood people were able to identify the altered face only 53 percent of the time (slightly better than chance), while the neural network achieved accuracy as high as 99 percent. The tool also pinpointed the specific areas and methods of facial warping, and it was able to revert images to what it estimated was their original state. The results, according to Adobe, impressed "even the researchers."

(Image: Adobe's new tool was nearly twice as good as humans at identifying manipulated images. Via Adobe/UC Berkeley)

"It might sound impossible because there are so many variations of facial geometry possible," UC Berkeley professor Efros said. "But, in this case, because deep learning can look at a combination of low-level image data, such as warping artifacts, as well as higher-level cues such as layout, it seems to work."

This isn't the end of fake news just yet, though. "The idea of a magic universal 'undo' button to revert image edits is still far from reality," Zhang admitted, bursting our collective bubble. "But we live in a world where it's becoming harder to trust the digital information we consume, and I look forward to further exploring this area of research."

"This is an important step in being able to detect certain types of image editing, and the undo capability works surprisingly well," added Gavin Miller, head of Adobe Research. "Beyond technologies like this," he said, "the best defense will be a sophisticated public who know that content can be manipulated, often to delight them, but sometimes to mislead them."
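Adobe and UC Berkeley have not released the network described above, but the basic shape of a CNN-style detector is easy to illustrate. The toy sketch below (plain NumPy, with random untrained weights; every function name and parameter here is hypothetical, not the researchers' model) shows the pipeline such a classifier follows: convolve the image with learned filters, apply a nonlinearity, pool to a feature vector, and squash to a probability that the face was warped.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def detect_warp_probability(image, kernels, weights, bias):
    """Toy CNN: conv -> ReLU -> global average pool -> logistic unit.
    Returns the model's estimated probability that the face was warped."""
    features = []
    for k in kernels:
        fmap = np.maximum(conv2d(image, k), 0.0)  # ReLU nonlinearity
        features.append(fmap.mean())              # global average pooling
    logit = float(np.dot(weights, features) + bias)
    return 1.0 / (1.0 + np.exp(-logit))           # sigmoid -> probability

# Toy usage with random weights (an untrained stand-in for a real model)
rng = np.random.default_rng(0)
image = rng.random((32, 32))                  # stand-in grayscale face crop
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]
weights = rng.standard_normal(4)
p = detect_warp_probability(image, kernels, weights, bias=0.0)
print(f"P(warped) = {p:.3f}")
```

In a real system the kernels and weights would be learned from the labeled original/altered pairs the article describes, and the network would be far deeper so it could pick up both the low-level warping artifacts and the higher-level layout cues Efros mentions.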