By Jaydeep Bhattacharjee
On September 1, 2020, Microsoft launched a new AI-powered tool for spotting deepfakes in videos and still photos. Named Video Authenticator, the tool generates "a percentage chance, or confidence score," which, Microsoft says, indicates whether the media has been artificially manipulated by artificial intelligence.
The term "deepfake" refers to images, videos, and audio that have been modified using AI tools. In simpler terms, deepfakes are media altered by AI programming in deceptive ways, making someone appear to have said something they never said or to have been somewhere they never were. Such morphed images and videos are often used to defame public figures or incite crowds. The technology used to create deepfakes is now far cheaper and more accessible than before, and some of its output is convincing enough to be hard to recognize as a "fake" at all.
According to Microsoft, Video Authenticator analyzes a photo or video and provides a percentage chance, or confidence score, that the media has been artificially altered. For a video, this score is provided in real time on each frame as the video plays. The tool detects the blending boundary of a deepfake along with subtle fading or greyscale elements that may not be detectable by the human eye.
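To make the per-frame idea concrete, here is a minimal sketch of what frame-by-frame scoring looks like in principle. This is not Microsoft's code or API: `score_frame` is a hypothetical stand-in for a trained detection model, and the toy pixel-variance heuristic exists only so the example runs.

```python
# Hypothetical sketch of per-frame deepfake scoring, in the spirit of
# Video Authenticator's per-frame output. `score_frame` is a stand-in
# for a real detection model; all names here are illustrative.

def score_frame(frame_pixels):
    """Placeholder detector: returns a manipulation confidence in [0, 1].

    A real model would inspect blending boundaries and subtle
    greyscale/fading artifacts; here a toy heuristic on pixel
    variance keeps the sketch self-contained and runnable.
    """
    mean = sum(frame_pixels) / len(frame_pixels)
    variance = sum((p - mean) ** 2 for p in frame_pixels) / len(frame_pixels)
    # Toy mapping: unnaturally low variance -> higher suspicion.
    return max(0.0, min(1.0, 1.0 - variance / 10_000))

def score_video(frames):
    """Yield a (frame_index, percent_confidence) pair for each frame."""
    for i, frame in enumerate(frames):
        yield i, round(score_frame(frame) * 100, 1)

# Example: two synthetic "frames" represented as flat pixel lists.
frames = [[10, 12, 11, 13] * 25, [0, 255, 0, 255] * 25]
for idx, pct in score_video(frames):
    print(f"frame {idx}: {pct}% chance of manipulation")
```

The point of the structure, rather than the toy heuristic, is that the detector emits one score per frame while the video plays, which is what lets a viewer see exactly where in a clip the manipulation evidence appears.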
Microsoft introduced Video Authenticator under its Defending Democracy Program, which aims to counter the foreign influence operations and fake-media campaigns that have spread around the world over the last several years. According to Microsoft, no single technology can determine what is true in a video altered to near perfection, but Video Authenticator will help counter disinformation by detecting evidence that AI was involved in a video's creation or manipulation. The company describes the tool as part of an evolving technology, yet it expects it to be useful for events such as the upcoming United States presidential election.
Microsoft says its Video Authenticator tool was created using the public FaceForensics++ dataset and tested on the Deepfake Detection Challenge dataset. The company is partnering with the San Francisco-based AI Foundation to make the tool available to organizations involved in this year's election process, including news outlets and political campaigns. Credit for the tool goes to Microsoft Research's R&D teams, its AI group, and an internal advisory body.
This year, Facebook brought out a deepfake detector that, according to industry experts, produced remarkable results. Adobe is also preparing an authenticity system in which the original creator applies tags to a photo or image, and a companion tool checks those tags to tell viewers whether they are looking at the original or a manipulated version.
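The tag-then-verify idea behind such an authenticity system can be sketched in a few lines. This is a simplified illustration, not Adobe's actual design: a real system would cryptographically sign the tag so it cannot be forged, whereas this sketch only binds the image bytes to a hash so a checker can detect that the pixels changed.

```python
# Minimal sketch (not Adobe's design) of the tag-then-verify idea:
# the creator attaches a hash-based tag to an image, and a checker
# later recomputes the hash to see whether the contents still match.
import hashlib

def make_tag(image_bytes: bytes, creator: str) -> dict:
    """Creator side: bind the image contents to an authenticity tag."""
    return {
        "creator": creator,
        "digest": hashlib.sha256(image_bytes).hexdigest(),
    }

def check_tag(image_bytes: bytes, tag: dict) -> bool:
    """Checker side: True if the image still matches the original tag."""
    return hashlib.sha256(image_bytes).hexdigest() == tag["digest"]

# Illustrative byte strings standing in for real image files.
original = b"\x89PNG...original pixel data"
tag = make_tag(original, creator="photographer@example.com")
tampered = b"\x89PNG...edited pixel data"

print(check_tag(original, tag))   # True: untouched image
print(check_tag(tampered, tag))   # False: manipulated copy
```

The design choice worth noting is that detection here is cooperative: unlike a deepfake classifier, which inspects the pixels themselves, a tagging scheme only works when the original creator opted in at publication time.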