Microsoft has introduced software that can help identify deepfake photographs or videos, adding to the list of programs designed to battle the hard-to-detect images ahead of the US presidential election.
The new software, called Video Authenticator, analyses a photo or a frame of video, looking for signs of manipulation that may be invisible to the naked eye.
Deepfakes are images, videos, or audio clips that have been manipulated with artificial intelligence to appear real, and they have already been targeted by crackdowns on Facebook and Twitter.
“They may appear to make people say things they didn’t, or to be in places they weren’t,” a company blog post said on Tuesday.
Microsoft announced that it has partnered with the San Francisco-based AI Foundation to make the video authentication tool available to political campaigns, news outlets, and others involved in the project.
Deepfakes are part of a broader wave of online misinformation, which experts have warned can spread false or misleading messages.
Fake messages and posts that appear genuine are a particular worry ahead of November’s US presidential election, especially after false social media posts proliferated during the 2016 vote that brought Donald Trump to power.
Microsoft also said in a blog post that it has built technology into its Azure cloud platform that allows video or photo creators to add background data that can later be used to verify whether the images have been altered.
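Microsoft has not published the details of how this background data works, but the general idea behind such provenance checks is straightforward: record a cryptographic fingerprint of the media when it is published, then re-compute it later and compare. The sketch below is a simplified illustration of that principle in Python, not Microsoft's actual implementation; the function names are hypothetical.

```python
import hashlib


def fingerprint(media: bytes) -> str:
    """Compute a SHA-256 digest of the media bytes at publish time."""
    return hashlib.sha256(media).hexdigest()


def is_unaltered(media: bytes, recorded_digest: str) -> bool:
    """Re-hash the media and compare against the digest the creator recorded."""
    return fingerprint(media) == recorded_digest


# Illustrative usage: any change to the bytes breaks the match.
original = b"example video bytes"
digest = fingerprint(original)
print(is_unaltered(original, digest))         # True: media is untouched
print(is_unaltered(original + b"x", digest))  # False: media was modified
```

A real system would also need to bind the digest to the creator's identity (for example with a digital signature) so that an attacker cannot simply replace both the media and the recorded fingerprint.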
The technology giant said it plans to test the software with media companies such as The New York Times and the BBC.
“Practical media awareness will allow all of us to think critically about the media context and become more active citizens while still enjoying satire and parody,” said the Microsoft post.