
Microsoft announces deepfake detection tool

Tool gives media a percentage chance of having been artificially created.

Microsoft has announced a new tool that it claims can detect deepfake manipulation in images and video, as the company seeks to tackle disinformation online.

Deepfakes, or synthetic media, are photos, videos, or audio files manipulated by artificial intelligence (AI). And they’re becoming increasingly hard to detect. 

Used maliciously, deepfake technology can make people appear to say things they didn't or appear in places they weren't, posing an emerging threat to public figures such as politicians, and to businesses when it falls into the hands of sophisticated phishing scammers.

The tech giant's new software assigns media a percentage chance, or confidence score, indicating how likely it is that the material has been artificially created. Microsoft hopes the solution can help to combat disinformation on the web "in the short run," especially with the US election coming up in November.

Developed alongside Microsoft’s responsible AI team and AI ethics advisory board, the tool works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye. 
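Microsoft has not published implementation details, but the general idea of boundary-artifact detection can be illustrated with a toy sketch: a region pasted onto a face is typically blended at its edges, leaving a band of pixels that is unusually smooth compared with the surrounding image. The snippet below (everything here, from the function name to the threshold image, is a hypothetical illustration, not Microsoft's method) scores a candidate seam by comparing gradient energy inside a narrow band against the rest of the image.

```python
import numpy as np

def seam_smoothness_score(img, seam_col, band=2):
    """Toy blending-artifact score: ratio of horizontal-gradient energy
    inside a vertical band around `seam_col` to gradient energy elsewhere.
    A ratio well below 1.0 suggests the band has been smoothed/blended.
    (Hypothetical illustration only, not Microsoft's actual detector.)"""
    gx = np.abs(np.diff(img.astype(float), axis=1))   # horizontal gradients
    band_cols = slice(max(seam_col - band, 0), seam_col + band)
    inside = gx[:, band_cols].mean()                  # energy near the seam
    outside = np.delete(gx, np.r_[band_cols], axis=1).mean()
    return inside / (outside + 1e-9)

# Synthetic example: a noisy image with an artificially smoothed seam.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
img[:, 30:34] = img[:, 30:34].mean()   # fake "blended" band around column 32
score = seam_smoothness_score(img, seam_col=32)
print(f"smoothness ratio: {score:.3f}")
```

A real detector works on learned features rather than raw gradients, but the principle is the same: manipulated regions leave statistical traces at their boundaries that a model can score, even when the human eye cannot see them.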

Microsoft Deepfake Detection Tool

But the firm also notes that with deepfakes being generated by AI that continues to learn, it is “inevitable” that they will begin to beat conventional detection technology, as Microsoft stated:

We expect that methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods.

Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.

A panel of experts recently ranked deepfakes as the most dangerous threat posed by AI technology, in a report published by University College London (UCL).

As deepfake technology continues to advance, the specialists said, fake content will become harder to identify and stop, and could help bad actors achieve a variety of aims, from discrediting a public figure to extracting funds by impersonating a couple's son or daughter in a video call.

Such uses could ultimately undermine trust in audio and visual evidence, the report's authors said, which could cause great societal harm.
