AWS has thrown its weight behind the upcoming Deepfake Detection Challenge, a competition with a $10m prize pool that Microsoft and Facebook announced in September.
Facebook is helping create a large dataset of deepfake videos for the competition, which aims to unearth novel techniques to detect video that’s been manipulated with AI and to develop systems to benchmark detection technologies.
Cloud giant AWS on Monday announced it will offer up to $1m in AWS credits over the next two years towards the challenge, which launches this December and runs through to the end of March 2020. The initiative is backed by Facebook, Microsoft, the Partnership on AI, and several universities, including MIT.
The initiative comes amid heightened concerns that deepfakes and, more generally, misinformation on social networks could be used to manipulate public opinion ahead of the 2020 US presidential election, scheduled for November.
Facebook, for example, this week launched Facebook Protect to help candidates, elected officials and public-sector workers enable two-factor authentication for publishing Facebook Page information.
It has also recently suspended thousands of apps to avoid a repeat of the Cambridge Analytica scandal, which may have influenced the outcome of the 2016 US presidential election.
Participants in the Deepfake Detection Challenge will be given access to a dataset to train their models and will then need to submit code to a ‘black-box’ environment for testing. To do that, researchers will need access to the compute power, storage and machine-learning tools that AWS provides.
Amazon says it will host the full competition dataset when it becomes available later this year. It’s also providing Amazon machine-learning experts to help teams build their detection models.
“We want to ensure access to this data for a diverse set of participants with varied perspectives to help develop the best possible solutions to combat the growing problem of deepfakes,” AWS said.
Deepfakes are seen as dangerous in part because they require little skill to produce, thanks to smartphone apps that can generate fake video convincing enough to fool humans. In the short term, deepfakes could be used to sway public opinion; over the long term, they could erode trust in all information.
The project itself comes with certain risks. As the organizers note in an FAQ, malicious actors could attempt to exploit the dataset Facebook is creating, as well as the code that participants submit.
According to AWS, the Deepfake Detection Challenge steering committee will share the first 5,000 videos of the dataset with researchers. There will be a “targeted” technical working session at the International Conference on Computer Vision (ICCV) in Seoul beginning on October 27, 2019.
After conducting due diligence, the committee will release the full dataset and launch the Deepfake Detection Challenge in December at NeurIPS, the Conference on Neural Information Processing Systems and one of the world's leading AI conferences.
Google hasn’t backed the Deepfake Detection Challenge, but it and Alphabet’s Jigsaw have contributed a dataset of 3,000 manipulated videos featuring 28 actors to FaceForensics, which aims to become an automated benchmark for facial-manipulation detection. FaceForensics forms part of the competition’s initial 5,000-video dataset.