Tuesday, September 24, 2019

Deepfake Tech Can Now Anonymize Your Face to Protect Privacy

Deepfake videos have demonstrated their applications in entertainment, both acceptably and controversially, but these generative adversarial networks (GANs) still have a long way to go before they offer convincing results. This has led to a lack of practical applications and plenty of paranoia, but we're beginning to see efforts to employ deepfake technology in ways that can help people protect themselves. A recent paper published at the International Symposium on Visual Computing demonstrates how deepfakes could help protect the right to privacy before they become a tool used to cause harm.

The paper uses face-swapping to anonymize the speaker's appearance. The authors were not the first to consider this application, but earlier work simply transplanted expressions onto the face of a consenting stand-in. This new method instead replaces a person's face with one generated from scratch, drawing on a data set of 1.5 million face images. In theory, the new face won't match any real face.
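The idea can be sketched in a few lines: detect a face region, then overwrite it with a synthetic face. This is a minimal illustration only; the paper's actual generator is a conditional GAN, which the placeholder `generate_face` function below stands in for with random pixels, and the bounding box is assumed to come from a separate face detector.

```python
import numpy as np

def generate_face(height, width, rng):
    """Placeholder for the GAN generator.

    The real method generates a plausible face conditioned on pose and
    background; random noise merely marks where that output would go.
    """
    return rng.integers(0, 256, size=(height, width, 3), dtype=np.uint8)

def anonymize(image, bbox, rng):
    """Replace the detected face region (x, y, w, h) with a generated one."""
    x, y, w, h = bbox
    out = image.copy()  # leave the source frame untouched
    out[y:y + h, x:x + w] = generate_face(h, w, rng)
    return out

rng = np.random.default_rng(0)
frame = np.full((480, 640, 3), 128, dtype=np.uint8)  # stand-in video frame
anon = anonymize(frame, bbox=(200, 100, 96, 96), rng=rng)
```

Everything outside the bounding box is preserved, which is why the result can keep the scene and body language intact while the identity changes.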

Image credit: DeepPrivacy

While the GAN produces suitable results for photos, it still struggles with replacing faces in video. This is likely because the network has to generate a “new” face for each frame. Maintaining consistency for a non-existent face isn’t an easy task in theory or in practice.

Image credit: DeepPrivacy

For the purposes of anonymizing a subject in a video, however, a glitchy look doesn't matter much. After all, the purpose of this GAN isn't to fool anyone but rather to obscure a person's face without losing their expression. Blocking out a person's face with a box (as seen on the left side of the GIF above) hides their identity, but it also hides almost everything about what they're trying to communicate.
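The box approach from the GIF is trivial to reproduce, which also makes its drawback obvious: every pixel of expression inside the region is destroyed, not just the identity. A minimal sketch (numpy only, bounding box assumed given):

```python
import numpy as np

def black_box(image, bbox):
    """Crude anonymization: overwrite the face region (x, y, w, h) with black.

    Identity is hidden, but so is every facial expression, which is
    exactly the information a GAN-based swap tries to preserve.
    """
    x, y, w, h = bbox
    out = image.copy()
    out[y:y + h, x:x + w] = 0
    return out

img = np.full((64, 64, 3), 200, dtype=np.uint8)  # stand-in face crop
hidden = black_box(img, bbox=(16, 16, 24, 24))
```

The GAN-based method fills that same region with a plausible face instead of zeros, so a viewer can still read emotion even though the person is unrecognizable.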

In circumstances where anonymity is vital but expression matters, such as disguising sources in news reports or documentary films whose identities would otherwise put them at risk, this method could be employed today. Its only notable issues are glitches that occur in poor lighting or when the subject moves significantly. Further work will likely resolve these problems in the coming years, and the method's expected uses rarely run into them anyway: interview subjects typically sit still, and lighting conditions are controllable more often than not. Besides, when it comes to correcting poor lighting, there's already an AI for that as well.

We can already forge voices precisely enough to impersonate other people, to the tune of $243,000 in one theft, so anonymizing voices poses no additional hurdle. Altering voices has never required artificial intelligence, and more thorough processes for vocal anonymity already exist. Now we have a good start on video as well. If you want to try it for yourself, the source code is available on GitHub.


Source: ExtremeTech (https://ift.tt/2mKO7my)
