Deepfakes are fake audio and video clips, created with artificial intelligence (the name comes from deep learning), designed to deceive those who listen to or watch them. They are highly convincing, since they contain words actually spoken by a person, but not in the context presented in the video or audio.

They generally involve well-known public figures, but the technique can also be applied to our own video or audio fragments, for example to create fake voice messages.

At the moment most deepfakes are still easy to identify, but deep learning is an incremental technology, and it is rapidly refining its ability to convince.

Moreover, basic deepfakes are already within the reach of ordinary people, who can create them without significant investment or particular technical knowledge.

A public list of deepfake creation tools gives an idea of how accessible the technology has become.

As Gastone Nencini, Country Manager of Trend Micro Italia, explains, to generate a deepfake video the technology learns to encode and decode two faces separately, for example that of a famous person who speaks and that of a second person.

The technology learns how to break down and rebuild the first face and merge it with the second. In this way the second face appears to reproduce the facial expressions of the original person.

The same technology can be used to overlay another face onto the person targeted by the deepfake.
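The encode-and-decode scheme described above can be sketched in a few lines. This is a deliberately simplified illustration, not a real deepfake pipeline: the weights are random rather than trained, and the "faces" are plain vectors. The idea is that a single shared encoder learns pose and expression, while each identity gets its own decoder; decoding face A's latent code with B's decoder is what produces the swap.

```python
import numpy as np

# Illustrative sketch (assumed, simplified): a shared encoder compresses
# any face into a latent code; one decoder per identity reconstructs a
# face from that code. Swapping decoders transfers identity B onto the
# expressions captured from face A.
rng = np.random.default_rng(0)

DIM, LATENT = 64, 8                                  # toy face and latent sizes
W_enc = rng.standard_normal((LATENT, DIM)) * 0.1     # shared encoder weights
W_dec_a = rng.standard_normal((DIM, LATENT)) * 0.1   # decoder for identity A
W_dec_b = rng.standard_normal((DIM, LATENT)) * 0.1   # decoder for identity B

def encode(face):
    # Shared across identities: captures pose/expression, not identity.
    return W_enc @ face

def decode(code, W_dec):
    # Identity-specific: renders the latent code as a particular face.
    return W_dec @ code

face_a = rng.standard_normal(DIM)      # a frame of person A speaking

latent = encode(face_a)                # A's expression, identity-stripped
swapped = decode(latent, W_dec_b)      # rendered with B's decoder: the swap
```

In a real system the encoder and both decoders are deep neural networks trained on many frames of each person, but the structure is the same: one shared bottleneck, one decoder per face.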

Ethical tricks, and not

The same technology, Nencini observes, can also be used positively, as in film production, where deepfake techniques can replace the shooting of a scene.

Another exemplary way of creating a deepfake (ethical, openly declared, made for entertainment and with a strong craft component) is the one adopted by the artist Fabio Celenza, who offers satirical videos on the television program Propaganda Live.

But the same technique is also used negatively, to create adult content that exploits celebrity faces without their consent.

Given these uses, there is serious concern that the technology could routinely be used to sway elections, manipulate financial markets, ruin people's reputations and, more generally, support criminal activity.

Facebook, YouTube and Twitter have tried to ban the distribution of this content. One possible solution would be to require videos to be watermarked and digitally signed, which could help validate the content's creator.
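The signing idea can be illustrated with a minimal sketch. The key name and scheme below are assumptions for illustration, not an existing platform API: a creator signs the video bytes with a secret key, and anyone holding the key can later check that the content has not been altered.

```python
import hashlib
import hmac

# Hypothetical creator key, for illustration only; a real scheme would
# use public-key signatures so viewers need not hold a secret.
CREATOR_KEY = b"example-secret-key"

def sign(video_bytes: bytes) -> str:
    # HMAC-SHA256 tag over the raw file bytes.
    return hmac.new(CREATOR_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify(video_bytes: bytes, signature: str) -> bool:
    # Constant-time comparison against a freshly computed tag.
    return hmac.compare_digest(sign(video_bytes), signature)

original = b"\x00\x01video-frames"
tag = sign(original)

verify(original, tag)                # True: content untouched
verify(original + b"tamper", tag)    # False: content was modified
```

Any edit to the file, even a single byte, invalidates the signature, which is what makes signing useful for validating a video's origin after distribution.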

Watch, watch, watch

The hope, then, is to arrive at a technically reliable way of flagging deepfakes before they are published.

But until reliable solutions are available, the only defense is vigilance.

Nencini explains that when you come across such content, you can adopt a three-step response: stop, question and report.

Stop means not sharing or commenting on videos that seem in any way suspicious. Question means asking where the video comes from, whether the person really is the figure who appears in it, and why that person or company is sharing it online. Finally, report suspicious content to the site or app on which it was displayed.

The Internet is a common space and it is not governed by itself: everyone’s contribution is needed.
