Deepfakes and Disinformation: How AI Influences the Spread of False Information

12.6.2023

The digital age has brought many innovations and changes. One of the most significant recent developments is the widespread interest in artificial intelligence (AI), which has exploded following the boom around ChatGPT over the past six months. The technology promises benefits in many areas, from medicine to transportation. However, AI also poses serious risks, especially in connection with the spread of disinformation and the creation of so-called deepfake videos.

What are deepfake videos? They are videos in which a person's face or voice is artificially replaced with someone else's. Thanks to advanced algorithms and machine learning, it is now possible to create realistic videos that are difficult to identify as fakes. In the past, deepfake technology was used primarily in entertainment, but recently its use has expanded into disinformation and manipulation. Even Sam Altman, the head of OpenAI, which developed ChatGPT, witnessed this firsthand at his hearing in the U.S. Senate, when one of the senators opened the proceedings by playing an AI-generated recording of his own voice.
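
To illustrate the mechanism: classic face-swap deepfakes train two autoencoders that share a single encoder, with one decoder per identity. Because both identities map into the same latent space, decoding person A's face through person B's decoder produces the swap. The PyTorch sketch below is a simplified illustration of that architecture; the layer sizes and names are assumptions chosen for readability, not taken from any particular tool.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Shared encoder: compresses a 64x64 RGB face crop into a latent vector.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # One decoder per identity: reconstructs a face from the shared latent space.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training (omitted): reconstruct faces of A with decoder_a and faces of B
# with decoder_b, so both identities share one latent representation.

# The swap itself: encode a face of person A, decode with person B's decoder.
face_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
swapped = decoder_b(encoder(face_a))  # B's appearance, A's pose and expression
```

The shared latent space is what lets the network carry over pose and expression while replacing the identity, which is why the results can look so convincing.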

Disinformation is an old problem that AI takes to a new level. Thanks to the ability to generate convincing deepfake videos, disinformation campaigns can gain credibility and reach a larger audience than ever before. Politicians, business leaders, and other public figures can be targeted by attacks in which their faces are transferred onto another body or their statements are directly imitated. Such forgeries can have serious impacts on political processes, societal stability, and trust in the public sphere.

Deepfakes can also affect corporate affairs. Imitating an executive's voice and appearance can be exploited for financial fraud, such as tricking employees into authorizing payments. Cybersecurity is therefore becoming increasingly important.

Another problem is the speed at which disinformation spreads thanks to AI. With the help of sophisticated algorithms, fake news, disinformation, and conspiracy theories can spread across social networks and other platforms almost instantly. It was once possible to question a false story and stop its spread, but AI now allows disinformation to be created and disseminated on such a scale that it is difficult to establish the truth and prevent lies from circulating.

We have to deal with these new challenges. The first step is recognizing deepfake videos and disinformation. Tools already exist that help detect deepfake videos, and organizations fighting disinformation are developing techniques for detecting and flagging false information. Leading programs and companies dealing with this issue include DeepTrace Technologies, Sensity, Truepic, Amber Authenticate, and the AI Foundation.
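
Tools in this space take two broad approaches: classifiers that spot artifacts in manipulated footage, and provenance systems, the approach associated with Truepic and Amber Authenticate, that fingerprint content at capture time so later tampering becomes detectable. The Python sketch below illustrates the provenance idea in its simplest form; the file names and JSON manifest format are invented for this example and do not reflect any vendor's actual product.

```python
import hashlib
import json
import cv2  # OpenCV, used here to read video frames

def frame_fingerprints(video_path: str) -> list[str]:
    """Compute a SHA-256 hash of every frame in a video file."""
    capture = cv2.VideoCapture(video_path)
    hashes = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        hashes.append(hashlib.sha256(frame.tobytes()).hexdigest())
    capture.release()
    return hashes

# At capture time: store the fingerprints in a manifest (real systems
# sign this and anchor it in a tamper-evident log or blockchain).
manifest = {"video": "speech.mp4", "frames": frame_fingerprints("speech.mp4")}
with open("speech.manifest.json", "w") as f:
    json.dump(manifest, f)

# At verification time: recompute and compare. Any edited, inserted,
# or dropped frame changes the hash sequence and exposes manipulation.
def verify(video_path: str, manifest_path: str) -> bool:
    with open(manifest_path) as f:
        recorded = json.load(f)["frames"]
    return frame_fingerprints(video_path) == recorded

print("authentic" if verify("speech.mp4", "speech.manifest.json") else "tampered")
```

Note that even benign re-encoding changes every hash, which is why real provenance systems sign fingerprints at the moment of capture and track legitimate edits through signed manifests rather than relying on raw frame bytes alone.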

Another important aspect is public education. It is important for people to be familiar with how AI works and what its possibilities and risks are. Informed users are less prone to spreading disinformation and are better at recognizing fake news. Educational institutions and media organizations should play a key role in raising awareness and knowledge about AI and disinformation.

Regulation is another important aspect of the fight against disinformation. The European Union is finalizing a new AI Act, which emphasizes the protection of citizens and the reduction of risks associated with the use of AI. The Act establishes rules for the use of AI, including in the area of disinformation and deepfake technologies. One of its key points is the regulation of deepfake videos that can damage the reputation of individuals or disrupt democratic processes. Deepfakes are addressed in Article 52 of the draft AI Act, which imposes transparency obligations on AI-generated images, audio, and video that depict existing persons, objects, or places.

The AI Act also requires providers of AI systems and platforms to implement measures for detecting and labeling deepfake videos and disinformation. These measures are intended to protect users and ensure transparency in the use of AI.
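
In practice, a labeling measure can take the form of a machine-readable disclosure embedded in the content itself. As a minimal sketch, assuming the output is a PNG image: the Pillow library can write a metadata text chunk marking the file as AI-generated, which a platform could read back and surface to viewers. The key names here are invented for illustration, not a format prescribed by the Act.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Embed a disclosure label when saving AI-generated output.
image = Image.new("RGB", (512, 512))             # stand-in for a generated image
label = PngInfo()
label.add_text("ai_generated", "true")           # key names are assumptions
label.add_text("generator", "example-model-v1")  # made up for this sketch
image.save("output.png", pnginfo=label)

# Platform-side check: read the text chunk back and flag the content.
loaded = Image.open("output.png")
if loaded.text.get("ai_generated") == "true":
    print("Show an 'AI-generated' label to viewers.")
```

A plain metadata tag like this is trivial to strip, so provenance initiatives pair such labels with cryptographic signatures; the Act sets the disclosure obligation and leaves the technical format open.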

In the Czech legal system, artificial intelligence is governed by general legal regulations such as the Civil Code, the Criminal Code, and the Copyright Act. These laws contain provisions that can apply to aspects of AI, such as intellectual property rights or liability for damage.

For example, in the area of personal data protection, the GDPR applies, setting rules for the processing of personal data, including data obtained through artificial intelligence. Another important law is the Cybersecurity Act, which lays down the protective measures and requirements needed to secure information systems and cyberspace.

The danger of artificial intelligence and its influence on the creation and spread of disinformation in society is a reality we must take seriously. It is up to us to deal with this problem and create an environment in which truth is protected and disinformation is exposed. Education, cooperation, responsibility, and regulatory measures such as the European Union's AI Act are key elements in fighting disinformation and maintaining trust in an information environment shaped by AI.

https://artificialintelligenceact.eu/the-act/

https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence

Petr Prucek collaborated on the article.
