AI in the legal spotlight
Artificial intelligence: legal control or voluntary restraints?
In the context of the transformation of the current world order and the formation of new centres of power, it is AI technologies that can give actors an advantage in most areas: science and communications, the information space and the arts, industry and transport, medicine and agriculture, finance and military affairs, space exploration and materials science. It is no coincidence that developing rules for AI has become a priority for the global community, and it is important that these rules be developed jointly. What is the global practice in this area?

The President of Belarus,
Aleksandr Lukashenko,
“With the ability to self-learn, this tool [artificial intelligence] can destroy humanity if it is let out of control… On the one hand, modern technologies create thousands of new opportunities and prospects. On the other hand, they generate many risks and threats — fake news, disinformation, attacks on critical infrastructure.”
At the summit of the Collective Security Treaty Organisation, on November 28th, 2024
Risk levels
The EU Artificial Intelligence Act, proposed by the European Commission on April 21st, 2021 and approved in 2024, aims to establish a common legal and regulatory framework for the use of AI. According to its developers, the purpose of the 48-page document is ‘to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence’. It also sets out a classification of AI-enhanced applications and rules for regulating them, depending on the risk of harm to the user. Three categories have been identified: prohibited systems (with an unacceptable risk level), high-risk systems, and other AI systems.
Prohibited systems include those that use AI for subliminal (subconscious) manipulation or exploitation of people’s vulnerabilities, which may lead to physical or psychological harm.
IT products ‘representing a significant threat to health, safety or fundamental human rights’ are proposed to be classified as high-risk under the EU AI Act.
Systems that fall into neither of the first two categories are not subject to specific regulation, and the principle of maximum harmonisation largely prevents member states from regulating them further.
Italian approaches
Among the world’s nations, Italy was the first to enshrine AI governance principles in law, adopting a comprehensive act in this area on September 18th this year. “Artificial intelligence is the greatest revolution of our time,” declared Prime Minister Giorgia Meloni, adding that technologies can achieve their potential only within a framework of ethical rules centred on people and their rights and needs. The law was developed and debated by Italian parliamentarians over the course of a year and was passed by a majority vote.
As The Guardian notes, the law consists of six chapters, comprising 28 articles. It establishes general principles for the research, testing, development and application of AI systems and models, delegating to the government the power to adopt legislative decrees to bring Italian legislation into line with the EU AI Act and to regulate the use of data, algorithms and mathematical methods for training systems.
Italy has set up a co-ordination committee in charge of developing relevant policies for all types of structures linked to the field of digital innovation and artificial intelligence.
The law gives considerable attention to the protection of copyright. This concerns, in particular, works created with the help of AI, provided that they are the result of the author’s intellectual effort. The use of works or materials available online using AI tools is permitted only if they are not protected by copyright or are intended for scientific research and the protection of cultural heritage.
Foundation for development
Serious work on the legislative regulation of AI is also underway in the post-Soviet space. It is worth noting that by Resolution No. 58–8 of April 18th this year, the CIS Interparliamentary Assembly adopted a model law On AI Technologies. This is, in essence, a recommended foundation for the development of national legislation in the participating countries. The document, consisting of eight chapters uniting 38 articles, is designed to regulate ‘social relations concerning all stages of AI life cycle, including research, design, development, evaluation and verification, operation and maintenance, monitoring and control, disposal and others provided for by national legislation’.
It also enshrines the principles of regulating relations in the field of AI technologies, including the priority of human rights and freedoms, technical reliability and safety, transparency and control over the functioning of AI, as well as protection of personal data.
It is proposed that the level of responsibility for offences in the field of AI be determined and implemented in accordance with the national legislations of the participating countries.
Global nature
In the context of the formation of new centres of global power, AI technologies are becoming an important geopolitical factor. Created by human intelligence, they can become both a powerful means of development and a source of global problems. Today, the moral and ethical maturity of society, and its ability to take responsibility and to resolve both philosophical and purely practical questions, are particularly relevant. The problem is complicated by the fact that it is global in nature, and its resolution contradicts the goals of Western elites to ‘robotise the consciousness’ of the individual and to achieve total control over society.
We, however, need to direct efforts to ensure that AI remains a reliable means of creation and development, and does not become an instrument of destruction and annihilation of civilisation.
WHAT IS BANNED IN THE EU?
In the EU, the activities of AI system providers and of organisations using AI are regulated. The use of AI for monitoring emotions in workplaces and schools is prohibited, while its use for sorting job applications and in generative AI tools such as ChatGPT is restricted.
MILITARY EXCEPTIONS
IN-DEPTH ELABORATION
By Nikolai Buzin, Chairman of the Standing Commission on Human Rights, National Relations and Mass Media of the House of Representatives, Doctor of Military Sciences, Professor