Posted: 03.05.2024 11:18:00

Mind games

Artificial intelligence: scientific breakthrough, challenges and threats

The UN General Assembly recently adopted a resolution on artificial intelligence (AI). The document is essentially aimed at ensuring that all automated solutions and related processes remain subject to human oversight. This is neither the first response to the risks posed by the rapid development of the technology, nor will it be the last.

Moving forward

The first patient to receive a brain implant from Neuralink, Elon Musk’s neurotechnology company, has demonstrated his progress by showing how he plays chess. Noland Arbaugh, 29, was paralysed in a diving accident about eight years ago and has no sensation below his shoulders. Now he can use his mind to move the cursor across the chessboard on his laptop screen and rearrange the pieces. Recently, the American billionaire announced another Neuralink product, the Blindsight implant, intended to significantly improve the quality of life of blind people. The implant is already being tested on monkeys, and, according to scientists, it should be able to restore vision and may in the long run even surpass normal human eyesight.
AI readily solves complex tasks, controls all manner of processes, and generates economic and statistical forecasts. Experts predict that by 2050, robots will perform more surgeries worldwide than human surgeons. All this seems to open up amazing prospects. Yet some aspects of AI's rapid development could threaten humanity: if not with complete destruction, as science fiction and Hollywood depict, then at least with major problems.

How it all started

The speed with which AI is entering various spheres of our lives is astonishing. It makes medical diagnoses and argues cases in court, creates paintings and music, identifies faces, and predicts the weather and business prospects. And this is just the beginning. According to forecasts, AI will soon reach into almost every sphere of human life. If we let it, of course.
Back in 1950, English mathematician Alan Turing (the one who broke the code of the German Enigma cipher machine during World War II) suggested that a computer might someday be able to think. In his paper ‘Computing Machinery and Intelligence’, he laid out the theoretical and philosophical groundwork for machine thinking and proposed the imitation game, now known as the Turing test.
Turing was prevented from taking his research further by a lack of computing power and money. Computers at the time were extremely expensive and bulky machines that could execute certain commands but could not store them. Only large technology companies and a handful of prestigious universities could afford such a toy: the monthly cost of renting a computer reached $200,000, while studying at Harvard University then cost a little more than $1,000 per year.
One of AI's founding fathers, Marvin Lee Minsky, gave the first reasonably clear definition of the field: “the science of making machines do things that would require intelligence if done by men.” The term ‘artificial intelligence’ itself was coined by John McCarthy and took hold after the 1956 academic conference at Dartmouth College in Hanover, New Hampshire, the USA.
From 1957 to 1974, computers became cheaper and smarter. In 1970, Minsky told Life Magazine, “In from three to eight years, we will have a machine with the general intelligence of an average human being.” Beyond raw intelligence, however, AI still had to master abstract thinking and self-awareness.
In the 1980s, computers began to slowly learn based on accumulated experience — a system for implementing logic programming was developed and improved.
The 1990s and 2000s were marked by AI's victory over reigning World Chess Champion Garry Kasparov: IBM's Deep Blue defeated him in 1997. During the same period, speech recognition software was introduced in Windows. Furthermore, AI learnt to decode human emotions: the Kismet robot, created at MIT in the late 1990s, could already recognise emotions from human body language and tone of voice, and even simulate them.
We now live in an era of big data too vast for humans to work with, yet AI handles it superbly. Over the past 35 years, computer performance has increased a billionfold, meaning that a task whose solution once took decades can now be completed in 0.3 seconds.

Risks and challenges 

AI is a powerful tool, yet it still bears the imprint of the people who develop its algorithms. Accordingly, AI inherits from its ‘parents’ the nuances of their perception of the surrounding reality.
A few years ago, Amazon faced a scandal over an AI-powered recruitment tool built for the company. Designed to simplify hiring by sorting CVs and selecting the most qualified candidates, the AI turned out to be... sexist, teaching Amazon's system to discriminate against female applicants and favour male ones. No single person imposed this preference on the system: the bias crept in through the historical hiring data on which it was trained.
The US judicial system stumbled over the COMPAS algorithm, which predicts the statistical likelihood that a previously arrested person will be rehabilitated or will re-offend, and which is used in pretrial and sentencing decisions. Black defendants who did not go on to re-offend were almost twice as likely to be classified by the algorithm as higher risk (45 percent) than their white counterparts (23 percent), with the colour of the defendants' skin the only apparent basis for the disparity. Accordingly, sentences informed by the AI's findings were significantly harsher for black defendants than for white ones.
The explosive development of AI, coupled with inadequate control, can pose an increased threat to human security. Uncontrolled AI systems are already capable of making decisions that contradict human ethics and values. If such systems fall into the hands of criminals, or of unscrupulous creators or users operating without proper oversight, they can become extremely dangerous.
In addition, unemployment caused by the replacement of workers with AI may give rise to a modern Luddite movement unless mechanisms are created to protect people who are already being displaced from their usual niches in the labour market.
The biggest problem, however, is that people tend to place excessive trust in AI, its data and its estimates, believing that the computer is always right.

Control is needed

In fact, AI's capacity to devastate our lives is quite comparable to that of a nuclear bomb. A fierce arms race is now under way in AI development: whoever gets there first will rule the world. The only question is how long, and in what form, that world will exist.
In 1945, when the United States demonstrated the capabilities of the nuclear bomb, humanity set about building an international security system to minimise the risk of nuclear winter. The unbridled development of AI could likewise end in large-scale disaster, because in the race for the AI world championship there is no time left to check the safety of newly emerging systems thoroughly, and few people are seriously concerned about the consequences of their use at all.
Humanity should have curbed uncontrolled AI expansion long ago, agreed on new rules of the game, and made AI an assistant rather than a potential threat. Will we manage to do so in time?

By Alena Krasovskaya