
Artificial intelligence (AI) could significantly change cybercrime in 2026, experts warn. It attracts intruders because it lets them carry out a wide range of attacks faster and also lowers the barrier to entry into illegal activity, which is becoming a serious problem for cybersecurity specialists. For details on how hackers will use AI over the next 12 months, how this will change the cyber threat landscape, and what will help counter it, read this Izvestia article.

How artificial intelligence attracts cybercriminals

Attackers often use artificial intelligence to generate phishing emails, write malicious code, and automate certain stages of cyberattacks, Oleg Skulkin, head of BI.ZONE Threat Intelligence, says in an interview with Izvestia. In other words, AI attracts hackers because it allows them to carry out attacks faster and lowers the barrier to entry into cybercrime.

Hacker
Photo: IZVESTIA/Sergey Konkov

— Attackers use AI to create malicious scripts, generate phishing pages, and, more broadly, automate cyberattacks. This is confirmed by reports from AI vendors and information security companies, says Vladislav Tushkanov, head of the Machine Learning Technology Research group at Kaspersky Lab.

Neural networks can analyze enormous amounts of data in a very short time, adds Alexandra Shmigirilova, GR director at the information security company Security Code. If an AI is tasked with building a profile of a person from their social media accounts and other open information on the internet, then, given a well-formed prompt, it will not only produce a general profile but also separately identify the person's interests, weaknesses, and strengths.

Keyboard
Photo: IZVESTIA/Sergey Konkov

Thus, criminals no longer have to spend much time studying a potential victim. They simply write a specific prompt and receive a full profile, along with tips on how to approach the person and what to say to be sure of capturing their attention.

How neural networks will change cybercrime in 2026

By 2026, cybercriminals are likely to bring their use of AI to an industrial scale, says Ekaterina Edemskaya, an engineer-analyst at Gazinformservice. Deepfakes will become even more dangerous: realistic real-time voice and video fakes will make it possible to deceive even biometric verification systems, for example to gain access to bank accounts or corporate networks.

Camera
Photo: Global Look Press/Shireen Broszies

— Neural networks will automatically scan for vulnerabilities not only in software but also in smart devices, from surveillance cameras to medical equipment, creating chained attacks on critical infrastructure, predicts Izvestia's interlocutor. — A particular threat will come from neural networks that design malware on their own, adapting to antivirus tools on the fly. Such attacks will be cheaper and more accessible, which will attract "cyber mercenaries" without a technical background.

In addition, according to Ekaterina Edemskaya, 2026 may bring massive attacks on AI assistants: hackers will embed Trojans in neural networks to intercept users' confidential queries. The growing number of IoT devices and 5G networks will open up new entry points, while AI-driven automation will make attacks almost hands-off: a single operator will be able to control thousands of infected devices simultaneously.

Oleg Skulkin agrees that cybercriminals will continue to use neural networks for their own purposes next year. However, he notes that none of the AI technologies currently known to cybersecurity experts suggests that a breakthrough in the use of neural networks in attacks will occur in the near future.

Cyberattack
Photo: IZVESTIA/Sergey Konkov

"The most likely direction for attackers to use AI will be to optimize and accelerate various cyberattack processes," the expert notes. — This may lead to an expansion of their scale.

How hackers use artificial intelligence today

Attackers already use AI for a variety of purposes: for example, the Rare Werewolf cyber group uses language models to create malware in PowerShell and C#, says Oleg Skulkin. To write simple malware, attackers can also turn to models such as GPT, DeepSeek, Gemini, and Claude.

GPT
Photo: IZVESTIA/Anna Selina

— There have been cases where attackers tried to speed up the attack process, says Izvestia's interlocutor. — A striking example is the recent case of the Claude neural network being used to attack many different organizations. The attack was documented by an American company engaged in AI research.

According to Oleg Skulkin, the attackers misled the model by giving it many small tasks, each of which looked legitimate at first glance. Step by step, the AI performed the required actions, which then allowed the cybercriminals to carry out the attack. Notably, the neural network made mistakes that the attackers had to correct manually. Even so, because most of the tasks were completed through AI, they managed to carry out the attack much faster.

Ilya Polyakov, head of the code analysis department at Angara Security, notes in turn that, in addition to phishing texts, cybercriminals today actively use deepfake technologies (video and voice) to deceive employees and biometric systems, as well as AI tools for developing polymorphic malicious code that does not trigger antivirus signature engines.

Deepfake
Photo: Getty Images/Tero Vesalainen

"A full—fledged ecosystem of malicious AI tools has been functioning on the black market for a long time," adds Alexandra Slinko, head of the Information Security group at the Digital Economy League. — For example, there is a criminal analogue of ChatGPT, a monthly subscription to which costs about 15 thousand rubles. The illegal program does not restrict the user in any way in requests and is ready to help both with the development of malicious applications and with writing fraudulent emails.

What will help combat AI threats from cybercriminals

Attackers may well continue to use AI to increase the effectiveness of their attacks in the future, Vladislav Tushkanov believes. However, it is important to keep in mind that this approach does not fundamentally change the existing cyber threat landscape.

"Hackers have used phishing, scams, malware, and fraudulent messages before,— explains Izvestia's source. — The use of AI by intruders does not allow creating fundamentally new cyber threats, and modern security solutions effectively cope with existing ones. In addition, AI is actively used in the field of information security.

Nevertheless, attackers' use of AI can somewhat increase the volume of cyberattacks and the speed of their execution, expand intruders' capabilities, and lower the barrier to entry into the criminal industry. Under these conditions, paradoxically, the active use of AI by cybersecurity specialists themselves is becoming one of the most promising ways to improve the effectiveness of cyber defense, notes Vladislav Tushkanov.

Cable
Photo: Global Look Press/Sergey Lantyukhov

In protecting against social engineering attacks that use AI, a large share of the responsibility lies with users, so the first step in countering such threats is raising digital literacy, adds Daria Lavrova, senior analyst in the Positive Technologies research group. Phishing emails and messages, as well as audio and video calls using deepfakes, have characteristic markers: attempts to play on the victim's emotions (frightening them with monetary losses or criminal prosecution, exciting them with the promise of a large win or a valuable gift, and so on) and to provoke quick, unthinking actions, usually accompanied by impersonal text, attachments, and QR codes.
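As a rough illustration of these markers, here is a minimal sketch of a rule-based checker in Python. The marker groups, regular expressions, and threshold are illustrative assumptions made for this article, not a vetted detection ruleset; production tools rely on far richer signals.

```python
# Minimal sketch of a rule-based checker for the phishing markers described
# above: emotional pressure, urgency, impersonal greetings, risky payloads.
# All patterns and the threshold are illustrative assumptions.
import re

MARKERS = {
    "emotional_pressure": [
        r"account.*(blocked|suspended)", r"criminal (case|prosecution)",
        r"you (have )?won", r"(free|valuable) (gift|prize)",
    ],
    "urgency": [
        r"within 24 hours", r"immediately", r"right now", r"last chance",
    ],
    "impersonal_greeting": [
        r"^dear (customer|user|client)",
    ],
    "risky_payload": [
        r"scan (the|this) qr", r"see (the )?attached", r"\.zip\b", r"\.exe\b",
    ],
}

def phishing_markers(text: str) -> dict:
    """Count how many patterns from each marker group match the message."""
    lowered = text.lower()
    return {
        group: sum(bool(re.search(p, lowered, re.MULTILINE)) for p in patterns)
        for group, patterns in MARKERS.items()
    }

def looks_suspicious(text: str, threshold: int = 2) -> bool:
    """Flag a message when markers from several distinct groups co-occur."""
    hits = phishing_markers(text)
    return sum(1 for count in hits.values() if count > 0) >= threshold

if __name__ == "__main__":
    sample = ("Dear customer, your account will be blocked within 24 hours. "
              "Scan this QR code immediately to avoid criminal prosecution.")
    print(phishing_markers(sample))
    print("suspicious:", looks_suspicious(sample))  # True: four groups fire
```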

— Companies need not only to conduct regular employee training but also to improve their security mechanisms, in particular by deploying more effective AI capable of detecting and preventing attacks in autopilot mode. The effectiveness of such AI should rest on verified hypotheses about future cyber threats, not on a belated reaction to AI techniques already well known and widely used by cybercriminals, the expert concludes.
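To make the idea of AI working "in autopilot mode" concrete, below is a hedged sketch of unsupervised anomaly detection over simple session features. The choice of IsolationForest, the synthetic features (request rate, outbound traffic, failed logins), and the contamination rate are assumptions made for illustration, not the method of any vendor quoted in this article.

```python
# A hedged sketch of AI-assisted defense: an unsupervised model is fitted on
# baseline traffic and then flags sessions that deviate sharply from it.
# Features, parameters, and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic baseline sessions: [requests/minute, outbound KB, failed logins]
normal = rng.normal(loc=[20, 150, 0.2], scale=[5, 40, 0.5], size=(500, 3))

# Sessions resembling automated, AI-driven activity: high request rates,
# large outbound volumes, bursts of failed logins.
suspicious = rng.normal(loc=[300, 4000, 15], scale=[50, 500, 3], size=(5, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies and 1 for normal points.
print(model.predict(suspicious))  # expected: mostly -1 (flagged)
print(model.predict(normal[:5]))  # expected: mostly 1 (passed)
```

In practice such a detector would be trained on real telemetry and combined with signature and behavioral rules; the sketch only shows how a model fitted on baseline activity can flag sessions that deviate sharply from it.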
