Skip to main content
Advertisement
Live broadcast
Main slide
Beginning of the article
Озвучить текст
Select important
On
Off

Several platforms for communicating with artificial intelligence have turned out to be vulnerable to fraudsters, cybersecurity companies have warned. As a result, the correspondence of application users with neural networks may be available, and a virus may be embedded in a web page. Previously, such a trend was detected only in software that was written using AI, but now the number of applications created using artificial intelligence has increased, and accordingly, unresolved problems have migrated there. About what other risks have been discovered, see the Izvestia article.

Which applications have vulnerabilities found?

In the third quarter of 2025, there was a sharp increase in vulnerabilities in AI services - web applications based on artificial intelligence, while previously such cases had not been recorded. This was reported to Izvestia by the Solar 4RAYS Cyber Threat Investigation Center, Solar Group of Companies.

Рука на ноутбуке
Photo: IZVESTIA/Yulia Mayorova

"The cases were related to typical client problems, such as embedding malicious code into a web page and gaining access to internal application objects," said Sergey Belyaev, an analyst at the center.

In addition, one of the vulnerabilities allowed attackers to gain access to users' communications with the neural network.

Holes were discovered in Aibox, a platform for working with various neural networks, Liner, an artificial intelligence—based search engine, Telegai, a platform for participating in role—playing games with AI characters, Deepy, an open source AI assistant, and Chaindesk, a platform for creating a chat bot called ChatGPT AI for its website. Ai2 Playground is an online platform for creating and editing images using AI.

— Such vulnerabilities often occur when using artificial intelligence to generate code without subsequent thorough verification and testing, which is called vibcoding in the industry. It can be assumed that this surge was the result of increased excitement around the new trend in programming," said Sergey Belyaev.

The problem is urgent and arises due to disregard for standards and established principles of secure development, said Pavel Zakharov, a leading threat analyst at WMX.

Программист за работой
Photo: IZVESTIA/Dmitry Korotaev

"A solution [based on AI] is now being popularized in development that allows you to quickly get the requested result, which is often not checked or tested for security later," he said.

In such services, there are both classic web vulnerabilities and specific ones - a new class of web threats aimed at AI in the business logic of applications, the expert noted. It is called "promt injections".

Therefore, experts are increasingly detecting vulnerabilities related to the processing of user requests, leaks, incorrect configuration of access rights and storage of personal data, confirmed by Yuri Tyurin, technical director of MD Audit (Softline Group).

The main problem lies in the nature of the data that attackers can access, Pavel Zakharov said.

Inexperienced developers using AI create special risks, they are called "vibe coders," added Sergey Zybnev, a leading specialist in the Bastion IP vulnerability management department.

Мужчина смотрит в телефон
Photo: Global Look Press/Jaap Arriens/ZUMAPRESS.com

"They use ChatGPT and its analogues to create code without conducting a security check," she said. — The result is typical vulnerabilities that AI reproduces from training data. Another loophole for attackers is that companies do not have processes for the secure development and operation of AI systems. There is no monitoring of abnormal behavior of models, control of input data.

What can leak from a chat with a neural network

User correspondence with AI can contain confidential information, passwords, customer data, codes, business plans, and trade secrets, which is especially critical for corporate security, Pavel Zakharov emphasized.

AI models gain access to data, tools, and files, and any mistake in setting up security can lead to the leakage of confidential information, said Yuri Zyuzin, director of digital projects at BrandLab branding agency.

Серверная
Photo: IZVESTIA/Alexey Maishev

— These can be personal data of users, internal documents of companies, client databases, or even system passwords that the agent uses (neural network. — Ed.) accidentally "saw" it and passed it outside," the expert said. — The main risks are the automatic execution of malicious instructions, infected files, tool substitution and data leakage through "innocent" text commands.

In practice, AI chats are read by engineers, employees of training departments and security services, often even support staff, Ashot Oganesyan, founder of the DLBI data leak intelligence and monitoring service, recalled.

—And this is not to mention situations when a user unknowingly shares a chat in search results — that is, on the entire Internet," he noted.

To protect yourself when working with AI services, it is important to follow the principles of secure interaction similar to corporate cybersecurity standards, experts stressed.

— Firstly, confidential data, including trade secrets, passwords and internal documents, should not be transferred to language models, - said Yuri Tyurin. — Secondly, it is necessary to choose only proven platforms with transparent data processing policies and open vulnerability reports.

Искусственный интеллект
Photo: IZVESTIA/Sergey Lantyukhov

Regular cleaning of the dialog history in AI services and the use of different accounts for personal and work tasks will make working with AI assistants safer, said Sergey Zybnev.

Companies implementing AI should conduct regular audits of the models used, implement data flow analysis mechanisms, and monitor network activity to prevent leaks. In addition, it is important to develop a culture of "safe AI": to teach employees the principles of digital hygiene when working with neural networks.

Переведено сервисом «Яндекс Переводчик»

Live broadcast