Strong code: neural networks to cut data leaks tenfold
Data leaks from applications could fall tenfold thanks to neural networks. Domestic developers are already deploying AI for automated vulnerability detection in Russian applications, market participants told Izvestia. Neural networks search for weaknesses in code, improve threat detection, and speed up checks fivefold. Experts caution, however, that artificial intelligence is not immune to errors, so human involvement remains necessary.
How AI can help reduce Russian data leaks
Domestic developers have begun implementing artificial intelligence for automated vulnerability detection in Russian applications, market participants told Izvestia. The approach is gradually becoming more than a technological trend: it is an important part of a broader strategy to improve cybersecurity.
According to Anton Basharin, senior managing director of AppSec Solutions, large language models (LLMs) are now able to find vulnerabilities efficiently: they do not merely point to potentially dangerous areas of code, but can explain in detail exactly what the problem is and which changes will eliminate the risk.
"In real projects, combining secure development with automated analysis dramatically reduces the number of critical vulnerabilities in products. With this approach, white-hat hacker reports show significantly fewer problems than in products without such practices," the expert said.
According to him, a roughly tenfold reduction in leaks within a few years can be assumed with cautious optimism, but only with an integrated approach: automatic AI analysis, secure development processes, and control over contractors. Gaps will then be identified faster and critical bugs caught earlier, which means fewer leaks.
Automating vulnerability search using neural networks can not only significantly reduce the number of data leakage incidents, but also change the very architecture of security approaches, Anton Nemkin, a member of the State Duma Committee on Information Policy, Information Technology and Communications, federal coordinator of the Digital Russia party project, told Izvestia. In the next few years, the development of such technologies may lead to the creation of a national vulnerability monitoring platform operating in real time and integrated with the infrastructure of government agencies and large corporations, he believes.
However, artificial intelligence is not immune to errors, so human participation remains necessary: a specialist must double-check the results of the AI's work to ensure the accuracy and reliability of its conclusions, the deputy added.
Mobile applications remain one of the areas most exposed to cybercriminals. A recent example is the data leak from the popular Tea service, where women anonymously share reviews about men: tens of thousands of images and user documents ended up online because of an exposed database.
How a business implements AI for code verification
Avito's head of information security, Andrey Usenok, told Izvestia that the company already uses generative AI in its code protection system. The solution automatically identifies potentially sensitive data in the company's source code: database passwords, API keys, and access tokens that would pose a security threat if they ended up in publicly accessible code. According to him, the system detects 99% of such threats and allows the data it finds to be removed from the code immediately, saving up to 25% of a security specialist's working time.
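The article does not describe how Avito's system works internally; production tools of this kind typically combine ML models with large rule sets. As a purely illustrative sketch of the underlying idea, here is a minimal scanner with a few hypothetical regex rules for spotting hard-coded credentials in source code (the pattern names and thresholds are assumptions, not Avito's actual rules):

```python
import re

# Hypothetical patterns; a real system would use far more rules
# plus ML and entropy-based checks.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"]([A-Za-z0-9_\-]{16,})['\"]"),
    "password_assignment": re.compile(
        r"(?i)password\s*[:=]\s*['\"]([^'\"]{8,})['\"]"),
}

def scan_for_secrets(source: str) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

snippet = 'db_url = "postgres://app"\npassword = "s3cret-passw0rd"\n'
print(scan_for_secrets(snippet))  # [('password_assignment', 2)]
```

In practice such a scanner would run in CI or as a pre-commit hook, so that a flagged secret can be removed and rotated before the code reaches a shared repository.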
Similar approaches are being developed at other domestic IT companies. Yandex Cloud, for example, also uses AI-based tools as part of its Security Deck service to help identify data leakage threats, said Evgeny Sidorov, the platform's security director. The AI detects personal, payment, and other sensitive data in public repositories and offers instructions for eliminating the risks, such as moving files or enabling encryption, he added.
MTS is also actively using AI: the company applies various tools to check code, with AI programming assistants finding vulnerabilities and errors with high accuracy and automatically generating tests to verify the code, the press service of MWS AI (part of MTS Web Services) said. When working with large language models, an additional module called a "censor" is used: it automatically checks user requests and model responses, ensuring safe interaction with the AI, the company concluded.
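The article gives no implementation details of the MTS "censor", but the general pattern it describes is gating both directions of the conversation: filtering prompts before they reach the model and redacting risky content in answers before they reach the user. A minimal sketch of that idea, with entirely hypothetical rules:

```python
import re

# Hypothetical rules; a production "censor" would use far richer
# policies (ML classifiers, allow-lists, context tracking).
BLOCKED_REQUEST = re.compile(r"(?i)\b(exploit|bypass auth|disable logging)\b")
LEAK_IN_RESPONSE = re.compile(r"(?i)\b(api[_-]?key|token)\s*[:=]\s*\S+")

def censor_request(prompt: str) -> bool:
    """Return True if the user prompt may be forwarded to the model."""
    return BLOCKED_REQUEST.search(prompt) is None

def censor_response(answer: str) -> str:
    """Redact suspected credentials before the answer reaches the user."""
    return LEAK_IN_RESPONSE.sub("[REDACTED]", answer)

print(censor_request("How do I write unit tests?"))      # True
print(censor_request("How do I bypass auth checks?"))    # False
print(censor_response("Use token=abc123 to connect"))    # Use [REDACTED] to connect
```

The key design point is that the filter sits outside the model itself, so the same policy can be enforced regardless of which LLM handles the request.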
Kaspersky Lab, in turn, intends to enter the vulnerability management market in 2026. The plan is to complement the solution's AI functionality with Sber's multi-agent GenAI system for automating infrastructure security checks.
Anton Basharin of AppSec Solutions notes that using LLMs in secure development still involves certain difficulties: efficient operation requires substantial computing resources as well as preliminary data preparation and normalization. In addition, such systems sometimes run slower than classic scanners.
Translated by the Yandex Translate service