The dark side of AI: Assessing the top cyber threats to Kazakhstan

Chief Analyst at Kursiv Research
Image generated by a neural network, photo editor: Arthur Aleskerov

Hackers are often among the first to adopt artificial intelligence (AI), and they’re already putting it to work. The consensus in the industry is that generative AI acts as a force multiplier for cybercriminals, greatly expanding their capabilities.

To better understand how this technology is reshaping both cyberattacks and cyber defense, Kursiv Research examined insights from experts in the Russian and Kazakhstani markets. One of the most alarming events in Kazakhstan in recent months was the leak of personal data belonging to 16 million citizens.

Why AI is strengthening cybercriminals

Nearly every major 2024 cybersecurity report identifies AI use by attackers as a key trend shaping the industry’s future. The takeaway: AI significantly boosts the productivity of threat actors.

It can complete or fully generate malicious code, scan systems for vulnerabilities, and crack up to half of common passwords in under a minute. It can produce convincing deepfakes, craft more persuasive phishing emails, and perform open-source intelligence (OSINT) gathering at near-human accuracy in a fraction of the time.

The authors of the CrowdStrike 2025 Global Threat Report describe this as a "force multiplier" — a military term for factors that dramatically increase the effectiveness of a weapon or unit.

Still, experts warn against panic. Roman Reznikov, an analyst with Positive Technologies, said at the third PHDays international cyber festival in May 2025 that cybercriminals are currently using AI only in limited ways.

"They're experimenting with certain tasks, such as gathering information before an attack, generating pieces of malicious code, creating phishing texts, building fake websites, and producing deepfakes — the only 'new' form of attack so far," Reznikov said.

"But integrating AI into cyberattacks is a complex process that requires high skill and specialized knowledge," he added. "So far, criminals have seen the most real-world success in a relatively simple area: social engineering."

How AI is changing phishing attacks on companies

One high-profile case from last year has become a textbook example of AI-driven cyber fraud.

An employee in the Hong Kong office of international engineering firm Arup received a phishing email that appeared to be from the company's CFO in the U.K., requesting several large transactions. The employee was suspicious at first, but those doubts vanished after a video conference with "colleagues" from the London office, including the CFO.

Following their instructions, the employee transferred about $25 million to the attackers. Only later did investigators discover that every participant in the video conference, except for the victim, was an AI-generated deepfake created from real video and audio samples. The attack’s novelty lay in simulating a live business meeting in real time.

Consultants at KPMG have documented a surge in deepfake and "deepvoice" attacks. Between the first quarters of 2023 and 2024, they reported a 245% increase in the use of deepfakes in cyber incidents worldwide.

What cyber risks are critical for Kazakhstan?

The use of AI by cybercriminals is becoming an increasingly significant threat to Kazakhstan’s cybersecurity, and the data backs it up.

According to the National Computer Emergency Response Team (KZ-CERT), the number of recorded cybercrimes in the country has doubled for two years in a row. In 2023, KZ-CERT logged 34,500 incidents, up 107% year-on-year. In 2024, that figure jumped again to 68,100, an increase of 97%.

The most common threats are computer viruses, network worms and Trojans, which together accounted for slightly more than 45,000 incidents last year, or 66.1% of all cases. Botnet activity came in second with 10,800 incidents (15.9%), followed by phishing attacks with 3,900 incidents (5.8%).

KZ-CERT’s statistics include both successful and prevented attack attempts across Kazakhstan’s entire cyber perimeter, covering government agencies, critical infrastructure, the commercial sector, private companies and individuals.

The sharp rise in numbers is partly explained by changes in reporting.

"Starting in 2024, public statistics on incidents also include information security events that were previously recorded only in internal reports," the KZ-CERT press office said.

Another factor is the rapid digitalization of Kazakhstan’s economy, which is expanding the number of targets for cyberattacks. Lower barriers to entry have also played a role.

"Automated attack tools have made it easier to engage in cybercrime. Many tools once available only to professional hackers can now be found in open or underground channels and require little technical skill," KZ-CERT noted.

Local experts say AI is already being used in attacks.

"Hackers actively use GPT and other chatbots to automate their campaigns. For example, AI can generate phishing messages in both Russian and Kazakh, and it's also used to search for vulnerabilities in the Kazakhstani segment of the internet. That's why we're seeing such a surge," said Olzhas Satiev, president of the Center for Analysis and Investigation of Cyberattacks.

Evgeny Pitolin, co-chair of the information security committee at the Qaztech Alliance, believes the evidence is clear.

"For at least the past year, attackers have been actively using AI. They admit to writing new scripts and tools with AI, and statistics confirm it. We're also seeing high-quality audio and video deepfakes created with AI, which is clear proof of the trend."

The most alarming incident to date was the recent leak of personal data belonging to 16 million Kazakhstani citizens — including full names, birthdates, ID numbers, phone numbers and home addresses. While large-scale data leaks are not new, AI-driven OSINT tools make the threat more dangerous. Attackers can now target specific groups of people or entire organizations with greater precision.

AI, however, is not only a tool for criminals. It can also strengthen cybersecurity. Reznikov of Positive Technologies points to three key benefits:

  • Reducing workloads. AI can handle routine tasks such as the initial analysis of security events, freeing specialists to focus on more complex issues. Chatbots built on large language models can also provide quick decision-making support during incident response.
  • Anomaly and threat detection. AI is highly effective at processing massive amounts of data to detect patterns and flag unusual behavior.
  • Automating defenses. AI systems can make real-time decisions, respond to attacks, and even prevent incidents before they occur.
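The anomaly-detection capability described above can be illustrated with a minimal sketch. This example uses scikit-learn's IsolationForest on synthetic login-event data; the features (hour of day, bytes transferred, failed login attempts) and their distributions are illustrative assumptions, not a description of any real security product.

```python
# Minimal sketch: flagging unusual security events with an isolation forest.
# All feature names and values below are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" login events: [hour of day, KB transferred, failed attempts]
normal = np.column_stack([
    rng.normal(13, 3, 500),    # activity clustered around business hours
    rng.normal(200, 50, 500),  # typical transfer sizes
    rng.poisson(0.2, 500),     # failed logins are rare
])

# A few suspicious events: 3 a.m. activity, huge transfers, many failed logins
suspicious = np.array([
    [3.0, 5000.0, 12.0],
    [2.5, 8000.0, 20.0],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns 1 for inliers and -1 for anomalies
print(model.predict(suspicious))  # the outliers are flagged as -1
```

In practice such a model would run over far larger event streams; the point is only that a statistical model trained on routine behavior can surface outliers for a human analyst to triage.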

Still, Reznikov stresses that AI doesn’t eliminate the need for basic cyber hygiene. Strong password policies, vigilance and resistance to social engineering — even when it involves convincing deepfakes — remain essential.
