Will AI dumb us down? Kazakhstan’s plan to safeguard critical thinking

General News Correspondent
The country needs to revise educational standards so that children’s cognitive functions do not decline / Photo: Shutterstock, photo editor: Dastan Shanay

Studies suggest that frequent use of artificial intelligence (AI) is contributing to a decline in cognitive functions. In response, Kazakhstan plans to regulate AI to ensure that children do not lose the ability to think, analyze and engage in critical thinking, according to a recent presentation of the draft AI law in the Mazhilis, the lower house of Kazakhstan’s parliament.

MP Ekaterina Smyshlyayeva, the author of the draft law, stated that research shows frequent AI use can diminish critical thinking and analytical skills, posing a potential threat to children’s cognitive development.

“Just because you’re using AI assistants doesn’t mean you can turn your brain off,” she said. “They assist at the basic text stage, but then a person is supposed to engage in critical thinking: that is, analyzing truth and falsehood, refining content, seeking additional sources and asking follow-up questions to ensure quality. If we incorporate this second step of critical thinking, we can mitigate or even prevent the risk of cognitive decline.”

To address this issue, educational standards need to be revised. While current curricula focus on gathering and analyzing information, future education should prioritize teaching students how to critically evaluate it.

The proposed regulation specifies that if any restrictions on AI use are introduced, compliance will be the responsibility of parents, not children.

“The proposed legislation will address this issue through a comprehensive approach, implementing effective and enforceable bans,” Smyshlyayeva added.

Another AI-related concern is the spread of deepfakes, which can mislead the public. Smyshlyayeva noted that existing articles in the Administrative and Criminal Codes can already be applied to deepfakes. However, penalties are not imposed for the creation or distribution of a deepfake itself, but rather for its content.

“If it offends a person’s honor and dignity, manipulates public opinion or is used for fraudulent activities that result in material damage, then legal responsibility applies,” she said.

A special task force will review the issue of deepfakes and establish its position within about a month.

Since the draft law is still in its early stages, specific measures were not announced at the presentation. However, lawmakers plan to classify AI-related risks as minimal, medium or high and will regulate only medium- and high-risk systems, as these have the most significant implications for public and state security. Additionally, they intend to create a national platform for AI development, which will receive state investment. MPs also aim to strengthen personal data protection measures.

“The new regulation establishes requirements for developers, mandating that they inform users when they are interacting with AI technology,” Smyshlyayeva said. “However, this issue is nuanced. The task force must foster a constructive discussion because not all AI-powered products can or should be labeled, given their rapid growth.”

The task force developing the draft legislation comprises approximately 150 members, including MPs, representatives from state agencies and subject matter experts.
