
Microsoft and Westinghouse Nuclear are exploring ways to use artificial intelligence to accelerate the approval and construction of nuclear power plants in the U.S. But a new analysis from the AI Now Institute, cited by 404 Media, warns that this approach could introduce serious safety risks.
Generative AI as a tool for faster approvals
Building a nuclear facility requires navigating a lengthy and highly regulated licensing process designed to protect the public from radiation hazards. The process is complex and costly, but, as 404 Media notes, it has historically kept nuclear accidents in the U.S. rare.
Growing energy demand driven by AI, along with increased interest from major technology companies, has prompted firms like Microsoft to look for ways to streamline that process. Microsoft reportedly plans to use AI to speed up licensing by training a large language model (LLM) on existing licensing documents and site data, then using it to automatically generate the required paperwork.
The Idaho National Laboratory, a Department of Energy facility, has already begun using Microsoft's AI systems to "streamline" regulatory submissions. In a press release, the lab said the technology would assist in producing the engineering and safety analyses needed for construction permits and operating licenses.
Experts warn of major safety risks
Researchers at AI Now argue that delegating key elements of the licensing process to AI could erode safety standards. They stressed to 404 Media that nuclear licensing is not merely a document-generation exercise but an iterative and rigorous assessment designed to identify risks before reactors are built.
Heidy Khlaaf, the institute's chief AI scientist, warned that relying on an LLM could dangerously oversimplify a process that exists to prevent catastrophic failures. Co-author Maya Guerra added that nuclear power has remained safe in large part because regulators and engineers spend extensive time reviewing, revising, and learning from past problems; removing or rushing these steps, she suggested, would undermine that safety record.
Parallel issues in the legal field highlight risks
Attempts to use AI for other technical, high-stakes documentation — such as legal briefs — have produced troubling results. Courts have repeatedly encountered filings generated with AI tools that reference nonexistent precedents, fabricate case law, or include factual errors. Researchers warn that a similar pattern of hallucinations or inaccuracies could occur if generative models were used to produce safety or engineering analyses for nuclear facilities.
Concerns about data security and proliferation
In addition to safety issues, the report raises security concerns. Khlaaf and Guerra told 404 Media that training AI systems on detailed nuclear information could heighten proliferation risks. They noted that Microsoft has sought access to historical datasets as well as real-time, project-specific materials — requests the researchers interpreted as an effort by AI vendors to obtain sensitive nuclear information.
Because nuclear technology is inherently dual-use, equally applicable to power generation and weapons development, providing large volumes of technical data to commercial AI models could inadvertently aid malicious actors.