A team of researchers at Stanford University has shown that artificial intelligence can design functional viruses, raising both excitement and alarm across the scientific community. The experiment, which has yet to undergo peer review, demonstrated that AI-written DNA could be used to assemble viruses capable of killing specific bacteria.
While the study highlights the potential for AI-driven medicine, including the creation of new treatments for infections, experts warn it could also open the door to misuse. The same technology could allow bad actors to design bioweapons at unprecedented speed, leaving governments and healthcare systems struggling to keep pace.
The Stanford Experiment
The researchers employed an AI model called Evo, trained exclusively on millions of bacteriophage genomes—viruses that infect bacteria. Evo generated hundreds of potential DNA sequences for a well-studied bacteriophage, phiX174, which targets E. coli.
Out of 302 AI-generated genome candidates, 16 produced active viruses after being chemically synthesized. These artificial viruses successfully infected and killed E. coli strains, with some proving even more lethal than the naturally occurring version.
Promise and Peril
Experts say this breakthrough underscores a double-edged reality. On the one hand, AI could accelerate the development of antivirals, antibodies, and vaccines. On the other hand, the same tools could be weaponized if used irresponsibly.
Tal Feldman, a Yale Law School researcher with a background in AI, and Jonathan Feldman, a computer science and biology researcher at Georgia Tech, warned that the world is “now living in an era where AI can create working viruses.” They stressed that without safeguards, adversaries could exploit open data on human pathogens to create novel biological threats.
The Need for Rapid Response
If AI shortens the timeline for bioweapon development, governments will need to shorten the timeline for countermeasures. The Feldmans argue for:
- High-quality shared datasets to speed up AI-based medical research
- Public infrastructure to manufacture AI-designed medicines, since the private sector lacks incentive to build emergency-only facilities
- Regulatory reform, including fast-tracking authorities that allow provisional use of AI-generated treatments under close monitoring
Proceeding with Caution
Despite the breakthrough, experts urge caution. The study has not yet passed peer review, and it remains unclear how readily others could replicate the results. Still, with healthcare infrastructure under strain and AI adoption accelerating across government sectors, the development has prompted calls for urgent global safeguards.
The ability of AI to create viruses could be a turning point in both medicine and security. Whether this technology becomes a force for healing or harm depends on how quickly safeguards, infrastructure, and policies are put in place.