My bingo card for this month did not include OpenAI telling the world that future frontier AI models coming to ChatGPT will know how to make bioweapons or novel biothreats, but here we are. We can add this capability to the growing list of issues that give us reason to worry about a future where AI reaches superintelligence.
However, it’s not as bad as it sounds. OpenAI is giving us this warning now to explain what it’s doing to prevent future versions of ChatGPT from helping bad actors devise bioweapons.
OpenAI wants to control how its AI models learn advanced biology and chemistry rather than simply ensuring ChatGPT is never trained on such data. The better ChatGPT understands biology and chemistry, the better it can assist humans in devising new medications and treatment plans. More advanced versions of ChatGPT might even produce such innovations on their own once superintelligence is reached.
The ability to help create bioweapons is an unwanted side effect of that same knowledge. That's why OpenAI's work on ensuring ChatGPT can't assist anyone looking to make improvised biothreats has to start now.
Here’s why ChatGPT needs to know how to make bioweapons originally appeared on BGR.com on Fri, 20 Jun 2025 at 18:30:00 EDT.