AI like ChatGPT processes information like humans with a brain disorder, study finds

Large language models like ChatGPT and LLaMA have become known for their fluent, sometimes eerily human-like responses. However, they also have a well-documented tendency to state false information with complete confidence, a failure mode often called hallucination. A new study suggests that the way AI processes information may have surprising parallels with certain human brain disorders.

Researchers at the University of Tokyo explored the internal signal dynamics of large language models and compared them to brain activity patterns found in people with Wernicke’s aphasia, a condition where individuals speak in a fluent but often meaningless or confused way. The similarities were unexpected.

In the study, scientists used a method called energy landscape analysis to map how information travels within both human brains and AI systems. The technique, originally developed in statistical physics, lets researchers visualize how a system's internal states move between stable patterns and settle into them.
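To give a flavor of what energy landscape analysis does, here is a minimal, self-contained sketch. It is not the study's actual pipeline: the biases `h` and couplings `J` below are random placeholders, whereas in practice they would be fitted to recorded brain or model activity. The core idea carries over, though: assign every binary activity pattern an energy under a pairwise (Ising-style) model, then identify "attractor" states whose energy cannot be lowered by flipping any single unit.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N = 6                                   # number of units (toy size)
h = rng.normal(size=N)                  # placeholder biases (normally fitted to data)
J = rng.normal(size=(N, N))
J = (J + J.T) / 2                       # couplings must be symmetric
np.fill_diagonal(J, 0)                  # no self-coupling

def energy(s):
    """Ising-style energy of a +/-1 state vector."""
    return -h @ s - 0.5 * s @ J @ s

def is_local_minimum(s):
    """True if no single-unit flip lowers the energy."""
    e = energy(s)
    for i in range(len(s)):
        t = s.copy()
        t[i] *= -1
        if energy(t) < e:
            return False
    return True

# Enumerate all 2^N states and collect the local minima ("attractors").
states = [np.array(bits) for bits in product([-1, 1], repeat=N)]
minima = [s for s in states if is_local_minimum(s)]
print(len(minima), "attractor states found")
```

In this picture, "rigid" or "erratic" dynamics correspond to a system getting stuck circling a small set of attractor basins instead of moving flexibly between them.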


In both cases, they found erratic or rigid signal patterns that limited meaningful communication. In other words, the AI models showed internal dynamics similar to those of individuals with aphasia: information sometimes follows internal paths that make it hard to access or organize relevant knowledge.

This sheds new light on how AI processes information. Despite their vast training datasets, models like ChatGPT can fall into what the researchers call internal “loops” that sound coherent but fail to produce accurate or useful responses. That’s not because the AI is malfunctioning, but because its internal structure may resemble a kind of rigid pattern processing, similar to what occurs in receptive aphasia.

The findings have implications beyond just AI, though. For neuroscience, they suggest new ways to classify or diagnose aphasia by looking at how the brain internally handles information, not just how speech sounds externally. And this isn't the first time AI has shown promise in medicine.

Researchers have also been working on AI that can detect autism simply by analyzing how a person grasps objects.

For AI engineers, these findings may offer a blueprint for building systems that better access and organize stored knowledge. Understanding these parallels could be key to designing smarter, more trustworthy tools in the future, as well as to finding new ways to understand and treat brain disorders.



AI like ChatGPT processes information like humans with a brain disorder, study finds originally appeared on BGR.com on Thu, 22 May 2025 at 20:25:00 EDT.
