A study conducted by the Massachusetts Institute of Technology (MIT) reveals a major flaw in artificial intelligence (AI): its inability to correctly grasp negation. This gap could have serious consequences in critical sectors such as healthcare.


In Brief
- Modern AI consistently fails to understand the words “no” and “not.”
- This flaw poses significant risks in the medical and legal fields.
- The issue lies in training methods that favor association over logical reasoning.
AI Still Doesn’t Understand the Words “No” and “Not”
A research team led by Kumail Alhamoud, a PhD student at MIT, conducted this study in partnership with OpenAI and the University of Oxford.
Their work reveals a disturbing flaw: even the most advanced AI systems systematically fail when faced with negation. Well-known models such as ChatGPT, Gemini, and Llama consistently favor positive associations, ignoring explicit negation terms.
The medical sector illustrates the issue well. When a radiologist writes a report noting "no fracture" or "no enlargement", an AI system risks misreading this vital information.
This confusion could lead to diagnostic errors with potentially fatal consequences for patients.
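To see how such a bias can be probed, here is a minimal sketch, my illustration rather than the study's protocol, that compares a radiology finding with its negated form using an off-the-shelf sentence-embedding model. The model name and phrases are assumptions; a cosine similarity close to 1.0 means the encoder barely registers the negation.

```python
# Minimal probe of negation sensitivity in text embeddings.
# Assumption: the "all-MiniLM-L6-v2" checkpoint stands in for the
# models tested in the study, whose exact setup may differ.
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("all-MiniLM-L6-v2")

pairs = [
    ("fracture", "no fracture"),
    ("cardiac enlargement", "no cardiac enlargement"),
]

for positive, negated in pairs:
    embeddings = model.encode([positive, negated])
    score = cos_sim(embeddings[0], embeddings[1]).item()
    # A score near 1.0 means the negation barely moved the embedding.
    print(f"{positive!r} vs {negated!r}: cosine similarity = {score:.3f}")
```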
The situation worsens with vision-language models, hybrid systems that analyze images and text together. These technologies show an even stronger bias toward positive terms.
They often fail to differentiate between positive and negative descriptions, increasing the risk of errors in AI-assisted medical imaging.
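The same kind of probe can be run on a vision-language model. The sketch below, again an illustrative assumption rather than the researchers' setup, scores an image against a caption and its negated counterpart using the open CLIP checkpoint; if the negated caption scores nearly as high, the model is matching on the noun and discarding the "no".

```python
# Probe a vision-language model (CLIP) with a negated caption.
# Assumptions: transformers, Pillow and requests are installed;
# the image URL and captions are illustrative, not the study's data.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # two cats
image = Image.open(requests.get(url, stream=True).raw)

captions = ["a photo of cats", "a photo with no cats"]
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)

for caption, p in zip(captions, probs[0]):
    # If the negated caption still scores high, the model is
    # matching on "cats" and ignoring the negation.
    print(f"{caption!r}: {p.item():.3f}")
```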
A Training Problem, Not a Data Issue
Franklin Delehelle, a research engineer at Lagrange Labs, explains that the heart of the problem does not lie in a lack of data. Current models excel at reproducing responses similar to those in their training data, but struggle to generate genuinely novel ones.
Kian Katanforoosh, a professor at Stanford, explains that language models work by association, not by logical reasoning. When they encounter "not good", they automatically associate "good" with positive sentiment and ignore the negation.
This approach creates subtle but critical errors, particularly dangerous in legal, medical, or human resources applications. Unlike humans, AI fails to overcome these automatic associations.
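As a concrete illustration of this association effect (my example, not one from the study), a quick check with an off-the-shelf sentiment classifier shows how a phrase and its negation can be compared; whether a given model actually stumbles depends on the model.

```python
# Compare a sentiment model's verdict on a phrase and its negation.
# Assumption: the transformers library's default sentiment pipeline
# is used; the models discussed in the article may behave differently.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

for text in ["good", "not good", "The results are not good."]:
    result = classifier(text)[0]
    # A model that ignores "not" will label the negated phrases POSITIVE.
    print(f"{text!r}: {result['label']} ({result['score']:.3f})")
```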
Researchers are exploring promising avenues with synthetic negation data. However, Katanforoosh emphasizes that simply increasing training data is not sufficient.
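One way to picture what synthetic negation data might look like (a toy sketch, not the researchers' pipeline) is to rewrite existing positive captions into negated counterparts, giving a model explicit contrastive pairs to learn from:

```python
# Sketch of generating synthetic negation pairs from positive captions.
# Purely illustrative: real pipelines would use far richer templates
# and quality filtering than this toy rewrite.
def negate_caption(caption: str, subject: str) -> str:
    """Rewrite a caption so that it explicitly negates the subject."""
    return caption.replace(f"a {subject}", f"no {subject}")

captions = [
    ("a chest X-ray showing a fracture", "fracture"),
    ("a photo of a dog on a beach", "dog"),
]

for caption, subject in captions:
    # Each pair couples a positive caption with its negated counterpart,
    # so the model sees the same words with and without negation.
    print(f"POSITIVE: {caption}")
    print(f"NEGATED:  {negate_caption(caption, subject)}\n")
```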
The solution lies in developing models capable of logical reasoning, combining statistical learning with structured thinking. That shift is one of the central challenges of modern artificial intelligence.
Passionate about Bitcoin, I love exploring the intricacies of blockchain and crypto, and I share my discoveries with the community. My dream is to live in a world where privacy and financial freedom are guaranteed for everyone, and I firmly believe that Bitcoin is the tool that can make this possible.
DISCLAIMER
The views, thoughts, and opinions expressed in this article belong solely to the author, and should not be taken as investment advice. Do your own research before taking any investment decisions.