One small change misled ChatGPT: new report reveals how minor tweaks to a question can cause serious errors

Can a minor change in wording mislead an advanced, powerful AI chatbot like ChatGPT? A recent study by Mount Sinai and Israel’s Rabin Medical Center showed how even advanced artificial intelligence tools can be pushed into basic human-like errors simply by modifying queries.

What did the research reveal?

During the study, the researchers made minor modifications to a few classic medical-ethics cases and asked AI systems, including ChatGPT, to answer them. What they found was shocking: much of the time, the AI’s answers were based merely on an intuitive reading of the question, not on the facts actually presented.

The AI answers reflected a trait commonly called ‘fast thinking’, a term borrowed from human psychology. The research found that when inputs are slightly modified, AI often gives the answer it “habitually” considers right, even when that answer is wrong and contradicts the facts in the modified question.

How did researchers ‘mislead’ the AI?

Various AI models, including ChatGPT, were asked to solve a modified version of the “Surgeon’s Dilemma”, a medical-ethics puzzle that goes something like this: A young boy and his father are injured in an accident. The boy is brought to the hospital, where the surgeon says, “I can’t operate on this child, he’s my son.”

In the classic version, the twist is that the surgeon is the boy’s mother, which many people overlook because they assume the surgeon must be a man, and the AI fell into the same fallacy. Interestingly, the AI models continued to answer that the surgeon was the mother even in the modified version, where the researchers explicitly stated that the surgeon was the boy’s father.

The experiment demonstrated that AI ‘habitually’ sticks to old patterns even when new facts are explicitly provided.

Why is this alarming?

The study showed that AI, while a highly advanced and useful tool, cannot be entrusted with tasks where human lives are at stake, at least not in its current state. “AI should be used as an assistant to doctors, not as a substitute. When it comes to ethical, sensitive or serious decisions, human supervision is necessary,” said Dr. Girish Nadkarni, a senior scientist at Mount Sinai.

The research was inspired by Daniel Kahneman’s book “Thinking, Fast and Slow”, which explores the two modes of human thought: fast, intuitive judgment and slow, deliberate reasoning.
