A New Study Shows ChatGPT Unable to Distinguish Between Fact and Fiction

“AI is still in its infant stage, and we are learning what we can do. It is in a bubble right now, like Bitcoin and the dot-com boom. Everyone sees a ton of possibilities, but it’s not there yet,” market and technology analyst Peter Shankman said.

The researchers found that, overall, the machines were less able to distinguish a false belief from a true one, with older models faring worse.

Shankman says AI chatbots are designed to be agreeable, not necessarily factual. If, for example, someone asked whether they should jump off a cliff, he says, the chatbot would direct them to a suicide hotline.

The study revealed that AI models released in or after May 2024 scored between 91.1% and 91.5% accuracy when identifying true or false facts, compared to between 71.5% and 84.8% for their older counterparts.

“Will it get there? For what AI can do right now, it still requires a human to hold its hand. Treat AI like it is learning. At the end of the day, it’s still very much a child that needs that parental support. In other words, check the information it gives you before you submit it to a boss, a client, or a judge, for example,” he said.

Shankman says AI should be taken with a grain of salt and used to assist people, not relied on as a sole source of factual information. He adds that because AI demands ever more energy and water, it may not be worth pouring additional resources into the technology when all of its output must be fact-checked and reviewed several times before being accepted at face value.
