ChatGPT is a bad knowledge base, confirms new study
Just in case you needed it spelling out, ChatGPT probably shouldn’t be used for software engineering
There’s been (probably a little too much) chatter on the internet about how OpenAI’s ChatGPT, and similar artificially intelligent (AI) chatbots, are going to change the way we approach work.
There’s also some doom associated with this: are AI chatbots going to make a mockery of academia? Do away with experts? Will they somehow foreshadow I, Robot or Skynet becoming real?
Now, experts at Purdue University, based in West Lafayette in the US, have finally, definitively answered this question in a thirteen-page paper (PDF), arriving at the hitherto unthought-of conclusion that, no, AI chatbots do not know everything.
AI chatbots and factual misinformation
The paper takes software engineering queries as the basis for its findings, comparing the veracity of ChatGPT’s answers with those from real users of Stack Overflow, the popular programming question-and-answer portal (essentially a dignified Yahoo! Answers).
The gratingly omnipresent chatbot was fed 517 software engineering questions taken from the site, and the results are incontrovertible.
52% of ChatGPT’s responses were incorrect, and, when we asked Stack Overflow to do the maths on this for us, they came back saying that 48% of the chatbot’s responses were correct.
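To make the method concrete, here’s a minimal, hypothetical sketch in Python of how a comparison like this could be automated, assuming the official openai client library. The model name, sample question, and correctness label below are placeholders, and in the actual study the grading was done by human reviewers, not code.

```python
# Hypothetical sketch of the pipeline described above: feed Stack Overflow
# questions to ChatGPT and tally correctness labels assigned by human
# reviewers. Illustrative only; not the Purdue team's actual harness.
from openai import OpenAI  # assumes the official openai>=1.0 Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_chatgpt(question: str) -> str:
    """Send one Stack Overflow question to the chat model, return its answer."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


# In the study, `questions` would hold all 517 Stack Overflow questions.
questions = ["How do I reverse a list in Python?"]  # placeholder sample
answers = [ask_chatgpt(q) for q in questions]

# Correctness is judged by humans reading each answer, not by code.
labels = ["correct"]  # placeholder human verdicts: "correct"/"incorrect"

incorrect = labels.count("incorrect")
print(f"{incorrect / len(labels):.0%} of responses judged incorrect")
```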
Analysis: certainly not infallible
On this basis, we have to commit ourselves to throwing AI in the Caspian. We must respect the result. It started with Stanley Kubrick over 40 years ago and it ends here. A fabulous campaign by all involved.
We can joke, but the results are clear: AI as a knowledge source doesn’t quite work, and the implications are obvious and dangerous.
Even as per this study, a bizarre number of people neither notice nor care about the potential for misinformation. In a sort of Pepsi/Coke blind taste test, 12 participants with different levels of programming knowledge failed to identify an AI-generated answer 39.34% of the time, preferring it over what turned out to be a Stack Overflow response.
ChatGPT is often treated as infallible, even though it absolutely isn’t, because of the way its answers are presented. The study found that even correct answers addressed all aspects of the question only 65% of the time, and users often accepted incorrect information as truth because the responses sounded “comprehensive, well-articulated, and humanoid”.
Via ZDNet
Luke Hughes holds the role of Staff Writer at TechRadar Pro, producing news, features and deals content across topics ranging from computing to cloud services, cybersecurity, data privacy and business software.