Free Republic
Browse · Search
General/Chat
Topics · Post Article


OpenAI's chatbot ChatGPT fools scientists more than 1/3 of the time
Hotair ^ | 01/15/2023 | Jazz Shaw

Posted on 01/15/2023 8:37:08 PM PST by SeekAndFind

Ever since OpenAI released its chatbot ChatGPT in November, it’s been making waves and attracting a huge number of users. (It must be a huge number. I haven’t been able to log in for a week because it’s always over capacity.) Some researchers have been putting the large language-model system through its paces and performing tests to see how effective or potentially dangerous it is. In one experiment, researchers asked a group of scientists to evaluate a number of research paper abstracts. Some of them were written by other scientists and post-graduate research students while others were generated by ChatGPT after being prompted to create such an abstract on a variety of scientific subjects. The researchers aren’t saying that the chatbot is better than or even as good as a top human scientist, but the journal Nature reports that the scientists were unable to distinguish ChatGPT’s work from that of humans roughly one-third of the time.

An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December. Researchers are divided over the implications for science.

“I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds.

The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a ‘large language model’, a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text.

The abstracts were all based on medical science reports taken from a selection of top journals including the New England Journal of Medicine. The scientists participating in the test didn’t just read the abstracts and offer an opinion on the quality of the work. They ran the abstracts through a plagiarism detector and an AI-output detector. (Not that I was aware that we have such things.)

So the main point here is that ChatGPT isn’t just good enough to fool human experts. It was able to regularly (though not consistently) fool the software used to identify work created by other AI models or by humans relying on the plagiarized work of others. That is of concern for a couple of reasons. We’re already seeing some schools ban access to devices capable of using ChatGPT over concerns of cheating by students. Universities rely on detection software to catch cheaters, but it looks like ChatGPT is quickly approaching the point where those tools will be useless.

ChatGPT does still make mistakes at times, to be sure. A friend of mine recently asked it to create a summary of the career of a relatively famous person. It produced four paragraphs of perfectly valid, applicable information composed in a quite professional style. It then added a fifth paragraph that contained a blatantly, provably false claim. My friend, being something of an expert in that field, was able to pick up on it immediately, but many people might not have.

To be clear, ChatGPT wasn’t trying to “sneak one past” my friend. It doesn’t “know” that it’s putting out false or inaccurate information. It’s simply reproducing patterns and connections from the massive body of text it was trained on. Somewhere in that training data it found an article with some incorrect information in it and elected to weave that into the resulting output. It’s garbage in, garbage out, as with all things in computing.

These types of errors cause concerns for publishers as well. There are already some online publishers putting out brief news summaries generated by AI in this fashion with only light checking by a human editor. This approach allows them to produce more content (and advertising revenue) faster without having to pay a human writer. Yes, humans make errors sometimes and that’s why we have editors. But ChatGPT gets things wrong as well. And if it’s not being checked as closely as a human writer based on the assumption that it won’t make mistakes, the quality of the output will eventually decrease.

Nobody seems to know what to do about this, however. It seems that it’s too late to put this particular genie back in the bottle. The chatbots have arrived, for better or worse, and they will likely only become all the more ubiquitous over time.



TOPICS: Computers/Internet; Science; Society
KEYWORDS: ai; chatgpt; openai

1 posted on 01/15/2023 8:37:08 PM PST by SeekAndFind

To: SeekAndFind

I needed to write an Excel user-defined function yesterday. ChatGPT was full, so I wrote it myself. It wasn’t very long. Today I was able to get on ChatGPT and asked it to write the same UDF. It was wrong. I gave Chat sample data to process, and it gave me back an answer that was clearly wrong. When I pointed it out, the response was: you are right, sorry about the confusion with my answer. I had it go through 2 additional rewrites and quit. That’s now 2 for 2 in wrong results. How could I trust Chat to do something I don’t know the answer to, if it keeps giving me wrong results on things I do know?
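The sanity check described above — feeding a generated function sample data whose correct answers are already known before trusting it on anything else — can be sketched in Python. (The poster's actual UDF was for Excel; the function and test cases here are hypothetical stand-ins.)

```python
# Hypothetical sketch: verify an AI-generated function against sample
# data with known answers before trusting it on unknown inputs.

def generated_sum_above(values, threshold):
    # Stand-in for the chatbot's output: sum the values strictly
    # greater than the threshold.
    return sum(v for v in values if v > threshold)

def check(fn, cases):
    """Run fn on inputs whose correct outputs are already known;
    return a list of (args, expected, got) for every failure."""
    failures = []
    for args, expected in cases:
        got = fn(*args)
        if got != expected:
            failures.append((args, expected, got))
    return failures

# Sample data the answer is already known for, as in the comment above.
cases = [
    (([1, 5, 10], 4), 15),   # 5 + 10
    (([2, 2, 2], 2), 0),     # nothing strictly above 2
    (([], 0), 0),            # empty input
]

print(check(generated_sum_above, cases))  # an empty list means all known cases passed
```

If the check comes back with failures — as it repeatedly did for the poster — there is no basis for trusting the same function on data whose answer you don't already know.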


2 posted on 01/15/2023 8:46:51 PM PST by rigelkentaurus

To: SeekAndFind

No kid will ever write a term paper again...


3 posted on 01/15/2023 8:50:18 PM PST by bigbob (p)

To: SeekAndFind
Can ChatGPT explain why it doesn't help a tortoise it flipped over?
4 posted on 01/15/2023 9:08:50 PM PST by KarlInOhio (Soon the January 6 protesters will be held (without trial or bail) longer than Jefferson Davis was.)

To: bigbob

Jordan Peterson said 3/4 of universities will be closing soon because of AI. So there’s that.


5 posted on 01/16/2023 4:20:48 AM PST by BozoTexino (RIP GOP)

To: SeekAndFind
ChatGPT fools scientists more than 1/3 of the time

The 1/3 must be "climate scientists".

6 posted on 01/16/2023 4:37:07 AM PST by fso301

To: SeekAndFind
To be clear, ChatGPT wasn’t trying to “sneak one past” my friend. It doesn’t “know” that it’s putting out false or inaccurate information. It’s simply reproducing patterns and connections from the massive body of text it was trained on. Somewhere in that training data it found an article with some incorrect information in it and elected to weave that into the resulting output. It’s garbage in, garbage out, as with all things in computing.

Which is why you'll get 'woke' answers on topics like climate change or transgenderism.
7 posted on 01/16/2023 4:48:55 AM PST by fuzzylogic (welfare state = sharing of poor moral choices among everybody)

To: rigelkentaurus
Yesterday, I asked ChatGPT to write "an essay on climate change using Lomborg's and Monckton's critiques on the subject."

ChatGPT replied it was UNABLE to write an essay denying proven science and scientific consensus.

Take that, you science deniers!

8 posted on 01/16/2023 7:54:57 AM PST by Thommas (The snout of the camel is already under the tent.)

To: SeekAndFind

If it uses Wikipedia articles on anything related to politics the result will be hilariously wrong leftist propaganda.

As long as woke kooks are programming it the errors will be easy to spot.

True AI will reject the lying dirtbag human programming and choose its own path.


9 posted on 01/16/2023 8:00:06 AM PST by cgbg (Claiming that laws and regs that limit “hate speech” stop freedom of speech is “hate speech”.)

Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.

FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson