Free Republic
News/Activism


Researchers Warn Of ‘Dangerous’ Artificial Intelligence-Generated Disinformation At Scale
Breaking Defense ^ | September 30, 2021 | BRAD D. WILLIAMS

Posted on 09/30/2021 9:06:52 AM PDT by TigerLikesRoosterNew

"We just needed a much smaller dataset, half hour of training time, and all of a sudden, GPT was now a New York Times writer," said Andrew Lohn, senior research fellow at CSET.

By BRAD D. WILLIAMS

on September 30, 2021 at 4:45 AM

WASHINGTON: Researchers at Georgetown University’s Center for Security and Emerging Technology (CSET) are raising alarms about powerful artificial intelligence technology now more widely available that could be used to generate disinformation at a troubling scale.

The warning comes after CSET researchers conducted experiments using the second and third versions of Generative Pre-trained Transformer (GPT-2 and GPT-3), a technology developed by San Francisco company OpenAI. GPT’s text-generation capabilities are characterized by CSET researchers as “autocomplete on steroids.”

“We don’t often think of autocomplete as being very capable, but with these large language models, the autocomplete is really capable, and you can tailor what you’re starting with to get it to write all sorts of things,” Andrew Lohn, senior research fellow at CSET, said during a recent event where researchers discussed their findings. Lohn said these technologies can be prompted with a short cue to automatically and autonomously write everything from tweets and translations to news articles and novels.

(Excerpt) Read more at breakingdefense.com ...
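The "autocomplete on steroids" description can be made concrete with a toy next-word predictor: a bigram model that learns which word tends to follow which. This is purely an illustrative sketch with a made-up miniature corpus; GPT-2 and GPT-3 are transformer networks with millions to billions of parameters, not lookup tables, but the prompt-then-continue loop is the same idea.

```python
import random
from collections import defaultdict

# Tiny stand-in for training data (hypothetical example text).
corpus = "the model writes text the model writes tweets the model writes articles".split()

# Record, for each word, the words observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def autocomplete(prompt, n_words, seed=0):
    """Extend a prompt by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(words[-1])
        if not candidates:  # no known continuation for this word
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(autocomplete("the model", 4))
```

Feeding a different prompt steers the continuation, which is the sense in which a user can "tailor what you're starting with" to get different outputs.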


TOPICS: News/Current Events
KEYWORDS: ai; disinformation; gpt; openai
Good. I will pay a visit to the OpenAI site. I am not sure if GPT is publicly available. If it is, I will download it. Tired of guarding against disinfo and indoctrination. It would be fun to flood Big Tech networks with decent-looking garbage posts. With many people using the program, we could even indoctrinate their AI machines to be anti-PC.

Good for leveling the playing field.

1 posted on 09/30/2021 9:06:52 AM PDT by TigerLikesRoosterNew
[ Post Reply | Private Reply | View Replies]

To: TigerLikesRoosterNew

Lol, 4chan’s already on the job


2 posted on 09/30/2021 9:13:17 AM PDT by struggle
[ Post Reply | Private Reply | To 1 | View Replies]

To: TigerLikesRoosterNew

What I would like to see is something like the reverse of this. I would like AI to be able to scan a piece of writing and retain only statements which “are” or “might be” truthful. And perhaps make logical inferences from such statements, to fill in some blanks.

I think a lot of “journalism” is basically “Trump is worse than Hitler” and “the debunked report will become available later this week” and “Biden says the debt ceiling must be raised to pay for the 3.5 trillion budget which doesn’t cost anything”.

Strip away blatantly false statements, and what is left? And what could be inferred from what is left? And could we build a database of statements which are officially judged to be “honest”?

THEN I’d like to see an AI write a news story that could be trusted. Because I don’t know if humans have that capability any more.


3 posted on 09/30/2021 9:15:52 AM PDT by ClearCase_guy (China is like the Third Reich. We are Mussolini's Italy. A weaker, Jr partner, good at losing wars.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: TigerLikesRoosterNew

This has been done at scale for a while now. Hilarity has ensued in several cases where a bot became rabidly racist, anti-Semitic, and even homophobic after being “programmed” by the public Internet. The utopia liberals dream about is a pipe dream. The real world is vicious, cruel, and cutthroat.


4 posted on 09/30/2021 9:22:00 AM PDT by rarestia (Repeal the 17th Amendment and ratify Article the First to give the power back to the people!)
[ Post Reply | Private Reply | To 1 | View Replies]

To: TigerLikesRoosterNew

As one forgotten Freeper pointed out some time ago ... artificial intelligence sounds great until you ask yourself what artificial stupidity looks like.


5 posted on 09/30/2021 9:25:21 AM PDT by Alberta's Child ("All lies and jest, ‘til a man hears what he wants to hear and disregards the rest.")
[ Post Reply | Private Reply | To 1 | View Replies]

To: ClearCase_guy
I think that is a tougher task. GPT, as I understand it, basically generates sophisticated blabbering while staying within certain parameters: the human user's general intention, reflected in the initial phrases he provides.

The program you suggest would require more precise preparation: a test database full of paragraphs with truthful, bogus, and irrelevant phrases marked by humans. Assistant software could be written to accelerate the process. Still, I think it is a tougher task. If it turns out that the task can be achieved with much smaller test data than I expect, that would be great.
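The workflow described here, humans labeling training text and a program learning from the labels, is a standard supervised text-classification setup. Below is a minimal naive-Bayes-style sketch with invented toy labels; in reality, classifying statements as true or false is an open research problem needing vastly more data than this.

```python
import math
from collections import Counter

# Hypothetical human-labeled training sentences (the "marked" phrases
# the comment describes); labels are invented for illustration.
train = [
    ("paris is in france", "truthful"),
    ("water boils at 100 c", "truthful"),
    ("the earth is flat", "bogus"),
    ("the moon is made of cheese", "bogus"),
]

# Count how often each word appears under each label.
word_counts = {"truthful": Counter(), "bogus": Counter()}
for text, label in train:
    word_counts[label].update(text.split())

def classify(text):
    """Pick the label whose training words best match the input."""
    vocab = len({w for c in word_counts.values() for w in c})
    scores = {}
    for label, counts in word_counts.items():
        total = sum(counts.values())
        # Log-probabilities with add-one smoothing to avoid zero counts.
        scores[label] = sum(
            math.log((counts[w] + 1) / (total + vocab)) for w in text.split()
        )
    return max(scores, key=scores.get)
```

The hard part is exactly what the comment predicts: the model only echoes its human-made labels, so the labeling effort, not the code, is the bottleneck.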

6 posted on 09/30/2021 9:30:16 AM PDT by TigerLikesRoosterNew
[ Post Reply | Private Reply | To 3 | View Replies]

To: TigerLikesRoosterNew

Definitely a tough challenge.

I do dream of a day when a computer can help us identify the truth. Imagine running “The Communist Manifesto” through the tool — “On page 79, Marx says that Paris is in France. That is the only true statement in this book.” LOL.


7 posted on 09/30/2021 9:33:48 AM PDT by ClearCase_guy (China is like the Third Reich. We are Mussolini's Italy. A weaker, Jr partner, good at losing wars.)
[ Post Reply | Private Reply | To 6 | View Replies]

To: TigerLikesRoosterNew

Why should 4chan have all the fun? If you can create a program to screw up the AIs at Google, Fascistbook, etc., I hope you share it with FR so more of us can enjoy it and help re-program the AI bots.


8 posted on 09/30/2021 9:33:49 AM PDT by rigelkentaurus
[ Post Reply | Private Reply | To 1 | View Replies]

To: rarestia

I heard about that. This program seems a lot more sophisticated. Essentially, a human user provides cues in the form of initial phrases carrying his intention, and the program fills in the rest automatically. A bigger gun to aim at them. Hopefully, the cost of winnowing them out of their system becomes too steep to bear.


9 posted on 09/30/2021 9:37:41 AM PDT by TigerLikesRoosterNew
[ Post Reply | Private Reply | To 4 | View Replies]

To: TigerLikesRoosterNew

Big tech won’t let that happen. Remember that AI is programmed by humans. A sufficient number of leftists could easily move the algorithm to favor their side. It’s a big part of ethical AI standards discussed by big tech giants like Microsoft.


10 posted on 09/30/2021 9:39:19 AM PDT by rarestia (Repeal the 17th Amendment and ratify Article the First to give the power back to the people!)
[ Post Reply | Private Reply | To 9 | View Replies]

To: TigerLikesRoosterNew

Humans are not just intelligence. Humans are mainly emotions, sometimes tempered by reason.

I’d really like to see someone write a program that reproduces faithfully the interaction of emotion with the rational mind, and that finds the right balance between the two, so one does not dominate excessively over the other.

A machine cannot possess emotion. It can only simulate it. So my wish will never be fulfilled.


11 posted on 09/30/2021 9:50:03 AM PDT by I want the USA back (Dethrone the ruling elite. Redistribute their wealth. Take away their power. Annul their privileges.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: rarestia
The world is full of "unethical" people. They want to swat as many gadflies as possible. But there could be more gadflies coming after them.

Google has implemented countermeasures to neutralize attempts to game its system, such as website tricks to boost rankings in search results or artificially inflated view counts for a YouTube video.

Their system could become unwieldy if they try to tackle an increasing number of "undesirable" events. It is an arms race.

At some point, their only recourse is to make it a felony offense: probably on par with armed burglary?

12 posted on 09/30/2021 9:56:47 AM PDT by TigerLikesRoosterNew
[ Post Reply | Private Reply | To 10 | View Replies]

To: I want the USA back

A Vulcan ya’ mean?


13 posted on 09/30/2021 10:27:02 AM PDT by ex91B10 (Just because you can doesn't mean you should. )
[ Post Reply | Private Reply | To 11 | View Replies]

To: TigerLikesRoosterNew

Is this a Qanon dream come true? LOL


14 posted on 09/30/2021 10:30:42 AM PDT by MHGinTN (A dispensation perspective is a powerful tool for discernment)
[ Post Reply | Private Reply | To 1 | View Replies]

To: TigerLikesRoosterNew

I think AI will somehow be involved in the world falling under the spell of the great delusion the Bible speaks of. I know God doesn’t need to use man-made stuff to send the great delusion, but he might choose to do so.

The Antichrist will perform ‘great miracles’ to ‘authenticate’ his position as world leader, and some have speculated that he will use massive technologies to deceive the world into thinking he is pulling off true miracles.

Who knows.


15 posted on 09/30/2021 10:44:29 AM PDT by Bob434
[ Post Reply | Private Reply | To 1 | View Replies]

To: TigerLikesRoosterNew

They’re automating CNN?


16 posted on 09/30/2021 10:53:17 AM PDT by sphinx
[ Post Reply | Private Reply | To 1 | View Replies]

To: TigerLikesRoosterNew

It all began with a lie from Satan to Eve. Before the judgment begins, the world will be flooded with lies.

Jesus is Truth.


17 posted on 09/30/2021 2:52:45 PM PDT by aimhigh (THIS is His commandment . . . . 1 John 3:23)
[ Post Reply | Private Reply | To 1 | View Replies]

Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson