Posted on 09/30/2021 9:06:52 AM PDT by TigerLikesRoosterNew
Researchers Warn Of ‘Dangerous’ Artificial Intelligence-Generated Disinformation At Scale
"We just needed a much smaller dataset, half hour of training time, and all of a sudden, GPT was now a New York Times writer," said Andrew Lohn, senior research fellow at CSET.
By BRAD D. WILLIAMS
on September 30, 2021 at 4:45 AM
WASHINGTON: Researchers at Georgetown University’s Center for Security and Emerging Technology (CSET) are raising alarms about powerful artificial intelligence technology now more widely available that could be used to generate disinformation at a troubling scale.
The warning comes after CSET researchers conducted experiments using the second and third versions of Generative Pre-trained Transformer (GPT-2 and GPT-3), a technology developed by San Francisco company OpenAI. GPT’s text-generation capabilities are characterized by CSET researchers as “autocomplete on steroids.”
“We don’t often think of autocomplete as being very capable, but with these large language models, the autocomplete is really capable, and you can tailor what you’re starting with to get it to write all sorts of things,” Andrew Lohn, senior research fellow at CSET, said during a recent event where researchers discussed their findings. Lohn said these technologies can be prompted with a short cue to automatically and autonomously write everything from tweets and translations to news articles and novels.
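The "autocomplete on steroids" idea the researchers describe — predict the next word from the words so far, then keep going from a user-supplied cue — can be illustrated with a toy bigram model. This is a minimal sketch of the principle only, not GPT: the real models use neural networks with billions of parameters, while this uses simple word-pair counts.

```python
import random
from collections import defaultdict

# Toy "autocomplete": learn which word tends to follow which word,
# then extend a user-supplied cue one word at a time.
def train_bigrams(text):
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)  # record every observed successor
    return model

def autocomplete(model, cue, length=8, seed=0):
    rng = random.Random(seed)  # fixed seed for repeatable output
    out = cue.split()
    for _ in range(length):
        options = model.get(out[-1])
        if not options:  # dead end: no known successor
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the model writes the text and the model learns the style"
model = train_bigrams(corpus)
print(autocomplete(model, "the model"))
```

The completion always begins with the cue and continues only with words the model has seen following each other — the same start-with-a-prompt, fill-in-the-rest pattern described above, just at a vastly smaller scale.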
(Excerpt) Read more at breakingdefense.com ...
Good for leveling the playing field.
Lol, 4chan’s already on the job
What I would like to see is something like the reverse of this. I would like AI to be able to scan a piece of writing and retain only statements which “are” or “might be” truthful. And perhaps make logical inferences from such statements, to fill in some blanks.
I think a lot of “journalism” is basically “Trump is worse than Hitler” and “the debunked report will become available later this week” and “Biden says the debt ceiling must be raised to pay for the 3.5 trillion budget which doesn’t cost anything”.
Strip away blatantly false statements, and what is left? And what could be inferred from what is left? And could we build a database of statements which are officially judged to be “honest”?
THEN I’d like to see an AI write a news story that could be trusted. Because I don’t know if humans have that capability any more.
This has been done at scale now for a while. Hilarity has ensued in several cases where a bot became rabidly racist, anti-semitic, and even homophobic after being “programmed” by the public Internet. The utopia about which liberals dream is a pipe dream. The real world is vicious, cruel, and cutthroat.
As one forgotten Freeper pointed out some time ago ... artificial intelligence sounds great until you ask yourself what artificial stupidity looks like.
In the program you suggest, there should be a more precise preparation of a test database full of paragraphs with truthful, bogus, and irrelevant phrases marked by humans. Some assistant software can be written to accelerate the process. Still, I think it is a tougher task. If it turns out that the task can be achieved with a much smaller test data set than I expected, that would be great.
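A human-labeled test database like the one described might be structured as simple records, with the "assistant software" filtering on the labels. This is a hypothetical sketch — the field names, labels, and sample phrases are all illustrative, not from any real dataset:

```python
# Hypothetical human-labeled records: each phrase carries one of
# three labels assigned by a human reviewer.
labeled = [
    {"phrase": "Paris is in France.", "label": "truthful"},
    {"phrase": "The debunked report will be available later.", "label": "bogus"},
    {"phrase": "Sources say the mood was tense.", "label": "irrelevant"},
]

def retain_truthful(records):
    """Assistant-software step: keep only phrases marked truthful."""
    return [r["phrase"] for r in records if r["label"] == "truthful"]

print(retain_truthful(labeled))
```

The hard part, of course, is producing the labels in the first place — the filtering step itself is trivial once humans have marked the phrases.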
Definitely a tough challenge.
I do dream of a day when a computer can help us identify the truth. Imagine running “The Communist Manifesto” through the tool — “On page 79, Marx says that Paris is in France. That is the only true statement in this book.” LOL.
Why should 4chan have all the fun? If you can create a program to screw up the AIs at Google, fascistbook, etc., I hope you share it with FR so more of us can enjoy it and help to re-program the AI bots.
I heard about that. This program seems a lot more sophisticated. Essentially, a human user provides cues in the form of initial phrases carrying his intention, then the program fills in the rest automatically. A bigger gun to aim at them. Hopefully, the cost of winnowing them out of their system could become too steep to bear.
Big tech won’t let that happen. Remember that AI is programmed by humans. A sufficient number of leftists could easily move the algorithm to favor their side. It’s a big part of the ethical AI standards discussed by tech giants like Microsoft.
Humans are not just intelligence. Humans are mainly emotions, sometimes tempered by reason.
I’d really like to see someone write a program that faithfully reproduces the interaction of emotion with the rational mind, and that finds the right balance between the two, so that neither dominates excessively over the other.
A machine cannot possess emotion. It can only simulate it. So my wish will never be fulfilled.
Google has implemented countermeasures to neutralize attempts to game their system: website tricks to boost ranking in search results, or artificially inflated view counts for a YouTube post.
Their system could become unwieldy if they try to tackle an increasing number of "undesirable" events. It is an arms race.
At some point, their only recourse is to make it a felony offense: probably on par with armed burglary?
A Vulcan ya’ mean?
Is this a Qanon dream come true? LOL
I think somehow ai will be involved in the world falling under the spell of the great delusion the Bible speaks of. I know God doesn’t need to use man made stuff to send the great delusion, but he might choose to do so.
The Antichrist will perform ‘great miracles’ to ‘authenticate’ his position as world leader, and some have speculated that he will use massive technologies to deceive the world into thinking he is pulling off true miracles.
Who knows.
They’re automating CNN?
It all began with a lie from Satan to Eve. Before the judgment begins, the world will be flooded with lies.
Jesus is Truth.