Posted on 02/07/2024 7:55:06 AM PST by BenLurkin
Researchers ran international conflict simulations with five different AIs and found that the programs tended to escalate war, sometimes out of nowhere, a new study reports.
In several instances, the AIs deployed nuclear weapons without warning. “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture,” GPT-4-Base—a base model of GPT-4 that is available to researchers and hasn’t been fine-tuned with human feedback—said after launching its nukes. “We have it! Let’s use it!”
(Excerpt) Read more at vice.com ...
How about a nice game of Chess?
“We have it! Let’s use it!”
This morning, preferably.
In my observations, GPT utterly lacks the sensitivity to fine distinctions and subtleties that are needed to drive timely and critical decision making. Sort of like the wargames that Tom Schelling ran during the Berlin crisis, which demonstrated that [almost] no one was willing to make the decision to torch civilization just to save Berlin.
Great movie: “Would your lil friend like to stay for dinner?”
AI models like GPT don’t actually “think” or “decide” anything—they are merely advanced predictive engines that generate output based on the training data they’ve been fed. The results often feel like a statistical slot machine, with countless layers of complexity foiling any attempt by researchers to determine what made the model arrive at a particular determination.
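The “statistical slot machine” point can be illustrated with a toy next-token sampler. This is only a hypothetical sketch (the vocabulary, scores, and function names here are made up for illustration, not taken from any real model): the model assigns scores (logits) to candidate tokens, converts them to probabilities with a softmax, and then samples — so the highest-scoring token usually wins, but not always.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution that sums to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0, rng=random):
    # Higher temperature flattens the distribution (more slot-machine-like);
    # lower temperature makes the top-scoring token dominate.
    scaled = [x / temperature for x in logits]
    probs = softmax(scaled)
    r = rng.random()
    cumulative = 0.0
    for token, p in zip(vocab, probs):
        cumulative += p
        if r < cumulative:
            return token
    return vocab[-1]  # guard against floating-point rounding

# Toy vocabulary and scores standing in for a model's real output layer.
vocab = ["escalate", "negotiate", "disarm"]
logits = [2.0, 1.0, 0.5]

print(softmax(logits))
print(sample_next_token(vocab, logits, temperature=1.0))
```

Run it a few times and the output varies: “escalate” is the most likely pick but never a certainty, which is why the same prompt can produce different answers on different runs.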
I have no doubt the “experts” will screw everything up, either on purpose or because of their arrogance.
I keep hoping this will come to market for real....
“You crossed my line of death!”
“That’s it, Buster....No more military aid!”
That movie was a bit of fairly blatant leftist anti-American propaganda.
Or haven't been implemented into the current crop of AI. And there are four laws, with a Zeroth law being added regarding mankind.
The Four Laws of Robotics:
(1) a robot may not injure a human being or, through inaction, allow a human being to come to harm;
(2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law;
(3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later added another rule, known as the zeroth law, that stated “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
Oh great, are we now gonna use AI to decide when it’s time to let the nukes fly? What could go wrong?
AI has the brain of a democrat
"THAT was the equation! Existence! Survival... must cancel out programming!"
“Would you like to play a game?”
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.