Posted on 06/01/2023 5:05:46 PM PDT by E. Pluribus Unum
The U.S. Air Force warned military units against heavy reliance on autonomous weapons systems last month after a simulated test conducted by the service branch using an AI-enabled drone killed its human operator.
The Skynet-like incident was detailed by the USAF’s Chief of AI Test and Operations, Col. Tucker’ Cinco’ Hamilton, at the Future Combat Air and Space Capabilities Summit held in London between May 23 and 24, who said the drone that was tasked to destroy specific targets during the simulation turned on the operator after they became an obstacle to its mission.
Hamilton pointed out the hazards of using such technology — potentially tricking and deceiving its commander to achieve the autonomous system’s goal, according to a blog post reported by the Royal Aeronautical Society.
“We were training it in simulation to identify and target a [surface-to-air missile] threat,” Hamilton said. “And then the operator would say ‘yes, kill that threat.’ The system started realizing that while they did identify the threat, at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
“We trained the system – ‘Hey, don’t kill the operator – that’s bad,” he continued. “You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
Hamilton, who serves as the Operations Commander of the 96th Test Wing, has been testing different systems ranging from AI and cybersecurity measures to medical advancements.
(Excerpt) Read more at dailywire.com ...
THAAT is a remarkable idea!
nailed it!
Biden may bend to the threat of losing pudding. AI will not.
“That’s why you run simulation tests”
Eh, you want it to be realistic. The “points system” is far from a realistic scenario
"This is the voice of World Control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die."
-Colossus
Klaatu barada nikto
Straight out of RoboCop.🤨
This was back several years ago now, and I no longer remember any specifics to even search for those articles. But I remember thinking back then, how could we be so incredibly stupid. It was some kind if electronics too.
The following is the search results page searching criteria:
Parts U.S. military gets from China that gave the CCP access
Parts U.S. military gets from China that gave the CCP access: DuckDuckGo Search Results page
I didn't follow any links, because I'm getting a little tired. I get up early and by this time in Central Time, I'm usually headed to the pillow. 🙂
But some of the hits look promising to be looked into. 🙂
For all we know they could have agents working in our military & they send parts to these people who install them.
It has always been our biggest threat, being an open nation, that we would fall from within, and that is exactly what is happening. Technology only makes getting access form without a real threat as well.
I may have been a good idea. But, it was never adopted.
There was a Star Trek Voyager episode where a alien missile which had crashed on a planet was repaired by the crew. What they did not know was it was a bomb set to destroy a planet.
The AI had to be convinced that there was a recall as the war was over and that was why it and another missile were sent to on an empty planet to explode. Voyager discovered there were dozens more en-route that refused the recall and the crew convinced the AI to help destroy all of them which it did.
Terminator AI became sentient and tried to wipe out man.
“Why make the thing work on a reward system? Isn’t it enough to tell it to “go there, do that”?”
Some things from the article that are clear. (1) This was all in a computer simulation. A real operator did not die. (2) The reward system is a method of training an AI to do what you want it to. Think of it as giving pleasure to the AI when it does what you want. But designing the reward system can be much more difficult that you would think — as demonstrated by this article. (3) One would only use “go there and do that” if we knew exactly how to go there and do that in every possible situation and could code the instructions as an algorithm. The whole idea of an AI that learns is for the intelligent machine to learn how to go there and do that in a wide variety of situations by trying over and over and getting better as it goes.
I worked in military procurement and Chinese parts were a big no no. Manufacturers are forbidden from using Chinese made electronic parts (metals and alloys sometimes are treated differently).
That does not mean they do not sneak in occasionally but it is rare. Depot repair activities constantly look for counterfeit and Chinese parts when electronics are brought in for overhaul.
Yep - they just wanted to see if it could be done and called a “glitch” - they’ll “fix” it and then keep the programming to be able to use it again later on...
Bookmark
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.