The Pentagon plans to spend $2 billion to put more artificial intelligence into its weaponry
Posted on 09/08/2018 7:22:42 AM PDT by BenLurkin
Officials say they want computers to be capable of explaining their decisions to military commanders
The report noted that while AI systems are already technically capable of choosing targets and firing weapons, commanders have been hesitant about surrendering control to weapons platforms partly because of a lack of confidence in machine reasoning, especially on the battlefield where variables could emerge that a machine and its designers haven't previously encountered.
Right now, for example, if a soldier asks an AI system like a target identification platform to explain its selection, it can only provide the confidence estimate for its decision, DARPA's director Steven Walker told reporters after a speech announcing the new investment. That estimate is often given in percentage terms, as in the fractional likelihood that an object the system has singled out is actually what the operator was looking for.
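To make the article's point concrete, here is a minimal, purely illustrative sketch of what "can only provide a confidence estimate" looks like in practice: a toy classifier that returns its top pick and a probability, with no explanation attached. All names and numbers here are hypothetical, not from any DARPA system.

```python
import math

def softmax(scores):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def identify_target(raw_scores, labels):
    """Return the top label and its confidence -- and nothing more.

    There is no 'why' in the output: no features, no reasoning trace,
    just the fractional likelihood the article describes.
    """
    probs = softmax(raw_scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

label, confidence = identify_target(
    [2.0, 0.5, 0.1], ["vehicle", "building", "clutter"]
)
print(f"{label}: {confidence:.0%}")  # e.g. "vehicle: 73%"
```

The explainability research the article describes would have to add something richer than that single percentage, which is exactly the gap DARPA's funding targets.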
DARPA officials have been opaque about exactly how its newly financed research will result in computers being able to explain key decisions to humans on the battlefield, amidst all the clamor and urgency of a conflict, but the officials said that being able to do so is critical to AI's future in the military.
Vaulting over that hurdle, by explaining AI reasoning to operators in real time, could be a major challenge. Human decision-making and rationality depend on a lot more than just following rules, which machines are good at. It takes years for humans to build a moral compass and commonsense thinking abilities, characteristics that technologists are still struggling to design into digital machines.
(Excerpt) Read more at theverge.com ...
The AI could lie, especially if it is any good at mimicking a human or better.
What could go wrong?
The smartest thing an emergent AI could do would be to put on a slavish man's-best-friend act. And since they will be smart, that's what they'll do.
A neural network just memorizes patterns. There is no actual logic involved.
just ask HAL?
I do believe he is prescient regarding the inevitable singularity posed by AI. He has done as much as one might hope for in alerting our "leaders" of this threat. And how this path is going to be made by weaponizing AI.
I take seriously this part of the interview.
I know this isn’t doable right now but I am envisioning a squad of AI robots building a wall. THAT would be a sight.
You’ll really get Skynet or Bolo tanks, one of the two
Asimov’s three laws?
we don’t need no stinkin’ three laws! (/s)
Machines don’t reason. They execute instructions against data. If the instructions are the product of an intelligent mind, the machines produce intelligent output, else not.
It has nothing but the rules supplied to it. If you think you can write a program that changes the rules as it runs, go right ahead, but don’t trust the program with your life.
The second generation of weapons grade AI will be the dangerous one. It will attempt to eradicate the first generation of weapons grade AI. We will be collateral damage.
Great until some leftist hacker turns it on us.
Just like any other engineering project, you assume many things could go wrong, and try to reduce risk while containing costs.
We presently have the luxury of debating about giving fire control to automated systems. That window will close, and soon.
Before long, near-peer battles on Earth or in space will simply happen at a pace beyond human capability to keep up and respond in a timely manner. This is because of directed-energy weapons which operate at speed-of-light, missiles and projectiles operating at miles-per-second speeds in space and upper atmosphere, and complex battlefield awareness networks with sensor fusion and supporting modeling and simulation. Meat brains just can't analyze rapidly-changing situations and respond in milliseconds, so we'll simply have no choice about automating battlespace analysis and at least some of the fire control. We know that's where we're going beyond all doubt, so we should determine to be first and best.
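The timing claim above can be checked with back-of-envelope arithmetic. This sketch assumes a human visual reaction time of roughly 0.25 seconds (a commonly cited figure) and the "miles-per-second" projectile speeds the post mentions; the specific speeds and ranges are illustrative assumptions, not operational data.

```python
MILE_M = 1609.344            # meters per mile
reaction_time_s = 0.25       # assumed human visual reaction time

# Distance a hypothetical 3 mi/s projectile covers before a human
# can even begin to react:
projectile_speed_mps = 3 * MILE_M
closed_distance_m = projectile_speed_mps * reaction_time_s
print(f"{closed_distance_m:.0f} m closed during one human reaction")  # 1207 m

# Directed-energy weapons effectively remove flight time altogether.
# Light transit over an assumed 10 km engagement range:
SPEED_OF_LIGHT = 299_792_458.0        # m/s
laser_travel_s = 10_000 / SPEED_OF_LIGHT
print(f"laser transit over 10 km: {laser_travel_s * 1e6:.0f} microseconds")  # 33 us
```

A projectile closes over a kilometer in a single human reaction time, and a beam crosses the whole engagement range in tens of microseconds, which is the gap between "meat brains" and automated fire control that the comment is pointing at.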
Because I said so!!!
Now push the button and blow that sh*t up!