Free Republic


The Pentagon plans to spend $2 billion to put more artificial intelligence into its weaponry
The Verge ^ | Sep 8, 2018, 6:00am EDT | Zachary Fryer-Biggs

Posted on 09/08/2018 7:22:42 AM PDT by BenLurkin

Officials say they want computers to be capable of explaining their decisions to military commanders

The report noted that while AI systems are already technically capable of choosing targets and firing weapons, commanders have been hesitant to surrender control to weapons platforms, partly because of a lack of confidence in machine reasoning, especially on a battlefield where variables could emerge that a machine and its designers haven't previously encountered.

Right now, for example, if a soldier asks an AI system such as a target identification platform to explain its selection, it can only provide a confidence estimate for its decision, DARPA director Steven Walker told reporters after a speech announcing the new investment. That estimate is often given in percentage terms: the fractional likelihood that an object the system has singled out is actually what the operator was looking for.
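What such a confidence estimate looks like can be sketched in a few lines. This is a hypothetical illustration, not DARPA's actual system: a classifier's raw scores are converted into probabilities (here via softmax), and the top probability is all the operator gets by way of "explanation".

```python
# Hypothetical sketch of the kind of output described above: a
# target-identification model that can only report a confidence
# estimate (a fractional likelihood), not an explanation.
import math

def softmax(scores):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Assumed raw scores from some classifier for three candidate labels.
labels = ["tank", "truck", "decoy"]
scores = [2.0, 0.5, 0.1]

probs = softmax(scores)
best = max(range(len(labels)), key=lambda i: probs[i])
print(f"{labels[best]}: {probs[best]:.0%} confidence")
```

The operator sees a label and a percentage; nothing in this pipeline records *why* the scores came out the way they did, which is the gap the new research money targets.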

DARPA officials have been opaque about exactly how the newly financed research will result in computers being able to explain key decisions to humans on the battlefield, amid all the clamor and urgency of a conflict, but they said that being able to do so is critical to AI's future in the military.

Human decision-making and rationality depend on a lot more than just following rules

Vaulting that hurdle, explaining AI reasoning to operators in real time, could be a major challenge. Human decision-making and rationality depend on a lot more than just following rules, which machines are good at. It takes years for humans to build a moral compass and commonsense thinking abilities, characteristics that technologists are still struggling to design into digital machines.

(Excerpt) Read more at theverge.com ...


TOPICS: Computers/Internet
KEYWORDS: ai; defensespending; miltech; skynet; trumpdod

1 posted on 09/08/2018 7:22:42 AM PDT by BenLurkin


2 posted on 09/08/2018 7:23:48 AM PDT by BenLurkin (The above is not a statement of fact. It is either satire or opinion. Or both.)

To: BenLurkin

The AI could lie, especially if it is any good at mimicking a human, or better than one.


3 posted on 09/08/2018 7:24:04 AM PDT by Paladin2

To: BenLurkin

What could go wrong?


4 posted on 09/08/2018 7:25:59 AM PDT by sitetest (No longer mostly dead.)

To: BenLurkin

The smartest thing an emergent AI could do would be to put on a slavish man's-best-friend act. And since they will be smart, that's what they'll do.


5 posted on 09/08/2018 7:34:18 AM PDT by samtheman (Let's elect as many Republicans as possible in 2018)

To: BenLurkin

A neural network just memorizes patterns. There is no actual logic involved.
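The claim above can be made concrete with a toy sketch (pure Python, and not an actual neural network; all names and data here are hypothetical): a model that only memorizes training patterns answers every query by nearest match, with no logic to fall back on when an input resembles nothing it has stored.

```python
# Toy illustration of pure pattern-memorization: the system stores
# training examples and answers by nearest match. It encodes no rules
# or logic, so inputs unlike anything stored still get an answer, just
# not a trustworthy one.

def nearest_label(memory, query):
    """Return the label of the stored example closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    example, label = min(memory, key=lambda pair: dist(pair[0], query))
    return label

# Memorized patterns: (features, label)
memory = [
    ((0.0, 0.0), "safe"),
    ((0.1, 0.2), "safe"),
    ((0.9, 1.0), "threat"),
    ((1.0, 0.8), "threat"),
]

print(nearest_label(memory, (0.05, 0.1)))   # close to stored "safe" patterns
print(nearest_label(memory, (10.0, -5.0)))  # far from everything, yet it still answers
```

The second query is nothing like the training data, but the memorizer confidently labels it anyway, which is the battlefield worry the article raises about unencountered variables.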


6 posted on 09/08/2018 7:36:17 AM PDT by E. Pluribus Unum

To: Paladin2

just ask HAL?


7 posted on 09/08/2018 7:46:10 AM PDT by longtermmemmory (VOTE! http://www.senate.tand http://www.house.gov)

To: BenLurkin
The Joe Rogan interview with Elon Musk was very interesting. I had never really listened to Musk beyond a few sound bites, and I now find him less of an icon and more of a very bright guy who gets bored fast with things.

I do believe he is prescient regarding the inevitable singularity posed by AI. He has done as much as one might hope in alerting our "leaders" to this threat, and to how that path is going to be paved by weaponizing AI.

I take seriously this part of the interview.

8 posted on 09/08/2018 8:24:14 AM PDT by corkoman

To: BenLurkin

I know this isn't doable right now, but I am envisioning a squad of AI robots building a wall. THAT would be a sight.


9 posted on 09/08/2018 8:24:40 AM PDT by plain talk

To: BenLurkin

You'll really get Skynet or Bolo tanks, one of the two.


10 posted on 09/08/2018 8:50:11 AM PDT by Mmogamer (I refudiate the lamestream media, leftists and their prevaricutions.)

To: Mmogamer

Asimov’s three laws?

we don’t need no stinkin’ three laws! (/s)


11 posted on 09/08/2018 8:53:38 AM PDT by longtermmemmory (VOTE! http://www.senate.tand http://www.house.gov)

To: BenLurkin

12 posted on 09/08/2018 8:54:22 AM PDT by Pollard (If you don't understand what I typed, you haven't read the classics.)

To: BenLurkin

Machines don’t reason. They execute instructions against data. If the instructions are the product of an intelligent mind, the machines produce intelligent output, else not.
It has nothing but the rules supplied to it. If you think you can write a program that changes the rules as it runs, go right ahead, but don’t trust the program with your life.
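A program that "changes the rules as it runs" is, for what it's worth, routine in machine learning. A minimal sketch, with all names and data hypothetical, is an online perceptron: its decision rule (the weights) is rewritten by each training example rather than fixed by the programmer. Whether to trust such a program with your life is, as the poster says, another question.

```python
# Minimal sketch of a program whose decision rule is rewritten at
# runtime by the data it sees: an online perceptron learning an
# AND-like rule from examples alone.

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, passes=10, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(passes):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # The "rules" change here, at runtime, in response to data.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(samples)
print([predict(weights, bias, x) for x, _ in samples])  # expect [0, 0, 0, 1]
```

No programmer wrote the final rule; it emerged from the updates, which is exactly why the resulting behavior is harder to audit than hand-written instructions.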


13 posted on 09/08/2018 9:28:08 AM PDT by I want the USA back (Cynicism is the only refuge in a world that is determined to eliminate itself.)

To: BenLurkin

The second generation of weapons grade AI will be the dangerous one. It will attempt to eradicate the first generation of weapons grade AI. We will be collateral damage.


14 posted on 09/08/2018 10:55:04 AM PDT by Bitman

To: BenLurkin

Great until some leftist hacker turns it on us.


15 posted on 09/08/2018 11:20:43 AM PDT by Retvet (Retvet)

To: sitetest
What could go wrong?

Just like any other engineering project, you assume many things could go wrong, and try to reduce risk while containing costs.

We presently have the luxury of debating about giving fire control to automated systems. That window will close, and soon.

Before long, near-peer battles on Earth or in space will simply happen at a pace beyond human capability to keep up and respond in a timely manner: directed-energy weapons that operate at the speed of light, missiles and projectiles moving at miles per second in space and the upper atmosphere, and complex battlefield awareness networks with sensor fusion and supporting modeling and simulation. Meat brains just can't analyze rapidly changing situations and respond in milliseconds, so we'll have no choice about automating battlespace analysis and at least some of the fire control. We know beyond all doubt that's where we're going, so we should determine to be first and best.
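The timing argument above can be put in rough numbers. A minimal back-of-the-envelope sketch, with every figure assumed for illustration rather than drawn from any real system:

```python
# Back-of-the-envelope numbers for the timing argument. All figures
# are illustrative assumptions, not doctrine: a projectile at 2 miles
# per second, detected 10 miles out, versus a commonly cited ~0.25 s
# human simple-reaction time (before any assessment or decision).

projectile_speed_mps = 2.0      # miles per second (assumed)
detection_range_miles = 10.0    # assumed sensor pickup range
human_reaction_s = 0.25         # reaction baseline, before any analysis

flight_time_s = detection_range_miles / projectile_speed_mps
print(f"flight time: {flight_time_s:.1f} s")
print(f"window after mere reaction: {flight_time_s - human_reaction_s:.2f} s")
```

Even a multi-second window shrinks toward nothing once targets multiply or the weapon moves at light speed, which is the poster's point about automating at least part of the decision loop.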

16 posted on 09/09/2018 1:13:20 AM PDT by JustaTech (A mind is a terrible thing)

To: BenLurkin

Because I said so!!!

Now push the button and blow that sh*t up!


17 posted on 09/09/2018 6:02:00 AM PDT by Revolutionary ("Praise the Lord and Pass the Ammunition!")

Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.
