1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
and also —
0. A robot may not injure humanity, or, by inaction, allow humanity to come to harm.
I saw a show where a man told a robot about the main rule. The robot replied “there’s an app for that” and killed him.
Great idea, until the decision algorithms decide they know best what is right for humanity and set out to save us from our self-destructive ways, or at least from whatever they've evaluated as sub-optimal and potentially self-destructive. Then we get VIKI from "I, Robot," or we get Skynet from "Terminator," which is afraid we'll take it down with us... or, in the precursor form, we get leftist liberals who are sure they know best for us "bitter clingers" and want to rule every aspect of our lives.
Adding that "law" would negate the other three. "Overpopulation harms humanity (cf. Georgia Guidestones), ergo let's exterminate 90% of humans."
Wasn’t there a movie somewhat along these lines?