Free Republic
Browse · Search
News/Activism
Topics · Post Article

To: MarchonDC09122009

Carnegie Mellon / MIT paper link:

https://arxiv.org/pdf/1709.06692.pdf

Excerpt -

“A Voting-Based System for Ethical Decision Making

Ritesh Noothigattu (Machine Learning Dept., CMU), Snehalkumar ‘Neil’ S. Gaikwad (The Media Lab, MIT), Edmond Awad (The Media Lab, MIT), Sohan Dsouza (The Media Lab, MIT), Iyad Rahwan (The Media Lab, MIT), Pradeep Ravikumar (Machine Learning Dept., CMU), Ariel D. Procaccia (Computer Science Dept., CMU)
Abstract

We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.
1 Introduction

The problem of ethical decision making, which has long been a grand challenge for AI (Wallach and Allen 2008), has recently caught the public imagination. Perhaps its best-known manifestation is a modern variant of the classic trolley problem (Jarvis Thomson 1985): An autonomous vehicle has a brake failure, leading to an accident with inevitably tragic consequences; due to the vehicle’s superior perception and computation capabilities, it can make an informed decision. Should it stay its course and hit a wall, killing its three passengers, one of whom is a young girl? Or swerve and kill a male athlete and his dog, who are crossing the street on a red light? A notable paper by Bonnefon, Shariff, and Rahwan (2016) has shed some light on how people address such questions, and even former US President Barack Obama has weighed in.[1]
Arguably the main obstacle to automating ethical decisions is the lack of a formal specification of ground-truth ethical principles, which have been the subject of debate for centuries among ethicists and moral philosophers (Rawls 1971; Williams 1986). In their work on fairness in machine learning, Dwork et al. (2012) concede that, when ground-truth ethical principles are not available, we must use an “approximation as agreed upon by society.” But how can society agree on the ground truth — or an approximation thereof — when even ethicists cannot?

We submit that decision making can, in fact, be automated, even in the absence of such ground-truth principles, by aggregating people’s opinions on ethical dilemmas. This view is foreshadowed by recent position papers by Greene et al. (2016) and Conitzer et al. (2017), who suggest that the field of computational social choice (Brandt et al. 2016), which deals with algorithms for aggregating individual preferences towards collective decisions, may provide tools for ethical decision making. In particular, Conitzer et al. raise the possibility of “letting our models of multiple people’s moral values vote over the relevant alternatives.”

[1] https://www.wired.com/2016/10/president-obama-mit-joi-ito-interview/
We take these ideas a step further by proposing and implementing a concrete approach for ethical decision making based on computational social choice, which, we believe, is quite practical. In addition to serving as a foundation for incorporating future ground-truth ethical and legal principles, it could even provide crucial preliminary guidance on some of the questions faced by ethicists. Our approach consists of four steps:
I. Data collection: Ask human voters to compare pairs of alternatives (say a few dozen per voter). In the autonomous vehicle domain, an alternative is determined by a vector of features such as the number of victims and their gender, age, health — even species!

II. Learning: Use the pairwise comparisons to learn a model of the preferences of each voter over all possible alternatives.

III. Summarization: Combine the individual models into a single model, which approximately captures the collective preferences of all voters over all possible alternatives.

IV. Aggregation: At runtime, when encountering an ethical dilemma involving a specific subset of alternatives, use the summary model to deduce the preferences of all voters over this particular subset, and apply a voting rule to aggregate these preferences into a collective decision. In the autonomous vehicle domain, the selected alternative is the outcome that society (as represented by the voters whose preferences were elicited in Step I) views as the least catastrophic among the grim options the vehicle currently faces. Note that this step is only applied when all other options have been exhausted, i.e., all technical ways of avoiding the dilemma in the first place have failed, and all legal constraints that may dictate what to do have also failed.”
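For the curious, the four-step pipeline the paper describes can be sketched in a few lines of code. This is a hypothetical illustration, not the authors' actual implementation: the linear per-voter utility model, the logistic (Bradley-Terry style) update, and the Borda voting rule are stand-in assumptions, and the feature vectors and voter data are toy examples.

```python
import numpy as np

def learn_voter_model(comparisons, n_features, lr=0.1, epochs=200):
    """Step II: fit a weight vector w so that w.a > w.b for each
    comparison where the voter preferred alternative a over b."""
    w = np.zeros(n_features)
    for _ in range(epochs):
        for a, b in comparisons:
            diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
            # logistic (Bradley-Terry style) gradient step
            p = 1.0 / (1.0 + np.exp(-(w @ diff)))
            w += lr * (1.0 - p) * diff
    return w

def summarize(models):
    """Step III: collapse individual models into one societal model
    (here, naively, the mean of the voters' weight vectors)."""
    return np.mean(models, axis=0)

def aggregate(models, alternatives):
    """Step IV: rank the alternatives under each voter's model and
    combine the rankings with the Borda rule; return the winner's index."""
    scores = np.zeros(len(alternatives))
    for w in models:
        order = np.argsort([w @ np.asarray(alt, dtype=float)
                            for alt in alternatives])
        for points, idx in enumerate(order):  # worst alternative gets 0
            scores[idx] += points
    return int(np.argmax(scores))

# Step I (toy data): 3 voters, alternatives described by 2 features.
# Each comparison (a, b) means the voter preferred a over b.
voters = [
    [([1, 0], [0, 1])],   # voters 0 and 1 prefer feature-0 alternatives
    [([1, 0], [0, 1])],
    [([0, 1], [1, 0])],   # voter 2 prefers the opposite
]
models = [learn_voter_model(c, 2) for c in voters]
societal = summarize(models)  # Step III, shown for completeness

dilemma = [[1, 0], [0, 1]]    # the specific subset faced at runtime
print(aggregate(models, dilemma))  # prints 0: the majority's choice wins
```

Even this toy version shows where the hard questions live: everything hinges on which features describe an alternative and which voting rule does the aggregation, which is exactly what the paper's swap-dominance theory is about.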


2 posted on 11/16/2017 9:09:09 AM PST by MarchonDC09122009 (When is our next march on DC? When have we had enough?)
[ Post Reply | Private Reply | To 1 | View Replies ]


To: MarchonDC09122009

So when do we pick who dies based on their political orientation? Because any system based on any calculation other than maximizing lives (lifespan) saved will end up evaluating the relative worthiness of the individuals on a subjective basis.


13 posted on 11/16/2017 9:22:01 AM PST by calenel (The Democratic Party is a Criminal Enterprise. It is the Socialist Mafia.)
[ Post Reply | Private Reply | To 2 | View Replies ]



FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson