Free Republic
News/Activism

Algorithms' Dark Side: Embedding Bias into Code
All Analytics ^ | 2/22/2017 | Ariella Brown

Posted on 02/22/2017 4:38:20 PM PST by spintreebob

Does the shift toward more data and algorithmic direction for our business decisions assure us that organizations and businesses are operating to everyone's advantage? There are a number of issues involved that some people feel need to be addressed going forward.

Numbers don't lie, or do they? Perhaps the fact that they are perceived to be absolutely objective is what makes us accept the determinations of algorithms without questioning what factors could have shaped the outcome.

That's the argument Cathy O'Neil makes in Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. While we tend to think of big data as a counterforce to biased, unjust decisions, O'Neil finds that in practice, it can reinforce biases even while claiming unassailable objectivity.

"The models being used today are opaque, unregulated, and incontestable, even when they're wrong. The math destruction posed by algorithms is the result of models that reinforce barriers, keeping particular demographic populations disadvantaged by identifying them as less worthy of credit, education, job opportunities, parole, etc.

Now the organizations and businesses that make those decisions can point to the authority of the algorithm and so shut down any discussion that questions the decision. In that way, big data can be misused to increase inequality. Because algorithms are not created in a vacuum but are born of minds operating in a human context with its own set of assumptions, they can actually extend the reach of human biases rather than counteract them.

"Even algorithms have parents, and those parents are computer programmers, with their values and assumptions," Alberto Ibargüen, president and CEO and of the John S. and James L. Knight Foundation wrote in a blog post. ". . . As computers learn and adapt from new data, those initial algorithms can shape what information we see, how much money we can borrow, what health care we receive, and more."

The foundation's VP of Technology Innovation, John Bracken, told me about the foundation's partnership with the MIT Media Lab and the Berkman Klein Center for Internet & Society, as well as its work with other individuals and organizations, to create a $27 million fund for research in this area. The idea is to bridge people across fields and nations, pulling together a range of experiences and perspectives on the social impact of the development of artificial intelligence. AI will affect every aspect of human life, so it is important to think through the policies that will shape how these tools are built and implemented.

The fund, which is to be open for applicants even outside the founding university partners, may be used for exploring a number of issues identified including these:

-Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?

-Advancing accountable and fair AI: What kinds of controls do we need to minimize AI's potential harm to society and maximize its benefits?

-Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?

Independently of the organizations involved in the fund, the Association for Computing Machinery US Public Policy Council (USACM) has been doing its own research into the issues that arise in a world in which crucial decisions may be determined by algorithms. It recently released its guidance for businesses in its Principles for Algorithmic Transparency and Accountability.

The seven principles listed include awareness of "the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society;" the possibility of an audit of the data, algorithms, and models that were involved in a decision that may have been harmful; the possibility for "redress for individuals and groups that are adversely affected by algorithmically informed decisions;" as well as the obligation for the organization to provide explanations of their processes and accountability for their validity.

As we go forward in incorporating even more algorithms into the daily functions of business and other organizations, we will have to be mindful about the potential impact of decisions that may not be as objective as we assumed them to be. Better data doesn't automatically translate into better results, and we have to be aware of potential problems if we are to address them.


TOPICS: Business/Economy; Culture/Society; Extended News; Philosophy
KEYWORDS: bias; fakenews; predictiveanalytics; statistics
In this article the left realizes its models of predictive analytics are not infallible. Yet elections, global warming, and social justice all depend on these models.

What bias should be built into predictive analytics? When is bias good? When is it inappropriate?

1 posted on 02/22/2017 4:38:20 PM PST by spintreebob

To: HiTech RedNeck

Any comment?


2 posted on 02/22/2017 4:40:53 PM PST by spintreebob

To: spintreebob
Better data doesn't automatically translate into better results,...

i.e. they want to figure out how the training sets can be systematically gamed to ensure desired results. That capability doesn't yet exist, and my guess is that it won't ever. Can't blame them for trying though.

I'm kind of a dilettante when it comes to data science: I listen to a couple of the popular podcasts, go to some of the data science conferences and meetup groups, sit in on presentations by leading professors, and so on. But I'm definitely not an expert. However, the field definitely leans left: they want to use data to promote the "public good" — their version of good, not necessarily good in the classical sense.
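The worry upthread about training sets shaping outcomes can be made concrete with a toy sketch. Everything below is invented for illustration (the group names, the approval history): a "model" that learns from historical labels simply reproduces whatever bias those labels contain.

```python
from collections import defaultdict

# Hypothetical loan-approval history; groups and outcomes are made up
# purely to illustrate the point, not drawn from any real dataset.
history = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

def train(rows):
    """A 'model' that just memorizes historical approval rates per group."""
    tally = defaultdict(lambda: [0, 0])
    for group, approved in rows:
        tally[group][0] += int(approved)
        tally[group][1] += 1
    return {g: round(yes / total, 2) for g, (yes, total) in tally.items()}

rates = train(history)
# Identical applicants from different groups get different predicted rates,
# because the training labels already differed by group.
print(rates)  # {'group_a': 0.67, 'group_b': 0.33}
```

Nothing in the code mentions bias; it faithfully fits its data. The skew comes entirely from the labels it was handed.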

3 posted on 02/22/2017 4:48:57 PM PST by glorgau

To: glorgau

Classic technocrat religion. The best and the brightest should make the important decisions for those less intellectually endowed.

But as humans the best and the brightest make terrible mistakes that impact millions.

The left blames the mistakes on lack of sufficient data and sufficient analysis of the data. Enter Artificial Intelligence, Predictive Analytics and new technology to be used by the best and the brightest to now, finally make the best decisions for the rest of us.


4 posted on 02/22/2017 4:54:57 PM PST by spintreebob

To: spintreebob

Mathematics doesn’t lie. An unbiased program (not algorithm) will faithfully reveal the truth. In today’s pc-obsessed world, the truth explodes heads.

What the a$$holes want is programs with deliberate bias, so the numbers come out “right”.

Remember: math scores that unfailingly show that Chinese and Japanese, among others, beat Whites.


5 posted on 02/22/2017 4:57:05 PM PST by I want the USA back (Lying Media: completely irresponsible. Complicit in the destruction of this country.)

To: spintreebob

All too often, people, even scientists, confuse accuracy and precision. You can calculate results to ever-increasing precision with bigger computers and more data, but if the underlying algorithm is wrong, it comes with no increase of accuracy.
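The accuracy-versus-precision point can be seen in a toy simulation (the bias and noise values below are arbitrary): more data shrinks the statistical spread of the estimate, but a systematic error baked into the model survives any amount of data.

```python
import random

random.seed(42)
TRUE_VALUE = 10.0
SYSTEMATIC_BIAS = 2.0  # the flaw baked into the "algorithm"

def estimate(n):
    # Noisy measurements, each offset by the same systematic bias.
    samples = [TRUE_VALUE + SYSTEMATIC_BIAS + random.gauss(0, 1.0)
               for _ in range(n)]
    mean = sum(samples) / n
    std_err = (sum((s - mean) ** 2 for s in samples) / n) ** 0.5 / n ** 0.5
    return mean, std_err

for n in (10, 1000, 100000):
    mean, err = estimate(n)
    # Precision (std_err) improves with n; accuracy (distance from
    # TRUE_VALUE) stays stuck near the systematic bias of 2.0.
    print(f"n={n:6d}  estimate={mean:7.3f}  std_err={err:.4f}")
```

The error bars get ever tighter around the wrong answer — precision without accuracy.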

The Globull Warmists live this flaw in spades, for example.


6 posted on 02/22/2017 5:02:32 PM PST by XEHRpa

To: I want the USA back
What the a$$holes want is programs with deliberate bias...

The "Hockey Stick", for example.

7 posted on 02/22/2017 5:03:27 PM PST by DuncanWaring (The Lord uses the good ones; the bad ones use the Lord.)

To: I want the USA back

“An unbiased program (not algorithm) will faithfully reveal the truth.”

The problem I see is the hugeness of today's computer programs. Even a "new" program, like a little Visual Basic macro running inside Excel, runs on a foundation of code that is unfathomably large.

Thus practically all programs contain unknown code, and we can't know whether they are doing what we, in our "new" code, think they are doing.


8 posted on 02/22/2017 5:21:36 PM PST by cymbeline

To: spintreebob

eTyrant


9 posted on 02/22/2017 5:26:45 PM PST by bar sin·is·ter

To: spintreebob

what a bunch of gibberish and BS!

“what makes us accept the determinations of algorithms without questioning what factors could have shaped the outcome”

Yes, what indeed? Isn't that why we are equipped with brains capable of logical analysis and reasoning?

Best I can figure, the author is afraid that if algorithms are written to bias the outcome and we don't pay attention, we'll be misled. Welcome to the polling industry, 2016 edition. Or global warming 10 years prior.


10 posted on 02/22/2017 5:38:34 PM PST by bigbob (We have better coverage than Verizon - Can You Hear Us Now?)

To: spintreebob

I activated a Google alert for Trump. The results sent to me just today were 90%+ heavily anti-Trump stories from the likes of CNN, the NYT, etc.


11 posted on 02/22/2017 5:40:03 PM PST by stocksthatgoup (There will come a time when those screaming Fascists are in fact the actual Facists. W Churchill)

To: spintreebob

One of the seven deadly statistical sins almost all people commit is placing statistical significance above practical importance.


12 posted on 02/22/2017 6:04:28 PM PST by jurroppi1 (The Left doesn't have ideas, it has cliches. H/T Flick Lives)

To: glorgau

No, they take the results they want and make the data fit their model.


13 posted on 02/22/2017 6:49:30 PM PST by Foolsgold (Those who are too smart to engage in politics are punished by being governed by those who are dumber)

To: spintreebob
I was wondering when progressives would start the effort to ban analytics.

The problem with analytics, from the progressive view, is not that the programmers were biased. It is that the objective algorithms and data often violate progressives' enforced assumptions about the world.

The point of analytics is to have software look for patterns in data. Patterns the programmers were not aware of. Sounds innocent, but when people do that to other people, isn't that profiling, or racism, or any of the other "isms"?

Here is an example. One of the first users of data analytics was grocery stores. Once they began issuing loyalty cards, they could use people's purchase history to learn that people who bought ketchup often bought mustard at the same time. So a smart grocery store would locate ketchup and mustard together, or even package them together.
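This kind of loyalty-card pattern mining boils down to counting co-occurrences across baskets. A minimal sketch, with made-up baskets:

```python
from collections import Counter
from itertools import combinations

# Made-up loyalty-card baskets, purely for illustration.
baskets = [
    {"ketchup", "mustard", "buns"},
    {"ketchup", "mustard"},
    {"ketchup", "relish"},
    {"mustard", "buns"},
    {"milk", "bread"},
]

item_counts = Counter()
pair_counts = Counter()
for basket in baskets:
    item_counts.update(basket)
    # Count every unordered pair of items appearing in the same basket.
    pair_counts.update(frozenset(p) for p in combinations(sorted(basket), 2))

# Confidence that a ketchup buyer also buys mustard in the same trip.
pair = frozenset({"ketchup", "mustard"})
confidence = pair_counts[pair] / item_counts["ketchup"]
print(f"P(mustard | ketchup) = {confidence:.2f}")  # P(mustard | ketchup) = 0.67
```

The software has no idea what ketchup or mustard is; it just surfaces whatever pairs co-occur, which is exactly why it can also surface pairs people would rather not see.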

So far this is innocent. But how about if the software found that people who bought fried chicken also bought watermelon? And better yet, if the software knew the customers were black? Should you locate the watermelon next to the chicken? Should the store send these customers coupons for both? Or should you pretend not to see a racial stereotype despite the fact that objective algorithms found a real pattern?

Now you have a progressive quandary. The article seems to want progressive filters in the software in order to become blind to patterns that violate progressive dogma. The actual bias is in the filters progressives want, not in the analytics software, and not in the programmers of the analytics software.

14 posted on 02/22/2017 7:36:00 PM PST by Vince Ferrer

To: spintreebob

While these tools should be used, basing all decisions on what some computer algorithm says is really stupid.


15 posted on 02/24/2017 12:41:01 PM PST by Impy (End the kritarchy!)

Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson