
By Joe Kay & Toby Lyons

In 2016, applicants from over 100 countries entered a beauty contest judged entirely by Artificial Intelligence (AI). Of the 44 winners chosen by the algorithmic judges, nearly all were white.

As AI is increasingly used to replace human decision-making, we should be concerned about the underlying biases baked into these complex and closely guarded black boxes.

In the case of the beauty contest, the AI did exactly what it was trained to do: judge beauty based on human perceptions of what beauty is. The crux of the problem lay in the programmers’ failure to supply the AI with a diverse, unbiased training set to learn from.

AI reflects the biases of its creators and is only as good as the data it learns from; you get out what you put in. Firstly, whoever compiles the initial training sets has their own prejudices (or, in the case of the contest, their own preconception of beauty); secondly, machines learn autonomously from human feedback and will imitate the biases that the data contains.
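To make the "you get out what you put in" point concrete, here is a minimal Python sketch using invented data and a deliberately naive "model": a system trained on a skewed dataset simply reproduces the skew in its training labels.

```python
# Minimal sketch with invented data: a naive "model" trained on a
# skewed dataset reproduces the skew of its training labels.
training_data = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

# "Training": record how often each group received a positive label.
positive_rate = {}
for group in {g for g, _ in training_data}:
    labels = [label for g, label in training_data if g == group]
    positive_rate[group] = sum(labels) / len(labels)

# "Prediction": the model scores new candidates by group base rate
# alone, so the training skew carries straight through to its output.
print(positive_rate)  # e.g. {'group_a': 0.75, 'group_b': 0.25}
```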

AI drives decisions that can have a serious impact on our lives, and the algorithms powering it are used to automate, influence, and even replace human decision-making. For example, they are used to assess who qualifies for a loan, to help diagnose illness, and even to determine who is eligible for parole.

As AI and machine learning spread into ever more critical areas of our lives, there is mounting concern that we need a deeper understanding of this technology; transparency is essential if we are to avoid algorithmic bias (see Forget Killer Robots, Bias Is the Real AI Danger).

This article highlights the need to change the way AI is programmed in order to avoid the negative impact that bias in AI is already having in the world today.

We explore approaches currently being used to tackle algorithmic bias and add to the discussion by suggesting that Team Intelligence software is a necessary precursor to AI, because it can prevent human biases from contaminating training and feedback data.

Incarceration, oppression and AI

ProPublica’s report on machine bias in predicting criminality is a terrifying insight into how the American criminal justice system increasingly relies on AI to do its thinking. The report shows how AI estimates a defendant’s likelihood of reoffending at some point in the future.

According to the report, the AI was twice as likely to wrongly label African American defendants as future criminals as it was white defendants. It was also extremely unreliable at predicting future violent crime.
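The headline disparity is a difference in false positive rates: the share of defendants who did not reoffend but were nonetheless flagged as high risk, compared across groups. A minimal sketch of that calculation, using invented records rather than the actual COMPAS data:

```python
# Sketch of ProPublica's headline metric: the false positive rate,
# i.e. the share of non-reoffenders flagged as high risk, compared
# across groups. The records below are invented, not COMPAS data.
def false_positive_rate(records):
    """records: list of (predicted_high_risk, actually_reoffended)."""
    false_pos = sum(1 for pred, actual in records if pred and not actual)
    negatives = sum(1 for _, actual in records if not actual)
    return false_pos / negatives

group_a = [(True, False), (True, False), (False, False), (True, True)]
group_b = [(True, False), (False, False), (False, False), (True, True)]

print(false_positive_rate(group_a))  # 0.67: two of three non-reoffenders flagged
print(false_positive_rate(group_b))  # 0.33: half the error rate of group_a
```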

Yet human judges use this data to inform sentences.

The need for transparency and accountability

The AI is fed a 137-question report on each defendant, but its methodology has not been disclosed, so we can only speculate as to how it weights the answers to determine each individual’s likelihood of reoffending.

The reliance on biased data, and the subsequent lack of accountability and transparency, is reprehensible when we consider what’s at stake here: the possibility of dangerous criminals slipping through the net, and of people unfairly receiving harsher sentences. Biased AI will continue to influence sentencing decisions, push incorrect statistics into the public consciousness, and perpetuate racial inequality.

This could well be the thin end of the wedge for what we’re beginning to see in the application of AI in the years to come.

What is being done to tackle bias in AI?

The open letter on Artificial Intelligence, signed by world-renowned figures such as Professor Nick Bostrom, the late Stephen Hawking, and Elon Musk, is indicative of how seriously the dangers of AI are being taken.

Organisations such as OpenAI, the Future of Humanity Institute, and the AI Now Institute are starting to raise awareness of the dangers of bias in AI, but there is still a sense that neither governments nor the companies developing AI are interested in addressing the problem (see Biased Algorithms Are Everywhere, and No One Seems to Care).

The AI Now Institute at New York University is an interdisciplinary research center dedicated to understanding the social implications of AI. One of their four core domains is bias and inclusion. Their 2017 report identified emerging strategies to address bias, summarised below:

  • The need to promote gender and racial diversity within AI development to address issues of inclusivity within the industry
  • The need to explore accountability and transparency issues surrounding AI and algorithmic systems as a better way to understand and evaluate biases
  • Due to misaligned interests and the information asymmetry that AI exacerbates, there is a need to develop new incentives for fairness and new methods for validating fair practice
  • Part of the fundamental difficulty in defining, understanding, and measuring bias stems from the contentious and conceptually difficult task of defining fairness (the sketch after this list shows how two reasonable definitions can disagree)
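To illustrate that last point: two widely used formalisations of fairness, demographic parity (equal selection rates across groups) and equal opportunity (equal true positive rates), can disagree about the very same predictions when groups have different base rates. A toy sketch with invented data:

```python
# Two common, mutually incompatible formalisations of "fairness"
# (toy data; the point is only that the definitions can disagree).
def selection_rate(records):
    """Demographic parity compares P(predicted positive) across groups."""
    return sum(1 for pred, _ in records if pred) / len(records)

def true_positive_rate(records):
    """Equal opportunity compares P(predicted positive | actual positive)."""
    positives = [(p, a) for p, a in records if a]
    return sum(1 for p, _ in positives if p) / len(positives)

# (predicted, actual) pairs for two groups with different base rates.
group_a = [(True, True), (True, True), (False, False), (False, False)]
group_b = [(True, True), (False, False), (False, False), (False, False)]

# Equal opportunity holds: every actual positive is found in both groups...
print(true_positive_rate(group_a), true_positive_rate(group_b))  # 1.0 1.0
# ...yet demographic parity fails: selection rates differ (0.5 vs 0.25).
print(selection_rate(group_a), selection_rate(group_b))
```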

Team Intelligence

Digital Team Intelligence is an emerging field: a new category of software that neutralises the effects of human bias in teams to help them work together more effectively. Because human bias so badly contaminates AI training data, Team Intelligence could be used by AI training teams to develop less biased, “fairer” AI.

Team Intelligence software has been designed with the recognition that cognitive bias is a natural part of human behaviour, one that hinders groups of people from making decisions objectively. The technology can be used to enhance or replace in-person meetings and workshops, where decisions are traditionally made based on who can put forward the best argument, not on which ideas are best.

In a growing number of early use cases, we are seeing Team Intelligence software applied across a variety of sectors to promote diverse and objective decision-making (see case studies here).

Team Intelligence helps teams to objectively work through the process that most meetings are based around:

Asking Questions – Proposing and Discussing Ideas – Deciding on Actions

Team Intelligence software removes human bias from teams by initially hiding the names of individuals while tracking their activity in the background to improve motivation, a topic Part 2 of this blog will discuss.

This temporary use of anonymity acts much like blind auditions or double-blind medical trials, and removes bias in the following ways (a sketch of the mechanism follows the list):

  • Participants are able to test “crazy ideas” or ask basic questions without worrying about social implications or the pressure of “looking stupid”
  • Individuals are asked to rate the content of others without being unconsciously affected by who has said what
  • Individual ratings are not disclosed to others which removes our tendency to conform to group opinion
  • The Team Intelligence software then aggregates the independent ratings of the team-members to display the results back to the team for objective analysis and decision-making
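Enswarm has not published its implementation, so the following is a hypothetical Python sketch of the anonymise–rate–aggregate loop described above; all names, functions, and data here are invented for illustration:

```python
# Hypothetical sketch of the anonymise-rate-aggregate loop described
# above; the vendor's actual implementation is not public, so the
# structure and names below are invented for illustration.
import statistics
import uuid

ideas = {}  # anonymous id -> idea text; authorship is tracked separately

def submit_idea(author, text, author_log):
    """Store the idea under a random id; log authorship privately."""
    idea_id = uuid.uuid4().hex[:8]
    ideas[idea_id] = text
    author_log[idea_id] = author  # kept hidden from raters
    return idea_id

def aggregate(ratings):
    """Combine independent ratings; no individual score is revealed."""
    return {idea_id: statistics.mean(scores)
            for idea_id, scores in ratings.items()}

author_log = {}
i1 = submit_idea("alice", "Pilot the tool with the hiring team", author_log)
i2 = submit_idea("bob", "Rewrite the onboarding docs", author_log)

# Each member rates every idea without seeing names or others' scores.
ratings = {i1: [4, 5, 3], i2: [2, 3, 2]}
print(aggregate(ratings))  # only the aggregate goes back to the team
```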


Team Intelligence allows individuals to think collectively but objectively before coming back together to have a focused and deeper discussion about the reasons why they thought what they did.

How Team Intelligence can help remove bias from AI

This article has discussed three critical issues relating to bias in AI, each widely documented elsewhere. The pairs below match each issue with the corresponding Team Intelligence solution.

Issue: Bias in AI mirrors the bias of its creators.
Solution: Team Intelligence software provides AI with less biased training data.

Issue: Lack of diversity in AI training and development.
Solution: Team Intelligence software promotes diversity of thought and encourages everyone’s voice to be heard.

Issue: Lack of transparency and accountability in the field of AI.
Solution: Team Intelligence software data-enables decision-making: ideas, contributions, and the decisions subsequently acted upon are recorded, accountable, and transparent.
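As a rough illustration of the transparency claim in the last pair, contributions and decisions could be captured in an append-only event log that is never edited in place. This is an invented sketch, not Enswarm’s actual design:

```python
# Hypothetical sketch of the audit trail implied above: every idea,
# rating, and decision is appended to an immutable log so the path
# from contribution to decision can be inspected later.
import json
import time

decision_log = []

def record(event_type, payload):
    """Append a timestamped event; the log is never edited in place."""
    decision_log.append({
        "time": time.time(),
        "type": event_type,
        "payload": payload,
    })

record("idea_submitted", {"idea_id": "a1b2", "text": "Pilot with HR"})
record("rating_aggregated", {"idea_id": "a1b2", "mean_score": 4.0})
record("decision", {"idea_id": "a1b2", "action": "approved"})

print(json.dumps(decision_log, indent=2))  # full, inspectable history
```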

Final words:

Given that AI reflects the bias of its creators, it is important to reduce the possibility of bias creeping in at the beginning of new AI projects.

We believe that Digital Team Intelligence will be critical for teams engaged in the development of AI.

You can find out how Enswarm’s Team Intelligence Platform improves the way people think together HERE.