

Data ethics and prejudice in A.I.

I've been posting recently about biases, in the interest of considering how an awareness of them can help us help people. There's still a lot to consider there, and I really like Mike Collins's post on the subject of Cognitive Bias. Click here to skip to the post on DPG Community.

In addition, I've come across an article that has got me thinking about the impact of bias on processes, systems and machines.

I'm sure we can all agree that technology's role in the workplace is massive and growing. The Taylor Report states: "Technology is changing the way we live and work at a rate not seen since the Industrial Revolution."

Perhaps the hottest topic in technology is Artificial Intelligence, which brings me to the article I want to share: 'How we transferred our bias into our machines and what we can do about it'. 

A.I. needs to be programmed. That's done by writing algorithms. The algorithms tell the system how to make predictions about what will happen in a situation, and what to do about it. Those predictions are based on the past: fire is known to be damaging; fire requires heat; therefore, to prevent damage, turn on the sprinklers if fire seems likely. Simple, effective and welcome.
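To make the fire/sprinkler idea concrete, here is a minimal sketch of that kind of rule-based prediction. All the names and thresholds here are hypothetical, invented purely for illustration; a real system would of course be far more sophisticated:

```python
def fire_likely(temperature_c: float, smoke_detected: bool) -> bool:
    """Predict fire from past knowledge: fire requires heat (and smoke)."""
    return temperature_c > 60 and smoke_detected

def sprinkler_decision(temperature_c: float, smoke_detected: bool) -> str:
    # To prevent damage, turn on the sprinklers if fire seems likely.
    if fire_likely(temperature_c, smoke_detected):
        return "sprinklers on"
    return "monitor"

print(sprinkler_decision(75.0, True))   # hot and smoky: act
print(sprinkler_decision(20.0, False))  # normal conditions: keep watching
```

The "prediction" is nothing mysterious: it's a rule a human wrote down, encoding past experience of what fire looks like.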

However, there is a point at which prediction becomes unhelpful (or unwelcome) prejudice. The podcast accompanying the article gives an example: the narrator starts to type a text message on his phone about a nurse, and the predictive-text facility assumes (or predicts) that the nurse is female. Thus 'we' have transferred our bias into our machines.
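A toy sketch shows how that happens without anyone intending it. This is not how a real predictive-text engine works; it's an invented, deliberately tiny example where the "training data" itself carries the bias, and the prediction simply reflects it:

```python
from collections import Counter

# Hypothetical "past text" the system has learned from.
past_sentences = [
    "the nurse said she would help",
    "the nurse said she was busy",
    "the nurse said he was on shift",
]

def predict_pronoun(word, corpus):
    """Predict the pronoun that most often follows `word ... <pronoun>` in past text."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens[:-2]):
            if tok == word and tokens[i + 2] in ("she", "he"):
                counts[tokens[i + 2]] += 1
    return counts.most_common(1)[0][0]

print(predict_pronoun("nurse", past_sentences))  # "she": the bias in the data, echoed back
```

The algorithm contains no opinion about nurses at all; it just counts. The prejudice arrives with the data, which is exactly why it is so easy to transfer our bias into our machines without noticing.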

If a system makes a decision which is later considered discriminatory, who is accountable?

As HR professionals, do we have an obligation to safeguard our organisation's strategy, products and services from unhelpful bias in technology? 

As L&D professionals, how do we achieve a learner centred approach if our system is predicting the student's needs based on the past?

I guess the final question is: had we all better add writing algorithms to our CPD plan?

Click here to read the article 'How we transferred our bias into our machines and what we can do about it' and access the podcast


Gary is an Online Learning Consultant working at DPG.



  • I guess it also depends on what kind (level) of AI is under consideration. http://searchcio.techtarget.com/definition/AI

    If a machine learns to make decisions for itself (three Laws of Robotics, anyone?) then that is completely uncharted territory at this point.


  • Hey Gary, really interesting post. It got me thinking back to the Moral Machine, which presents self-driving car scenarios: what happens if the brakes fail, and how the car arrives at its decisions.

    We answer the questions with our own cognitive bias, so WHOEVER programmes the AI chip in the self-driving cars must show some bias, which will then be inherent in the car's processor and will drive its decisions. Surely this can't be helped; after all, we're not machines, we're human.

    • Thanks, Mike. Yeah, I recently saw what appeared to be a very similar, if not the same, system as the Moral Machine highlighted on the TV programme Guy Martin vs The Robot.

      This programme is one of the things that got me thinking about an A.I. post in the first place.

      Whoever writes these algorithms definitely has considerable responsibility at their fingertips. I hope those people have an awareness of the biases mentioned in your post and the links within. And if not, maybe HR and L&D teams are well placed to bring those biases to their attention.

