Data ethics and prejudice in A.I.

I've been posting about biases recently, in the interests of considering how awareness of them can help us help people. There's still a lot to consider there, and I really like Mike Collins's post on the subject of cognitive bias. Click here to skip to the post on DPG Community.

In addition to this, I've come across an article which has got me thinking about the impact of bias on processes, systems and machines.

I'm sure we all agree that technology plays a large and growing role in the workplace. The Taylor Report states: "Technology is changing the way we live and work at a rate not seen since the Industrial Revolution."

Perhaps the hottest topic in technology is Artificial Intelligence, which brings me to the article I want to share: 'How we transferred our bias into our machines and what we can do about it'. 

A.I. needs to be programmed. That's done by writing algorithms, which tell the system how to make predictions about what will happen in a situation - and what to do about it. Those predictions are based on the past, e.g. fire is known to be damaging; fire requires heat; therefore, to prevent damage, turn on the sprinklers if fire seems likely. Simple, effective - and welcome.
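The fire example can be sketched as a couple of simple rules. The sensor inputs and the temperature threshold below are invented purely for illustration - this is a minimal sketch of rule-based prediction, not any real sprinkler system:

```python
# A minimal sketch of prediction-from-the-past, assuming two hypothetical
# sensor readings (temperature and smoke) and an arbitrary threshold.
def fire_likely(temperature_c: float, smoke_detected: bool) -> bool:
    """Past knowledge encoded as a rule: fire requires heat (and smoke)."""
    return smoke_detected and temperature_c > 60.0

def control_sprinklers(temperature_c: float, smoke_detected: bool) -> str:
    # To prevent damage: turn on the sprinklers if fire seems likely.
    if fire_likely(temperature_c, smoke_detected):
        return "sprinklers on"
    return "sprinklers off"

print(control_sprinklers(75.0, True))   # hot and smoky -> sprinklers on
print(control_sprinklers(20.0, False))  # normal conditions -> sprinklers off
```

The point is that the rule is only as good as the past knowledge baked into it - which is exactly where the next paragraph's problem begins.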

However, there is a point at which predictions become unhelpful (or unwelcome) prejudice. The podcast accompanying the article gives an example: the narrator starts to type a text message on his phone about a nurse, and the predictive text facility assumes (or predicts) that the nurse is female. Thus 'we' have transferred our bias into our machines.
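The predictive-text example boils down to suggesting the word that most often followed the current one in past messages. A minimal sketch, assuming a tiny invented corpus - the bias lives in the data, not in the algorithm:

```python
from collections import Counter

# Hypothetical past messages: the corpus itself carries our bias.
corpus = [
    "the nurse said she would call",
    "the nurse said she was busy",
    "the nurse said he was off duty",
]

def predict_after(prev: str) -> str:
    """Suggest the word that most often followed `prev` in past messages."""
    followers = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i, word in enumerate(words[:-1]):
            if word == prev:
                followers[words[i + 1]] += 1
    return followers.most_common(1)[0][0]

print(predict_after("said"))  # "she" - the majority of past examples win
```

A perfectly neutral frequency count reproduces whatever imbalance the past data contains, which is exactly how 'we' transfer our bias into our machines.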

If a system makes a decision which is later considered discriminatory, who is accountable?

As HR professionals, do we have an obligation to safeguard our organisation's strategy, products and services from unhelpful bias in technology? 

As L&D professionals, how do we achieve a learner-centred approach if our system is predicting the student's needs based on the past?

I guess the final question is: had we all better add writing algorithms to our CPD plan?

Click here to read the article 'How we transferred our bias into our machines and what we can do about it' and access the podcast


Gary is The Professional Development Community Manager


Comments

  • I guess it also depends on what kind (level) of AI is under consideration. http://searchcio.techtarget.com/definition/AI

    If a machine learns to make decisions for itself (three Laws of Robotics, anyone?) then that is completely uncharted territory at this point.

  • Hey Gary, really interesting post - it got me thinking back to the Moral Machine, which presents scenarios for self-driving cars: what happens if the brakes fail, and how does the car come to its decisions?

    We answer the questions with our own cognitive bias, so WHOEVER programmes the AI chip in a self-driving car must bring some bias, which will then be inherent in the car's processor and will drive its decisions. Surely this can't be helped - after all, we're not machines, we're human.

    • Thanks, Mike. Yeah, I recently saw what appeared to be a very similar, if not the same, system as the Moral Machine on the TV programme Guy Martin vs The Robot.


      This programme is one of the things that got me thinking about an A.I. post in the first place.

      Whoever writes these algorithms definitely has considerable responsibility at their fingertips. I hope those people have an awareness of the biases mentioned in your post and the links within it. And if not, maybe HR and L&D teams are well placed to bring those biases to their attention.
