I've been posting about biases recently, in the interests of considering how understanding them can help us help people. There's still a lot to consider there, and I really like Mike Collins's post on the subject of cognitive bias over on the DPG Community.
In addition to this, I've come across an article that has got me thinking about the impact of bias on processes, systems and machines.
I'm sure we can all agree that technology plays a huge, and growing, role in the workplace. The Taylor Report states: "Technology is changing the way we live and work at a rate not seen since the Industrial Revolution."
Perhaps the hottest topic in technology is Artificial Intelligence, which brings me to the article I want to share: 'How we transferred our bias into our machines and what we can do about it'.
A.I. needs to be programmed, and that's done by writing algorithms. The algorithms tell the system how to make predictions about what will happen in a situation, and what to do about it. Those predictions are based on the past: fire is known to be damaging; fire produces heat; therefore, to prevent damage, turn the sprinklers on when the heat suggests a fire is likely. Simple, effective - and welcome.
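To make that concrete, here's a toy sketch of what such a rule might look like in code. It's purely illustrative - the sensor names and thresholds are my own invented assumptions, not how any real sprinkler system is programmed:

```python
# A toy, rule-based "prediction": infer fire risk from sensor readings
# and act on it. Sensor names and thresholds are illustrative only.

def sprinklers_should_activate(temperature_c: float, smoke_level: float) -> bool:
    """Predict that fire is likely when heat and smoke exceed thresholds."""
    fire_likely = temperature_c > 60.0 and smoke_level > 0.5
    return fire_likely

# Example: a hot, smoky reading -> turn the sprinklers on
if sprinklers_should_activate(temperature_c=72.0, smoke_level=0.8):
    print("Fire seems likely - activating sprinklers")
```

The rule is hand-written, but the principle carries over: the system acts on what the past tells it to expect.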
However, there is a point at which predictions become unhelpful, or unwelcome, prejudice. The podcast accompanying the article gives an example: the narrator starts typing a text message on his phone about a nurse, and the predictive text facility assumes (or predicts) that the nurse is female. Thus 'we' have transferred our bias into our machines.
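To see how that happens, here's a deliberately simplified sketch - my own illustration, not the podcast's - of a predictive-text model that just counts which word most often followed "nurse said" in its training text. The sample sentences are invented; the point is that if the historical text skews one way, so does every prediction:

```python
from collections import Counter

# Toy "training data": past sentences the predictor learns from.
# The skew in this history is the bias that gets baked in.
history = [
    "the nurse said she would help",
    "the nurse said she was busy",
    "the nurse said he would help",
]

# Count which word most often follows "nurse said" in the history.
next_words = Counter()
for sentence in history:
    words = sentence.split()
    for i in range(len(words) - 2):
        if words[i] == "nurse" and words[i + 1] == "said":
            next_words[words[i + 2]] += 1

# The predictor simply echoes the majority of the past.
print(next_words.most_common(1))  # [('she', 2)] - the bias, reproduced
```

Nobody wrote a rule saying "nurses are female"; the machine simply inherited the imbalance in what we gave it to learn from.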
If a system makes a decision which is later considered discriminatory, who is accountable?
As HR professionals, do we have an obligation to safeguard our organisation's strategy, products and services from unhelpful bias in technology?
As L&D professionals, how do we achieve a learner-centred approach if our systems are predicting a student's needs based on the past?
I guess the final question is this: should we all be adding writing algorithms to our CPD plans?