The role of data and analytics in evaluation

What sort of data should we be gathering, and what questions should we ask?

Last week, at The Festival of Work, I saw a session on Learning Data and Analytics that was at capacity.

No more space, not one seat left, packed, full, people turned away (I myself had to watch from the sidelines in the standing-room-only section).

Data and analytics, it seems, is a hot topic right now and, as ever, you can find the topic explored in some detail here on the DPG Community. There are articles here and here that are well worth reading on data in L&D.

So why now?

I think the focus on data has always been there, but one of the issues is accessibility and the type of data and information we as an industry have focused on. I also think data is now being looked to in order to understand relationships: the way we draw conclusions and make links to how and why things are as they are.

There is a seismic shift across all industries towards using the abundance of data and insight now available: using it to better understand customer behaviour, customer lifecycles, the problems we're looking to solve, product and service satisfaction, more effective product development, credibility, buying habits and trends, and positive and developmental feedback (reviews). We live in a digital world and this has changed our relationship with data, but the principles of data analysis and product development can apply to all aspects of L&D. For me, data helps us ask fundamentally better questions, or helps us know the right questions to ask.

If L&D were a start-up business developing a suite of products, this is exactly the type of data we should be looking to gather and the questions we should be asking, in terms of:

  • Clearly identifying the problem / challenge / pain faced by potential customers
  • Credibility – consultative conversations with the customer to ensure a deep understanding, so the right approach is used and valid recommendations are provided that meet the need
  • Customer lifecycles – where, and at what point, support / training / solutions are required to address any problems
  • Customer behaviour – how customers interact with a product to address the problem, where they access or buy it and, crucially, how the product changes or modifies behaviour in a positive way and ‘solves’ the problem
  • Product satisfaction (feedback on learning solutions / performance support tools)
  • Product development / continuous improvement based on feedback

Maybe one of the previous challenges for L&D is that the ‘product’ has predominantly been a face-to-face classroom session or workshop, and we just haven’t thought about what we do and how we do it in these terms before. The questions and approach above can be applied to any L&D product, but I just don’t think we think as commercially as we should when it comes to demonstrating value and measuring success. As a cost centre, L&D needs to be able to demonstrate the value it provides, and more is being demanded of us than ever before.

To respond to this demand, more and more L&D teams are working with agile principles and methodology: rapidly prototyping and developing minimum viable products, and working far more collaboratively with end users to ensure that whatever is developed meets their needs. Pilots, experiments and beta products are all things you will find in other industries, but we don’t tend to embrace these approaches as much as we perhaps should. One of the things I’ve been thinking about is how we link the data and insight used to ask and answer the questions above to traditional L&D evaluation.

Evaluation in L&D needs to evolve. I’m not suggesting that the likes of the Kirkpatrick Evaluation Model (Link) are obsolete. As a Kirkpatrick Partner, we are big believers in the model. The framework it sets out still provides a structure to follow that can be adapted to each of the elements listed above. It’s a move from an old training mindset to that of a data-driven, evidence-based department that focuses less on individual training events and more on overall engagement, customer or employee lifecycles and product / performance improvement.

Rather than taking the four levels as a linear model and moving through them one by one, we can analyse the key elements of each and apply them in a way that meets the needs of the customer. For example:

Results (level 4)

Start with the end in mind and analyse challenges and pain points: what does success look like and how will it be measured? Take a consultative approach, working collaboratively with the customer.

Learning (level 2)

Make a recommendation and begin prototyping with the customer to build an MVP. Get it in front of people to test and use. Pilot, pilot, pilot.

Reaction (level 1)

Get effective user feedback: is it effective, easy to use, easy to understand? What works and what doesn’t? How can it be improved? Collaborate and demand feedback.

Behaviour/Transfer (level 3)

Does it have the desired impact? Does it change behaviour or provide new skills? Can this be observed, and how does it link to the desired success and measures identified at the start?

If yes – release, go back to ‘Results’ and track them.

If no – go back to ‘Learning’ and better product development.

Repeat.
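To make the loop above a little more concrete, here is a rough sketch of it in Python. It is purely illustrative: the function names (analyse_success_measures, build_prototype, gather_feedback, behaviour_changed) and values are hypothetical placeholders rather than any real Kirkpatrick tooling, but the order of the calls mirrors the non-linear use of the four levels described above.

```python
# A minimal, illustrative sketch of the non-linear evaluation loop described above.
# Every function and value here is a hypothetical placeholder, not real L&D tooling.

def analyse_success_measures(customer):
    """Results (level 4): start with the end in mind - agree what success looks like."""
    return {"goal": f"reduce time to competence for {customer}", "target_score": 4.0}

def build_prototype(measures, version):
    """Learning (level 2): prototype an MVP with the customer and pilot it."""
    return {"name": "performance-support-toolkit", "version": version, "measures": measures}

def gather_feedback(product):
    """Reaction (level 1): collect pilot feedback - here simulated 1-5 scores."""
    return [2.5, 3.0, 3.1] if product["version"] == 1 else [3.9, 4.2, 4.3]

def behaviour_changed(feedback, measures):
    """Behaviour (level 3): did the pilot reach the agreed success measure?"""
    return sum(feedback) / len(feedback) >= measures["target_score"]

def evaluation_loop(customer, max_iterations=5):
    measures = analyse_success_measures(customer)      # Results first
    for version in range(1, max_iterations + 1):
        product = build_prototype(measures, version)   # Learning
        feedback = gather_feedback(product)            # Reaction
        if behaviour_changed(feedback, measures):      # Behaviour / transfer
            print(f"Release v{version} and go back to tracking Results.")
            return product
        print(f"v{version} missed the measure - back to Learning.")
    return None

if __name__ == "__main__":
    evaluation_loop("new starters")
```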

OK, so I appreciate this is a crude example, but the point I’m making is that we have always looked at evaluation as something we do to other people, or something we apply at the end of the training process, as opposed to something we apply to our own process, continually analysing and reviewing the feedback and data we have.

We can do this in a much more agile way, working much closer with our customers to provide better products and services. Working with pilot groups and working in sprints can speed up the development process considerably. Today, we don’t need to wait three to six months to analyse a change in behaviour, and we don’t have to rely on outdated modes of gathering feedback: apps, pulse surveys and polls can get us closer to the data we need much more quickly, if we’re prepared to experiment and explore.
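As a small illustration of what that quicker feedback can look like, here is a hedged sketch in Python using pandas. The survey questions, column names and scores are entirely made up, but it shows how pulse-survey responses from a pilot group could be summarised week by week rather than months after the event.

```python
# Illustrative only: summarising hypothetical pulse-survey data from a pilot group,
# rather than waiting months for a formal post-course evaluation.
import pandas as pd

# Made-up responses: one row per participant per week of the pilot.
responses = pd.DataFrame({
    "week":       [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "confidence": [2, 3, 3, 3, 4, 3, 4, 5, 4],   # self-rated confidence, 1-5
    "used_tool":  [0, 1, 0, 1, 1, 1, 1, 1, 1],   # used the new support tool this week?
})

# A weekly view of confidence and adoption gives an early behavioural signal.
weekly = responses.groupby("week").agg(
    avg_confidence=("confidence", "mean"),
    adoption_rate=("used_tool", "mean"),
)
print(weekly)
```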

In this age of data, we need to look at our own processes and approaches to stay relevant and keep up with the demands of our customers. This isn’t a complete overhaul of everything we do, but rather looking at what we do and how we do it through other lenses, borrowing ideas and concepts from other industries to make us more resilient and responsive.

Perhaps learning analytics, while the latest buzzword, is just another way for L&D to evolve its practices, and that’s why there is such a focus on it. For me, though, it isn’t about disregarding everything we’ve done before, but using what we can see and experience to make better and more informed decisions. Yes, there is an element of shiny new technology, analytics dashboards and ‘evidence-based’ approaches, but this needs to blend with ‘evaluation’ of sorts. We can learn a lot from new forms of data, but also from our existing data, by using it in a more considered and effective way.

What do you think?

How are you using data & evidence-based practice to improve your product development?

Is the way you do evaluation changing?

If you are interested in finding out more about how to apply data and evidence-based practice to your Learning and Development activities, check out our Kirkpatrick Four Levels Certificate.


Ady Howes - Community Manager, DPG


Comments

  • Well put, Ady – evaluation is often a panicked afterthought when it should be what we start with. We need to get better at drawing out from our stakeholders what that end goal is and how we measure it. Otherwise we are wasting our time, quite frankly!

