How Do You Analyse Webinar Success?

(Author's note: this is a first set of thoughts, noted reasonably quickly while they are fresh in my mind. I am interested to know what people think and what approaches people take to analysing their own webinars.)

Today was a good day. At lunchtime, I co-hosted a webinar entitled “Managing and Analysing Data to Understand Learning Impact”. My co-host was Stephanie Stretton of Rosetta Stone, a smart, interesting and thoughtful coworker and an excellent addition to any network. The show was managed by the LPI, who do seem able to gather an interesting and eager audience. This was no different: they contributed fully and kept me on my toes too. Michael Strawbridge was our capable and attentive ringleader, as usual.

In the 45 minutes, which flew past, we covered some fundamental themes of digital: ever-heightened expectations of a good user experience; the key to personalisation being personal relevance; and the idea that signals of value are more useful than the quest for proof. (I have posted on that last point before.)

My perception is that it went well. It was certainly enjoyable: the participants were active, and meeting new people with different ideas and other views is always gratifying. As I sat on the tube to my next meeting, though, I started to wonder, in the spirit of data analysis: what should I look at to test my hypothesis that it went well? What are the data signals of value?

So, here are some thoughts on the kinds of signals I think might make for a good analysis of the event. I realise that there are many webinar and VILT specialists out there and I am really keen to know what you look at to assess how your events work. I also realise that this data may be standard from platform providers (is it?), so this might not be new – working out loud and all that.

I have divided them into some categories that seem relevant as I type. There are too many here, but this is something of a brainstorm for blogging, so please bear with me.

Participant engagement (I don’t like that word but it seems to have stuck):

% of finishers (those who stayed to the end)

And the drop-off rate, for those whose glass is half empty (a quick calculation sketch for these engagement ratios follows after this list)

Number of commenters

Commenters as a % of participants (comment reach)

Comments per participant (comment depth)

Comment length – single words versus longer form

Number of repeat commenters – the engines, or the dominators?

Repeat commenters as a % of total commenters (comment concentration?)

Number of peer-to-peer comments amongst participants (conversation strength?)

Some kind of sentiment analysis might be interesting too, for a large audience or for certain topics?

We didn’t use audio, so measures of ‘speaking’ versus commenting would not have helped, but it seems like a good metric to me if you do use it

And finally… if you disable commenting and speaking, how do you know what is going on?
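Several of these ratios are simple to work out once you have the raw exports. Here is a minimal sketch, assuming your platform lets you download an attendee list and a comment log as CSV files – the file and column names are purely illustrative, not from any particular product:

```python
import csv
from collections import Counter

# Illustrative file and column names - substitute whatever your platform actually exports.
with open("attendees.csv", newline="") as f:
    attendees = list(csv.DictReader(f))   # one row per participant
with open("comments.csv", newline="") as f:
    comments = list(csv.DictReader(f))    # one row per comment posted

participants = len(attendees)
finishers = sum(1 for a in attendees if a.get("stayed_to_end") == "yes")
comment_counts = Counter(c["participant_id"] for c in comments)
repeat_commenters = sum(1 for n in comment_counts.values() if n > 1)

print(f"Finisher rate: {finishers / participants:.0%}")
print(f"Drop-off rate: {1 - finishers / participants:.0%}")
print(f"Comment reach: {len(comment_counts) / participants:.0%}")        # % of participants who commented
print(f"Comment depth: {len(comments) / participants:.1f} per participant")
if comment_counts:
    print(f"Comment concentration: {repeat_commenters / len(comment_counts):.0%} of commenters commented more than once")
```

Nothing clever there, but pulling the same few ratios after every event is what makes them comparable later.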

Satisfaction:

Some kind of simple rating would be good at the close: how useful was this webinar, on a 1, 2, 3 scale maybe; did you get what you came for; and so on.

I do like a Net Promoter Score, but a survey might be over-egging it
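For what it’s worth, the NPS calculation itself is tiny – one closing question (“how likely are you to recommend this webinar, 0–10?”) and a quick sum. A sketch with made-up scores:

```python
# Hypothetical responses to a single 0-10 closing question.
scores = [10, 9, 8, 7, 10, 6, 9, 3, 8, 10]

promoters = sum(1 for s in scores if s >= 9)    # 9s and 10s
detractors = sum(1 for s in scores if s <= 6)   # 0 to 6
nps = (promoters - detractors) / len(scores) * 100

print(f"NPS: {nps:+.0f}")   # ranges from -100 to +100
```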

Commercial measures (these are invented by me so not a reflection of partner needs):

Cost per participant (efficiency?)

Lead generation:

Reach to new contacts

Social follows etc. from attendees

Social mentions from attendees – new and existing

Conversion of attendees to other events

Conversion of attendees from other events

% of membership or customer base attending (internal reach)

Marketing measures:

Number of registrations

Sources of registration (email, socials, direct etc.)

Conversion rate: % of the reached audience who registered

Activation: % of registrants who attended (a quick worked sketch of these funnel figures follows after this list)

Profile of participants – not sure about the data here and what agreements are in place re data usage

Downloads of webinar recording

Requests for slides/content

Contact from attendees
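The funnel arithmetic is equally straightforward once the counts are in one place. A sketch with hypothetical figures – substitute the real numbers from your own registration and attendance reports:

```python
# Hypothetical counts - replace with the figures from your own reports.
audience_reached = 5000   # e.g. mailing list plus social reach
registrations = 320
attendees = 180
event_cost = 600.00       # all-in cost of running the session

conversion_rate = registrations / audience_reached   # % of reached audience who registered
activation_rate = attendees / registrations          # % of registrants who attended
cost_per_participant = event_cost / attendees        # the 'efficiency' measure above

print(f"Conversion: {conversion_rate:.1%}")
print(f"Activation: {activation_rate:.1%}")
print(f"Cost per participant: £{cost_per_participant:.2f}")
```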

A closing point: any of these metrics will need a comparison point to understand them properly, preferably trend data. So data will need to be gathered and analysed for more than one event to get a sense of what good might look like.

So, what do you think? What have I missed? Have I missed the point? There is too much here, for sure, so what seems most useful to you?

About the author Myles Runham:

Experienced consultant, senior manager and general manager of online and digital business in the private and public sector. A particular depth of experience in leading the development of digital and online learning, training and development projects and products in the corporate and education worlds. Extensive experience of digital learning strategy, implementation and digital product strategy.

Now working as an independent consultant in digital and learning. This includes working in an advisory capacity for organisations, businesses and teams considering how to respond to the challenges of digital learning and the changing nature of learning for work.

Myles is a consultant at The Learning & Performance Institute

Connect with Myles on LinkedIn

Follow Myles on Twitter @mylesrun
