It’s easy being a cynic. I’ve found that the more time I spend in the learning industry, the easier it becomes to dismiss new ideas as fads. A new piece of hardware comes out; it will never take off. A new platform emerges; it will never work in my organisation. Perhaps it’s time to change our framing. We’re very good at saying what won’t work, but we’re less good at highlighting what might. Buzzwords have come to represent our world-weariness. They are guilty until proven innocent, and that’s a tough stance from which to change the status quo. After all, we’re becoming better educated when it comes to spotting snake oil. We understand that no solution is ever a panacea.
In our rush to call out marketing hype, we’re increasingly dismissive of trends and innovations. As some of these trends settle in, maybe we should stop treating the ‘buzzword’ label as a bad thing. We’re so good at rejecting temptation that we run the risk of missing the changes we should be embracing. Here are five big ‘buzzwords’ that don’t deserve their place on the naughty step.
Big Data
The first thing to know about Big Data is that it’s big. I mean really big. Chances are you do not generate anywhere near the amount of data needed to qualify as ‘big’. I was recently at a recruiting event where a Big Data expert was asked why his company’s platform was yielding insights where a competitor with a similar product had failed a few years previously. “Well”, said the expert, “they only had 4 million records, so it was never going to work”. His data set? 450 million individuals, each with hundreds of data points. Analysing data at this scale takes a whole new stack of hardware and software services that are unlikely to be at the disposal of the learning department. Big Data is big. What you’re more likely to deal with is just ‘data’.
That isn’t to say you don’t have a reasonable amount of data at your disposal. With new standards like xAPI and the increasing use of tools like Google Analytics within learning, there has been a surge in the amount of data available to the learning department. But it’s rare that this would qualify as ‘big data’. Big data specialists are happy to set analysts running wild through data sets, looking for correlations and connections that appear as a result of the scale of the data. Where your data set is smaller, these patterns will be less apparent and certainly less reliable (although scale alone doesn’t mean a data set is valid). In these circumstances, you must be very targeted in your analysis. You must design for data: understand the data set you will need to collect in order to test the hypotheses you set for yourself. Without this rigour, it’s almost inevitable you will fail to gather the data you need to do the analysis.
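Designing for data can be as simple as stating the hypothesis first and then deriving the one metric that tests it. A minimal sketch in Python, using invented xAPI-style records (the verbs and activity names are placeholders, not real statements from any system):

```python
from collections import Counter

# Hypothetical, simplified xAPI-style records: (actor, verb, activity).
statements = [
    ("alice", "attempted", "fire-safety"),
    ("alice", "completed", "fire-safety"),
    ("bob",   "attempted", "fire-safety"),
    ("carol", "attempted", "fire-safety"),
    ("carol", "completed", "fire-safety"),
]

def completion_rate(statements, activity):
    """Hypothesis: most learners who attempt an activity complete it.
    The one metric we designed for: completions divided by attempts."""
    verbs = Counter(verb for _, verb, act in statements if act == activity)
    attempts = verbs["attempted"]
    return verbs["completed"] / attempts if attempts else 0.0

rate = completion_rate(statements, "fire-safety")  # 2 completions out of 3 attempts
```

The point is the order of operations: the hypothesis dictates which statements you collect, not the other way round.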
Not a week goes by that I don’t get asked to build a dashboard for data. Generically, the term represents a page full of graphs and numbers that will impress the boss with stats about learning, performance and other good stuff. The importance of shiny should not be underestimated in gaining friends and influencing people. There’s nothing wrong with a dashboard in principle; my Google Analytics dashboard, for instance, shows me exactly what I want to see in real time. But it works because the data underpinning it is reliable and fit for purpose. The data sets will grow as more devices and applications start churning out data (see: the Internet of Things). This is a trend that is here to stay.
Gamification
Gamification has been around the block for the last couple of years and has moved out of the ‘innovative’ column and towards the ‘tick box’ area of procurement. Most up-to-date LMSs pay homage to basic game-like features. Most authoring tools have introduced more ‘game-like’ interactions. A lot miss the nuances of a sustainable program of engagement. Very few people seem to understand the behaviourist nature of most gamification. Behaviourist is not a dirty word, by the way; it’s just that most basic gamification is geared towards influencing behaviour.
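That behaviourist core is easy to see in code: points are issued contingent on actions, and levels are simply thresholds over points. A toy sketch (the actions, point values and thresholds are all invented for illustration):

```python
# Invented point values per action -- the behaviourist lever.
POINTS = {"comment": 5, "share": 10, "complete_module": 50}

# Level thresholds, sorted ascending by points required.
LEVELS = [(0, "Novice"), (50, "Contributor"), (150, "Expert")]

def score(actions):
    """Total points earned for a sequence of actions."""
    return sum(POINTS.get(action, 0) for action in actions)

def level(points):
    """The highest level whose threshold the learner has reached."""
    return [name for threshold, name in LEVELS if points >= threshold][-1]
```

Tweak the point values and you tweak the behaviour you encourage; that is the whole mechanism, which is why a sustainable programme needs more than this.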
There are increasing numbers of case studies suggesting that gamification can be used in education to some effect. Gamification is a tool; a means to an end. Sometimes it will be the right tool for the job. In those cases, we shouldn’t be put off using the techniques just because we think it’s a buzzword.
Badges
Badges have seen something of a renaissance in recent times. Code Academy, Khan Academy, Team Treehouse and a whole slew of other consumer-focused learning platforms have embraced badges as a means of informal certification. Though seen at times as childish, the wider web has certainly embraced ‘badging’ in all of its forms. We should too. Badges are a trend with some momentum and some purpose; any reliable method of highlighting your abilities and experience that is digital and portable will be a real boost to our industry.
Two problems exist: right now they aren’t that reliable and they aren’t that portable. Many badges are proprietary in nature; you can’t really ‘port’ them anywhere other than the website that issued them. Mozilla’s Open Badges specification is the leading way to make badges portable, but even then, badges implementing the specification can only really be exported to the Mozilla Backpack. Mozilla has now somewhat stepped back from the initiative, allowing the Badge Alliance to drive things forward. Don’t take this as a sign of project death; every open source project needs to fly the nest in order to truly succeed, and Mozilla presumably believe that time is now.
Reliability and the intrinsic value of a badge are a trickier proposition. Many badges lack inherent value. They aren’t hard enough to achieve. We’re often in such a rush to reward people in our gamified solutions that we devalue badges (I’m as much at fault here as anyone!). To fulfil their promise, badges must evolve towards holding inherent value. They should be hard to achieve. Where badges can go beyond certification is in providing the evidence for why a badge was issued. Whether an automated system or a named individual issues the badge, by providing a set of criteria (against which the badge was issued) and a set of evidence (proving that the criteria were met) that is forever linked to the badge graphic itself, we can go some way towards proving value and being seen as a reliable measure of ability. Again, Mozilla Open Badges provide the framework. Even with its flaws, it is certainly the specification to follow when implementing badges in your organisation.
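Under the Open Badges 1.x specification, that linkage lives in the badge assertion itself: a hosted JSON document tying the recipient, the badge class (which carries the criteria) and the evidence together. A rough sketch of the shape, where every URL and identifier is a placeholder rather than a real issuer:

```python
# Sketch of an Open Badges 1.x assertion; all URLs and identifiers are placeholders.
assertion = {
    "uid": "a1b2c3",
    "recipient": {"type": "email", "hashed": False, "identity": "learner@example.org"},
    # The BadgeClass document at this URL describes the badge, its criteria and issuer.
    "badge": "https://example.org/badges/data-analyst.json",
    # 'hosted' verification: consumers re-fetch this URL to check the assertion is genuine.
    "verify": {"type": "hosted", "url": "https://example.org/assertions/a1b2c3.json"},
    "issuedOn": "2014-06-01",
    # The evidence link is what lets a badge prove *why* it was issued.
    "evidence": "https://example.org/evidence/a1b2c3.html",
}
```

Because the criteria and evidence travel with the assertion, anyone inspecting the badge can check what was required and what was actually demonstrated.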
xAPI / Tin Can API
The trough of disillusionment looms large on the horizon for xAPI / Tin Can. The standard is now 18 months old and, whilst adoption at face value has been fast, implementations are still thin on the ground. This isn’t because it’s a bad idea; far from it, it’s a groundbreaking idea that we need to happen. But, like all change, it’s tough. The devil is in the detail. At the moment it is a solution in search of a problem. We know the problems exist in a macro sense; SCORM is ill-suited to tracking experiences in a distributed learning environment. But until enough solution designers understand the opportunity, they won’t solve problems using the methodology. This makes for slow growth: solution designers won’t understand the opportunity until they are shown some best practices and use cases. And round and round we go…
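The opportunity is easiest to see in the statement format itself: an ‘actor, verb, object’ triple can record any experience, not just course completions. A minimal sketch of a statement (the actor and activity identifiers are placeholders; the verb URI follows the ADL-published pattern):

```python
# A minimal xAPI statement: "Sally completed Safety Training".
statement = {
    "actor": {"mbox": "mailto:sally@example.com", "name": "Sally"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/safety-training",
        "definition": {"name": {"en-US": "Safety Training"}},
    },
}
```

Swap the verb and object and the same structure can capture reading an article, attending a workshop or mentoring a colleague — the distributed experiences SCORM could never see.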
The name is a pain. Tin Can was the project code name before the standard had a formal one; xAPI is the official term. You can use both interchangeably. Rustici, who did much of the early work as a partner of ADL, have a vested interest in maintaining the Tin Can brand, and they have made a huge effort in producing content and libraries to support it. The ADL will back its own naming convention and will not change for a commercial entity. So here we are. Don’t dismiss this as a fad or as something that has been slow to emerge. Standards generally move at a snail’s pace. Despite the naming issues, xAPI has made its way onto countless product feature pages. It just needs to be used properly.
MOOCs
Massive Open Online Courses (MOOCs) continue to grow in popularity, and the corporate world is increasingly engaging in the conversation. For some universities, there is a clear enough strategy underpinning what we see. Overseas students are, and will continue to be, a significant source of income for UK universities. At worst, MOOCs serve as a marketing tool aimed at these students, as well as those at home. At best, they foreshadow the university business model of the future: one that is global in nature and constantly pushing for higher standards of content and conversation. Again, we’re quick to cast aspersions on the pedagogy and quality of MOOCs. But they are competing against the worst form of teaching: dull, hour-long lectures with PowerPoint presentations that are a decade or more old. This is progress. Most short MOOCs have had more money poured into the content-creation process than an undergraduate on-campus programme ever has. MOOCs are front-page news. What we’ve overlooked is how far we have come. Online learning used to be viewed with suspicion, as untrustworthy. MOOCs are far from perfect, but they show us that online learning is increasingly accepted as a means of lifelong education.
Ben Betts is an entrepreneur, technologist and social learning expert. The world’s most innovative learning organisations use Ben and his team to develop products and services for online learning. He has worked with the likes of City & Guilds, Google, Pearson Education, Oxford University, Cambridge University, Duke Corporate Education, Warwick Business School, BP, Barclays, Shell, Tesco, Xerox and many more.
Ben writes and speaks on the topics of social and peer-to-peer learning around the world, including an appearance at TEDxWarwickED. He has contributed to four books in the last two years, published peer-reviewed academic papers and written many magazine articles. He holds an MBA, is a Doctor of Engineering and is a Fellow of the Learning & Performance Institute.