Recently, they got together to share their experiences of how technology can transform education.
Steve: What led you to think of the Glisser idea in the first place?
Mike: The concept was developed out of a need I had as a marketer – that significant budget was being spent on live events, but the tools and processes for measuring their effectiveness were somewhat primitive. Most events are measured on attendance, and ‘gut-feel’ from the people involved – to me that’s not scientific enough.
So at its heart, Glisser is about better understanding an audience through data, and technology is just a means of collecting that data efficiently. We use interaction and engagement functionality, as well as live content sharing, because these are simple things that help trigger a response from the participants.
That said, I also like to think that the idea was partly a result of my frustrations with being in the audience, at corporate events and in the classroom. I find so few of these sessions effective because they are so passive. If I want a stream of information, I’ll just read a book or find it online. The whole point of getting people in the same room is to create a dialogue, or to benefit from the wisdom and interaction of the crowd. For me, technology can help drive that.
When I studied at Oxford, I loved the tutorials (of course) and got a lot out of library hours, but skipped most of the lectures, since they didn’t add any more value than reading the content in my own time. Is this a question of individual study preferences, or a more common issue around the quality of lectures?
Steve: I think people show a very wide range of tolerance to sitting in a room and obediently paying attention; there are some people who can focus in that environment, and others who can’t at all. But even the most tolerant lecture-junkie has a limited attention span, and we know from research that passive listening is simply terrible for retention. For me, I’ve learned that what you want to achieve in lectures and presentations is rarely simple information transfer – it’s about kick-starting someone’s thought processes. You can point people to big themes, you can communicate enthusiasm: you can even leave people confused – but you’ve got to stimulate some thinking. I guess that’s sometimes about the ‘wisdom of the crowd’ – but I think it’s also true if you’re teaching something substantive. That’s why, over the years since you were in Oxford, I’ve experimented extensively with student response systems of one sort or another.
Mike: Student Response Systems have been around since the 1960s – have you seen any academic proof that they are effective at improving learning outcomes?
Steve: There’s no shortage of research on these systems, although not all of the published work is very rigorous – it’s quite tricky to set up strictly controlled experiments. (I’ll give you a reading list… for old times’ sake!) But to be fair, I’ve seen a lot of very weak attempts at using the technology too – and sometimes it can fall completely flat. One problem is when a presenter asks a question and you can tell that – really – they’re not interested in what the answer from the audience is. “Oh! Look! Some people say X! And others say Y! Well, moving on…” So there seems to be some craft in setting up the interaction so that it’s genuinely interesting, and not just a gimmick. Do you see this issue?
Mike: Yes, it’s a common challenge, particularly for new users, and probably more so in the corporate space. It’s actually one of the reasons we’re really excited about working in the education sector, since tutors and lecturers seem to be far better at crafting questions that get interesting results.
To some extent, it stems from the limitations we’ve faced when polling audiences the old-fashioned way (“hands up if…”) or with limited multiple-choice “clickers”. People translate this format to a voting poll and hope it will work. The far more effective approach we’ve seen is where people create questions that seek a distribution of answers. For example, asking people (on a scale of 1 to 10) whether they consider themselves a creative thinker or more logical. A spectrum of answers creates a more evocative discussion, because the range of distribution is interesting in itself, and because the responses are often unexpected to the audience. So are there specific scenarios where you’ve seen it work really well?
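As an illustrative aside (not a description of Glisser’s actual implementation), the scale-question idea above can be sketched in a few lines of Python: given hypothetical 1-to-10 responses to a “creative vs. logical thinker?” question, tallying them into a distribution shows the spread that makes the follow-up discussion interesting. The function name and sample data here are invented for the example.

```python
from collections import Counter

def tally_scale_responses(responses, scale=range(1, 11)):
    """Bucket 1-10 scale answers into a distribution for display."""
    counts = Counter(responses)
    # Include zero-count points so the audience sees the full spread,
    # not just the values that happened to be chosen.
    return {point: counts.get(point, 0) for point in scale}

# Hypothetical audience answers to "creative (1) ... logical (10)?"
answers = [2, 3, 3, 5, 5, 5, 7, 8, 8, 10]
distribution = tally_scale_responses(answers)
print(distribution)  # the spread, not a single "winner", drives discussion
```

The point of returning every scale point (including zeros) is that the shape of the distribution is the talking point, not the most popular answer.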
Steve: We've found a key value in interaction is to let people see how their views or ideas contrast with others'. Nowadays I often try to harvest answers to questions in which we've embedded scales (a bit like a psychometric test) and then show people where they fit in the distribution: for example, in one exercise, I use this to show how people see a business problem as a technical or a human problem. Using technology lets you do this almost instantly - and that seems to be very powerful. However, I've recently taken to doing some of these things without the technology (getting people to fill in a paper questionnaire) because they tend to take a few more moments reflecting; with the clickers or the phone-based system, I've noticed some people seem to treat it as a race to press the buttons as quickly as possible. I haven't yet, though, come up with a rule of thumb for when each approach is best.
I’ve also noted that if people haven’t thought through how interaction is used, it can end up knocking people off track and they end up losing the thread: do you see this as a problem when Glisser is used to capture comments and questions from the classroom?
Mike: Presenting content or teaching a group of people is something most of us have to do at some point, and it’s not necessarily a skill we are taught. We’re instructed how to write and gather thoughts on a page in long form, but very few of us are trained in creating content for a killer presentation or lecture. When people are also scared of screwing up in front of a crowd, it’s no surprise that the results are so dull.
Adding more interaction, particularly where it’s scheduled into the narrative, shouldn’t be a reason for going off course. At the same time, the dialogue created by the interaction shouldn’t be ignored; it should be encouraged. It’s your audience or class thinking around the topic, voicing opinion, challenging assumptions – the things they couldn’t do easily if they were just reading a book or watching a video.
So I guess the answer I’m getting to is that you still need a lecture structure, and obviously some key points to make, but within that framework there should be scope for exploration of interesting things, and real classroom engagement.
Steve: In the educational world, we often talk about 'flipping the classroom' - putting the onus on the learners to do the heavy lifting, rather than being passive. One way interaction can be used is to show who's prepared well - but this often brings a kind of shame on the rest. Do you have thoughts on when identifying respondents is better than anonymity?
Mike: This hits one of the key benefits that technology brings to the classroom. Traditional engagement exercises are very open and, as you know, this can have some distortive effects. For example, introverted people might be less engaged generally, or actively avoid participating. On the other hand, particularly charismatic or vocal students could influence the answers of those still undecided, so as an educator you’re not getting a true measure of their abilities. A tool that enables instant and unbiased Q&A is therefore a useful addition to any tutor’s arsenal.
The other benefit is that while there is anonymity in the room, there’s no reason why the students can’t be identified to the teacher in the analytics platform. This allows the educator to benefit from information specific to each student, and tailor their teaching accordingly, without the stigma created by the openness of the classroom.
Are there situations where in-room identification of respondents is valuable? I think so, where there are further points to be explored, or where the identification adds something to the discussion. In fact, some teachers might see an opportunity to begin a lesson under anonymous conditions and then see the effect of identification later (does this change student responses?), or even A/B test the same lesson delivered differently to two classes.
Steve: You raise a key point here: there’s enormous scope for using a more scientific approach in our teaching, and using technology for classroom interaction could be a much bigger part of that.
Mike: So who is pushing the boundaries when it comes to technology in the classroom? Universities centrally, or groups of passionate individuals taking a leap of faith and experimenting?
Steve: I think at the moment it remains an area driven by individual enthusiasts and pioneers: while academics are trusted to teach, and not managed in a top-down fashion, I guess there will continue to be a patchy and piecemeal uptake. But I think the upside of this is that there is a lot of experimentation and innovation – and that must be a good thing. The great thing about the ubiquity of smartphones and wi-fi - and tools like Glisser - is that it’s so easy to get up and running at essentially zero cost: you don’t have to invest in dedicated hardware (the old-style clickers) to start exploring the technology.
When I was teaching you in the late nineties we were still lecturing with acetate slides and overhead projectors – using PowerPoint with a projector was a real novelty then. And showing a video clip was something that usually involved getting a technician to come help you. But where do you think the technology will take us next?
Mike: The most exciting part of operating Glisser in the education sector is the willingness to try new ways of working and to push the technology further. Since educators are presenting day in, day out, there’s a hyper-fast cycle of try-analyse-feedback-repeat that’s generating some really interesting directions we could go in. Part of our job is to evaluate the many options that emerge.
For me it comes down to the core concept of bringing people into the same room to teach and learn, and to make sure technology makes that better.
So what does that mean? For me, it’s about creating functionality which leverages the fact that you’ve spent the time to bring people together. It’s about co-creating things, or rapidly analysing the opinions of a group in a coherent and democratic manner, or making the teacher-student experience more effective in the lecture theatre.
Take that last point, for example. If you think about a group of students, I imagine tutors seek to deliver lectures that are paced and set at a level of difficulty based around a normal distribution. They accept that the pace will be too slow for some of the brightest, and maybe too challenging for some of the weakest, but the bulk of students will be well served.
What if technology could help tutors manage that more scientifically? To see the specific points within a lecture where individuals were challenged or unchallenged and begin to tailor content accordingly. To pace a lesson based upon real-time feedback of student understanding, and feed that into the next iteration of the content?
Steve: Interesting! I think you’re right – and it reinforces to me that when we think about interaction in the classroom we’re in the zone of genuine disruption to established practice. Sure – there are some cases where an interactive quiz is an amusing moment of light relief. But the real opportunity here is how we can transform educational practice.
Steve’s suggested readings
Kay, R.H. and LeSage, A. (2009)
“Examining the benefits and challenges of using audience response systems: A review of the literature.” Computers & Education 53/3: 819-827.
Stowell, J.R. and Nelson, J.M. (2007)
“Benefits of electronic audience response systems on student participation, learning, and emotion.” Teaching of Psychology, 34/4: 253-258.
Blasco-Arcas, L., Buil, I., Hernández-Ortega, B. and Sese, F. J. (2013)
“Using clickers in class. The role of interactivity, active collaborative learning and engagement in learning performance.” Computers & Education, 62: 102-110.
Kirkwood, A. and Price, L. (2014)
“Technology-enhanced learning and teaching in higher education: what is ‘enhanced’ and how do we know? A critical literature review.” Learning, Media and Technology, 39/1: 6-36.
Moss, K., and Crowley, M. (2011)
“Effective learning in science: The use of personal response systems with a wide range of audiences.” Computers & Education, 56/1: 36-43.
Hunsu, N.J., Adesope, O., and Bayly, D.J. (2016)
“A meta-analysis of the effects of audience response systems (clicker-based technologies) on cognition and affect.” Computers & Education, 94: 102-119.
Taneja, A., Fiore, V. and Fischer, B. (2015)
“Cyber-slacking in the classroom: Potential for digital distraction in the new age.” Computers & Education, 82: 141-151.
Lucke, T., Keyssner, U. and Dunn, P. (2013)
“The use of a classroom response system to more effectively flip the classroom.” In: Frontiers in Education Conference. New York: IEEE. 491-495.
Henrie, C.R., Halverson, L.R., and Graham, C.R. (2015)
“Measuring student engagement in technology-mediated learning: A review.” Computers & Education, 90: 36-53.
Richardson, A.M., Dunn, P.K., McDonald, C. and Oprescu, F. (2015)
“CRiSP: an instrument for assessing student perceptions of classroom response systems.” Journal of Science Education and Technology, 24/4: 432-447.
Baltaci-Goktalay, S. (2016)
“How personal response systems promote active learning in science education?” Journal of Learning and Teaching in Digital Age 1/1: 47-54.