Lecture Capture: will my students show up?

One of the recent game changers in Higher Education in the UK is the introduction of lecture capture. Across the sector, recording live lectures is now common practice, with 86% of UK universities using lecture capture. It is now being introduced more widely in the university where I work, so naturally there has been much debate and discussion about the potential impact of this initiative on student attendance and learning. Much of the discussion has focused on a deficit model, particularly the fear that attendance will drop and students will rely on recordings for cramming.

I wanted to better understand what research tells us about these issues and what the implications are for academic practice. That is what this blog post is about.

Attendance

Before exploring lecture capture specifically, it is worth looking at the evidence linking student attendance and performance: the literature shows a positive relationship between the two. In a meta-analytic review, attendance was found to have a strong relationship with final course grade and, importantly, was a better predictor of academic performance than, for example, study habits, study skills or standardized admission tests. Interestingly, research in a medical education context where the teaching sessions were highly interactive showed a mean attendance rate of 89%. A positive correlation was found between attendance and overall examination score (r = 0.59 [95% CI, 0.44-0.70]; P < .001), and distinction grades (above 60%) were obtained by students with attendance rates of 80% or higher.


Figure 1. Correlation Between Total Attendance Rate and Overall Examination Score with the line of best fit (Deane and Murphy, 2013)

Further research using a randomized experimental approach suggests that, on average, attending lectures corresponds to a 9.4 to 18.0 percent improvement in exam performance. It is no surprise that student engagement and attendance have a positive impact on learning success. So what about lecture capture, where recordings of lectures are available to students? Will the face-to-face lecture simply be replaced by a recording, as many students might assume?

Attendance and performance with lecture capture

A study on attendance and lecture capture was carried out with 396 first-year students attending a mandatory course on biological psychology. All the lectures were of a traditional university style, with a teacher at the front of the class covering content. Four different groups were identified in terms of attendance and use of recorded lectures: non-users, viewers, visitors and supplementers.

[Figure: the four categories of recorded lecture use: non-users, viewers, visitors and supplementers]

Student membership of these categories was not fixed and changed between semesters. A large number of students used the recorded lectures as a substitute for lecture attendance (viewers). In the first assessment, which focused on developing a knowledge base, the students who used the recorded lectures as a supplement scored significantly higher. In the second assessment, which focused on higher-order thinking skills, no significant differences were found between using recorded lectures and attending lectures. A correlation study of time spent against exam score for the first, knowledge-based assessment found that watching online lectures only (viewers) had the lowest correlation, while attending lectures only (visitors) had the highest. For the supplementers, the increased time on task did not result in a higher exam score. Again, as in other research, attendance at face-to-face lectures is a strong predictor of academic performance.

Recent research within a UK context investigated the relationship between attendance, use of lecture recordings, and student attainment across four years of an undergraduate psychology programme. For first-year students, supplemental use was beneficial, but only stronger students could overcome the impact of low attendance; weaker students did not see grade improvements from additional use of recordings if their attendance was poor. This predictive relationship with attainment was only found for first-year students.

The benefits of lecture capture

That students appreciate having recorded lectures available will not come as a surprise to my colleagues. Being able to review recordings during a course, particularly where students have difficulty understanding new content or terminology, is a valuable support. I know from student feedback where I work that some students don't like the gaps deliberately placed in lecture slide handouts. There are a number of issues at play here: some students are anxious that they have missed important points; others over-rely on studying from the lecture slides alone, which incomplete versions don't support; and obviously those who don't attend won't have heard the lecture unless a recording is available. Research in a health science context with a high content load examined lecture recordings where both a podcast and the lecture slides were available. Some students listened to the podcast during other activities (e.g. commuting), but most accessed the resources and viewed the lecture slides on a computer. The flexibility of this approach, and being able to hear the lecture again, were the aspects students found most useful.

Implications for practice

The goal in higher education is to support the development of independent adult learners, equipped with the skills and competencies to thrive in a competitive global market. The student population in Higher Education is diverse: with widening participation, it includes more mature students, international students, students with a disability, students from under-represented groups (e.g. Black and Minority Ethnic students) and students who are the first in their family to attend university. With respect to attendance, students may choose not to attend for a variety of reasons, such as the time of the lecture or the timetable for that day. I know that some students will actively decide not to attend a lecture because of commuting time and cost, especially if there are few activities scheduled for that day.

Do learners understand the importance of lecture attendance?

I know that we tell them it is important, but actually sharing the evidence on the relationship between lecture attendance and student grades would be more persuasive. I am doing other research in which I have interviewed successful students, and attendance is mentioned a lot, so sharing student quotes could hopefully hammer home the point. What I am getting at here is an informed choice for adult learners. It also reminds me of feedback, and of students being aware of what feedback is and noticing it within their educational experiences. We have to support students in navigating this new university world they find themselves in.

How can we advise students on using lecture capture?

A recent preprint by Emily Nordmann et al. provides practical recommendations on lecture capture for staff and students, along with helpful infographics that are freely available to download and use (highly recommended). Local guides are essential to communicate expectations to students, for example "7 ways to get the most out of Lecture Capture: A Guide for Students". It's about messaging and involving students in different ways.

Will lecture capture change teaching approaches?

The cons – The concern is that lecturers may feel inhibited from being creative with their teaching, trying out new initiatives or telling jokes. Common policies on lecture capture include guidance that staff pause recordings during interactive discussions with students and that confidential topics are not captured. There are tensions around recordings being available to view and how they might be used. It would be a shame if lectures became less interactive, lacked energy and were didactic in delivery. Actually, I am not sure I could watch such recordings anyway; I would probably just fast-forward to the parts I am interested in hearing and skip the rest.

The pros – I believe there is an educational opportunity here to critically discuss the purpose of the lecture and how it is delivered. If a lecturer is the only person talking during a lecture, then I do think a recording can suitably replace it. Would I attend if I could listen to it all online at a time that is more flexible and suits me better? Interactivity within lectures, using technologies such as Mentimeter with large class sizes to promote inclusive activities and ensuring that learners are not passive participants, will not only engage students better but also promote meaningful learning that isn't so easily captured in a recording, especially if the interactive aspects cannot be recorded due to privacy constraints.

There's a lot to think about with lecture capture. If you have any comments, please tweet @suzannefergus or leave a comment below.

Student Marking Exemplars – flipping a revision session for answering a SAQ

"How do I answer a Short Answer Question?" This is the most frequent request I get from students when asked what they would like to cover in a study revision session. Their exams consist of a combination of Multiple Choice Questions (MCQs) and Short Answer Questions (SAQs). Typically, students' performance on the SAQs is lower than on the MCQs. Not a surprise: it's much harder to synthesise an answer on a blank sheet of paper than to recognise or work out the correct answer from a list of plausible options.

My 1st year chemistry students experience answering a formative SAQ under exam conditions ahead of a summative mid-term test, to help familiarise them with what to expect. Feedback during this session helps to clarify what a good SAQ answer consists of, using the formative SAQ as a specific example.

The key message I emphasise is to show both knowledge and understanding in their answers. If the question uses "list" or "identify", then it requires knowledge of the correct answer; it is not asking you to explain your answer or justify your reasoning. If the question asks you to explain or discuss the answer, then understanding is required. The weighting of question parts within a SAQ can also indicate this, with knowledge-only parts weighted less than knowledge-plus-understanding parts.


The students were also provided with written guidance on answering SAQs, which included the main types of questions for 1st year chemistry.

[Figure: Top tips for answering a SAQ]

A short answer question means that the answer is relatively short compared to an essay. The answer may involve working out a calculation or may consist of several sentences, and could include chemical structures/diagrams.

An answer that's longer than necessary won't lose you marks, as long as everything you write is correct; but if something is incorrect, you will lose marks. In addition, writing more than necessary wastes time that could be spent on other questions. So only write more if you think it is correct and crucial to answering the question.

Despite this guidance, and further feedback from all staff within the module during smaller workshops, students still lacked confidence with SAQs.

So I decided to flip a revision activity and provide the students with two exemplar answers to a SAQ on the sample exam paper. I created the two answers, both with deliberate errors, and the activity required the students to mark them. I explained that the students should mark both answers according to a 40:60 ratio for knowledge:understanding demonstrated in each answer. I wanted to revisit the initial guidance from the formative workshop on knowledge and understanding. Answer A was almost a top answer but included an error. Answer B was incorrect, but aspects of the answer demonstrated correct understanding of the key concepts asked in the question. The aim here was to illustrate how we look to reward correct knowledge and understanding, and that chemistry SAQs are not necessarily text-only, as they may include chemical structures or figures. I marked Answer A as 90% and Answer B as 40%.
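To illustrate how the weighting works (the split below is purely my own example rather than the actual mark scheme): with 40 marks available for knowledge and 60 for understanding, an answer earning 36/40 for knowledge and 54/60 for understanding would score 36 + 54 = 90%, whereas an answer earning 22/40 and 18/60 would score 40%.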

It was clear early on that this activity challenged the students to think: they had to reflect on their own level of knowledge and understanding of the topic whilst marking the answers.

What was most interesting were the marks the students gave to Answer A (my mark: 90%) and Answer B (my mark: 40%). I used Mentimeter to capture their marks, and it caused amusement when the marks were polled, as the range was indeed quite large!

The average student mark for Answer A was 56% and for Answer B 54%. The highest mark for Answer A was 80% and the lowest 20%; a similar pattern was seen for Answer B (highest 82%, lowest 30%). The students were challenged by the process of evaluation: in terms of Bloom's Taxonomy, unless good knowledge and understanding exist as a strong foundation, the intellectual skills of analysing and evaluating are difficult to achieve. For Answer B, a third of the students gave a grade above 60%. Student feedback on the session was very positive; it got them thinking about SAQs from a new perspective and gave them useful feedback on what they needed to revisit in their revision. I used this activity as a pilot and will definitely repeat it with other topics; the critical aspect is to include some common errors or misconceptions in the exemplar answers.


Bloom's Taxonomy (which I no longer represent as a pyramid)

Apple Logo, Amide bond…

One of my first explorations in chemistry education research involved creating a diagnostic test to use with 1st year students in the chemistry module on both the Pharmacy and Pharmaceutical Science programmes. This approach was previously published in New Directions (see here) and consisted of 40 questions on fundamental chemistry principles, covering organic chemistry and basic chemistry knowledge that 1st year students on our programmes should have acquired based on the entry requirements for both programmes. One of the questions yielded unexpected outcomes with both 1st year and 3rd year students when the diagnostic test was piloted, and this outcome has been consistently replicated annually with 1st year students!

The question I am referring to is

“Draw the chemical structure of an amide bond”

In the published pilot of the diagnostic test, this question was poorly answered, with 80% of answers incorrect. A similar result has been observed each year the diagnostic test is administered: students do attempt the question and recognise that there is a nitrogen, but are unable to correctly recall the chemical structure of an amide bond. Many draw an amine instead. Other functional groups feature in the diagnostic test in questions that focus on recognition of functional groups rather than drawing them, and these questions have higher percentages of correct answers. This year I included a question that required recognition of an amide bond from a selection of four chemical structures. Not surprisingly, a higher proportion of answers to this question were correct – 58%.


Figure 1. A question on the recognition of the amide functional group

There is something about correctly drawing the structure from visual memory that is causing difficulties. My hunch is that the problem lies in the way students take in this type of information and the manner in which it is studied. Visual memory in humans is exceptionally good; however, mere exposure does not lead to enhanced memory, as demonstrated by the classic penny experiment, by studies on locating the nearest fire extinguisher and, more recently (my favourite), by the Apple logo study (see article here). Explicit memory, the intentional retrieval of encountered events, is poor for objects we interact with daily; constant exposure and interaction do not lead to accurate recall.

The Apple logo experiment is a repeat of the classic penny experiment: participants were asked to recall the logo from memory and to recognise it from a set of alternatives. The researchers also measured metamemory judgements, asking participants to rate their confidence on a 10-point scale either before or after drawing the logo from memory.


Figure 2. An example of some stimuli used in the recognition task (the correct logo is not shown here!)

In general, the participants (85 undergraduate students) were not able to draw the logo from memory. One participant drew it perfectly and seven participants made 3 or fewer errors. In terms of recognition, fewer than half (47%) identified the correct logo. There was an advantage for Apple users over non-Apple users, but the difference was not significant. Not surprisingly, participants' confidence (26 students) was higher before recalling the Apple logo (M = 8.58) and dropped (to M = 5.54) after the retrieval attempt.

OK, so recalling the amide bond requires conscious thought; for many students it has an association with nitrogen, but they cannot correctly recall the chemical structure. In my diagnostic test feedback I do emphasise that the word "amide" contains a D for double bond, as a cue to help trigger this recall. How many students test themselves when studying, and how many are simply rereading their lecture notes, gaining familiarity and hence confidence through repetition?

 

Teaching with Mentimeter

I discovered Mentimeter over a year ago and was eager to implement it in my teaching and, of course, to evaluate its use and effectiveness. So in this blog post I will share my thoughts, be open about the lessons I have learned along the way and suggest useful ways to include Mentimeter in presentations or classroom activities.

Mentimeter is an online interactive presentation tool, with no plug-in required, and it's free (an essential factor for many educators when considering or piloting a new technology). There is no limit on audience size, and each presentation can include two different question types or five quiz questions. I have used the open-ended comments, word cloud and multiple-choice question formats. Other options are scales where participants can rate a statement, 100 points for prioritizing items and a gamification option to find a winner. I think the range of options and ease of use make it my favourite new technology tool. There are payment plans which allow greater functionality, particularly branding and exporting data to Excel, but personally, for teaching and interactivity, the free version is ideal.

I first used Mentimeter to better understand how my 1st year students were finding their chemistry module. Have you, like me, experienced that moment when you ask students for feedback or pose a question, only to be met with a sea of silence? I can understand this: students are reluctant to speak up in a large group (particularly in their 1st year at university) and, if they do, it can be to tell you what they believe you want to hear. So I introduced the open-ended comment function with my 1st year Pharmacy and Pharmaceutical Science students. It is the online version of writing your thoughts on a post-it note in response to the question "What are your biggest challenges with learning chemistry?" The great thing about this Mentimeter option is that the comments are anonymous, so students can post their replies freely. From a teaching perspective I want to hear all comments, positive and negative, and removing the identity aspect gives students the freedom to respond honestly. It also allows a more inclusive approach, providing a mechanism for the 'quiet learner' to participate. I have also used this function during induction activities to ask students "How will studying at university differ from your previous school experience?" and draw them into that discussion.

[Figure: Mentimeter responses, learning at university]

The students really enjoy watching the comments appear on the screen. There is a profanity filter too, although I have never had any issues with the comments written, apart from once when a student posted that there would be a party and gave the details. We had to have a group conversation about professionalism. It didn't happen again!

I have used multiple choice questions during lectures, which is a great way to test understanding. This approach also lends itself to peer instruction and, after reading "Make It Stick: The Science of Successful Learning" earlier this year, I started using questions prior to instruction to help maximise learning. I invited students to answer a question on chemical bonding at the start of the lecture, with 8% of students selecting the correct answer. By the end of the lecture, this had increased to 61%. I was very excited to see this in action.

My third use of Mentimeter was a word cloud to understand how my 1st year students were feeling about chemistry at the very start of the module. I allowed students to respond more than once, which is where the lesson was learned. They got a bit giddy seeing the words appear as the word cloud was created live, and some students started to include inappropriate words and slang that I had to look up to understand! So next time they won't have the option to post more than once.

[Figure: word cloud of student responses]

Some of my colleagues have expressed concern about students using their phones during lectures and the potential for distraction, but I haven't found this problematic. Setting expectations and managing unwanted classroom behaviours is always important. Mentimeter has been very much a positive addition to my teaching practice. I understand my students much better than before and, from my initial pilots, I have been impressed with its potential, from instant interaction at the start of a presentation and inclusive student engagement through to more structured learning activities.

PS: I have no involvement with Mentimeter; I am just sharing my experiences.

 

Learning Styles and Education Myths – an academic journey

This past year I had a great opportunity to work in our Learning and Teaching Innovation Centre, and part of my role was teaching on the PGCert in HE, more commonly known in my institution as CPAD (Continued Professional Academic Development).

I was module lead for Linking Pedagogic Theory to Practice, with a focus on learning through critical reflection. Those details are purely for context and lead into what this blog post is really all about: learning styles and urban myths in education.

When I started my academic career over 10 years ago, I was introduced to learning styles during my own CPAD journey. We spent time completing a purchased questionnaire and associated booklet within our multidisciplinary groups, followed by a buzz around the table: "Oh, I'm a reflector", "I'm a pragmatist", "I'm an activist". Now, I was a complete novice in learning theory but eager to understand anything that might help my teaching, and learning styles sounded like something tangible to use and apply. The key message from that session was that students are not the same and learn differently. However, there was no 'what next', no critical discussion or evaluation from practice. I could hardly hand out a questionnaire to all my students, and even if I did, what would I change in my teaching? I was somewhat confused…


I needed clarity, and thankfully more recent publications critically review learning styles. So fast forward to the present CPAD, where I have the opportunity to facilitate a session on learning styles and we get to grips with the concept and, more importantly, the lack of evidence to support it.

Learning styles refers to the belief that different people learn information in different ways.

At least 71 different learning styles are described by Coffield et al. (2004), so this has been a vast area of interest among professional educators over the years, not to mention the commercial activity associated with the instruments and resources.

The premise behind learning styles is that if the teaching approach matches the learner's preferred style, this will lead to improved performance. So for a "visual learner", information should be presented visually to match, or 'mesh' with, the learning style.

Pashler et al. (2009) provide an excellent critical review, including a methodology for testing the learning-styles hypothesis.

Any research study must:

1. Divide learners into two or more groups (e.g. visual and auditory)

2. Within each learning-style group, learners must be randomly assigned to one of at least two different instructional methods

3. The same test must be used with all learners

4. The results must demonstrate that the learning method providing optimal test performance of one learning-style group is different to the learning method that optimizes the performance of a second learning-style group.

Pashler et al. illustrate acceptable evidence with crossover interactions, where the learning method producing the highest mean test score for one category of learners differs from the learning method producing the highest mean test score for the other category. In other words, if the same learning method optimizes the mean test score of both groups, the result does not provide evidence to support the learning-styles hypothesis.

[Figure: crossover interactions, adapted from Pashler et al. (2009)]
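To make the crossover criterion concrete, here is a minimal sketch in Python. The group names and mean scores are entirely made up for illustration (they are not data from Pashler et al.); the check simply asks whether the instructional method with the highest mean test score differs between the two learning-style groups.

```python
# Minimal illustration of the crossover criterion described by Pashler et al.
# All group names, method names and scores below are hypothetical.

def best_method(mean_scores):
    """Return the instructional method with the highest mean test score."""
    return max(mean_scores, key=mean_scores.get)

def supports_learning_styles(results):
    """results maps each learning-style group to {method: mean test score}.
    Evidence for learning styles requires a crossover: different groups
    must be best served by different methods."""
    best = {group: best_method(scores) for group, scores in results.items()}
    return len(set(best.values())) > 1, best

# A crossover pattern: each group does best with a different method.
crossover = {
    "visual learners":   {"visual instruction": 78, "auditory instruction": 62},
    "auditory learners": {"visual instruction": 60, "auditory instruction": 75},
}

# No crossover: the same method is best for both groups, so the result does
# not support the learning-styles hypothesis even if one group scores higher.
no_crossover = {
    "visual learners":   {"visual instruction": 80, "auditory instruction": 65},
    "auditory learners": {"visual instruction": 82, "auditory instruction": 70},
}

print(supports_learning_styles(crossover))     # (True, ...)
print(supports_learning_styles(no_crossover))  # (False, ...)
```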

They found only one study that potentially met the criteria, but its evidence was questionable and it had methodological issues, including the removal of outliers and no reporting of mean scores for each final assessment.

Learning styles I now only ever hear mentioned with disapproval. There is, however, another urban myth, "the learning pyramid", that I have seen used at recent conferences as a supporting model to demonstrate the effectiveness of particular modes of instruction. So we discussed (well, tore apart!) this too in the CPAD module.

[Figure: the 'learning pyramid']

This intuitive model is attributed to the National Training Laboratories (NTL) in Bethel, Maine, USA. The laboratory responds to requests for further information on the research with a standard email:

[The research was carried out] at our Bethel, Maine campus in the early sixties, when we were still part of the National Education Association's Adult Education Division. Yes, we believe it to be accurate but no, we no longer have – nor can we find – the original research that supports the numbers. We get many enquiries about this every month – and many, many people have searched for the original research and have come up empty handed. We know that in 1954 a similar pyramid with slightly different numbers appeared on p. 43 of a book called Audio-Visual Methods in Teaching by Edgar Dale, published by the Dryden Press.

As for the origins, well, the above speaks for itself. The percentages are too nicely rounded, which is almost impossible in empirical research, and the lack of any specification of the method of measurement, the learners or the learning task casts further doubt. The pyramid also contradicts learning styles: a visual learner has a lower retention rate despite matching their preferred learning mode!

Is it a surprise, then, that when I expressed an interest in chemistry education research early in my academic career, the response I got was "but what about your real research?"

This was said with positive intentions, I might add, by a supportive professor who later acted as a critical friend when I wrote up a context-based case study. He was simply concerned that I would lose my academic currency. The education urban myths certainly haven't helped our cause: fluffy, non-rigorous research is how many of my colleagues view education-focused research.

 

 

Feedback and the Ryanair effect

What has Ryanair got to do with feedback in education? If you have ever had the pleasure of flying with Ryanair, then you will have met the Ryanair effect! Knock airport in the west of Ireland is frequently my destination, only 1 hour 10 minutes from Luton. When the plane touches down on the runway, which was built over boggy terrain, a familiar sound rings out – the Ryanair trumpet fanfare! The recording begins and a voice announces:

“Thank you for flying with Ryanair” followed by

“Last year, over 90 per cent of our flights arrived on time. We hope you enjoyed yours and we look forward to welcoming you on board again soon.”

Yet another on-time flight! I have heard that jingle so many times that I think the association has become hard-wired into my brain! Now, if Ryanair asked me in a survey, "Do our flights arrive on time?", I would find it difficult not to agree and answer positively. Why? Because they have reminded me so many times that their flights arrive on time.

So let's transfer this to feedback. Colleagues from my department were feeling exasperated by student dissatisfaction with feedback in the post-module questionnaire. The teaching team were giving plenty of quality feedback: feedback prior to activities, feedback immediately after tasks, feedback one week after some assessments, and always delivered in a timely manner according to the university regulations. There was no way to increase the feedback even if they tried!

And so they decided to have some fun with serious intent, as I like to frame it. They ordered bright pink caps, with the word FEEDBACK clearly visible in white, to use the following academic year.

The pink caps were worn any time feedback was provided to the students. The students found it amusing and the staff had a giggle too; well, you really couldn't get overly serious wearing a bright pink cap! But the serious intent came to light in the student ratings at the end of the module: there was a 36% increase in the rating for feedback being timely and a 26% increase in the rating for feedback being helpful.

Remember, nothing changed from the previous year in relation to feedback other than highlighting when it was happening by wearing the bright pink caps. A very happy teaching team felt justly recognised for their efforts. For me, this is exactly the Ryanair effect with its annoying yet memorable jingle.

So when it comes to feedback, do our students actually recognise it? Probably not, unless we help them to identify it. Most students would say feedback is the comments they receive on their work, but feedback is more than that. How can we help students to identify feedback? Small, quick wins: this is definitely one of those.