Talking Feedback: Using video to radically change essay marking by Emma Mayhew

On this day, exactly two years ago, I sat in my study staring at Blackboard. 212 little green symbols were showing in Grade Centre. 212 3,500-word essays needed to be marked in the next three weeks. And they didn’t just need marks. Each of them needed a page of rich, detailed feedback, often crucial to student attainment and important to student satisfaction. In the HE sector, higher student numbers and rising student expectations look set to further intensify the pressure to deliver numerous pieces of outstanding feedback within an ever tighter timeframe, but a small number of us are looking at this differently. After years of marking over 200 essays at Christmas and over 200 at Easter, I finally decided to radically change the delivery of feedback to students. Encouraged by the work of the ASSET project, a few pioneers in the sector and the success of my own screencast suite, I turned to screen capture technology. In December 2013, 25 students on one of my Part 3 modules didn’t get their normal A4 feedback sheet on Grade Centre. Instead they received their own individual 6-10 minute MP4 file via Blackboard. Each video showed my face and my cursor circling essay text as I talked through their coursework in detail…


…and follow-up questionnaires revealed overwhelming student support. Of 20 respondents, 18 said that video feedback was better than written feedback and 17 said they would prefer video feedback next time. At least two key themes emerged from student feedback:

Clarity – Students see markers highlighting specific sections of text as they comment, while face-to-face contact reduces the scope for misunderstanding and increases the sense of individual attention.

Depth – It takes me one hour to mark and provide written feedback on a 3,500-word essay. Video feedback didn’t actually save me any time; I still spent one hour on each essay. But here is the difference: my written feedback contains an average of around 350-400 words, whereas I speak at an average of around 170 words per minute, so a typical 8-minute video feedback recording contains around 1,360 words. This is 3 to 4 times more than students would normally receive, and explains why 18 questionnaire respondents said that they received much more detailed feedback than they typically would via written comments.
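For readers who like to check the sums, the comparison above works out as follows (a throwaway sketch; the 375-word figure is simply my assumed midpoint of the 350-400 word range quoted):

```python
# Rough arithmetic behind the depth claim: spoken feedback at ~170 words
# per minute versus a 350-400 word written feedback sheet.
written_words = 375          # assumed midpoint of the 350-400 word average
speaking_rate = 170          # words per minute, as quoted above
video_minutes = 8            # length of a typical recording

video_words = speaking_rate * video_minutes
ratio = video_words / written_words

print(video_words)           # 1360
print(round(ratio, 1))       # 3.6, i.e. within the 3-4 times quoted
```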

OK, I can’t mark in my pyjamas anymore, but I’m willing to sacrifice this because my small-scale study suggests that using simple screen capture software to create video feedback does allow us to give much more in-depth, personal and very specific feedback at no extra cost in staff time.

For further information on how to use free and simple screen capture software to create video feedback, please see my 90-second screencast, part of a range of 1-2 minute ‘How to’ videos on the Reading GRASS screen capture website.

Flipped learning in a team-based situation with a dash of TEL by Dr Cindy Becker

This is my new recipe for extending the academic year and helping to welcome our new students. As with any new recipe, some bits of it went really well and some aspects of it were less impressive – and there was one moment when I was in danger of failing to cook up any learning at all.

Along with my colleague Mary Morrissey, I have been working this year to introduce our new module EN1PW: Persuasive Writing. We have been ridiculously excited about the chance to share with our students all that we firmly believe they need to know about how to write practically and persuasively. We have devised a plethora of assessment tasks via Blackboard (with help from Anne Crook and our other colleagues in CQSD), but I wanted to go one step further and use technology to enhance the learning experience even before our students reached the lecture hall or seminar room. The university’s desire to produce a more structured and active Welcome Week for our newcomers inspired me to create a quiz using screencasts, in the hope that students would feel part of our department’s community of learning from the off.

That was my first mistake. Because optional Part 1 modules are allocated to students on Friday of Welcome Week, I was not able to send out the quiz to the relevant students in enough time for them to use it prior to our first meeting. Lesson learned – this recipe would work better for a compulsory module.

Undeterred (I had by that time spent ages on my computer), I gave them the details of the quiz by sending out a document on Monday of Week 1, asking them to work through it prior to our first seminar in Week 2. (Richard Steward and I had worked hard to try to make this a Blackboard quiz, but we could not guarantee that the screencasts would play reliably on every device a student might use, so a Word document it had to be.)

The quiz consisted of 8 questions, all asking about aspects of writing with which new students struggle each year. The quiz was designed to go further than immediate learning: my idea was to use each question as a springboard to discuss other aspects of writing style. I was also keen to have them work in teams. In the seminar I asked them to get themselves into groups of four – they will remain in these groups for the rest of the term, for a variety of group-based tasks.

I went through the quiz, asking them to recall their individual answers (most had written these down on the sheet) and then decide on a group answer. That was my huge mistake: I just had not thought through in advance how to do this. Should I run through the whole quiz first, asking them to make their group choices, or run through the screencast for each question and then ask for their answers one at a time? I mistakenly chose the former option and ended up realising, too late, that it would have been more effective to have taken the latter approach. This was made more difficult because I had not thought to put the subject of each question on the question sheet, so it would have been easy to get lost had the student beside me not written the topics on her question sheet.

So, things went wrong from time to time, but generally I was pleased with the experience. I found that some of them had shown the quiz to their new flatmates, who I gather were impressed that they had been given a ‘fun’ task before the first seminar. Some of them had called home to discuss the questions. In the seminar it worked really well as a team-building task: they were so busy arguing over possible answers that they forgot to be strangers. I also realised that there were some things I would have assumed they would know which they did not. I am not sure, for example, that I would have found out that some of them were confused by prepositions if we had not been having such a free-ranging discussion as a result of the quiz. I think that using animated screencasts really helped in this respect. Seeing a set of cartoons in a seminar set a tone of relaxed, discussion-based learning, which was just what I wanted to achieve.

It was all that I hoped it would be in terms of learning, and with the glitches now fixed on the question sheet I feel more confident about the teaching. I learned more about screencasts using ‘Powtoons’ software too – like the fact that each screencast will publish with a screenshot of exactly what is on the screen at the moment you press the ‘publish’ button. It took some time for me to go back and finesse all of the screencasts in the light of this, and even now I realise that I could have done it better by including an initial title screen. Still, that is the pleasure of teaching, learning and technology: there is always the next thing to learn, the next challenge to face. It is nice to think that I am learning just as hard as they are.

You can find the revised document here: EN1PW introductory quiz(2)

Online peer assessment of group work tools: yes, but which one? By Heike Bruton (a TLDF project)

A short while ago I wrote the post “Group work: sure, but what about assessment?”, which outlines a TLDF-funded project in which Cathy Hughes and I investigated tools for the peer assessment of group work. Cathy and I have now produced a full report, which is available for download here (Cathy Hughes and Heike Bruton TLDF peer assessment report 2014 07 02), and summarised below.


Aim and methods

The aim of the project was to evaluate available online systems for the assessment of students’ contribution to group work. In order to establish our criteria for evaluating these systems, we conducted a series of interviews with academics across the university. This gave us an understanding of how peer assessment (PA) is used in a range of subjects, and of the different perspectives on the requirements for a computer-based system.


Systems in use and evaluation criteria

Among our eleven interviewees we found five separate PA systems (including Cathy’s own system) in use by six departments. Notably, Cathy’s tool appeared to be the only entirely computer-based system. Based on the insights gained from the interviews, we developed a set of criteria against which we evaluated available PA systems. These criteria are pedagogy, flexibility, control, ease of use, incorporation of evidence, technical integration and support, and security.


Available online systems

We identified three online tools not currently in use at the university which apply PA specifically to the process, not the product, of group work. These three systems are iPeer, SPARKplus and WebPA. In addition we also critically assessed Cathy’s own system, which is already being used in several departments across the university. After investigating PA systems currently in use at Reading and applying the above-named criteria to the four PA systems under investigation, we came to a number of conclusions, which resulted in a recommendation.



There is a strong sense of commitment among staff to using group work in teaching and learning across the university. PA can serve as a mechanism to recognise hard work by students and also to provide feedback aimed at encouraging students to improve their involvement with group work. Whilst any PA system is simply a tool, which can never replace the need for active engagement by academics in their group work projects, such a tool can make PA more effective and manageable, especially for large groups.



Our recommendation then is that WebPA should be considered for use within the university. Our research suggests that it could be adopted with relative ease, particularly given the strong and active community surrounding this open-source software.   While it may not be appropriate for everyone, we believe it could be a useful tool to enhance teaching and learning, potentially improving the experience of group work assessment for both staff and students.

Cathy and I will be delivering a number of Teaching and Learning seminars on PA of group work in the near future. To download the full report, click here (Cathy Hughes and Heike Bruton TLDF peer assessment report 2014 07 02). To try out a stand-alone demo version of WebPA, follow this link:

Cathy and Heike will be presenting their project in a TEL Showcase event in the spring term. Please check

Coursework redesign for an integrated multidisciplinary module

Dr Mark Dallas, School of Chemistry, Food and Pharmacy


Within the School of Chemistry, Food and Pharmacy, coursework on a Part Two Pharmacy module, Therapeutics and Medicines Optimisation B (PM2B), was redesigned to reflect the multidisciplinary nature of the new module. In their assessed work, students demonstrated a better appreciation of the interconnectivity of the disciplines of Pharmacy, and students also expressed their enjoyment of the redesigned assessment.


  • Redesign coursework to reflect the multidisciplinary nature of PM2B.
  • Implement and assess a learning exercise that allows Pharmacy students to integrate their understanding of different Pharmacy disciplines.


In 2011 the General Pharmaceutical Council (GPhC), the regulating body for the pharmacy profession within England, Scotland and Wales and the body responsible for accreditation of the Masters of Pharmacy degree course at the University of Reading, adopted a new set of standards for the initial education and training of pharmacists. The first criterion of Standard 5 stressed the need for integrated curricula. With modules within Pharmacy at the University of Reading being altered to reflect these standards, the existing coursework structures were no longer suitable, as they would not have aligned with the integrated nature of the new modules.


The aims, delivery and assessment of the module’s coursework were completely redesigned. Previously, students had been assessed solely by a written report, and the datasets used only reflected one discipline of pharmacy.

The new coursework that was devised was aligned with a modern day multidisciplinary drug discovery programme, with the intention being that this would allow students to appreciate the integrative nature of pharmacy as a science, and the multidisciplinary nature of their subject.

There were four assessed components that comprised the module’s coursework. A project report contributed 50% of the coursework final mark; a poster presentation 20%; reflective diaries 15%; and engagement 15%. By having multiple types of assessment it was hoped that students would engage with the topics, and that it would promote deep learning, while allowing students an opportunity to demonstrate a variety of skill sets. The poster presentation and project reports saw students assessed as groups. To assess engagement, a rubric was created, rating students on their academic engagement and their group engagement based on clearly defined criteria.


The redesigned assessment was enjoyed by students, and in their assessed work students consistently demonstrated a sufficient understanding of the interconnectivity of the disciplines of Pharmacy. Marks on the written report, however, were lower than had been hoped, and suggested that some adjustment to this aspect of the assessment was necessary.


Having the coursework comprise different assessment types was valuable as it allowed staff to gain an insight into student knowledge retention, critical thinking, and their ability to work in a wider context.

The written report represented an assessment format with which students would be familiar, given the format of assessments at Part One. The value of having students produce a written report was that it allowed them to be tested on their application, rather than simple acquisition, of knowledge in addressing a problem.

Having students produce a poster presentation as part of their assessment on the module encouraged them to utilise different skills in their work. Communication skills, which had not previously been assessed in the module but which the University of Reading seeks to develop in its graduates, became a central element of assessment. Having to produce a poster that would then be presented to their peers encouraged students to engage deeply with the topic and to take pride in what they created, and it also created an opportunity for peer learning.

Having group work as an assessed element was of great value in a multidisciplinary module. Because group members have different strengths, each is able to make a valuable contribution to the group, and to benefit from learning from others’ strengths in turn. Group work does introduce the possibility of ‘free-riding’, whereby students do not engage and instead rely on the rest of the group to deliver a good final mark, and this was something that students commented on in their feedback; nevertheless, the strength of group assessment is the key communication and collaborative skills it demands.

The creation of reflective diaries is especially pertinent to students in healthcare professions, as reflective writing is a central element of their continuing professional development. An additional and unforeseen benefit of this assessment was the insight it provided into students’ thought processes, which was valuable for making adjustments to the module.

Assessing student engagement was one of the challenging aspects of the redesigned assessment. Having a clear rubric made marking a more objective process.

Follow up

To address the issues that were introduced by having group work assessed, a session on group dynamics has been introduced to the module in order to better set expectations. The skills addressed in this session will be valuable to students not only in this module, but can also be applied across their academic and professional experience.

A further innovation has been the use of online project management tools. This has both allowed students to better manage their work and engagement, and also allows assessors access to evidence to help with marking, and allows group work to be better monitored.

Flipping assessment?! by Dr Karen Ayres

Like many colleagues, I have attended a number of interesting talks on the ‘flipped classroom’ approach, whereby, in a role reversal, the main delivery of information takes place outside of the classroom, and the contact time is used instead for reinforcing learning. I haven’t quite identified yet how I can make use of this approach in my own teaching, but I have been inspired to try ‘flipping’ an assessment in one of my modules. Admittedly this may be the wrong terminology to use here, but what I mean by this is a role reversal when it comes to assessment. In one of my modules this year, instead of asking students to produce a guide on using a statistics computing package, which I would usually then assess for clarity, accuracy and effectiveness as a training resource, I instead provided students with a piece of work I had created (with deliberate errors and other problems!) and asked them to assess it as if they were the lecturer.

The approach of engaging students in marking is of course not new, since peer marking is used by many lecturers. However, this was not a standard peer marking exercise, because I did not provide them with a marking scheme, nor a set of solutions to use. I left it to the students to decide how they wanted to split up the 100 marks, and what they wanted to award marks for. By doing it this way, my aim was to see whether they knew what the key elements of an effective training guide were, by showing how they thought one should be marked. They were also asked to provide effective feedback on the work, on the understanding that feedback should be constructive and should benefit learning, and that the feedback should justify the mark they awarded (I didn’t use the term ‘feed-forward’, but did ask them to consider what they would find useful if the work being commented on was their own). My aim here was to determine whether they understood how the key elements of an effective training guide should be put into practice, and also to see if they were able to identify technical inaccuracies in the work. It is this last point which I feel the flipped assessment approach may be particularly beneficial for. Often students may misunderstand something but not include it in their own piece of work, meaning that this misunderstanding escapes identification. By asking that they mark work which includes errors, and by requiring that they give feedback about why it’s an error, I feel that I’m demanding a deeper level of subject knowledge from them than I would be doing in a traditional assignment. Of course, it’s then important that I go through these errors with them afterwards, to make sure that no misunderstandings have been created!

I’m pleased to report that I was very impressed with what my students did on this assignment (obviously I had to assess their assessment!). It was a group assignment, and all groups produced a very detailed marking scheme, in a grid layout – I hadn’t given them any pointers on this, so the fact that they decided to do it like this was encouraging. The written feedback that they provided on the script they were given was similarly impressive, and in some cases of the same standard that my colleagues and I routinely provide. What was more interesting was the fact that alongside their various annotations on the script, they provided a separate, very detailed, document listing errors and issues with the work, including further feed-forward comments. If students all expect this multiple level of detailed feedback on their own work as standard, this might explain why some are unhappy with the (still reasonably detailed) feedback they do receive!

In summary, my aim in designing an assessment in a ‘flipped’ way was to encourage a deeper level of thought, and to assess a deeper level of understanding, than I felt was achieved by the usual approach. I feel that those who are tasked with assessing the knowledge and learning of others need to have a deeper than usual understanding of both the technical and communication sides of the discipline (certainly in mathematics and statistics). After the success of this trial run I will definitely be looking at how else I can use this different type of assessment in my other modules. My next step is to consider how to use something like this for a quantitative assignment, for example by asking them to both produce their own set of solutions with marking scheme, and then to use them to mark my piece of work that I submit to them for assessment!

I-TUTOR: Intelligent Tutoring for Lifelong Learning

The University of Reading is a project partner in a prestigious project to develop a multi-agent based intelligent tutoring system to support online teachers, trainers, tutors and learners: I-TUTOR.

I-TUTOR, which stands for Intelligent Tutoring for Lifelong Learning, is to be applied in open source learning environments, and will monitor, track, analyze and give formative assessment and feedback to students within the learning environment, while giving input to tutors and teachers involved in distance learning to enhance their role during the process of teaching. Find out more on the project blog and website.

Funded with support from the European Commission, the project started in January 2012 and is a partnership between the University of Macerata as coordinating institution, and the University of Palermo, University of Reading, Budapest University of Technology and Economics, ITEC, Militos, and Eden.

Reaching the midterm in the project, the partnership has published its first project newsletter to share the results achieved – including a comprehensive study of intelligent tutoring systems as well as an open source code for a survey chatbot that anyone is welcome to test.

The team would welcome any feedback and suggestions. To find out more, or let them know what you think, contact Karsten Lundqvist, Lecturer in the School of Systems Engineering here at Reading.

Using technology to find low-tech solutions by Mary Morrissey

Like a lot of people, I do not consider myself particularly savvy about technology: when I find that something is useful to me, I learn how to use it. That said, I think we can use learning technologies to come up with ‘low tech’ solutions to our teaching needs. Among the advantages is efficiency in terms of time and money: we already have the kit, and we know how to use it. I offer the following as an example.

It is often difficult to make sure that students are aware of detailed regulations that affect their work but which cannot be summarised or displayed easily. Conventions for writing and referencing are a good example in our department. Last summer, Pat Ferguson (our Royal Literary Fund fellow, whose role in the department is to help students improve their writing skills) observed that we had excellent advice on essay writing, but it was in our large Student Handbook, distributed at the start of the first year. Pat suggested that we make this information available separately.

I thought this was a great idea. I noticed there was other information in the handbook that students need through their degree too: there was information about our marking criteria; there were some very helpful examples that showed the difference between plagiarism and poor academic practice. I took these sections, and I created three separate documents with titles that I hoped would be self-explanatory: ‘Style Guide for English Literature students’; ‘Understanding Feedback – Marking Criteria’; and ‘Plagiarism’.

I uploaded all three documents to Blackboard’s ‘Fileshare’ area for the department, and I created links from the Blackboard courses for all our Part 1 and Part 2 modules. (I am working on the Part 3 modules, but there are over 50 of those!) I also posted the documents in our central ‘Information for English Literature Students’ Blackboard organisation, on which all staff, undergraduates and postgraduate students are enrolled. By keeping the documents in ‘Fileshare’ I can update them every year, to include new ‘standard paragraphs’ for example. I overwrite the old file with the newer version, and all the daughter versions linked to it update automatically.

This isn’t rocket science, but I think it has helped us make useful information more readily available. Having it posted in most of our Blackboard courses makes it more visible; having three small documents (in PDF format) makes them easier to download and print.

Where would I go from here? Students have told me that they like a website we recommended with exercises that help with grammar and writing skills. It’s based at the University of Bristol:

I would like to create an interactive resource like this, and I know it can be done. The University of Aberdeen took the paper-based ‘Guide to Written Work’ (on which we all relied when I worked there!) and turned it into an internet-based resource with exercises:

If anyone knows any low-tech ways that I could do something similar, please let me know!