Adrian Aronsson-Storrier – School of Law
Watch Adrian’s 5-minute video case study about how he used personal capture (the Mediasite tool) to record a lecture cancelled due to bad weather during the personal capture pilot project, 2018-19.
Sue Blackett – Henley Business School, 2018-19
I participated in the university’s Personal Capture pilot as a Champion for my school, trialling the Mediasite tool to create screencast videos for use in teaching and learning. My aim was to help PGT students get to grips with key elements of the module. The videos allowed students to view content repeatedly, with the aim of increasing engagement with the module. Some videos were watched multiple times at different points throughout the term, indicating that information needed to be refreshed.
The target audience was students on ACM003 Management Accounting Theory & Practice, a postgraduate course on which 91% of students were native Mandarin speakers. English language skills were an issue for some students, so the capture videos gave students opportunities to re-watch and get to grips with the content at their leisure. In addition, I wanted to free up class contact time so I could focus on content in areas that had been more challenging on the previous run of the module. Also, using different colours and font sizes on the PowerPoint slides gave visual emphasis to key points, reinforcing the accompanying audio.
The first video recorded was a welcome-to-the-module video (slides and audio only) that covered the module administration, i.e. an overview of the module, an outline of the assessment, key dates, the module textbook, etc. The content for the video was relatively straightforward, as it was taken from the first lecture’s slides. By isolating the module admin information, more detail could be added, e.g. mapping assessable learning outcomes to assessments and explaining the purpose of each type of assessment. When first recording the video, I did not follow a script, as I was trying to make my delivery sound more natural. Instead, I made short notes on slides that needed extra information and printed off the presentation as slides with notes. As this is the same strategy that I use to deliver lectures, I was less concerned about being “audio ready”, i.e. not making errors in my voice recording.
In the second and third videos (coursework feedback and exam revision advice), I included video of myself delivering the presentations. As the recordings were made in my home office, additional visual matters had to be considered: what I was wearing, the background behind me, looking into the camera, turning pages, and so on. The second attempts at each recording were much more fluent and were therefore uploaded to Blackboard.
The last two recordings were quite different in nature. The coursework feedback used visuals of bar charts and tables to communicate statistics, accompanied by audio that focused on qualitative feedback. The exam revision video used lots of narrative bullet points.
Examples of my videos:
Welcome to module: https://uor.mediasite.com/Mediasite/Play/7a7f676595c84507aa31aafe994f2f071d
Assessed coursework feedback: https://uor.mediasite.com/Mediasite/Play/077e974725f44cc8b0debd6361aaaba71d
Exam revision advice: https://uor.mediasite.com/Mediasite/Play/94e4156753c848dbafc3b5e75a9c3d441d
Resit Exam Advice: https://uor.mediasite.com/Mediasite/Play/e8b88b44a7724c5aa4ef8def412c22fd1d
The welcome video did have impact, as it was the only source of information about the administration of the course. When students arrived at the first class with the textbook, this indicated that they had been able to access the information they needed to prepare for the course. Student response to the personal capture pilot project questionnaire was low (18%); however, the general feedback was that the videos were useful in supporting them during the course.
Analysis of the viewing analytics from Mediasite and Blackboard provided some very interesting insights:
Review of the videos, along with viewing trends, showed that students skipped through the videos to the points where slides changed. This suggested that the majority were reading the slides rather than listening to the accompanying commentary, which contained supplementary information.
As no student failed to meet the admin expectations of the course, those who had not watched the video must have been informed by those who had.
The analytics were most illuminating. My appearing in the videos was supposed to establish bonds with the cohort and increase engagement; however, my appearance seemed to be irrelevant, as the students were focused on reading rather than listening. This could have been due to weaker listening skills, but it also highlights that students might think that all important information is written down rather than spoken.
Videos with graphics were watched more than those without, so my challenge will be to think about what content I include in slides, i.e. more graphics with fewer words and/or narrative slides with no audio.
I will continue with capture videos; however, I will do more to test their effectiveness. For example, I will design in-class quizzes using Kahoot, Mentimeter, etc., to test whether the content of the videos has been internalised.
I’ve become much quicker at designing the PowerPoint content and less worried about stumbling or searching for the right words to use. I have been able to edit videos more quickly, e.g. cutting out excessive time and cropping the end of the video. Embedding videos in Blackboard has also become easier the more I’ve done it. The support information was good; however, I faced a multitude of problems that IT Support had to help me with, which, if I’m honest, was putting me off using the tool (I’m a Mac user mostly using this tool off campus).
Name / School / Email address
Amanda Millmore / School of Law / email@example.com
Faced with double-teaching a cohort of 480 students (plus an additional 30 at the University of Reading Malaysia), I was concerned to ensure that students in each lecture group had a similar teaching experience. My solution was to “flip” some of the learning, by recording short video lectures covering content that I would otherwise have lectured live, and to use the time freed up to slow the pace and instigate active learning within the lectures. Students provided overwhelmingly positive feedback in formal and informal module evaluations. The introduction of flipped learning has aided the welfare of students, allowing those who are absent, or who have disabilities or language barriers, to revisit material as and when needed. For staff, it has reduced my workload and has the ongoing benefit of reducing the workload of colleagues who have taken over teaching the module.
The module “General Introduction to Law” is a “lecture only” first year undergraduate module, which is mandatory for many non-law students, covering unfamiliar legal concepts. Whilst I have previously tried to introduce some active learning into these lectures, I have struggled with time constraints due to the sheer volume of compulsory material to be covered.
Student feedback requested more support in tackling legal problem questions. I wanted to assist students, but needed to free up some space within the lectures to do this; “flipping” some of the content by creating videos seemed to offer a solution.
As many academics (Berrett, 2012; Schaffzin, 2016) have noted, there is more to flipping than merely moving lectures online; it is about a change of pedagogical approach.
I sought initial support from the TEL (Technology Enhanced Learning) team, who were very happy to give advice about technology options. I selected the free Screencast-O-Matic software, which was simple to use with minimal equipment (a headset with microphone plugged into my computer).
I recorded 8 short videos, which were screencasts of some of my lecture slides with my narration; 6 covered traditional lecture content and 2 offered problem-solving advice, modelling an exemplar problem question and answer (which I had previously offered as straightforward read-only documents on Blackboard).
The software that I used restricted me to 15-minute videos, which worked well for maintaining student attention. My screencast videos were embedded within the Blackboard module and could also be viewed directly on the internet: https://screencast-o-matic.com/u/iIMC/AmandaMillmoreGeneralIntroductiontoLaw.
I reminded students to watch the videos via email and during the lectures, and I was able to track the number of views of each video, which enabled me to prompt students if levels of viewing were lower than I expected.
By moving some of the content delivery online I was also able to incorporate more problem-solving tasks into the live lectures. I was able to slow the pace and to invite dialogue, often by using technology enhanced learning. For example, I devoted an hour to tackling an exam-style problem, with students actively working to solve the problem using the knowledge gained via the flipped learning videos and previous live lectures. I used the applications Mentimeter, Socrative and Kahoot to interact with the students, asking them multiple-choice questions, encouraging them to vote on questions and to create word clouds of their initial thoughts on tackling problem questions as we progressed.
I evaluated reaction to the module using the usual formal and informal module evaluations. I also tracked engagement with the videos and actively used these figures to prompt students if views were lower than expected. I monitored lecture attendance and didn’t notice any drop-off. Finally, I reviewed end-of-year results to assess the impact on students’ results.
Student feedback, about the videos and problem solving, was overwhelmingly positive in both formal and informal module evaluations.
Videos can be of assistance if a student is absent, has a disability or wishes to revisit the material. Sankoff (2014) and Billings-Gagliardi and Mazor (2007) dismiss concerns about reduced student attendance due to online material, and this was borne out by my experience: there was no noticeable drop-off in numbers attending lectures, which I interpret as a positive sign of student satisfaction. The videos worked to supplement the live lectures rather than replace them.
There is a clear, positive impact on my own workload and that of my colleagues. Whilst I am no longer teaching on this module, my successor has been able to use my videos again in her teaching, thereby reducing her own workload. I have also been able to re-use some of the videos in other modules.
Whilst flipped learning is intensive to plan, create and execute, the ability to re-use the videos in multiple modules is a huge advantage; short videos are simple to re-record if, and when, updating is required.
My initial concern that students would not watch the videos was utterly misplaced. Each video has had in excess of 1,200 views (and one video has exceeded 2,500). Some of the material was covered only by the flipped learning videos, and still appeared within the examination; students who tackled those questions did as well as those answering questions covering content delivered via live lecture, although those questions were less popular (2017/18 examination).
I was conscious that there might be some students who would just ignore the videos, thereby missing out on chunks of the syllabus. I tried to mitigate this by running quizzes during lectures on the recorded material and by offering banks of multiple-choice questions (MCQs) on Blackboard for students to test their knowledge (aligned to the summative examination, which included a multiple-choice section). In addition, I clearly signposted the importance of the video-recorded material by email, on the Blackboard page and orally, and emphasised that it would form part of the final examination and could not be ignored.
My experience echoes Schaffzin’s study (2016) monitoring impact, which showed no statistically significant difference in law results after instituting flipped learning, although she felt that it was a more positive teaching method. Examination results for the module in the end-of-year summative assessment (100% examination) were broadly consistent with the results in previous academic years, but student satisfaction was higher, with positive feedback about the use of videos and active learning activities.
Since I created the flipped learning videos, another colleague has taken over as convenor and continued to use them. Some of the videos have also been reused in other modules. I have used screencast videos in another non-law module, and also used them as introductory material for a large core Part 1 Law module. Student feedback in module evaluations praised the additional material. One evolution in another module was that, when I ran out of time to cover working through a past exam question within a lecture, I created a quick screencast which finished off the topic for students; I felt that it was better to go at a more sensible pace in the lecture and use the screencast rather than rush through the material.
Michelle Johnson, Module Convenor 2018-19, commented:
“I have continued to use and expand the flipped learning initiative as part of the module and have incorporated further screencasts into the module in relation to the contract law content delivered. This allowed for additional time on the module to conduct a peer-assessment exercise focussed on increasing the students’ direct familiarity with exam questions and also crucially the marking criteria that would be used to score their Summer exams. Students continue to be very positive about the incorporation of flipped learning material on the module and I feel strongly that it allowed the students to review the more basic introductory content prior to lectures, this allowing time for a deeper engagement with the more challenging aspects of the lectures during lecture time. This seemed to improve students understanding of the topics more broadly, allowing them to revisit material whenever they needed and in a more targeted way than a simple lecture recording.”
TQ1, LE1, SO3
University of Reading TEL advice about personal capture – https://sites.reading.ac.uk/tel-support/category/learning-capture/personal-capture
Berrett, D. (2012) How ‘Flipping’ the Classroom Can Improve the Traditional Lecture. Chronicle of Higher Education. https://www.chronicle.com/article/how-flipping-the-classroom/130857
Billings-Gagliardi, S. and Mazor, K. (2007) Student decisions about lecture attendance: do electronic course materials matter? Academic Medicine: Journal of the Association of American Medical Colleges, 82(10), S73-S76.
Sankoff, P. (2014) Taking the Instruction of Law outside the Lecture Hall: How the Flipped Classroom Can Make Learning More Productive and Enjoyable (for Professors and Students). Alberta Law Review, 51, pp. 891-906.
Schaffzin, K. (2016) Learning Outcomes in a Flipped Classroom: A Comparison of Civil Procedure II Test Scores between Students in a Traditional Class and a Flipped Class. University of Memphis Law Review, 46, p. 661.
Professor Andrew Wade is responsible for research in hydrology, focused on water pollution, and for undergraduate and postgraduate teaching, including Hydrological Processes.
Colleagues within the School of Archaeology, Geography and Environmental Sciences (SAGES) have been aware of the University’s broader ambition to move towards online submission, feedback and grading where possible. Many had already made the change from paper based to online practices and others felt that they would like the opportunity to explore new ways of providing marks and feedback to see if handling the process online led to a better experience for both staff and students.
In Summer 2017 it was agreed that SAGES would become one of the Early Adopter Schools working with the EMA Programme. This meant that the e-Submission, Feedback and Grading workstream within the Programme worked very closely with both academic and professional colleagues within the School from June 2017 onwards, in order to support all aspects of a change from offline to online marking, and broader processes, for all coursework except where there was a clear practical reason not to, for example field notebooks.
I had started marking online in 2016-2017 so was familiar with some aspects of marking tools and some of the broader processes.
My Part 2 module, GV2HY Hydrological Processes, involves students producing a report containing two sections. Part A focuses on a series of short answers based on practical-class experiences and Part B requires students to write a short essay. I was keen to use all of the functionality of Grademark/Turnitin during the marking process, so I spent time creating my own personalised QuickMark bank so that I could simply pull across commonly used feedback phrases and marks against each specific question. This function was particularly useful when marking Part A. I could pull across QuickMarks showing the mark and then, in the same comment, explain why the question received, for example, 2 out of a possible 4 marks. It was especially helpful that my School sent around a discipline-specific set of QuickMarks created by a colleague. We could then pull the whole set, or just particular QuickMarks, into our own personalised set if we wanted to. This reduced the time spent on personalising and meant that the quality of my own set was improved further.
I also wanted to explore the usefulness of rubric grids as one way to provide feedback on the essay content in Part B of the assignment. A discipline-specific example rubric grid was created by the School and sent around to colleagues as a starting point. We could then amend this rubric to fit our specific assessment or, more generally, our modules and programmes. The personalised rubrics were attached to assignments using a simple process led by administrative colleagues. When marking, I would indicate the level of performance achieved by each student against each criterion by simply highlighting the box in blue. This rubric grid was used alongside both QuickMarks and in-text comments in the essay. More specific comments were given in the blank free-text box to the right of the screen.
Unfortunately, module evaluation questionnaires were distributed and completed before students received feedback on their assignments, so the student reaction to online feedback using QuickMarks, in-text comments, free-text comments and rubrics was not captured.
In terms of the impact on the marker experience, after spending some initial time getting my personal QuickMarks library right and amending the example rubric to fit my module, I found marking online easier and quicker than marking on paper.
In addition to this, I also found that the use of rubrics helped to ensure standardisation. I felt comfortable that my students were receiving similar amounts of feedback and that this feedback was consistent across the cohort and when returning to marking the coursework after a break. When moderating coursework, I tend to find more consistent marking when colleagues have used a rubric.
I also felt that students received more feedback than they usually might, but I am conscious of the risk that they drown in the detail. I try to use the free-text boxes to provide a useful overall summary and to avoid overuse of QuickMarks.
I don’t worry now about carrying large amounts of paper around or securing the work when I take assignments home. I also don’t need to worry about whether the work I’m marking has been submitted after the deadline – under the new processes established in SAGES, Support Centre colleagues deduct marks for late submission.
I do tend to provide my cohorts with a short piece of generic feedback, including an indicator of how the group performed, showing the percentage of students who attained a mark in each class. I could easily access this information from Grademark/Turnitin.
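As a rough illustration of this kind of cohort summary, the short Python sketch below computes the percentage of students falling into each classification band from a list of marks. The band boundaries and the sample marks are illustrative assumptions, not figures from this case study; Grademark/Turnitin surfaces comparable statistics directly.

```python
# Minimal sketch: percentage of a cohort in each classification band.
# Band boundaries and sample marks are illustrative assumptions.
from collections import Counter

BANDS = [  # (label, inclusive lower bound), highest first
    ("First (70+)", 70),
    ("2:1 (60-69)", 60),
    ("2:2 (50-59)", 50),
    ("Third (40-49)", 40),
    ("Fail (<40)", 0),
]

def band(mark: int) -> str:
    """Return the classification band label for a single mark."""
    for label, lower in BANDS:
        if mark >= lower:
            return label
    return BANDS[-1][0]

def summarise(marks: list[int]) -> dict[str, float]:
    """Return the percentage of marks in each band, in band order."""
    counts = Counter(band(m) for m in marks)
    return {label: 100 * counts.get(label, 0) / len(marks) for label, _ in BANDS}

if __name__ == "__main__":
    sample_marks = [72, 65, 58, 61, 49, 70, 55, 63, 38, 67]  # hypothetical cohort
    for label, pct in summarise(sample_marks).items():
        print(f"{label}: {pct:.0f}%")
```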
I’m also still able to work through the feedback received by my Personal Tutees. I arrange individual sessions with them, they access ‘My Grades’ on Blackboard during this meeting and we work through the feedback together.
One issue was that, because the settings were configured in a particular way, students could access their feedback as soon as we had finished writing it. This issue was identified quickly and the settings were changed.
My use of online marking has been successful and straightforward, but my experience has been helped very significantly by the availability of two screens in my office. These had already been provided by the School but became absolutely essential. Although I largely mark in my office on campus, when I mark from home I set up two laptops next to each other to replicate having two screens. This set-up allows me to have the student’s work on one screen whilst marking on the other.
One further area of note is that the process of actually creating a rubric prompted a degree of reflection on what we actually want to see from students against each criterion and at different levels. This was particularly true around the grade classification boundaries: what is the difference between a high 2:2 and a low 2:1 in terms of each of the criteria we mark against, and how can we describe these differences in the descriptor boxes of a rubric grid so that students can understand them?
This process of trying to make full use of all of the functions within our marking tools has led to some reflection surrounding criteria, what we want to see and how we might describe this to students.
For more information on the creation and use of rubrics within Grademark/Turnitin please see the Technology Enhanced Learning Blog pages here:
Colleagues within the IFP wanted to improve the student assessment experience. In particular, we wanted to make the end-to-end process quicker and easier and to reduce printing costs for students. We also wanted to offer some consistency with undergraduate programmes. This was particularly important for those students who stay in Reading after their foundation year to undertake an undergraduate degree. We were also keen to discover whether there would be any additional benefits or challenges which we had not anticipated.
No IFP modules had adopted online submission, grading and feedback until Spring 2015. We were aware of a number of departments successfully running online assessment within the University, and of the broader move towards electronic management of assessment within the sector as a whole. We introduced online assessment for all written assignments, including work containing pictures and diagrams, on the IFP modules ‘Politics’ (PO0POL) and ‘Sociology’ (PO0SOC) in 2015.
We made the decision very early in the process that we would use Turnitin Grademark within Blackboard Gradecenter. This was consistent with existing use in the Department of Politics.
We created a set of bespoke instructions for students to follow when submitting their work and when viewing their feedback. These instructions were based on those provided by the Technology Enhanced Learning Team but adjusted to fit our specific audience. These were distributed in hard copy and we spent some time in class reviewing the process well before the first submission date.
Submission areas in Blackboard and standard feedback rubric sections were created by the Departmental Administrator who was already highly experienced.
Overall, the end-to-end assessment process did become easier for students. They didn’t have to travel to campus to submit their assignments, and they enjoyed instant access to Turnitin.
Turnitin itself became a very useful learning tool for pre degree foundation students. It not only provided initial feedback on their work but prompted a dialogue with the marker before work was finally submitted. For students right at the start of their university experience this was extremely useful.
It was equally useful to automate deadlines. Students very clearly understood the exact time of the deadline, and the marker was external to this process, allowing them to adopt a more neutral position. This was more transparent than manual systems and ensured a visibly consistent experience for all students.
In addition to this, because students did not have to print out their assignments, they became much more likely to include pictures and diagrams to illustrate their work. This often improved the quality of submission.
All students uploaded their essays without any additional help. A small number also wanted to upload their own PowerPoint presentations of their in-class presentations at the same time, which meant that we needed to work through the difficulty of uploading two files to one submission point.
Moving to online assessment presented a number of further challenges. In particular, we became aware that not all students were accessing their feedback. Arranging online access for external examiners in order to moderate the work presented a final challenge. We then worked to address both of these issues.
It would be really helpful to explore the student experience in more depth. One way to do this would be to include a section specifically focused on feedback within IFP module evaluation forms.
In the future we would like to make use of the audio feedback tool within Gradecenter. This will maximise the experience of international students and their chances of developing language skills.
Within the department, I teach primarily in Early Modern and Old English. For more details of my teaching, please see Mary Morrissey Teaching and Convening.
My primary research subject is Reformation literature, particularly from London. I am particularly interested in Paul’s Cross, the most important public pulpit in sixteenth- and seventeenth-century England. I retain an interest in early modern women writers, with a particular focus on women writers’ use of theological arguments. Further details of my research activities can be found at Mary Morrissey Research.
A number of modules within the Department of English Literature began using GradeMark as a new marking tool in the Autumn of 2015. I wanted to explore the use of the new QuickMarks function as a way of enhancing the quality of the feedback provided to our students and ensuring the ‘feedback loop’ from general advice on essay writing to the feedback on particular pieces of assessed work was completed.
The Department developed extensive guidance on writing skills to support student assessment: this includes advice on structuring an argument as well as guidance on grammar and citations. This guide was housed in departmental handbooks and in the assignments folder in Blackboard. There was considerable concern that this resource was underused by students. We did know that the QuickMarks function was being used as part of our online feedback provision, and that it was possible to personalise the comments we were using and to add links to those comments as a way of providing additional explanation to students.
In order to allow relevant sections of the essay-writing style guide to be accessed via QuickMarks, I copied the document into a Google Doc, divided it into sections using Google Doc bookmarks, and assigned each bookmark an individual URL. I then used Bitly.com to shorten the URL assigned to each section to make it more usable, and created a set of QuickMarks that included these links to the style guide. In this way, students had direct access to the relevant section of the guide while reading their feedback. So if a student had not adopted the correct referencing format (the Modern Humanities Research Association style in the case of English Literature), the marker would pull a QuickMark across to the relevant point of the essay. When the student hovered over this comment bubble, they would see the text within it but were also able to click on the URL, taking them directly to page 7 of the departmental writing style guide on MHRA citation and referencing. When other colleagues wanted to adopt the same approach, I simply exported the QuickMark set to them, and they incorporated it into their own QuickMarks bank within seconds.
The Bitly.com tool, used to shorten the URLs, also monitored the usage of each link included in our QuickMarks. This showed us how many times, and on which dates, each individual link was used.
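For colleagues who would like to automate the shortening and usage-monitoring steps, the sketch below shows how this might look against Bitly’s v4 REST API using the Python requests library. This is an assumption-laden sketch rather than the workflow actually used (which relied on the Bitly.com website directly): the access token and document URL are placeholders, and the endpoints and response fields should be checked against Bitly’s current API documentation.

```python
# Sketch only: shortening style-guide section links and checking their usage
# via what is assumed to be Bitly's v4 API. Verify endpoints and fields
# against Bitly's current documentation; the token and URLs are placeholders.
import requests

API = "https://api-ssl.bitly.com/v4"
TOKEN = "YOUR_BITLY_ACCESS_TOKEN"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def shorten(long_url: str) -> str:
    """Return a shortened link for one bookmarked section of the style guide."""
    resp = requests.post(f"{API}/shorten", headers=HEADERS, json={"long_url": long_url})
    resp.raise_for_status()
    return resp.json()["link"]  # e.g. "https://bit.ly/abc123"

def total_clicks(bitlink: str) -> int:
    """Return how many times a shortened link has been followed overall."""
    # bitlink is the link without the scheme, e.g. "bit.ly/abc123"
    resp = requests.get(
        f"{API}/bitlinks/{bitlink}/clicks/summary",
        headers=HEADERS,
        params={"unit": "day", "units": -1},  # -1 is assumed to mean "all time"
    )
    resp.raise_for_status()
    return resp.json()["total_clicks"]

if __name__ == "__main__":
    # Hypothetical bookmark URL for the MHRA referencing section of the guide
    section_url = "https://docs.google.com/document/d/DOC_ID/edit#bookmark=id.abc"
    short = shorten(section_url)
    print(short, total_clicks(short.removeprefix("https://")))
```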
To complement this usage data, I also ran a survey on the student response to online marking and feedback; 35 undergraduate students responded. This showed that students found feedback most useful when it came in forms that were familiar from paper marking, like general comments on the essay and marginal comments throughout it. Less familiar types of feedback (links to web resources included in bubble comments accessed by hovering the cursor) were often missed. In the survey, 28 out of 35 students said that they did not receive any links to the writing style guide within their QuickMark comments, even though more than this did receive them. 3 students did not click on the links. Of the 5 remaining students who did make use of the links, 3 responded positively, mentioning their value in terms of improving their writing skills:
“It was good to refer to alongside my work”
“They helped me to strengthen my writing overall”
“Yes motivational to actually look at them – whereas on a paper copy you might read the comment and forget, but here you can click straight through so much easier!”
Some of the new functions available to us in GradeMark allow us to improve our feedback. We shouldn’t just be using online marking tools to replicate existing offline marking processes. We can go much further! But if this is going to be successful, it is really important to inform students about the range of options that online marking makes available, so that they make the most of the systems we use.
Once we do this effectively, we can then explore other options. In English Literature, we are keen to ensure that our Department style guide is used effectively. But there are many other web resources to which we could link through QuickMarks: screencast essay-writing guides in Politics and IWLP, as well as the new Academic Integrity toolkit by Study Advice, for example.
By including links within QuickMark comments we help to move students towards greater levels of assessment literacy.
I have been teaching English for over 15 years. I worked on EFL courses in Russia and the UK between 2000 and 2012. I started teaching English for Academic Purposes in 2013 when I joined the International Foundation Programme at the University of Surrey. I have been working as an EAP tutor at the University of Reading since 2014, first on the International Foundation Programme and now on the Pre-sessional English Programme. I have recently become part of the assessment group within ISLI, which creates and administers tests of EAP.
• To familiarise Foundation level students with e-assessment practices as part of their preparation for Undergraduate Courses at UoR
• To simplify assessment administration procedure for multiple module subgroups with varied deadlines on a 20-credit module
• To reduce the marking workload associated with paper submissions
• To deliver more timely and accessible feedback to students
The International Foundation Programme has a 15-module portfolio delivered by various UoR departments. International students joining the course have to manage multiple assessment deadlines and follow the academic assessment practices used within the departments delivering their core modules. In order to support them, the IFP runs a 20-credit Academic Skills module taught over 2 hours per week and assessed through a combination of formative and summative oral and written assignments marked offline. A combination of Word documents, Excel spreadsheets and online RISIS reports was used for assessment data administration. When I joined the programme in 2015, the team were looking for ways of:
• optimising the administration of a large volume of paper submissions with multiple sub-group deadlines
• reducing the tutors’ marking workload & simplifying the assessment data entry process
• gauging the level of learners’ engagement with feedback
• Having previously used electronic marking tools, I was keen to introduce them on the IFP. With the Module Convenor’s support, I started trialling the Turnitin e-submission and grading tools with my sub-group in spring 2015. It was agreed that a formative assessment piece would be suitable for the trial, to allow room for error, and that learner training could be integrated into the module syllabus as part of developing the students’ referencing and source-integration skills. There were 3 classroom demonstrations: how to submit work, how to check originality reports and how to access electronic feedback. Learners were also signposted to the learner-training resources available on Blackboard. Some students requested further guidance and were supported through peer-led demonstrations in subsequent lessons. The fact that most students managed the e-submission with minimal training was an encouraging start.
• To maintain consistency in feedback delivery with other module subgroups, I created a QuickMarks set based on the existing module error-correction codes that all of the tutors used, and hyperlinked them to the online practice materials we normally recommend to students when suggesting areas for improvement. I also uploaded our mark scheme as a Turnitin rubric and, as usual, provided global feedback comments on submitted work. The only difference in the feedback delivery was its online mode and the fact that each QuickMark was associated with one of the 5 assessment criteria, such as “organization” or “task completion”, hopefully making the rationale behind the grading more explicit.
• Students reacted favourably to receiving electronic feedback, saying that they liked having instant access to their grades through the “My Grades” feature and that word-processed comments were easier for international students to understand than handwritten ones. They also liked the fact that the QuickMarks we used were hyperlinked to external practice materials, which allowed them to work independently. For example, a comment on referencing issues is linked to the referencing guidelines page.
• Interestingly, the electronic assignment inbox showed that the students’ level of engagement with feedback varied: some viewed the marks but did not access the detailed feedback; others read the comments but did not explore the hyperlinks. This prompted us to run follow-up tutorials, for which students have to prepare using the tutor’s feedback. Overall, the trial was largely successful but highlighted the need for more learner training in how to process e-feedback.
• Because the online marking procedure used with the trial group largely replicated our existing offline marking procedure in a less time-consuming way, other module tutors were keen to experiment with e-assessment. The Programme Director and the Module Convenor were very supportive and allowed me to spend time on one-to-one consultations with team members, in order to demonstrate the benefits of using e-assessment tools and to train those who wished to trial them.
• Over the next couple of terms it was decided to introduce e-submission for all written coursework assignments in order to optimise the administration process. However, tutors were allowed the flexibility of marking online, downloading e-submissions in order to mark them in Word, or printing papers. This approach met our staff training needs and working styles.
The challenge at this stage was that the e-feedback and grades had to be transferred into the official feedback forms and spreadsheets for consistency purposes. In order to avoid multiple data entry, we decided to start using the Turnitin rubric and the Blackboard Grade Centre. Creating a Turnitin rubric was easy and eliminated the need for calculating grades in Excel documents and transferring them to a master spreadsheet. We have not moved away from Excel documents completely, but we have significantly reduced the manual data-entry load.
• By autumn 2016 all Academic Skills written assignments were submitted and graded online.
• Students find the new submission procedure, with a single submission point and an electronic receipt system, easier to follow.
• Many IFP students have used the opportunity to submit work remotely while visiting their families abroad during holidays.
• Many students are using Turnitin Originality Reports as a formative learning tool that helps them see how well they have paraphrased or referenced source material and revise their drafts more independently, which has resulted in fewer cases of unintentional plagiarism.
• There is greater transparency for learners as to how their mark was arrived at, because they can see the number and type of QuickMarks comments associated with each criterion their work has been graded on.
• Generally, they now view e-submission and feedback as part of daily university activity, which prepares them for the reality of academic study on their future degree courses.
Effect on the Tutors
• Using e-submission has decreased the burden of assessment administration: instead of manually sorting large volumes of student papers into sub-groups, tutors use GradeCentre SmartViews to filter their students’ submissions.
• Non-submitters are identified and sent a reminder earlier. In the past, non-submitters could only be identified after the anonymous marking process had been completed, which often resulted in a hefty penalty. Now a tutor or the module convenor uses the “e-mail non-submitters” button right after the deadline to chase the students (even if marking is anonymous). As a result, students who failed to submit their assignment, or who uploaded to the wrong submission point, receive an early reminder. For many IFP students this is a learning curve, and getting an early reminder helps them.
• Marking has become easier with Turnitin: tutors can manage the 15-day turnaround time better because they can start marking straight after the deadline rather than having to wait until printed copies are distributed. Many find QuickMarks hyperlinked to external practice or reference materials helpful as a way to feed forward without giving a lengthy explanation. Some tutors reported being slowed down by internet connection issues. It also took us some time to adjust to the new Feedback Studio interface.
• Using electronic assessment tools has also prompted a professional dialogue about our current assessment practices and highlighted the need for protocols on e-submission, e-moderation and external examining. So it is great news that such guidelines are being developed as part of the EMA work.
• We have gained a better overview of IFP students’ engagement levels, because GradeMark allows us to identify and contact non-submitters in one click. It also shows us the number of submission attempts and whether students have accessed feedback prior to tutorials. This helps us to support at-risk students better.
• The module convenor has gained a better real-time overview of the marking process: the number of scripts marked so far and marking analytics (average, standard deviation, range) are all displayed in the GradeCentre column statistics. This has allowed the module convenor to support the tutors by re-distributing scripts or helping to mark and moderate.
• The module convenor can also see how much feedback is given to students across the board, which is important for quality assurance purposes.
• Dealing with possible cases of academic misconduct and late submissions has become easier thanks to Turnitin originality reports and the electronic receipt system.
• Our team’s experience has shown that it is worthwhile trying to integrate electronic assessment literacy into the course syllabus. It would also be great if there were university-wide learner-training sessions, similar to the CQSD sessions offered to staff.
• Moving our module toward e-assessment was manageable because our approach to electronic tools has been selective: where our current assessment practices worked well, we only sought to replicate them. When a change was needed, we looked for ways technology could be used to implement it.
• Sharing best practice and providing peer support has proven to be a good way of encouraging more colleagues to use e-assessment tools, because it was not perceived as a top-down driven change.
• Having the programme management’s support has really helped our small community of e-practitioners to grow. Creating training opportunities and allowing some flexibility during the transition to e-practice have been key to its success. There came a point when our exploratory e-assessment practices needed to be more standardised, and programme-level decisions were key to maintaining consistency of practice.
Following the successful trial of the e-assessment tools on the Academic Skills and International English Modules, the programme management is keen to encourage other IFP modules to trial them.
In Spring 2017, a member of the Blackboard Team delivered a Staff Development Session on GradeMark to the IFP team.
We are currently exploring the possibility of doing our internal and external moderation electronically.
Dr Geoff Taggart is a lecturer in the Institute of Education and Programme Director for the Early Years Practice programme at Reading. As part of his secondment to the EMA programme, Geoff decided to run a focus group with students from the IoE to gather perspectives on electronic feedback and grading methods.
To identify student views on:
• The perceived benefits of the three most commonly-used forms of feedback offered by Grademark (i.e. Quickmarks, rubrics and text comments)
• Preferences regarding the emphasis which each form of feedback should be given in a typical piece of work
• Views regarding the interrelationship of the different forms of feedback
The focus group was composed of 4 MA students (2 international and 2 home), plus one Chinese academic visitor with recent experience of being a student. Their views were therefore representative of students engaged in social science disciplines and may not be transferable to other fields. Also in attendance were myself, Dr Maria Kambouri (engagement in feedback project) and Jack Lambert-Taylor (EMA). It took place at London Road campus between 5 and 6.30pm on Thurs 18th January.
I provided participants with three copies of the same assignment, one marked exclusively with Quickmarks, one marked only with the final text comment and one marked solely according to the rubric. The purpose of this was to isolate and focus attention upon each of the three kinds of electronic feedback provided through the Feedback Studio.
The marking was not meant to be typical (nor to serve as an example of best practice) but to highlight the positive and negative qualities of each kind of feedback. For example, there were many more Quickmark comments appended to the assignment than would usually occur. The purpose of this was to emphasise both the positive benefits of maximised contextualised feedback and the negative impression of ‘overload’ which the comments could give. Additionally, the text comments amounted to over 2,500 words and were extremely conversational and wide-ranging.
Whilst this strategy deliberately emphasised the dialogical and personal nature of this feedback method, it also made it hard to pick out straightforwardly the points where the student needed to improve. By contrast, the rubric does this very clearly, but it is not a personal way of providing feedback.
• Students appreciated Quickmarks which contained hyperlinks (e.g. to Study Advice)
• One participant noted that they didn’t like the Quickmarks, on the basis that a printed document does not have interactive links. The same participant suggested that excessive Quickmarks may be intrusive and give the impression of ‘massacring’ a student’s work; they agreed that less excessive use would be preferable. The same participant noted that there was ‘no positive’ or ‘constructive’ feedback on the page, only problem points. This may be due to the nature of the sample work, which was deliberately of a poor standard; perhaps the same study should be conducted with a high-quality piece of work.
• Another participant noted that narrative summaries can come across as more personal, particularly if negative, and that they preferred Quickmarks on the basis that they provided a more objective tone. Another participant suggested that Quickmarks may come across as more ‘humane’ on that basis, rather than a ‘rant at the end’.
• Another participant suggested that Quickmarks provide good evidence of the thoroughness of the marking process.
• One participant suggested that Quickmarks could indicate which assessment criteria in the rubric they refer to. The facility to do this was explained.
• It was noted that Quickmarks should be written passively rather than directed at the author, as direct address can appear more accusatory: for example, ‘The point is not clear here’ as opposed to ‘you have not been clear here’.
Summary – Quickmarks should be limited in their use, include positive as well as negative comments, include relevant hyperlinks, be focussed on the assignment rather than the student, and be associated with rubric criteria where possible.
• Two participants suggested that narrative summary can provide more detailed feedback and valued the conversational tone. It was also suggested that Quickmarks may be perceived as momentary thoughts without reflection, whilst narrative summary may come later after further thought.
• One participant noted that when you write an essay you aren’t ‘just trying to tick boxes in a rubric, you are trying to say something’. This was a really interesting point which emphasised the student expectation of a personal, dialogical relationship with their tutor (something which rich text comments support).
• Several participants noted that marking with more narrative summary would be more time-consuming, and expressed empathy for academics doing so.
• It was also noted that narrative summary would be better-fitted to a conversation in person, and that subtleties within the feedback would be better expressed through intonation in the voice and facial expressions of the marker. Absent those features, it can come across as very serious, and lacks intricacy.
• Students commented that this kind of feedback can also become too ‘waffly’ and lack focus.
Summary – This kind of feedback gives the strongest impression that the tutor has considered the assignment overall, mulled it over and arrived at a holistic impression, something that was highly valued (contrast with: ‘a marked rubric alone shows that the tutor perhaps didn’t think about it that much’). However, the writing needs to be clearly focussed on specific ways in which the student can improve (i.e. bullet points).
• Students commented positively that the rubric showed very clearly how successful an assignment had been in general terms. However, they were concerned that it does not explain how to improve if you have not done very well.
• Students questioned how the final mark is actually calculated through the use of a qualitative rubric where the different elements are unweighted – this was considered to lack full transparency.
• It was unanimously agreed that a rubric without comments was not a preferable form of feedback on its own, as it lacks feed-forward information, despite the fact that the adjacent rubric statements (i.e. in the next grade band up) also appear to students in the feedback.
• Students did not like the way in which the rubric statements were represented as a consecutive list when printed off. They much preferred the grid they were used to (i.e. with grade boundaries as the columns and rubric criteria as the rows).
Summary – a rubric is useful in showing how successful an assignment has been in a broad and general sense. The only way in which it could be more useful would be if the rubric were more specific to the particular assignment (which would mean having multiple rubrics across programmes/the School).
1. All forms of feedback, taken together, were considered to be useful.
2. The three different forms of feedback need to support each other (e.g. the rubric needs to reflect the written comments; tutors could use the same language in their text comments as that used in the rubric statements)
3. No matter the means by which feedback is given, students want to feel as though their work has made an impression on their tutor.
4. If tutors want to mark mostly through Quickmarks and rubrics (and provide greatly reduced written comments), this may be perceived negatively by students who expect a more personalised response.
The following points may require consultation with Blackboard:
• One participant suggested that different colours may be used to indicate whether quickmark feedback is positive or negative.
• A tutor suggested that it would be helpful if tutors could have flexibility about where to position their Quickmarks in their set, as otherwise they just appear rather randomly. This is an issue when marking at speed.
• All participants said that they like the use of ticks in marking, but no alternative was suggested. Can a tick symbol be included in the Quickmark set?
• Tutors are able to expand the rubric when marking. Can it be presented to students in this format?
Professor Will Hughes has used rubric grids within Grademark extensively across all of his modules, significantly enhancing student engagement with his feedback, student understanding of his marking criteria and student attainment in subsequent essays, whilst making his own experience of marking more efficient.
My research interests include the control and management of building contracts, the management of design in construction, the analysis of organizational structure, and the analysis of procurement systems. The focus of my work is the commercial processes of structuring, negotiating, recording and enforcing business deals in construction. I have developed a framework for modelling and describing the myriad permutations of procurement variables, to aid in analysis and understanding of the complexities of organizing the procurement of built facilities. This has been incorporated into a British Standard (2011) on construction procurement.
As convenor of a range of modules typically enrolling 120 students submitting around 3 pieces of work each year, I wanted to ensure that I had a really effective approach to marking and the provision of feedback. I wanted all of my students to engage fully with the feedback I provided, and to understand thoroughly exactly what they had done well and where they could make improvements after reading each piece of feedback. But I needed to achieve all of this in an effective and efficient way.
National Student Survey results suggest that a significant number of students do not feel that they have access to marking criteria prior to submission and do not understand how to improve their performance based on the comments provided. Often the provision of more and more free-text feedback doesn’t appear to feed into higher levels of student attainment and satisfaction. At the same time, increasing student numbers and broader workload demands have increased pressures on all lecturers across the sector. In response, I decided to adopt the use of rubric grids as one way to start to address these key issues.
In 2015 I created a rubric grid in which I listed criteria down the left-hand side and then unpicked what performance against each of those criteria might look like, describing different levels of performance from the lower 10 or 20 range all the way up to outstanding performance in the 90 or 100 range. It was extremely interesting to attempt a clear articulation of the differences between grades of failure and grades of excellence. Explaining, for example, the difference between 90 and 100% for a specific criterion is not something I had ever done before.
I actually created a slightly different grid for each piece of assessment, but it would be equally possible to create a less assessment-specific grid that could be used across a whole module or even a whole programme.
Crucially, I shared the criteria for assessment with my students in the assignment brief so they knew, well ahead of submission, what the marking criteria themselves looked like.
I created all of this content in a standard Excel spreadsheet first, and then clicked on the ‘rubric manager’ button and then ‘import’ to transfer my grid into Grademark. I could instead have created it directly within Grademark, in an incredibly simple process, by clicking on the ‘rubric’ icon, then ‘rubric manager’, ‘rubric list’ and ‘create new rubric’, and then populating the grid with specific criteria and scales. When the rubric grid is attached to one assignment, Grademark automatically attaches it to all the assignments within the submission point.
This meant that each time I opened a piece of work in Grademark, I could click on the rubric icon to display the grid. I could then simply click the box against each criterion that applied to the particular assessment I was marking, to show the student how they had performed in that particular skill.
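As a rough illustration of the grid structure described above, the sketch below writes a skeleton rubric (criteria as rows, performance bands as columns) to a CSV file that can be opened and refined in Excel. The criteria, bands and descriptor prompts are invented placeholders, not the rubric described in this case study, and the exact file format accepted by Grademark’s rubric import should be checked against Turnitin’s documentation.

```python
# Illustrative sketch: a skeleton rubric grid (criteria as rows, performance
# bands as columns) written to CSV for editing in Excel. The criteria and
# descriptor prompts are invented placeholders; check Turnitin's documentation
# for the exact format its rubric import expects.
import csv

bands = ["0-39", "40-49", "50-59", "60-69", "70-79", "80-100"]
criteria = {
    "Argument": "Describe the quality of argument expected in this band.",
    "Evidence": "Describe the use of sources expected in this band.",
    "Structure": "Describe the organisation expected in this band.",
    "Referencing": "Describe the citation accuracy expected in this band.",
}

with open("rubric_grid.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Criterion"] + bands)  # header row: one column per band
    for criterion, prompt in criteria.items():
        # One descriptor cell per band; replace the prompts with real text.
        writer.writerow([criterion] + [f"{prompt} ({b})" for b in bands])

print("Wrote rubric_grid.csv - open in Excel and refine the descriptors.")
```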
In addition to using the rubric grid to classify student performance against individual marking criteria, I would also provide in-text comments and general comments in the free-text feedback box, to ensure that really tailored and specific feedback was also provided to all of my students. As I have become more experienced, I have tried to stop myself from adding in-text comments, as doing so tends to result in detailed editing comments, which are not as helpful as feedback.
From the first time I used this approach, students have been enthusiastic. They have emailed me personally, as well as commenting in module evaluation forms, to say that they found the feedback more useful than anything they had received in their education to date. I no longer have students complaining that their mark is too low and asking whether I have made a mistake. Rather, those who would have complained begin by acknowledging that the mark is clear and well-justified, and that they would like to discuss how to improve. This positive approach from students is refreshing.
One of the things that made this activity successful was the prior development of a feedback library, which provided a wide-ranging list of comments to draw from and summarise. Another has been the move towards making comments positive rather than negative. It can be very difficult to focus on what students have done well in a poor submission, but it has proved to be the single most valuable thing. The performance of weak students improves significantly when they are given encouragement rather than discouragement. And strong students appreciate being given indications of how they could improve as well, which, they tell me, is rare but welcome. I still have a way to go in making all of the comments positive and encouraging. If I were starting over, I would begin by spending time thinking seriously about how to sound encouraging and positive when students submit very low-quality work. One thing to be careful about is that once the rubric has been attached to an assignment, it cannot be edited without being detached, which loses all the grading. At first, I copied every mark into an Excel spreadsheet in case there were errors or omissions in my rubric that I hadn’t noticed until using it.
Every piece of work I set up in Turnitin gives me the opportunity to fine tune the approach. Each piece of work has its own criteria for assessment, so I tend to develop the rubrics in Excel, making them easier to adapt for the next piece of work. This also makes it easy to share with colleagues. If anyone would like further examples, I would be happy to share more recent ones as Excel files.
Dr Madeleine Davies and Michael Lyons, School of Literature and Languages
The Department of English Literature (DEL) has run two student focus groups and two whole-cohort surveys as part of our Teaching and Learning Development Fund ‘Diversifying Assessments’ project. This is the second of two T&L Exchange entries on this topic. Click here for the first entry, which outlines how the feedback received from students indicates that their module selection is informed by the assessment models used by individual modules. Underpinning these decisions is an attempt to avoid the ‘stress and anxiety’ that students associate with exams. The surprise of this second round of focus groups and surveys is the extent to which this appears to dominate students’ teaching and learning choices.
Having used focus groups and surveys to provide initial qualitative data on our assessment practices, we noticed a widespread preference for alternatives to traditional exams (particularly the Learning Journal) and decided to investigate the reasons for this further. The second focus group and subsequent survey sought to identify why the Learning Journal in particular is so favoured by students, and we were keen to explore whether students perceived teaching and learning aims to be better achieved via this method than by the traditional exam. We also took the opportunity to ask students what they value most in feedback: the first focus group and survey had touched on this, but we decided this time to give students the opportunity to select four elements of feedback which they could rank in order of priority. This produced more nuanced data.
The results in the ‘Feedback’ sections are valuable for DEL: they indicate that clarity, diagnosis, and solutions-focused comments are key. In addressing our feedback conventions and practices, this input will help us to reflect on what we are doing when we give students feedback on their work.
The results of the focus group and of the subsequent survey do, however, raise some concerns about the potential conflict between ‘student choice’ and pedagogical practice. Students indicate not only that they want to avoid exams because of ‘stress’, but also that they would like to be able to select assessment methods within modules. This poses problems because marks are in part produced ‘against’ the rest of the batch: if the ‘base-line’ is removed by allowing students to choose assessment models, we would lack one of the main indicators of level.
In addition, the aims of some modules are best measured using exams. Convenors need to consider whether a student’s work can be assessed in non-exam formats but, if an exam is the best test of teaching and learning, it should be retained, regardless of student choice.
If, however, students overwhelmingly choose non-exam-based modules, this would leave modules retaining an exam in a vulnerable position. The aim of this project is to find ways to diversify our assessments, but this could leave modules that retain traditional assessment patterns vulnerable to students deselecting them. This may have implications for benchmarking.
It may also be the case that the attempt to avoid ‘stress’ is not necessarily in students’ best interests. The workplace is not a stress-free zone and it is part of the university’s mission to produce resilient, employable graduates. Removing all ‘stress’ triggers may not be the best way to achieve this.
With thanks to Lauren McCann of TEL for sending me the first link which includes a summary of students’ responses to various types of ‘new’ assessment formats.
The ‘Diversifying Assessment in DEL’ TLDF Mini-Project revealed several compelling reasons for reflecting upon assessment practice within a traditional Humanities discipline (English Literature):