Electronic Management of Assessment: Creation of an e-Portfolio for PWP training programmes

Tamara Wiehe, Charlotte Allard & Hayley Scott (PWP Clinical Educators)

Charlie Waller Institute; School of Psychology and Clinical Language

Overview

In line with the University’s transition to Electronic Management of Assessment (EMA), we set out to create an electronic Portfolio (e-Portfolio) for use on our Psychological Well-being Practitioner (PWP) training programmes to replace an existing hard-copy format. The project spanned almost a year (October 2018 to September 2019) as we took the time to consider the implications for students, supervisors in our IAPT NHS services, University administrators and markers. Working closely with the Technology Enhanced Learning (TEL) team led us to a viable solution that has been launched with our new cohorts from September 2019.

Image of portfolio template cover sheet

Objectives

  • Create an electronic Portfolio in line with EMA that overcomes existing issues and improves the experience for students, NHS supervisors, administrators and markers.
  • Work collaboratively with all our key stakeholders to ensure that the new format satisfies their various needs.

Context

A national requirement for PWPs is to complete a competency-based assessment in the form of a Portfolio that spans the three modules of their training. Our students are employed by NHS services across the South of England and many live close to their service rather than the University.

The issue? The previous hard-copy format meant that students spent time and money printing their work and travelling to the University to submit/re-submit it. University administrators and markers reported issues with transporting the folders to markers and storing them, especially with the larger cohorts.

The solution… To resolve these issues by transitioning to an electronic version of the Portfolio.

Implementation

  1. October 2018: An initial meeting with TEL was held in order to discuss the practicalities of an online Portfolio submission.
  2. October 2018 – March 2019: TEL created several prototype options for submission via Blackboard, including the use of the journal tool and a zip file. Due to practicalities, the course team decided on a single-file Word document template.
  3. April – May 2019: Student focus groups were conducted on both programmes (undergraduate and postgraduate) on which the same assessment sits, to gather feedback on the potential solution we had created. Using the outcomes of the focus groups and staff meetings, it was unanimously agreed that the proposed solution was viable for use with our future cohorts.
  4. June 2019: TEL delivered a training session for academic and administrative staff to familiarise them with the process from both the student and staff perspectives. TEL also created a guidance document for administrators on how to set up the assignment on Blackboard.
  5. July – August 2019: Materials including the template and rubrics were amended and formatted to meet the requirements for online submission on both the MSci and PWP courses. Resources were also created for students to access on Blackboard, such as screencasts on how to access, utilise and submit the Portfolio in the electronic format; the aim of this is to improve accessibility for all students on the course.
  6. September 2019: Our IAPT services were notified of the changes, as the supervisors there are responsible for reviewing and ‘signing off’ each student’s performance before the Portfolio is submitted to the University for a final check.

Image of 'how to' screen cast resources on Blackboard

Impact

Thus far, the project has achieved the objectives it set out to achieve. The template for submission is now available for students to complete throughout their training course. This will modernise the submission process and make it less burdensome for students, supervisors, administrators and markers.

Image of the new portfolio process

The students in the focus group reported that this would significantly simplify the process and remove the barriers they had often encountered in completing and submitting the Portfolio. So far there have not been any unexpected outcomes from the development of the Portfolio; however, we aim to review the process with the first online Portfolio submission in June 2020.

Reflections

Upon reflection, the development of the online Portfolio has so far been a success. We listened to student feedback about what would improve their experience of completing the Portfolio, and from this we developed an online Portfolio that meets the requirements of two BPS-accredited courses and will be used by future cohorts of students.

Additionally, the collaboration between staff, students and the TEL team has led to improved communication across teams and the sharing of new ideas; this is something we have continued to incorporate into our teaching and learning projects.

An area to develop in the future would be the use of dedicated Portfolio software. Initially, we wanted to use a journal tool on Blackboard; however, it was not suitable for the needs of the course (most notably exporting the submission and mark sheet to external parties). We will continue to review these options and to gather feedback from future cohorts.

 

Improving assessment writing and grading skills through the use of a rubric – Dr Bolanle Adebola

Dr Bolanle Adebola is the Module Convenor and lecturer for the following modules on the LLM Programme (on campus and distance learning): International Commercial Arbitration, Corporate Governance, and Corporate Finance. She is also a Lecturer for the LLB Research Placement Project.

Bolanle is also the Legal Practice Liaison Officer for the CCLFR.

A profile photo of Dr Adebola

OBJECTIVES

For students:

• To make the assessment criteria more transparent and understandable.
• To improve assessment output and essay writing skills generally.

For the teacher:

• To facilitate assessment grading by setting clearly defined criteria.
• To facilitate the feedback process by creating a framework for dialogue which is understood both by the teacher and the student.

CONTEXT

I faced a number of challenges in relation to the assessment process in my first year as a lecturer:

• My students had not performed as well as I would have liked them to in their assessments.

• It was my first time having to justify the grades I had awarded, and I found that I struggled to articulate clearly and consistently the reasons for some of them.

• I had been newly introduced to the step-marking framework for distinction grades, as well as the requirement to make full use of the grading scale, which I found challenging in view of the quality of some of the essays I had graded.

I spoke to several colleagues but came to understand that there were as many approaches as there were people. I also discussed the assessment process with several of my students and came to understand that many were both unsure and unclear about the criteria by which their assessments were graded across their modules.
I concluded that I needed to build a bridge between my approach to assessment grading and my students’ understanding of the assessment criteria. Ideally, the chosen method would facilitate consistency and the provision of feedback on my part, and improve the quality of essays on my students’ part.

IMPLEMENTATION

I tend towards the constructivist approach to learning, which means that I structure my activities towards promoting student-led learning. For summative assessments, my students are required to demonstrate their understanding and ability to critically appraise legal concepts that I have chosen from our sessions in class. Hence, the main output for all summative assessments on my modules is an essay. Wolf and Stevens (2007) assert that learning is best achieved where all the participants in the process are clear about the criteria for the performance and the levels at which it will be assessed. My goal therefore became to ensure that my students understood the elements I looked for in their essays, these being the criteria against which I graded them. They also had to understand how I decided the standards that their essays reflected. While the student handbook sets out the various standards that we apply in the University, I wanted to provide clearer direction on how they could meet those standards, and on how I determine that an essay meets any of them.

If the students were to understand the criteria I apply when grading their essays, then I would have to articulate them. Articulating the criteria for a well-written essay would benefit both me and my students. For my students, in addition to a clearer understanding of the assessment criteria, it would enable them to self-evaluate, which would improve the quality of their output. Improved quality would lead to improved grades, and I could give effect to university policy. Articulating the criteria would benefit me because it would facilitate consistency. It would also enable me to give detailed and helpful feedback to students on the strengths and weaknesses of the essays being graded, as well as on their essay writing skills in general, with advice on how to improve different facets of their outputs going forward. Ultimately, my students would learn valuable skills which they could apply across the board and after they graduate.
For assessments which require some form of performance, essays being an example, a rubric is an excellent evaluation tool because it fulfils all the requirements I have expressed above (Brookhart, 2013). Hence, I decided to present my grading criteria and standards in the form of a rubric.

The rubric is divided into 5 criteria which are set out in 5 rows:

  • Structure
  • Clarity
  • Research
  • Argument
  • Scholarship.

For each criterion, there are 4 performance levels which are set out in columns: Poor, Good, Merit and Excellent. An essay is mapped along each row and column. The final mark depends on how the student has performed on each criterion, as well as my perception of the output as a whole.
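
To illustrate how criterion-level judgements might combine into an indicative mark, the short Python sketch below maps performance levels onto mark bands and adds a discretionary holistic adjustment. The bands, equal weighting and function names are illustrative assumptions for this case study, not the actual scheme used on these modules.

# Illustrative sketch only: the level-to-band mapping and the holistic
# adjustment are hypothetical, not the marking scheme described above.
LEVEL_MARKS = {"Poor": 45, "Good": 58, "Merit": 65, "Excellent": 75}
CRITERIA = ["Structure", "Clarity", "Research", "Argument", "Scholarship"]

def indicative_mark(levels, holistic_adjustment=0):
    """levels maps each criterion to the performance level awarded."""
    per_criterion = [LEVEL_MARKS[levels[c]] for c in CRITERIA]
    base = sum(per_criterion) / len(per_criterion)   # equal weighting assumed
    return round(base + holistic_adjustment)         # marker's overall judgement

example = {"Structure": "Good", "Clarity": "Merit", "Research": "Merit",
           "Argument": "Excellent", "Scholarship": "Good"}
print(indicative_mark(example, holistic_adjustment=2))  # prints 66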

Studies suggest that a rubric is most effective when produced in collaboration with the students (Andrade, Du and Mycek, 2010). When I created my rubric, however, I did not involve my students. I thought that would not be necessary given that my rubric was to be applied generally and with changing cohorts of students. Notwithstanding, I wanted students to engage with it. So, the document containing the rubric has an introduction addressed to the students, which explains the context in which the rubric has been created. It also explains how the rubric is applied and the relationship between the criteria. It states, for example, that ‘even where the essay has good arguments, poor structure may undermine its score’. It explains that the final grade combines both an objective assessment and a subjective evaluation of the output as a whole, which is based on the marker’s discretion.

To ensure that students are not confused about the standards set out in the rubric and the assessment standards set out in the students’ handbook, the performance levels set out in the rubric are mapped against the assessment standards set out in the student handbook. The document containing the rubric also contains links to the relevant handbook. Finally, the rubric gives the students an example of how it would be applied to an assessment. Thereafter, it sets out the manner in which feedback would be presented to the students. That helps me create a structure in which feedback would be provided and which both the students and I would understand clearly.

IMPACT

My students’ assessment outputs have been of much better quality, and so have achieved better grades, since I introduced the rubric. In one of my modules, the average grade, as recorded in the module convenor’s report to the external examiner (MC’s Report) for 2015/16, was 64.3%. 20% of the class attained distinctions, all in the 70-79 range. That year, I struggled to give feedback and was asked to provide additional feedback comments to a few students. In 2016/17, after I introduced the rubric, there was a slight dip in the average mark to 63.7%. The dip was because of a fail mark amongst the cohort; if that fail mark is controlled for, the average mark had crept up from 2015/16. There was a clear increase in the percentage of distinctions, which had gone up to 25.8% from 20% in the previous year. The cross-over had been from students who had been in the merit range. Clearly, some students had been able to use the rubric to improve the standards of their essays. I found the provision of feedback much easier in 2016/17 because I had clear direction from the rubric. When giving feedback I explained both the strengths and weaknesses of the essay in relation to each criterion. My hope was that students would apply the advice more generally across other modules, as the method of assessment is the same across the board. In 2017/18, the average mark for the same module went up to 68.84%. 38% of the class attained distinctions, with 3% attaining more than 80%. Hence, in my third year, I have also been able to utilise step-marking in the distinction grade, which has enabled me to meet the university’s policy.

When I introduced the rubric in 2016/17, I had a control module, by which I mean a module in which I neither provided the rubric nor spoke to the students about their assessments in detail. The quality of assessments from that module was much lower than the others where the students had been introduced to the rubric. In that year, the average grade for the control module was 60%; with 20% attaining a distinction and 20% failing. In 2017/18, while I did not provide the students with the rubric, I spoke to them about the assessments. The average grade for the control module was 61.2%; with 23% attaining a distinction. There was a reduction in the failure rate to 7.6%. The distinction grade also expanded, with 7.6% attaining a higher distinction grade. There was movement both from the failure grade and the pass grade to the next standard/performance level. Though I did not provide the students with the rubric, I still provided feedback to the students using the rubric as a guide. I have found that it has become ingrained in me and is a very useful tool for explaining the reasons for my grades to my students.

From my experience, I can assert, justifiably, that the rubric has played a very important role in improving the students’ essay outputs. It has also enabled me to improve my feedback skills immensely.

REFLECTIONS

I have observed that, as studies in the field argue, it is insufficient merely to have a rubric. For the rubric to achieve the desired objectives, it is important that students actively engage with it. I must admit that I did not take a genuinely constructivist approach to the rubric. I wanted to explain myself to the students. I did not really encourage a two-way conversation, as the studies recommend, and I think this limited the effectiveness of the rubric.

In 2017/18, I decided to talk the students through the rubric, explaining how they can use it to improve performance. I led them through the rubric in the final or penultimate class. During the session, I explained how they might align their essays with the various performance levels/standards. I gave them insights into some of the essays I had assessed in the previous two years; highlighting which practices were poor and which were best. By the end of the autumn term, the first module in which I had both the rubric and an explanation of its application in class saw a huge improvement in student output as set out in the section above. The results have been the best I have ever had. As the standards have improved, so have the grades. As stated above, I have been able to achieve step-marking in the distinction grade while improving standards generally.

I have also noticed that even where a rubric is not used, but the teacher talks to the students about the assessments and their expectations of them, students perform better than where there is no conversation at all. In 2017/18, while I did not provide the rubric to the control module, I discussed the assessment with the students, explaining practices which they might find helpful. As demonstrated above, there was a lower failure rate and a general improvement across the board. I can conclude, therefore, that assessment criteria ought to be explained much better to students if their performance is to improve. However, I think that having a rubric, and student engagement with it, is the best option.

I have also noticed that many students tend to perform well, in the merit bracket. These students would like to improve but are unable to work out how to do so. These students, in particular, find the rubric very helpful.

In addition, Wolf and Stevens (2007) observe that rubrics are particularly helpful for international students whose assessment systems may have been different, though no less valid, from that of the system in which they have presently chosen to study. Such students struggle to understand what is expected of them and so, may fail to attain the best standards/performance levels that they could for lack of understanding of the assessment practices. A large proportion of my students are international, and I think that they have benefitted from having the rubric; particularly when they are invited to engage with it actively.

Finally, the rubric has improved my feedback skills tremendously. I am able to express my observations and grades in terms well understood both by myself and my students. The provision of feedback is no longer a chore or a bore. It has actually become quite enjoyable for me.

FOLLOW UP

On publishing the rubric to students:

I know that Blackboard gives the opportunity to embed a rubric within each module. So far, I have only uploaded copies of my rubric onto Blackboard for the students on each of my modules. I have decided to explore the Blackboard option to make the annual upload of the rubric more efficient. I will also see whether Blackboard offers opportunities to improve on the rubric, which will be a couple of years old by the end of this academic year.

On the Implementation of the rubric:

I have noted, however, that it takes about half an hour to explain the rubric to students for each module, which eats into valuable teaching time. A more efficient method is required to provide good assessment insight to students. This summer, as the examination officer, I will liaise with my colleagues to discuss the provision of a best practice session for our students in relation to their assessments. At the session, students will also be introduced to the rubric. The rubric can then be paired with actual illustrations which the students can be encouraged to grade using its content. Such sessions will improve their ability to self-evaluate, which is crucial both to their learning and to the improvement of their outputs.

LINKS

• K. Wolf and E. Stevens (2007) 7(1) Journal of Effective Teaching, 3. https://www.uncw.edu/jet/articles/vol7_1/Wolf.pdf
• H Andrade, Y Du and K Mycek, ‘Rubric-Referenced Self-Assessment and Middle School Students’ Writing’ (2010) 17(2) Assessment in Education: Principles, Policy & Practice, 199 https://www.tandfonline.com/doi/pdf/10.1080/09695941003696172?needAccess=true
• S Brookhart, How to Create and Use Rubrics for Formative Assessment and Grading (Association for Supervision & Curriculum Development, ASCD, VA, 2013).
• Turnitin, ‘Rubrics and Grading Forms’ https://guides.turnitin.com/01_Manuals_and_Guides/Instructor_Guides/Turnitin_Classic_(Deprecated)/25_GradeMark/Rubrics_and_Grading_Forms
• Blackboard, ‘Grade with Rubrics’ https://help.blackboard.com/Learn/Instructor/Grade/Rubrics/Grade_with_Rubrics
• Blackboard, ‘Import and Export Rubrics’ https://help.blackboard.com/Learn/Instructor/Grade/Rubrics/Import_and_Export_Rubrics

Making full use of Grademark in Geography and Environmental Science – Professor Andrew Wade

 

Profile picture for Prof. Andrew Wade

Professor Andrew Wade is responsible for research in hydrology, focused on water pollution, and for undergraduate and postgraduate teaching, including Hydrological Processes.

OBJECTIVES

Colleagues within the School of Archaeology, Geography and Environmental Sciences (SAGES) have been aware of the University’s broader ambition to move towards online submission, feedback and grading where possible. Many had already made the change from paper based to online practices and others felt that they would like the opportunity to explore new ways of providing marks and feedback to see if handling the process online led to a better experience for both staff and students.

CONTEXT

In Summer 2017 it was agreed that SAGES would become one of the Early Adopter Schools working with the EMA Programme. This meant that the e-Submission, Feedback and Grading workstream within the Programme worked very closely with both academic and professional colleagues within the School from June 2017 onwards, in order to support all aspects of a change from offline to online marking and broader processes for all coursework, except where there was a clear practical reason not to, for example field notebooks.
I had started marking online in 2016-17, so I was already familiar with some aspects of the marking tools and some of the broader processes.

IMPLEMENTATION

My Part 2 module, GV2HY Hydrological Processes, involves students producing a report containing two sections. Part A focuses on a series of short answers based on practical-class experiences and Part B requires students to write a short essay. I was keen to use all of the functionality of Grademark/Turnitin during the marking process, so I spent time creating my own personalised QuickMark bank from which I could simply pull across commonly used feedback phrases and marks against each specific question. This function was particularly useful when marking Part A: I could pull across QuickMarks showing the mark and then, in the same comment, explain why the question received, for example, 2 out of a possible 4 marks. It was especially helpful that my School sent around a discipline-specific set of QuickMarks created by a colleague. We could then pull the whole set, or just particular QuickMarks, into our own personalised set if we wanted to. This reduced the time spent on personalising and meant that the quality of my own set was improved further.

I also wanted to explore the usefulness of rubric grids as one way to provide feedback on the essay content in Part B of the assignment. A discipline-specific example rubric grid was created by the School and sent around to colleagues as a starting point. We could then amend this rubric to fit our specific assessment or, more generally, our modules and programmes. The personalised rubrics were attached to assignments using a simple process led by administrative colleagues. When marking, I would indicate the level of performance achieved by each student against each criterion by simply highlighting the relevant box in blue. This rubric grid was used alongside both QuickMarks and in-text comments in the essay. More specific comments were given in the blank free-text box to the right of the screen.

IMPACT

Unfortunately, module evaluation questionnaires were distributed and completed before students received feedback on their assignments, so the student reaction to online feedback using QuickMarks, in-text comments, free-text comments and rubrics was not captured.

In terms of the impact on the marker experience, after spending some initial time getting my personal QuickMarks library right and amending the rubric example to fit my module, I found marking online easier and quicker than marking on paper.

In addition to this, I also found that the use of rubrics helped to ensure standardisation. I felt comfortable that my students were receiving similar amounts of feedback and that this feedback was consistent across the cohort and when returning to marking the coursework after a break. When moderating coursework, I tend to find more consistent marking when colleagues have used a rubric.
I also felt that students received more feedback than they usually might, but I am conscious of the risk that they drown in the detail. I try to use the free-text boxes to provide a useful overall summary, so as to avoid overuse of QuickMarks.

I don’t worry now about carrying large amounts of paper around or securing the work when I take assignments home. I also don’t need to worry about whether the work I’m marking has been submitted after the deadline – under the new processes established in SAGES, Support Centre colleagues deduct marks for late submission.

I do tend to provide my cohorts with a short piece of generic feedback, including an indicator of how the group performed, showing the percentage of students who attained a mark in each class. I could easily access this information from Grademark/Turnitin.
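
As a purely illustrative sketch of that kind of summary (the classification bands below are the standard UK degree classes and are an assumption, not boundaries taken from the module itself), the following Python snippet computes the percentage of a cohort falling into each class from a list of marks.

from collections import Counter

# Assumed UK classification bands; adjust to the module's own boundaries.
def classify(mark):
    if mark >= 70: return "First"
    if mark >= 60: return "2:1"
    if mark >= 50: return "2:2"
    if mark >= 40: return "Third"
    return "Fail"

def class_distribution(marks):
    counts = Counter(classify(m) for m in marks)
    return {band: round(100 * n / len(marks), 1) for band, n in counts.items()}

print(class_distribution([72, 68, 65, 55, 48, 74, 61]))
# e.g. {'First': 28.6, '2:1': 42.9, '2:2': 14.3, 'Third': 14.3}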

I’m also still able to work through the feedback received by my Personal Tutees. I arrange individual sessions with them, they access ‘My Grades’ on Blackboard during this meeting and we work through the feedback together.

One issue was that, because the settings were configured in a particular way, students could access their feedback as soon as we had finished writing it. This issue was identified quickly and the settings were changed.

REFLECTIONS

My use of online marking has been successful and straightforward, but my experience has been helped very significantly by the availability of two screens in my office. These had already been provided by the School but became absolutely essential. Although I largely mark in my office on campus, when I mark from home I set up two laptops next to each other to replicate having two screens. This set-up allows me to check the student’s work on one screen whilst keeping their coursework open on the other.

One further area of note is that the process of actually creating a rubric prompted a degree of reflection over what we actually want to see from students against each criterion and at different levels. This was particularly true around the grade classification boundaries: what is the difference between a high 2:2 and a low 2:1 in terms of each of the criteria we mark against, and how can we describe these differences in the descriptor boxes of a rubric grid so that students can understand them?

This process of trying to make full use of all of the functions within our marking tools has led to some reflection surrounding criteria, what we want to see and how we might describe this to students.

LINKS

For more information on the creation and use of rubrics within Grademark/Turnitin please see the Technology Enhanced Learning Blog pages here:
http://blogs.reading.ac.uk/tel/support-blackboard/blackboard-support-staff-assessment/blackboard-support-staff-turnitin/turnitin-rubrics/

Introducing online assessment in IFP modules – Dr Dawn Clarke

OBJECTIVES

Colleagues within the IFP wanted to improve the student assessment experience. In particular, we wanted to make the end-to-end process quicker and easier and to reduce printing costs for students. We also wanted to offer some consistency with undergraduate programmes. This was particularly important for those students who stay in Reading after their foundation year to undertake an undergraduate degree. We were also keen to discover whether there would be any additional benefits or challenges which we had not anticipated.

CONTEXT

No IFP modules had adopted online submission, grading and feedback until Spring 2015. We were aware of a number of departments successfully running online assessment within the University, and of the broader move towards electronic management of assessment within the sector as a whole. We introduced online assessment for all written assignments, including work containing pictures and diagrams, on the IFP modules ‘Politics’ (PO0POL) and ‘Sociology’ (PO0SOC) in 2015.

IMPLEMENTATION

We made the decision very early in the process that we would use Turnitin Grademark within the Blackboard Grade Centre. This was consistent with existing use in the Department of Politics.
We created a set of bespoke instructions for students to follow when submitting their work and when viewing their feedback. These instructions were based on those provided by the Technology Enhanced Learning Team but adjusted to fit our specific audience. They were distributed in hard copy and we spent some time in class reviewing the process well before the first submission date.

Submission areas in Blackboard and standard feedback rubric sections were created by the Departmental Administrator, who was already highly experienced.

IMPACT

Overall, the end-to-end assessment process did become easier for students. They didn’t have to travel to campus to submit their assignments and they enjoyed instant access to Turnitin.
Turnitin itself became a very useful learning tool for pre-degree foundation students. It not only provided initial feedback on their work but prompted a dialogue with the marker before work was finally submitted. For students right at the start of their university experience this was extremely useful.

It was equally useful to automate deadlines. Students very clearly understood the exact time of the deadline. The marker was external to this process, allowing them to adopt a more neutral position. This was more transparent than manual systems and ensured a visibly consistent experience for all students.

In addition to this, because students did not have to print out their assignments, they became much more likely to include pictures and diagrams to illustrate their work. This often improved the quality of submission.

All students uploaded their essays without any additional help. A small number also wanted to upload their own PowerPoint presentations of their in-class presentations at the same time, which meant that we needed to work through the difficulty of uploading two files under one submission point.

Moving to online assessment presented a number of further challenges. In particular, we became aware that not all students were accessing their feedback. Arranging online access for external examiners in order to moderate the work presented a final challenge. We then worked to address both of these issues.

REFLECTIONS

It would be really helpful to explore the student experience in more depth. One way to do this would be to include a section specifically focused on feedback within IFP module evaluation forms.
In the future we would like to make use of the audio feedback tool within the Grade Centre. This will maximise the experience of international students and their chances of developing language skills.

Using Quickmarks to enhance essay feedback in the Department of English Literature – Dr Mary Morrissey

Within the department, I teach primarily in Early Modern and Old English. For more details of my teaching please see Mary Morrissey Teaching and Convening

My primary research subject is Reformation literature, particularly from London. I am particularly interested in Paul’s Cross, the most important public pulpit in sixteenth- and seventeenth-century England. I retain an interest in early modern women writers, with a particular focus on women writers’ use of theological arguments. Further details of my research activities can be found at Mary Morrissey Research

OBJECTIVES

A number of modules within the Department of English Literature began using GradeMark as a new marking tool in the Autumn of 2015. I wanted to explore the use of the new QuickMarks function as a way of enhancing the quality of the feedback provided to our students and ensuring the ‘feedback loop’ from general advice on essay writing to the feedback on particular pieces of assessed work was completed.

CONTEXT

The Department developed extensive guidance on writing skills to support student assessment: this includes advice on structuring an argument as well as guidance on grammar and citations. The guide was housed in departmental handbooks and in the assignments folder on Blackboard. There was considerable concern that this resource was underused by students. We did know that the QuickMarks function was being used as part of our online feedback provision, and that it was possible to personalise the comments we were using and to add links to those comments as a way of providing additional explanation to students.

IMPLEMENTATION

In order to allow relevant sections of the essay writing style guide to be accessed via QuickMarks, I copied the document into a Google Doc, divided each section using Google Doc bookmarks and assigned each bookmark an individual URL link. I then used Bitly.com to shorten the URL link assigned to each section by the Google Doc to make it more usable. I then created a set of QuickMarks that included these links to the Style Guide. In this way, students had direct access to the relevant section of the Guide while reading their feedback. So if a student hadn’t adopted the correct referencing format (the Modern Humanities Research Association style in the case of English Literature), the marker would pull a QuickMark across to the relevant point of the essay. When the student hovered over this comment bubble, they would see the text within it but were also able to click on the URL taking them directly to page 7 of the departmental writing style guide, on MHRA citation and referencing. If other colleagues wanted to start adopting the same approach, I simply exported the QuickMark set to them, and they incorporated it into their own QuickMarks bank within seconds.
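
In practice the links were shortened through the Bitly website, but the same step could be scripted. The sketch below is a minimal Python example, assuming use of Bitly's v4 REST API; the access token, Google Doc bookmark URLs and section names are placeholders rather than the actual style guide links.

import requests

# Hypothetical style-guide sections and their Google Doc bookmark URLs.
SECTIONS = {
    "MHRA referencing": "https://docs.google.com/document/d/EXAMPLE_ID/edit#bookmark=id.mhra",
    "Structuring an argument": "https://docs.google.com/document/d/EXAMPLE_ID/edit#bookmark=id.argument",
}

BITLY_TOKEN = "YOUR_BITLY_ACCESS_TOKEN"  # placeholder, not a real credential

def shorten(long_url):
    # Bitly v4 'shorten' endpoint; the returned short link can be pasted
    # into the matching QuickMark comment.
    response = requests.post(
        "https://api-ssl.bitly.com/v4/shorten",
        headers={"Authorization": f"Bearer {BITLY_TOKEN}"},
        json={"long_url": long_url},
    )
    response.raise_for_status()
    return response.json()["link"]

for section, url in SECTIONS.items():
    print(section, "->", shorten(url))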

IMPACT

The Bitly.com tool, used to shorten the URL link, monitored the usage of each link included in our QuickMarks. This showed us how many times and on which date each individual link was used.

To complement this data, I also ran a survey on the student response to online marking and feedback. 35 undergraduate students responded. This showed that students found feedback most useful when it came in forms that were familiar from paper marking, like general comments on the essay and marginal comments throughout the essay. Less familiar types of feedback (links to web resources included in bubble comments accessed by hovering the cursor) were often missed. In the survey, 28 out of 35 students said that they did not receive any links to the writing style guide within their QuickMark comments, even though in fact more students had received them than reported doing so. 3 students did not click on the links. Of the 5 remaining students who did make use of the links, 3 responded positively, mentioning their value in terms of improving their writing skills:

“It was good to refer to alongside my work”
“They helped me to strengthen my writing overall”
“Yes, motivational to actually look at them, whereas on a paper copy you might read the comment and forget, but here you can click straight through, so much easier!”

REFLECTIONS

Some of the new functions available to us in GradeMark allow us to improve our feedback. We shouldn’t just be using online marking tools to replicate existing offline marking processes. We can go much further! But if this is going to be successful, it is really important to inform students about the range of options that online marking makes available, so that they make the most of the systems we use.

Once we do this effectively, we can then explore other options. In English Literature, we are keen to ensure that our Department style guide is used effectively. But there are many other web resources to which we could link through Quickmarks: screencast essay writing guides in Politics and IWLP, as well as the new Academic Integrity toolkit by Study Advice, for example.

By including links within QuickMark comments we help to move students towards greater levels of assessment literacy.

LINKS

Academic Integrity Toolkit

http://libguides.reading.ac.uk/academicintegrity

Examples of assessment support screencasts created by colleagues

Screencast bank

Study Support Screencast Suite

https://www.reading.ac.uk/library/study-advice/guides/lib-sa-videos.aspx

Bitly URL shortener and link management platform

https://bitly.com/

MOVING TOWARDS E-ASSESSMENT: The Use of Electronic Submission and Grading on the Academic Skills Module – Svetlana Mazhurnaya

Profile picture for Svetlana Mazhurnaya

I have been teaching English for over 15 years. I worked on EFL courses in Russia and the UK between 2000 and 2012. I started teaching English for Academic Purposes in 2013 when I joined the International Foundation Programme at the University of Surrey. I have been working as an EAP tutor at the University of Reading since 2014, first on the International Foundation Programme and now on the Pre-sessional English Programme. I have recently become part of the assessment group within ISLI, which creates and administers tests of EAP.

OBJECTIVES

• To familiarise Foundation level students with e-assessment practices as part of their preparation for Undergraduate Courses at UoR
• To simplify assessment administration procedures for multiple module subgroups with varied deadlines on a 20-credit module
• To reduce the marking workload associated with paper submissions
• To deliver more timely and accessible feedback to students

CONTEXT

The International Foundation Programme has a 15-module portfolio delivered by various UoR departments. International students joining the course have to manage multiple assessment deadlines and follow the academic assessment practices used within the departments delivering their core modules. In order to support them, the IFP runs a 20-credit Academic Skills module taught over 2 hours per week and assessed through a combination of formative and summative oral and written assignments marked offline. A combination of Word documents, Excel spreadsheets and online RISIS reports is used for assessment data administration. When I joined the programme in 2015, the team were looking for ways of:

• optimising the administration of a large volume of paper submissions with multiple sub-group deadlines

• reducing the tutors’ marking workload & simplifying the assessment data entry process

• gauging the level of learners’ engagement with feedback

IMPLEMENTATION

The trial

• Having previously used electronic marking tools, I was keen to introduce them on the IFP. With the Module Convenor’s support, I started trialling the Turnitin e-submission and grading tools with my sub-group in spring 2015. It was agreed that a formative assessment piece would be suitable for the trial, to allow room for error, and that learner training could be integrated into the module syllabus as part of developing the students’ referencing and source integration skills. There were 3 classroom demonstrations: how to submit work, how to check originality reports and how to access electronic feedback. Learners were also signposted to the learner training resources available on Blackboard. Some students requested further guidance and were supported through peer-led demonstrations in subsequent lessons. The fact that most students managed the e-submission with minimal training was an encouraging start.

• For the purposes of maintaining consistency in feedback delivery with other module subgroups, I created a QuickMarks set based on the existing module error correction codes that all of the tutors used, and hyperlinked them to the online practice materials we normally recommend to students when suggesting areas for improvement. I also uploaded our mark scheme as a Turnitin rubric. Similarly, I provided global feedback comments on submitted work. The only difference in the feedback delivery was its online mode and the fact that each QuickMark was associated with one of the 5 assessment criteria, such as “organization” or “task completion”, hopefully making the rationale behind the grading more explicit.

• Students reacted favourably to receiving electronic feedback, saying that they liked having instant access to their grades through the “My Grades” feature and that word-processed comments were easier for international students to understand than handwritten ones. They also liked the fact that the QuickMarks we use are hyperlinked to external practice materials, which allows them to work independently. For example, a comment on referencing issues is linked to the referencing guidelines page.

• Interestingly, the electronic assignment inbox showed that the students’ level of engagement with feedback varied: some viewed the marks but did not access the detailed feedback; others read the comments but did not explore the hyperlinks. This has prompted us to run follow-up tutorials that students have to prepare for using the tutor’s feedback. Overall, the trial was largely successful but highlighted the need for more learner training in how to process e-feedback.

Sharing practice

• Because the online marking procedure used with the trial group largely replicated our existing offline marking procedure in a less time-consuming way, other module tutors were keen to experiment with e-assessment. The Programme Director and the Module Convenor were very supportive and allowed me to spend time on one-to-one consultations with team members in order to demonstrate the benefits of using e-assessment tools and to train those who wished to trial them.

Wider implementation

• Over the next couple of terms it was decided to introduce e-submission for all written coursework assignments in order to optimise the administration process. However, tutors were allowed the flexibility of marking online, or of downloading e-submissions in order to mark them in Word or to print them. This approach accommodated our staff training needs and working styles.
The challenge at this stage was that the e-feedback and grades had to be transferred into the official feedback forms and spreadsheets for consistency purposes. In order to avoid multiple data entry, we decided to start using the Turnitin rubric and the Blackboard Grade Centre. Creating a Turnitin rubric was easy and eliminated the need for calculating grades in Excel documents and transferring them to a master spreadsheet. We have not moved away from Excel documents completely, but we have significantly reduced the manual data entry load.

• By autumn 2016, all Academic Skills written assignments were submitted and graded online.

IMPACT

Effect on the students

• Students find the new submission procedure, with a single submission point and an electronic receipt system, easier to follow.

• Many IFP students have used the opportunity to submit work remotely while visiting their families abroad during holidays.

• Many students are using Turnitin Originality reports as a formative learning tool that helps them see how well they have paraphrased or referenced source material and revise their drafts more independently, which has resulted in fewer cases of unintentional plagiarism.

• There is greater transparency for learners as to how their mark was arrived at, because they can see the number and type of QuickMarks comments associated with each criterion their work has been graded on.

• Generally, they now view e-submission and feedback as part of daily university life, which prepares them for the reality of academic study on their future degree courses.

Effect on the Tutors

• Using e-submission has decreased the burden of assessment administration: instead of sorting large volumes of student papers into sub-groups manually, tutors use Grade Centre SmartViews to filter out their students’ submissions.

• Non-submitters are identified and sent a reminder earlier. In the past, non-submitters could only be identified after the anonymous marking process had been completed, which often resulted in a hefty penalty. Now a tutor or the module convenor uses the “e-mail non-submitters” button right after the deadline to chase the students (even if marking is anonymous). As a result, students who failed to submit their assignment or uploaded to the wrong submission point receive an early reminder. For many IFP students, this is a learning curve and getting an early reminder helps them.

• Marking has become easier with Turnitin: tutors can manage the 15-day turnaround time better because they can start marking straight after the deadline and do not have to wait until the printed copies are distributed. Many find QuickMarks hyperlinked to external practice or reference materials helpful as a way to feed forward without giving a lengthy explanation. Some tutors reported being slowed down by internet connection issues. It also took us some time to adjust to the new Feedback Studio interface.

• Using electronic assessment tools has also prompted a professional dialogue about our current assessment practices and highlighted the need for protocols on e-submission, e-moderation and external examining. So it is great news that such guidelines are being developed as part of the EMA work.

• We have gained a better overview of IFP students’ engagement levels because GradeMark allows us to identify and contact non-submitters in one click. It also shows us the number of submission attempts and whether students have accessed feedback prior to tutorials. This helps us to support at-risk students better.

Effects on the Module Convenor

• The module convenor has gained a better real-time overview of the marking process: the number of scripts marked so far and the marking analytics (average, standard deviation, range), all displayed in the Grade Centre column statistics. This has allowed the module convenor to support the tutors by re-distributing scripts or helping to mark and moderate.

• The module convenor can also see how much feedback is given to students across the board, which is important for quality assurance purposes.

• Dealing with possible cases of academic misconduct and late submissions has become easier thanks to Turnitin originality reports and the electronic receipt system.

REFLECTIONS

• Our team’s experience has shown that it is worthwhile trying to integrate electronic assessment literacy into the course syllabus. It would also be great if there were university-wide learner-training sessions, similar to CQSD sessions offered to staff.

• Moving our module toward e-assessment was manageable because our approach to electronic tools has been selective: where our current assessment practices worked well, we only sought to replicate them. When a change was needed, we looked for ways technology could be used to implement it.

• Sharing best practice and providing peer support has proven to be a good way of encouraging more colleagues to use e-assessment tools, because it was not perceived as a top-down driven change.

• Having the programme management support has really helped our small community of e-practitioners to grow. Creating training opportunities and allowing some flexibility during the transition to e-practice have been key to its success. There was a point when our exploratory e-assessment practices needed to be more standardized and programme level decisions were key to maintaining consistency of practice.

FOLLOW UP

Following the successful trial of the e-assessment tools on the Academic Skills and International English Modules, the programme management is keen to encourage other IFP modules to trial them.

In Spring 2017, a member of the Blackboard Team delivered a Staff Development Session on GradeMark to the IFP team.

We are currently exploring the possibility of doing our internal and external moderation electronically.

ELECTRONIC FEEDBACK AND GRADING METHODS – Dr Geoff Taggart

Profile picture for Dr Taggart

Dr Geoff Taggart is a lecturer in the Institute of Education and Programme Director for the Early Years Practice programme at Reading. As part of his secondment to the EMA programme, Geoff decided to run a focus group with students from the IoE to gather perspectives on electronic feedback and grading methods.

OBJECTIVES

To identify student views on:

• The perceived benefits of the three most commonly-used forms of feedback offered by Grademark (i.e. Quickmarks, rubrics and text comments)

• Preferences regarding the emphasis which each form of feedback should be given in a typical piece of work

• Views regarding the interrelationship of the different forms of feedback

CONTEXT

The focus group was composed of 4 MA students (2 international and 2 home), plus one Chinese academic visitor with recent experience of being a student. Their views were therefore representative of students engaged in social science disciplines and may not be transferable to other fields. Also in attendance were myself, Dr Maria Kambouri (engagement in feedback project) and Jack Lambert-Taylor (EMA). The focus group took place at the London Road campus between 5 and 6.30pm on Thursday 18th January.

IMPLEMENTATION

I provided participants with three copies of the same assignment, one marked exclusively with Quickmarks, one marked only with the final text comment and one marked solely according to the rubric. The purpose of this was to isolate and focus attention upon each of the three kinds of electronic feedback provided through the Feedback Studio.

The marking was not meant to be typical (nor to serve as an example of best practice) but to highlight the positive and negative qualities of each kind of feedback. For example, there were a lot more QuickMark comments appended to the assignment than would usually occur. The purpose of this was to emphasise both the positive benefits of maximised contextualised feedback and the negative impression of ‘overload’ which the comments could give. Additionally, the text comments amounted to over 2,500 words and were extremely conversational and wide-ranging.

In a similar way, whilst this strategy deliberately emphasised the dialogical and personal nature of this feedback method, it was also not easy to straightforwardly pick out those points where the student needed to improve. By contrast, the rubric does this very clearly but is not a personal way of providing feedback.

REFLECTIONS

Quickmark feedback

• Students appreciated Quickmarks which contained hyperlinks (e.g. to Study Advice)

• One participant noted that they didn’t like the Quickmarks, on the basis that the printed document does not have interactive links. The same participant suggested that excessive Quickmarks may be intrusive, and give the impression of ‘massacring’ a student’s work. They agreed that less excessive use would be preferable. The same participant noted that there was ‘no positive’ or ‘constructive’ feedback on the page, only problem points. This may be due to the nature of the sample work, which was deliberately of a poor standard; perhaps the same study should be conducted with a high-quality piece of work.

• Another participant noted that narrative summaries can come across as more personal, particularly if negative, and that they preferred Quickmarks on the basis that they provided a more objective tone. Another participant suggested that Quickmarks may come across as more ‘humane’ on that basis, rather than a ‘rant at the end’.

• Another participant suggested that Quickmarks provide good evidence of the thoroughness of the marking process.

• One participant suggested that Quickmarks could indicate which assessment criterion in the rubric they refer to. The facility to do this was explained.

• It was noted that Quickmarks should be written passively rather than directed at the author, as direct address can appear more accusatory. For example, ‘The point is not clear here’ as opposed to ‘you have not been clear here’.

Summary – Quickmarks should be limited in their use, include positive as well as negative comments, include relevant hyperlinks and be focussed on the assignment rather than the student and associated with rubric criteria where possible.

Text comments

• Two participants suggested that narrative summary can provide more detailed feedback and valued the conversational tone. It was also suggested that Quickmarks may be perceived as momentary thoughts without reflection, whilst narrative summary may come later after further thought.

• One participant noted that when you write an essay you aren’t ‘just trying to tick boxes in a rubric, you are trying to say something’. This was a really interesting point which emphasised the student expectation of a personal, dialogical relationship with their tutor (something which rich text comments support).

• Several participants noted that marking with more narrative summary would be more time-consuming, and expressed empathy for academics doing so.

• It was also noted that narrative summary would be better-fitted to a conversation in person, and that subtleties within the feedback would be better expressed through intonation in the voice and facial expressions of the marker. Absent those features, it can come across as very serious, and lacks intricacy.

• Students commented that this kind of feedback can also become too ‘waffly’ and lack focus.

Summary – This kind of feedback gives the strongest impression that the tutor has considered the assignment overall, mulled it over and arrived at a holistic impression, something that was highly valued (contrast with: ‘a marked rubric alone shows that the tutor perhaps didn’t think about it that much’). However, the writing needs to be clearly focussed on specific ways in which the student can improve (i.e. bullet points).

Rubric

• Students commented positively that the rubric showed very clearly how successful an assignment had been in general terms. However, they were concerned that it does not explain how to improve if you have not done very well.

• Students questioned how the final mark is actually calculated through the use of a qualitative rubric where the different elements are unweighted – this was considered to lack full transparency.

• It was unanimously agreed that a rubric without comments was not a preferable form of feedback on its own due to lacking feed-forward information, despite the fact that the adjacent rubric statements (i.e. in the next grade band up) also appear to students in the feedback.

• Students did not like the way in which the rubric statements were represented in a consecutive list (see below) when printed off. They much preferred the grid they were used to (i.e. with grade boundaries as the columns and rubric criteria as the rows).

Summary – a rubric is useful in showing how successful an assignment has been in a broad and general sense. The only way in which it could be more useful would be if the rubric were more specific to the particular assignment (which would mean having multiple rubrics across programmes/the School).

CONCLUSIONS

1. All forms of feedback, taken together, were considered to be useful.

2. The three different forms of feedback need to support each other (e.g. the rubric needs to reflect the written comments, and tutors could use the same language in their text comments as that used in the rubric statements).

3. No matter the means by which feedback is given, students want to feel as though their work has made an impression on their tutor.

4. If tutors want to mark mostly through Quickmarks and rubrics (and provide greatly reduced written comments), this may be perceived negatively by students who expect a more personalised response.

FOLLOW UP

The following points may require consultation from Blackboard:

• One participant suggested that different colours could be used to indicate whether QuickMark feedback is positive or negative.

• A tutor suggested that it would be helpful if tutors could have flexibility about where to position the Quickmarks in their set, otherwise they just appear rather randomly. This is an issue when marking at speed.

• All participants suggested that they like the use of ticks in marking, but no alternative was suggested. Can a tick symbol be included in the quickmark set?

• Tutors are able to expand the rubric when marking. Can it be presented to students in this format?

LINKS

Quickmarks:

https://guides.turnitin.com/01_Manuals_and_Guides/Instructor_Guides/Turnitin_Classic_(Deprecated)/25_GradeMark/QuickMark

Rubrics:

https://guides.turnitin.com/01_Manuals_and_Guides/Instructor_Guides/Turnitin_Classic_(Deprecated)/25_GradeMark/Rubrics_and_Grading_Forms

Text comments:

https://guides.turnitin.com/01_Manuals_and_Guides/Instructor_Guides/Feedback_Studio/Commenting_Tools/Text_summary_comments

Using rubrics to transform the marking and feedback experience – Professor Will Hughes

Professor Will Hughes has made extensive use of rubric grids within Grademark across all of his modules to significantly enhance student engagement with his feedback, student understanding of his marking criteria and student attainment in subsequent essays, whilst making his own experience of marking more efficient.

Profile picture for Prof. Hughes

My research interests include the control and management of building contracts, the management of design in construction, the analysis of organizational structure, and the analysis of procurement systems. The focus of my work is the commercial processes of structuring, negotiating, recording and enforcing business deals in construction. I have developed a framework for modelling and describing the myriad permutations of procurement variables, to aid in analysis and understanding of the complexities of organizing the procurement of built facilities. This has been incorporated into a British Standard (2011) on construction procurement.

OBJECTIVES

As convenor of a range of modules typically enrolling 120 students who submit around 3 pieces of work each year, I wanted to ensure that I had a really effective approach to marking and the provision of feedback. I wanted all of my students to engage fully with the feedback I provided and, after reading each piece of feedback, to understand thoroughly exactly what they had done well and where they could make improvements. But I needed to achieve all of this in an effective and efficient way.

CONTEXT

National Student Survey results suggest that a significant number of students do not feel that they have access to marking criteria prior to submission and do not understand how to improve their performance based on the comments provided. Often the provision of more and more free text feedback doesn’t appear to feed into higher levels of student attainment and satisfaction. At the same time, increasing student numbers and broader workload demands have increased pressures on all lecturers across the sector. In response I decided to adopt the use of rubric grids as one way to start to address these key issues.

IMPLEMENTATION

In 2015 I created a rubric grid in which I listed criteria along the left-hand side and then unpicked what performance against each of those criteria might look like, describing different levels of performance from the lower 10-20% range all the way up to outstanding performance in the 90-100% range. It was extremely interesting to attempt a clear articulation of the differences between grades of failure and grades of excellence. Explaining, for example, the difference between 90% and 100% for a specific criterion is not something I had ever done before. A screenshot of a typical grid is shown below.

I actually created a slightly different grid for each piece of assessment, but it would be equally possible to create a slightly less assessment-specific grid that could be used across a whole module or even a whole programme.

Crucially, I shared the criteria for assessment with my students in the assignment brief so they knew, well ahead of submission, what the marking criteria themselves looked like.

I created all of this content in a standard Excel spreadsheet first and then clicked on the ‘rubric manager’ button and then ‘import’ to transfer my grid into Grademark. I could have created it directly within Grademark, in an incredibly simple process, by clicking on the ‘rubric’ icon, then ‘rubric manager’, ‘rubric list’ and ‘create new rubric’, before populating the grid with specific criteria and scales. Once the rubric grid is attached to one assignment, Grademark automatically attaches it to all the assignments within the submission point.
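As a rough illustration of drafting such a grid outside Grademark, the short Python sketch below writes a small rubric to a CSV file that could be opened and refined in Excel before importing. The criteria, grade bands and descriptors are invented for illustration, and the exact layout expected by the ‘import’ option may differ, so treat this as a hypothetical starting point rather than the actual template.

# Hypothetical sketch only: drafting a rubric grid as a CSV file that can be
# opened and refined in Excel before importing into Grademark. The criteria,
# grade bands and descriptors are invented; the real import template may differ.
import csv

grade_bands = ["0-39", "40-49", "50-59", "60-69", "70-100"]

criteria = {
    "Argument": [
        "No discernible argument",
        "Argument unclear or inconsistent",
        "Argument present but uneven",
        "Clear, sustained argument",
        "Compelling, well-developed argument",
    ],
    "Use of evidence": [
        "No supporting evidence",
        "Sparse or irrelevant evidence",
        "Adequate supporting evidence",
        "Well-chosen evidence throughout",
        "Evidence critically evaluated",
    ],
    "Referencing": [
        "Referencing absent",
        "Frequent referencing errors",
        "Mostly accurate referencing",
        "Accurate, consistent referencing",
        "Exemplary referencing",
    ],
}

# Criteria as rows and grade bands as columns - the grid layout described above.
with open("rubric_draft.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Criterion"] + grade_bands)
    for name, descriptors in criteria.items():
        writer.writerow([name] + descriptors)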

This meant that each time I opened a piece of work in Grademark, I could click on the rubric icon to display the grid. I could then simply click the box against each criterion that applied to the particular assessment I was marking, to show the student how they had performed in that particular skill.

In addition to using the rubric grid to classify student performance against individual marking criteria, I would also provide in-text comments and general comments in the free-text feedback box to ensure that really tailored and specific feedback was also provided to all of my students. As I have become more experienced, I have tried to stop myself from adding in-text comments, as these tend to result in detailed editing comments, which are not as helpful as feedback.

IMPACT

From the first time I used this approach, students have been enthusiastic. They have emailed me personally, as well as commenting in module evaluation forms, that they found the feedback more useful than anything they had received in their education to date. I no longer have students complaining that their mark is too low and asking whether I have made a mistake. Rather, those who would have complained begin by acknowledging that the mark is clear and well-justified and that they would like to discuss how to improve. This positive approach from students is refreshing.

REFLECTIONS

One of the things that made this activity successful was the prior development of a feedback library, which provided a wide-ranging list of comments to draw from and summarise. Another has been the move towards making comments positive rather than negative. It can be very difficult to focus on what students have done well in a poor submission, but it has proved to be the single most valuable thing. The performance of weak students improves significantly when they are given encouragement rather than discouragement. And strong students appreciate being given indications of how they could improve as well, which, they tell me, is rare but welcome. I still have a way to go in making all of the comments positive and encouraging. If I were starting over, I would begin by spending time thinking seriously about how to sound encouraging and positive when students submit very low-quality work. One thing to be careful about is that once the rubric has been attached to an assignment, it cannot be edited without being detached and losing all the grading. At first, I copied every mark into an Excel spreadsheet in case there were errors or omissions in my rubric that I hadn’t noticed until using it.

FOLLOW UP

Every piece of work I set up in Turnitin gives me the opportunity to fine-tune the approach. Each piece of work has its own criteria for assessment, so I tend to develop the rubrics in Excel, which makes them easier to adapt for the next piece of work. This also makes them easy to share with colleagues. If anyone would like further examples, I would be happy to share more recent ones as Excel files.

LINKS

Simple illustrated instructions for creating a similar qualitative rubric within Grademark, as well as standard or custom rubrics and free-response grading forms, can be found here:

https://guides.turnitin.com/01_Manuals_and_Guides/Instructor_Guides/Turnitin_Classic_(Deprecated)/25_GradeMark/Rubrics_and_Grading_Forms

A six-minute Grademark video demonstrating the creation of a similar rubric and other key features is available here:

https://www.youtube.com/watch?v=BAG44Fpm55o

Connecting with the Curriculum Framework: Using focus groups to diversify assessment (Part 2)

Dr Madeleine Davies and Michael Lyons, School of Literature and Languages

Overview

The Department of English Literature (DEL) has run two student focus groups and two whole-cohort surveys as part of our Teaching and Learning Development Fund ‘Diversifying Assessments’ project. This is the second of two T&L Exchange entries on this topic. Click here for the first entry, which outlines how the feedback received from students indicates that their module selection is informed by the assessment models used by individual modules. Underpinning these decisions is an attempt to avoid the ‘stress and anxiety’ that students connect with exams. The surprise of this second round of focus groups and surveys is the extent to which this avoidance appears to dominate students’ teaching and learning choices.

Objectives

  • The focus groups and surveys are used to gain feedback from DEL students about possible alternative forms of summative assessment to our standard assessed essay + exam model. This connects with the Curriculum Framework in its emphasis on Programme Review and also with the aims of the Assessment Project.
  • These conversations are designed to discover student views on the problems with existing assessment patterns and methods, as well as their reasons for preferring alternatives to them.
  • The conversations are also being used to explore the extent to which electronic methods of assessment can address identified assessment problems.

Context

Having used focus groups and surveys to provide initial qualitative data on our assessment practices, we noticed a widespread preference for alternatives to traditional exams (particularly the Learning Journal) and decided to investigate the reasons for this further. The second focus group and subsequent survey sought to identify why the Learning Journal in particular is so favoured by students, and we were keen to explore whether teaching and learning aims were perceived by students to be better achieved via this method than by the traditional exam. We also took the opportunity to ask students what they value most in feedback: the first focus group and survey had touched on this, but we decided this time to give students the opportunity to select four elements of feedback which they could rank in order of priority. This produced more nuanced data.

Implementation

  • A second focus group was convened to gather more detailed views on the negative attitudes towards exams, and to debate alternatives to this traditional assessment method.
  • A series of questions was asked to generate data and dialogue.
  • A Survey Monkey was circulated to all DEL students with the same series of questions as those used for the focus group in order to determine whether the focus group’s responses were representative of the wider cohort.
  • The Survey Monkey results are presented below. The numbers refer to student responses in each category (e.g. in graphic 1, 50 students selected option (b)). Graphics 2 and 5 allowed students to rank their responses in order of priority.

Results

  • Whilst only 17% of the focus group preferred to keep the traditional exam + assessed essay method, the survey found the aversion to exams to be even more pronounced: 88% of students preferred the Learning Journal over the exam, and 88% cited the likelihood of reducing stress and anxiety as a reason for this preference.
  • Furthermore, none of the survey respondents wanted to retain the traditional exam + assessed essay method, and 52% were in favour of a three-way split between types of assessment; this reflects a desire for significant diversity in assessment methods.
  • We found it helpful to know precisely what students want in terms of feedback: ‘a clear indication of errors and potential solutions’ was the overwhelming response. ‘Feedback that intersects with the Module Rubric’ was the second-highest scorer (presumably students identified a connection between the two).
  • The students in the focus group mentioned a desire to choose assessment methods within modules on an individual basis. This may be one issue in which student choice and pedagogy are not entirely compatible (see below).
  • Assessed Essay method: the results seem to indicate that replacing an exam with a second assessed essay is favoured across the Programme rather than being pinned to one Part.

Reflections

The results in the ‘Feedback’ sections are valuable for DEL: they indicate that clarity, diagnosis, and solutions-focused comments are key. In addressing our feedback conventions and practices, this input will help us to reflect on what we are doing when we give students feedback on their work.

The results of the focus group and of the subsequent survey do, however, raise some concerns about the potential conflict between ‘student choice’ and pedagogical practice. Students indicate that they not only want to avoid exams because of ‘stress’, but that they would also like to be able to select assessment methods within modules. This poses problems because marks are in part produced ‘against’ the rest of the batch: if the ‘base-line’ is removed by allowing students to choose assessment models, we would lack one of the main indicators of level.

In addition, the aims of some modules are best measured using exams. Convenors need to consider whether a student’s work can be assessed in non-exam formats but, if an exam is the best test of teaching and learning, it should be retained, regardless of student choice.

If, however, students overwhelmingly choose non-exam-based modules, this would leave modules retaining an exam in a vulnerable position. The aim of this project is to find ways to diversify our assessments, but diversification could leave modules that retain traditional assessment patterns vulnerable to students deselecting them. This may have implications for benchmarking.

It may also be the case that the attempt to avoid ‘stress’ is not necessarily in students’ best interests. The workplace is not a stress-free zone and it is part of the university’s mission to produce resilient, employable graduates. Removing all ‘stress’ triggers may not be the best way to achieve this.

Follow up

  • DEL will convene a third focus group meeting in the Spring Term.
  • The co-leaders of the ‘Diversifying Assessments’ project will present the findings of the focus groups and surveys to DEL. We will outline the results of our work and call on colleagues to reflect on the assessment models used on their modules, with a view to volunteering to adopt different models if they think this appropriate to the teaching and learning aims of their modules.
  • This should produce an overall assessment landscape that corresponds to students’ request for ‘three-way’ (at least) diversification of assessment.
  • The new landscape will be presented to the third focus group for final feedback.

Links

With thanks to Lauren McCann of TEL for sending the link below, which includes a summary of students’ responses to various types of ‘new’ assessment formats.

https://www.facultyfocus.com/articles/online-education/assessment-strategies-students-prefer/

Conclusions (May 2018)

The ‘Diversifying Assessment in DEL’ TLDF Mini-Project revealed several compelling reasons for reflecting upon assessment practice within a traditional Humanities discipline (English Literature):

  1. Diversified cohort: HEIs are recruiting students from a wide variety of socio-cultural, economic and educational backgrounds and assessment practice needs to accommodate this newly diversified cohort.
  2. Employability: DEL students have always acquired advanced skills in formal essay-writing but graduates need to be flexible in terms of their writing competencies. Diversifying assessment to include formats involving blog-writing, report-writing, presentation preparation, persuasive writing, and creative writing produces agile students who are comfortable working within a variety of communication formats.
  3. Module-specific attainment: assessment conventions in DEL, particularly at Part 2, follow a standardised format (33% assessed essay and 67% exam). The ‘Diversifying Assessment’ project revealed the extent to which module leaders need to reflect on the intended learning outcomes of their modules and to design assessments that are best suited to attaining them.
  4. Feedback: the student focus groups convened for the ‘Diversifying Assessment’ project returned repeatedly to the issue of feedback. Conversations about feedback will continue in DEL, particularly in relation to discussions around the Curriculum Framework.
  5. Digitalisation: eSFG (via EMA) has increased the visibility of a variety of potential digital assessment formats (for example, Blackboard Learning Journals, Wikis and Blogs). This supports diversification of assessment and it also supports our students’ digital skills (essential for employability).
  6. Student satisfaction: while colleagues should not feel pressured by student choice (which is not always modelled on academic considerations), there is clearly a desire among our students for more varied methods of assessment. One Focus Group student argued that fees had changed the way students view exams: students’ significant financial investment in their degrees has caused exams to be considered unacceptably ‘high risk’. The project revealed the extent to which Schools need to reflect on the many differences made by the new fees landscape, most of which are invisible to us.
  7. Focus Groups: the Project demonstrated the value of convening student focus groups and of listening to students’ attitudes and responses.
  8. Impact: one Part 2 module has moved away from an exam and towards a Learning Journal as a result of the project and it is hoped that more Part 2 module convenors will similarly decide to reflect on their assessment formats. The DEL project will be rolled out School-wide in the next session to encourage further conversations about assessment, feedback and diversification. It is hoped that these actions will contribute to Curriculum Framework activity in DEL and that they will generate a more diversified assessment landscape in the School.

Using Grademark to write high-quality feedback more rapidly in the School of Law – Dr Annika Newnham

Profile picture for Dr Newnham

Dr Newnham is the module convenor for LLB Family Law. Her areas of interest include Child Law, Autopoietic Theory and the Common Intention Constructive Trust.

Since 2015, Annika has gradually personalised the ‘Quickmarks’ function within Turnitin Grademark to be both discipline-specific and assignment-specific. In addition, Dr Newnham has developed a lengthy comments bank which she can draw on and personalise, ensuring that she can write high-quality feedback more quickly and speeding up the entire marking process.

OBJECTIVES

The School of Law currently operates online submission, marking and feedback for the vast majority of assessed work. As part of this process it makes extensive use of Turnitin Grademark and some of the functionality on offer, including Quickmarks. Given the large number of students submitting work within the School and the need to provide high-quality feedback quickly, I wanted to use these new tools to speed up the entire marking process and support the quality and quantity of feedback offered.

CONTEXT

The School of Law recruits strongly, makes extensive use of summative assessment and maintains a large number of core modules. Online assessment has been adopted, in part, to help support the continued provision of high quality feedback in this context while ensuring that feedback is returned to students within 15 working days.

IMPLEMENTATION

Grademark allows for the customisation of Quickmarks by individual markers. I very quickly began to customise the Quickmarks that were available to me by adding comments that I make frequently. Gradually, over time, my Quickmarks set has expanded to include a whole series of comments which range from just a few words to more lengthy sections of text. Dragging these across to relevant sections of text saves me a considerable amount of time because I’m not writing out the same type of comment again and again. I’ve even developed my sets of Quickmarks to be specific not only to each module I teach but to each assignment I mark within that module. I carefully save each set with a different name so I can easily access it again. Grademark even remembers my Quickmarks sets from one year to the next, so my collection appears automatically when I open each new essay.

I wanted to explore the possibilities of reducing marking time whilst maintaining the quality and quantity of feedback in other areas. This approach worked very well for targeted in-text comments throughout the essay but, like most markers, I also leave summative text in the general comments section in the Grademark sidebar so that students have a sense of my overall thoughts. I started to compile a lengthy list of comments that I use extensively in a simple and separate Word document. I ordered each set under key headings. Some of these are generic for all essays: writing style, referencing, structure and so on. There are also sets of comments on how students have tackled a particular issue in law, for example how well they have presented balanced arguments on commercial surrogacy, or have understood the different stages of a cohabitant’s claim for a share in her ex-partner’s house. Each heading contains 8-10 different sentences or longer sections covering a wide range of areas I may want to comment on. I am then able to cut and paste the most relevant into the Grademark comment box and, if needed, rewrite it to suit the specific essay I’m working on. This process has become even more efficient since the arrival of a second screen: I can list my commonly used statements on the left-hand screen and cut and paste or drag them over to the actual essay on my right-hand screen. Although I might then want to personalise the statement, I still save a significant amount of time in comparison to typing everything out repeatedly for each essay.
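As a rough illustration of this kind of comments bank, the short Python sketch below stores reusable summative comments under key headings and prints the candidates for a chosen heading, ready to be pasted into the Grademark general comments box and then personalised. The headings and comments are invented examples rather than the actual bank described above.

# Hypothetical sketch of a comments bank organised under key headings.
# The headings and comments are invented examples, not the actual bank
# described above, which is kept in a separate Word document.
comments_bank = {
    "writing style": [
        "Your writing is generally clear, but several sentences are overly long and would benefit from being split.",
        "Aim for a more formal register and avoid colloquial phrasing in legal writing.",
    ],
    "referencing": [
        "Check that your citations follow the required referencing style consistently.",
        "Several claims need supporting authority; add references to case law or statute where indicated.",
    ],
    "structure": [
        "A brief roadmap paragraph at the start would help the reader follow your argument.",
        "Consider using subheadings to separate the distinct stages of your analysis.",
    ],
}

def comments_for(heading):
    """Return the reusable comments filed under a heading (case-insensitive)."""
    return comments_bank.get(heading.lower(), [])

# List the candidates for one heading so the most relevant can be copied into
# the Grademark comment box and then personalised for the specific essay.
for comment in comments_for("referencing"):
    print("-", comment)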

IMPACT

I maintain a balance between the use of Quickmarks, my comments bank and specific comments written for each piece of work. Students should not receive exactly the same comments time and time again. Feedback should not become a highly mechanised process. But Quickmarks and comments banks can be used as a starting point or work alongside very specific comments written for a particular piece of work. In this way I can maintain the quality and quantity of my feedback whilst speeding up the marking process considerably. In particular, this approach seems to ensure greater consistency between essays in terms of the amount of feedback that each student receives because it is so much quicker and easier to insert comments. More broadly it feels like a far more efficient process and is certainly a more fulfilling task to undertake.

REFLECTIONS

Quickmarks and cut-and-paste comments have made marking feel much less like a chore, and they remove the irritation often felt when you have to correct the same misunderstanding again and again for different students.

LINKS

Turnitin Quickmark

https://guides.turnitin.com/01_Manuals_and_Guides/Instructor_Guides/Turnitin_Classic_for_Instructors/25_GradeMark/QuickMark