Online peer assessment of group work tools: yes, but which one? By Heike Bruton (a TLDF project)

A short while ago I wrote the post “Group work: sure, but what about assessment?”. That post outlines a TLDF-funded project in which Cathy Hughes and I investigated tools for the peer assessment of group work. Cathy and I have now produced a full report, which is available for download here (Cathy Hughes and Heike Bruton TLDF peer assessment report 2014 07 02) and is summarised below.


Aim and methods

The aim of the project was to evaluate available online systems for the assessment of students’ contribution to group work. In order to establish our criteria for evaluating these systems, we conducted a series of interviews with academics across the university. This gave us an understanding of how peer assessment (PA) is used in a range of subjects, and of the different perspectives on the requirements for a computer-based system.


Systems in use and evaluation criteria

Among our eleven interviewees we found five separate PA systems (including Cathy’s own system) in use by six departments. Notably, Cathy’s tool appeared to be the only entirely computer-based system. Based on the insights gained from the interviews, we developed a set of criteria against which we evaluated available PA systems. These criteria are pedagogy, flexibility, control, ease of use, incorporation of evidence, technical integration and support, and security.


Available online systems

We identified three online tools, not currently in use at the university, which apply PA specifically to the process, not the product, of group work. These three systems are iPeer, SPARKplus and WebPA. In addition, we critically assessed Cathy’s own system, which is already being used in several departments across the university. After investigating PA systems currently in use at Reading and applying the above-named criteria to the four PA systems under investigation, we came to a number of conclusions, which resulted in a recommendation.



There is a strong sense of commitment among staff to using group work in teaching and learning across the university. PA can serve as a mechanism to recognise hard work by students and also to provide feedback aimed at encouraging students to improve their involvement with group work. Whilst any PA system is simply a tool, which can never replace the need for active engagement by academics in their group work projects, such a tool can make PA more effective and manageable, especially for large groups.



Our recommendation, then, is that WebPA should be considered for use within the university. Our research suggests that it could be adopted with relative ease, particularly given the strong and active community surrounding this open-source software. While it may not be appropriate for everyone, we believe it could be a useful tool to enhance teaching and learning, potentially improving the experience of group work assessment for both staff and students.

Cathy and I will be delivering a number of Teaching and Learning seminars on PA of group work in the near future. To download the full report, click here (Cathy Hughes and Heike Bruton TLDF peer assessment report 2014 07 02). To try out a stand-alone demo version of WebPA, follow this link:

Cathy and Heike will be presenting their project in a TEL Showcase event in the spring term. Please check

Coursework redesign for an integrated multidisciplinary module

Dr Mark Dallas, School of Chemistry, Food and Pharmacy


Within the School of Chemistry, Food and Pharmacy, coursework on a Part Two Pharmacy module, Therapeutics and Medicines Optimisation B (PM2B), was redesigned to reflect the multidisciplinary nature of the new module. In their assessed work, students demonstrated a better appreciation of the interconnectivity of the disciplines of Pharmacy, and students also expressed their enjoyment of the redesigned assessment.


  • Redesign coursework to reflect the multidisciplinary nature of PM2B.
  • Implement and assess a learning exercise that allows Pharmacy students to integrate their understanding of different Pharmacy disciplines.


In 2011 the General Pharmaceutical Council (GPhC), the regulating body for the pharmacy profession within England, Scotland and Wales and the body responsible for accreditation of the Masters of Pharmacy degree course at the University of Reading, adopted a new set of standards for the initial education and training of pharmacists. The first criterion of Standard 5 stressed the need for integrated curricula. With modules within Pharmacy at the University of Reading being altered to reflect these standards, the existing coursework structures were no longer suitable, as they did not align with the integrated nature of the new modules.


The aims, delivery and assessment of the module’s coursework were completely redesigned. Previously, students had been assessed solely by a written report, and the datasets used only reflected one discipline of pharmacy.

The new coursework was modelled on a modern-day multidisciplinary drug discovery programme, with the intention that this would allow students to appreciate the integrative nature of pharmacy as a science, and the multidisciplinary nature of their subject.

There were four assessed components that comprised the module’s coursework. A project report contributed 50% of the coursework final mark; a poster presentation 20%; reflective diaries 15%; and engagement 15%. By having multiple types of assessment it was hoped that students would engage with the topics, and that it would promote deep learning, while allowing students an opportunity to demonstrate a variety of skill sets. The poster presentation and project reports saw students assessed as groups. To assess engagement, a rubric was created, rating students on their academic engagement and their group engagement based on clearly defined criteria.
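The way the four weightings combine into a final coursework mark can be sketched as a simple weighted average. This is a minimal illustration only: the weights are those described above (report 50%, poster 20%, diaries 15%, engagement 15%), but the component marks in the example are entirely made up.

```python
# Weights for the four assessed components of the PM2B coursework,
# as described in the text. Each component mark is assumed to be out of 100.
WEIGHTS = {
    "project_report": 0.50,
    "poster_presentation": 0.20,
    "reflective_diaries": 0.15,
    "engagement": 0.15,
}

def final_mark(component_marks: dict) -> float:
    """Return the weighted average of the component marks."""
    return sum(WEIGHTS[name] * mark for name, mark in component_marks.items())

# Hypothetical component marks, purely for illustration:
marks = {
    "project_report": 62,
    "poster_presentation": 70,
    "reflective_diaries": 68,
    "engagement": 75,
}
print(final_mark(marks))  # 0.5*62 + 0.2*70 + 0.15*68 + 0.15*75 = 66.45
```

Because the weights sum to 1, the final mark stays on the same 0–100 scale as the component marks.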


The redesigned assessment was enjoyed by students, and in their assessed work students consistently demonstrated a sufficient understanding of the interconnectivity of the disciplines of Pharmacy. Marks on the written report, however, were lower than had been hoped, and suggested that some adjustment to this aspect of the assessment was necessary.


Having the coursework comprise different assessment types was valuable, as it allowed staff to gain insight into students’ knowledge retention, critical thinking, and ability to work in a wider context.

The written report represented an assessment format with which students would be familiar, given the format of assessments at Part One. The value of having students produce a written report was that it allowed students to be tested on their application, rather than mere acquisition, of knowledge to address a problem.

Having students produce a poster presentation as part of their assessment on the module encouraged students to utilise different skills in their work. Communication skills, which had previously not been assessed in the module but are among the skills the University of Reading seeks to develop in its graduates, became a central element of assessment. By having to produce a poster that would then be presented to their peers, students were encouraged to engage deeply with the topic and to take pride in what they created, and the exercise created an opportunity for peer learning.

Having group work as an assessed element was of great value in a multidisciplinary module. Because group members have different strengths, each is able to make a valuable contribution to the group and to benefit from learning from others’ strengths in turn. While group work does introduce the possibility of ‘free-riding’, whereby students do not engage and instead rely on the rest of the group to deliver a good final mark (something that students commented on in their feedback), the strength of group assessment is the key communication and collaborative skills it demands.

The creation of reflective diaries is especially pertinent to students in healthcare professions, as reflective writing is a central element of their continuing professional development. An additional and unforeseen benefit of this assessment was the insight it provided into students’ thought processes, which was valuable for making adjustments to the module.

Assessing student engagement was one of the challenging aspects of the redesigned assessment. Having a clear rubric made marking a more objective process.

Follow up

To address the issues that were introduced by having group work assessed, a session on group dynamics has been introduced to the module in order to better set expectations. The skills addressed in this session will be valuable to students not only in this module, but can also be applied across their academic and professional experience.

A further innovation has been the use of online project management tools. This has allowed students to better manage their work and engagement, given assessors access to evidence to help with marking, and enabled group work to be better monitored.

Flipping assessment?! by Dr Karen Ayres

Like many colleagues, I have attended a number of interesting talks on the ‘flipped classroom’ approach, whereby, in a role reversal, the main delivery of information takes place outside of the classroom, and the contact time is used instead for reinforcing learning. I haven’t quite identified yet how I can make use of this approach in my own teaching, but I have been inspired to try ‘flipping’ an assessment in one of my modules. Admittedly this may be the wrong terminology to use here, but what I mean by this is a role reversal when it comes to assessment. In one of my modules this year, instead of asking students to produce a guide on using a statistics computing package, which I would usually then assess for clarity, accuracy and effectiveness as a training resource, I instead provided students with a piece of work I had created (with deliberate errors and other problems!) and asked them to assess it as if they were the lecturer.

The approach of engaging students in marking is of course not new, since peer marking is used by many lecturers. However, this was not a standard peer marking exercise, because I did not provide them with a marking scheme, nor a set of solutions to use. I left it to the students to decide how they wanted to split up the 100 marks, and what they wanted to award marks for. By doing it this way, my aim was to see whether they knew what the key elements of an effective training guide were, by showing how they thought one should be marked. They were also asked to provide effective feedback on the work, on the understanding that feedback should be constructive and should benefit learning, and that the feedback should justify the mark they awarded (I didn’t use the term ‘feed-forward’, but did ask them to consider what they would find useful if the work being commented on was their own). My aim here was to determine whether they understood how the key elements of an effective training guide should be put into practice, and also to see if they were able to identify technical inaccuracies in the work. It is this last point which I feel the flipped assessment approach may be particularly beneficial for. Often students may misunderstand something but not include it in their own piece of work, meaning that this misunderstanding escapes identification. By asking that they mark work which includes errors, and by requiring that they give feedback about why it’s an error, I feel that I’m demanding a deeper level of subject knowledge from them than I would be doing in a traditional assignment. Of course, it’s then important that I go through these errors with them afterwards, to make sure that no misunderstandings have been created!

I’m pleased to report that I was very impressed with what my students did on this assignment (obviously I had to assess their assessment!). It was a group assignment, and all groups produced a very detailed marking scheme, in a grid layout – I hadn’t given them any pointers on this, so the fact that they decided to do it like this was encouraging. The written feedback that they provided on the script they were given was similarly impressive, and in some cases of the same standard that my colleagues and I routinely provide. What was more interesting was the fact that alongside their various annotations on the script, they provided a separate, very detailed, document listing errors and issues with the work, including further feed-forward comments. If students all expect this level of detailed feedback on their own work as standard, this might explain why some are unhappy with the (still reasonably detailed) feedback they do receive!

In summary, my aim in designing an assessment in a ‘flipped’ way was to encourage a deeper level of thought, and to assess a deeper level of understanding, than I felt was achieved by the usual approach. I feel that those who are tasked with assessing the knowledge and learning of others need to have a deeper than usual understanding of both the technical and communication sides of the discipline (certainly in mathematics and statistics). After the success of this trial run I will definitely be looking at how else I can use this different type of assessment in my other modules. My next step is to consider how to use something like this for a quantitative assignment, for example by asking them to both produce their own set of solutions with marking scheme, and then to use them to mark my piece of work that I submit to them for assessment!

I-TUTOR: Intelligent Tutoring for Lifelong Learning

The University of Reading is a project partner in a prestigious project to develop a multi-agent based intelligent tutoring system to support online teachers, trainers, tutors and learners: I-TUTOR.

I-TUTOR, which stands for Intelligent Tutoring for Lifelong Learning, is to be applied in open source learning environments, and will monitor, track, analyse and give formative assessment and feedback to students within the learning environment, while giving input to tutors and teachers involved in distance learning to enhance their role during the process of teaching. Find out more on the project blog and website at

Funded with support from the European Commission, the project started in January 2012 and is a partnership between the University of Macerata as coordinating institution, and the University of Palermo, University of Reading, Budapest University of Technology and Economics, ITEC, Militos, and Eden.

Reaching the midterm of the project, the partnership has published its first project newsletter to share the results achieved – including a comprehensive study of intelligent tutoring systems, as well as open-source code for a survey chatbot that anyone is welcome to test.

The team would welcome any feedback and suggestions. To find out more, or let them know what you think, contact Karsten Lundqvist, Lecturer in the School of Systems Engineering here at Reading.

Using technology to find low-tech solutions by Mary Morrissey

Like a lot of people, I do not consider myself particularly savvy about technology: when I find that something is useful to me, I learn how to use it. That said, I think we can use learning technologies to come up with ‘low tech’ solutions to our teaching needs. Among the advantages is efficiency in terms of time and money: we already have the kit, and we know how to use it. I offer the following as an example.

It is often difficult to make sure that students are aware of detailed regulations that affect their work but which cannot be summarised or displayed easily. Conventions for writing and referencing are a good example in our department. Last summer, Pat Ferguson (our Royal Literary Fund fellow, whose role in the department is to help students improve their writing skills) observed that we had excellent advice on essay writing, but it was in our large Student Handbook, distributed at the start of the first year. Pat suggested that we make this information available separately.

I thought this was a great idea. I noticed there was other information in the handbook that students need throughout their degree too: there was information about our marking criteria, and there were some very helpful examples that showed the difference between plagiarism and poor academic practice. I took these sections, and I created three separate documents with titles that I hoped would be self-explanatory: ‘Style Guide for English Literature students’; ‘Understanding Feedback – Marking Criteria’; and ‘Plagiarism’.

I uploaded all three documents to Blackboard’s ‘Fileshare’ area for the department, and I created links from the Blackboard courses for all our Part 1 and Part 2 modules. (I am working on the Part 3 modules, but there are over 50 of those!) I also posted the documents in our central ‘Information for English Literature Students’ Blackboard organisation, on which all staff, undergraduates and postgraduate students are enrolled. By keeping the documents in ‘Fileshare’ I can update them every year, to include new ‘standard paragraphs’ for example. I overwrite the old file with the newer version, and all the daughter versions linked to it update automatically.

This isn’t rocket science, but I think it has helped us make useful information more readily available. Having it posted in most of our Blackboard courses makes it more visible; having three small documents (in PDF format) makes them easier to download and print.

Where would I go from here? Students have told me that they like a website with exercises that help with grammar and writing skills, which we recommended. It is based at the University of Bristol:

I would like to create an interactive resource like this, and I know it can be done. The University of Aberdeen took the paper-based ‘Guide to Written Work’ (on which we all relied when I worked there!) and turned it into an internet-based resource with exercises:

If anyone knows any low-tech ways that I could do something similar, please let me know!