An evaluation of online systems of peer assessment for group work

Cathy Hughes and Heike Bruton, Henley Business School
catherine.hughes@reading.ac.uk

Overview

Online peer assessment systems were evaluated for their suitability as a platform for conducting peer assessment in the context of group work.

Objectives

  • To establish the criteria against which peer assessment systems should be evaluated.
  • To evaluate the suitability of online systems of peer assessment.
  • To provide a way forward for Henley Business School to develop peer assessment for group work.

Context

There are many well-documented benefits of group work for students. However, members of a group may not contribute equally to a task, and it can be difficult for tutors to judge accurately the contributions made by individuals within a group. Peer assessment addresses this by allowing students to assess the process of group work. Within Henley Business School, Cathy Hughes has used peer assessment for group work in Real Estate and Planning, and developed a bespoke web-based system to facilitate this. As this system was not sustainable, the project was funded to evaluate the suitability of other web-based peer assessment systems for use at the University.

Implementation

The first step was to establish how academics across the University use peer assessment in a range of subjects, so that criteria could be derived against which available online systems of peer assessment for group work could be evaluated. This was done through a series of interviews with academics who already used peer assessment, who volunteered after a call for respondents was made through the T&L distribution list. The eleven interviewees were drawn from across seven departments. The interviews revealed that five separate peer assessment systems were in use across the University; with one exception, these had been in use for four years or fewer. Peer assessment at the University of Reading has been used at all Parts and for a range of group sizes (between three and ten, depending on the task being performed). The number of credits affected by peer assessment varied between 1 and 20; no module used peer assessment to determine 100% of the final mark, though in one case it did contribute 90% of the final mark.

With peer assessment of group work, students may be required to mark their peers against set criteria, or in a more holistic manner whereby students award an overall mark to each of the others in their group. Given the subjective nature of the marking process, peer assessment can be open to abuse, and so interviewees stressed the need to be able to check and moderate marks. All interviewees stated that they collated evidential material which could be referred to in case of dispute.

All systems in use generated numerical data on an individual's performance in group work, but requirements for feedback differed. Some users of peer assessment used the numerical data to construct feedback for students, and in one case students provided their peers with anonymised feedback.
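To make this concrete, the sketch below shows one simple way in which numerical peer ratings can be turned into adjusted individual marks: each student's mark is the group mark scaled by the ratio of their average rating to the group average. This is a minimal illustration only; the function name, the 1–5 rating scale and the cap on the weighting are assumptions made for the example, not a description of any system in use at the University.

```python
def adjusted_marks(group_mark, ratings, cap=1.2):
    """Scale a shared group mark into individual marks using peer ratings.

    group_mark -- the tutor's mark for the group's output (0-100)
    ratings    -- dict mapping each student to the list of peer ratings
                  they received (here assumed to be on a 1-5 scale)
    cap        -- upper bound on the weighting factor, so a strong rating
                  profile cannot inflate a mark without limit
    """
    # Average rating received by each student.
    means = {s: sum(r) / len(r) for s, r in ratings.items()}
    # Group-wide average; a student at this level gets a weighting of 1.0.
    group_mean = sum(means.values()) / len(means)
    return {
        s: round(min(group_mark * min(m / group_mean, cap), 100), 1)
        for s, m in means.items()
    }

# Example: a group mark of 65 with one under-contributing member.
print(adjusted_marks(65, {"A": [4, 5, 4], "B": [4, 4, 5], "C": [2, 2, 3]}))
# {'A': 76.8, 'B': 76.8, 'C': 41.4}
```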

It was apparent from the interviews that performing peer assessment requires a large amount of support from staff. Other than the system in use in Henley Business School and the Department of Chemistry, all systems had students fill out paper forms, with calculations then being performed manually or requiring data to be input into a spreadsheet for manipulation. This high workload underlined the case for disseminating online peer assessment, in order to reduce the workload of those already conducting peer assessment, and to lower the barrier to entry for others interested in peer assessment but unable to accept the increased workload.

With the input from interviewees, it was possible to put together criteria for evaluation of online peer assessment systems:

  1. Pedagogy:
    • Any system must provide a fair and valid method for distinguishing between contributions to group work.
  2. Flexibility:
    • Peer assessment is used in different settings for different types of group work. The methods used vary on several dimensions, such as:
      1. Whether holistic or criteria based.
      2. The amount of adjustment to be made to the group mark.
      3. The nature of the grading required of students, such as use of a Likert scale, or splitting marks between group members.
      4. Whether written comments are required from the students along with a numerical grading of their peers.
      5. The detail and nature of feedback that is given to students such as: grade or comment on group performance as a whole; the performance of the student against individual criteria; further explanatory comments received from students or given by academics.
    • Therefore any system must be flexible and capable of adapting to these environments.
  3. Control:
    • Academics require some control over the marks resulting from peer assessment. While the online peer assessment tool will calculate marks, these must be visible to tutors, and academics must have the ability to moderate them.
  4. Ease of use:
    • Given the amount of work involved in running peer assessment of group work, any online system must be easy for staff to use and must reduce their workload. The other aspect of this is ease of use for the student: the current schemes in use may be work-intensive for staff, but they do have the benefit of being easy for students to use.
  5. Incorporation of evidence:
    • The collection of evidence to support and validate marks provided under peer assessment would ideally be part of any online system.
  6. Technical integration and support:
    • An online peer assessment system must be capable of being supported by the University in terms of IT and training.
  7. Security:
    • Given the nature of the data, the system must be secure.

Four online peer assessment systems were analysed against these criteria: iPeer, SPARKplus, WebPA, and the bespoke peer assessment system created for use in Real Estate and Planning.

Findings

A brief overview of the findings is as follows:

iPeer

While iPeer can be used to collect data for the purposes of evaluation, unlike the other systems evaluated it leaves the manipulation and interpretation of that data to the tutor, thus retaining some of the workload that it was hoped would be avoided. Although its ease of use was good for both staff and students, there were limits to what it was possible to achieve using iPeer, and supporting documentation was difficult to access.

SPARKplus

SPARKplus is a versatile tool for the conduct of online peer assessment, allowing students to be marked against specific criteria or in a more holistic manner, and generating a score based upon their peer-assessed contribution to group work and the tutor's assessment of what the group produces. There were, however, disadvantages: SPARKplus does not allow for the gathering of additional evidential material, and it was difficult at the time of the evidence gathering to find information about the system. Furthermore, although SPARKplus is an online system, it cannot be incorporated into Blackboard Learn, which might otherwise have clarified its suitability.
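For illustration, the sketch below reconstructs the kind of relative performance factor described in published accounts of SPARKplus (the 'SPA factor'), in which a square root dampens the adjustment applied to the tutor's group mark. This is a hedged reconstruction from the literature rather than SPARKplus's actual implementation, and the function name and data layout are invented for the example.

```python
import math

def spa_factor(own_total, group_totals):
    """Relative performance factor for one student (illustrative only).

    own_total    -- sum of the ratings this student received
    group_totals -- the rating totals received by every group member
    The square root dampens the adjustment, so moderate differences in
    ratings produce moderate differences in final marks.
    """
    average = sum(group_totals) / len(group_totals)
    return math.sqrt(own_total / average)

# Example: each student's mark is the group mark scaled by their factor.
group_mark = 70
totals = {"A": 13, "B": 13, "C": 7}
for student, total in totals.items():
    factor = spa_factor(total, list(totals.values()))
    print(student, round(group_mark * factor, 1))
# A 76.1, B 76.1, C 55.8
```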

WebPA

For WebPA there was a great deal of documentation available, aiding its evaluation. It appeared to be easy to use, and can be incorporated into Blackboard Learn. The main disadvantages of using WebPA were that it does not allow evidential data to be gathered, and that there is no capacity for written comments to be shared with students, as these are only visible to the tutor.

Bespoke REP system

The bespoke online peer assessment system developed within Real Estate and Planning and also used in the Department of Chemistry is similar to WebPA in terms of the underpinning scoring algorithm, and has the added advantage of allowing the collection of evidential material. Its main disadvantage is that it is comparatively difficult to configure, requiring a reasonable level of competence with Microsoft Excel. Additionally, technical support for the system is reliant on the University of Reading Information Technology Services.
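Since the bespoke REP system and WebPA share a similar underpinning algorithm, a simplified sketch of the WebPA-style calculation may be helpful: each assessor's raw scores are normalised to fractions of their own total so that every assessor distributes the same overall weight, and the fractions a student receives sum to a weighting (1.0 for an equal contributor) that scales the group mark. This is a simplified reading of the published WebPA approach; real deployments add options such as a peer assessment weighting percentage and penalties for non-submission, which are omitted here.

```python
def webpa_weightings(scores):
    """Compute WebPA-style weightings from raw peer scores.

    scores[assessor][assessee] = raw score awarded (self-assessment
    included). Normalising each assessor's scores to fractions of their
    own total means every assessor distributes the same weight, and a
    student who contributed equally ends up with a weighting of 1.0.
    """
    weightings = {student: 0.0 for student in scores}
    for awarded in scores.values():
        total = sum(awarded.values())
        for assessee, score in awarded.items():
            weightings[assessee] += score / total
    return weightings

# Example: three students each rate everyone, themselves included.
scores = {
    "A": {"A": 4, "B": 4, "C": 2},
    "B": {"A": 5, "B": 4, "C": 2},
    "C": {"A": 4, "B": 4, "C": 3},
}
group_mark = 60
for student, w in webpa_weightings(scores).items():
    print(student, round(group_mark * w, 1))
# A 73.1, B 67.6, C 39.3
```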

LW2RPP – Research Placement Project

Dr. Stavroula Karapapa, Law
s.karapapa@reading.ac.uk

Overview

Research Placement Project (LW2RPP) is a module developed within the School of Law that aims to provide Part Two students with a hands-on experience of the academic research process, from the design of a project and research question through to the production of a research output. It is an optional module that combines individual student research, lectures and seminars.

Objectives

  • To provide students with a hands-on experience of the academic research process, from the design of a project and research question through to the production of a research output.
  • To provide a forum for the development of key research skills relating to the capacity to generate original knowledge.
  • To provide a forum for the development of key skills relating to the presentation of ideas in written form.
  • To give the opportunity to obtain an in-depth understanding of a specific applied topic of legal study.

Context

The module was initially developed as an alternative to Legal Writing Credit (LW2LWC) with a view to offering more optional modules to Law students at Part Two.

Implementation

The module has a unique learning design in that it introduces law students to semi-guided legal research through lectures, seminars and independent student learning. The lectures introduce students to research methods. Seminars are led by experts in a particular area who have a strong interest in a specific topic because they are currently carrying out research on it. A variety of topics has been offered over the four years that the module has run, spanning international law, criminal law, company law, media law, family law and more. Students are given the option to choose their group at the beginning of the academic year and to work on topics related to a specific research area.

During the module, students receive formative feedback on two occasions, as they are required to present a piece of preparatory work, such as a literature review or draft bibliography, in their second and third project supervision sessions, with these pieces forming the basis for discussion with their supervisor and with peers. Students are therefore able to use this formative feedback to direct their final output, an assessed essay of 10 pages.

Impact

The objectives of the activity have been met. Students have been acquainted with a particular research area, and they have developed skills and gained some experience in legal research writing. Having colleagues deliver seminars on their current areas of research is valuable, as it showcases the wide variety of research in Law that takes place within the School and the subject more generally, and students respond well to this element of the module. The outputs that students produce have generally been of a good quality, and have demonstrated an ability to use appropriate methodologies to conduct and utilise independent research. Involvement in a research project of this nature at Part Two has been valuable for students in developing skills which they then continue to utilise at Part Three, particularly in their dissertation.

Reflections

The main force behind the success of the module is the contribution of the various colleagues who volunteer every year to offer classes and group supervision to Part Two students.

Improving student engagement with assessment and feedback through peer review

Professor Helen Parish, School of Humanities
h.l.parish@reading.ac.uk

Year of activity: 2014-15

Overview

The project investigated recent research and practice in peer assessment and feedback in order to implement a peer assessment model for use within History, and to develop a framework for the adoption of that model in cognate disciplines where the evaluation of substantial text-based assignments is an important part of assessment.

Objectives

  • Present students with well-managed opportunities to engage in feedback and assessment and to learn from them.
  • Present staff with access to tried and tested models for implementation that can be used and tailored across disciplines.

Context

The importance of increasing the impact of assessment in feedback and learning is recognised by the University’s teaching and learning enhancement priorities, and is evident in the ‘Engage in Assessment’ and ‘Engage in Feedback’ materials.  The requirement to pursue an agenda for feedback is also highlighted by the expectations of employers that graduates of the University of Reading will be able to assess and evaluate the work of others, by comments on feedback made by University of Reading students in the National Student Survey, and by discussions with potential students on Open Days.

Implementation

There were five stages to the project:

  1. A literature search on the topic and detailed engagement with recent scholarship, undertaken by the Principal Investigator.
  2. A ‘competitor analysis’, undertaken by a research assistant, looking at the extent to which peer feedback is present in Humanities curricula at other institutions.
  3. Development of a model for the trial of peer assessment informed by the previous two stages.
  4. Implementation of this model as a ‘pilot project’ in the Department of History.
  5. Obtaining student feedback on the process and reflection by the Principal Investigator.

The feedback gained during the early stages of the project revealed that students were reluctant to allow their work to be reviewed by their peers, even when anonymised. This necessitated altering the envisaged model so that the written work being ‘peer reviewed’ came either from previous cohorts within the Department or from alternative sources.

Once the pilot project was developed, there were three stages:

  1. Development of an understanding of marking and assessment criteria. Students read the assessment criteria of their module, and were then tasked with rewriting these in their own words.
  2. Applying these criteria to written work. Students then read a sample essay (not taken from the group) and, with reference to the marking criteria, were asked to award the essay a mark, together with a summary of the reasons for this judgment. This was followed by a discussion of the written feedback provided.
  3. Focus group and project review.  It was intended that students would meet to talk about the project, and more general issues to do with assessment and feedback, in the presence of an experienced observer external to the department.

Impact

One of the principal benefits of the project was that students became more aware of the marking criteria by which their assignments were assessed: although they found these clear, few students had actually taken the time to read them before. An additional benefit was that the activity helped develop students’ academic confidence, as they were impelled to adopt a critical attitude to scholarly writing, and gained experience of promoting their point of view to their peers.

Reflections

Feedback from questionnaires suggested that students enjoyed the project; that they now had a better understanding of assessment and feedback; that the project had been helpful with the preparation of their own written work; and that they were now more confident in the assessment of their own work prior to submission.

The reluctance of students to submit their own work to review by their peers meant that there was a less direct link between the peer feedback provided and the specific assignment for each module.  By using work from previous cohorts or alternative sources, however, it was possible to get students to engage more willingly with the process of peer review.

The main disappointment was that it proved impossible to gather a large enough group of students to participate in the focus group stage of the project.  This may have been due to the proposed scheduling of the focus groups at a time when students had recently participated in a Departmental Periodic Review and submitted their final coursework of the academic year.  Nevertheless, valuable feedback on the pilot was provided through questionnaires and verbal communication.

It was interesting to observe that students held a broad spectrum of ideas about what constituted good work, arising from a lack of understanding of the criteria against which work is marked. From this perspective, the project was valuable, as students were familiarised with the marking criteria and how these applied to written pieces. Students were able to look ‘behind the scenes’ at the marking process, applying the marking criteria as individuals but then needing to decide as a group upon a final mark for the pieces they were reviewing.

Follow up

Following the pilot project, the use of peer review to engage students in assessment and feedback has been used by other members of staff within the Department of History, with similar success. Other than the specific pieces of work and criteria used for peer review purposes, there was nothing within this project that was specific to the Department of History or School of Humanities, and so this activity could easily be adapted for use in other Departments and Schools across the University.

The peer review approach has been successfully applied within the Department of History to student presentations in seminars. As student presentations are more ‘in the moment’ and designed with a peer audience in mind, students have not expressed the same reticence to have their peers review their work, and those presenting have appreciated receiving immediate feedback.

Online peer assessment of group work tools: yes, but which one? By Heike Bruton (a TLDF project)

A short while ago I wrote the post “Group work: sure, but what about assessment?”. This outlines a TLDF-funded project in which Cathy Hughes and I investigated tools for the peer assessment of group work. Cathy and I have now produced a full report, which is available for download here (Cathy Hughes and Heike Bruton TLDF peer assessment report 2014 07 02), and summarised below.

Aim and methods

The aim of the project was to evaluate available online systems for the assessment of students’ contribution to group work. In order to establish our criteria for evaluating these systems, we conducted a series of interviews with academics across the university. This gave us an understanding of how peer assessment (PA) is used in a range of subjects, and of the different perspectives on the requirements for a computer-based system.

Systems in use and evaluation criteria

Among our eleven interviewees we found five separate PA systems (including Cathy’s own system) in use by six departments. Notably, Cathy’s tool appeared to be the only entirely computer-based system. Based on the insights gained from the interviews, we developed a set of criteria against which we evaluated available PA systems. These criteria are pedagogy, flexibility, control, ease of use, incorporation of evidence, technical integration and support, and security.

Available online systems

We identified three online tools not currently in use at the university which apply PA specifically to the process, not the product, of group work. These three systems are iPeer, SPARKplus and WebPA. In addition, we critically assessed Cathy’s own system, which is already being used in several departments across the university. After investigating the PA systems currently in use at Reading and applying the above-named criteria to the four PA systems under investigation, we came to a number of conclusions, which resulted in a recommendation.

Conclusion

There is a strong sense of commitment among staff to using group work in teaching and learning across the university. PA can serve as a mechanism to recognise hard work by students and also to provide feedback aimed at encouraging students to improve their involvement with group work. Whilst any PA system is simply a tool, which can never replace the need for active engagement by academics in their group work projects, such a tool can make PA more effective and manageable, especially for large groups.

Recommendation

Our recommendation, then, is that WebPA should be considered for use within the university. Our research suggests that it could be adopted with relative ease, particularly given the strong and active community surrounding this open-source software. While it may not be appropriate for everyone, we believe it could be a useful tool to enhance teaching and learning, potentially improving the experience of group work assessment for both staff and students.

Cathy and I will be delivering a number of Teaching and Learning seminars on PA of group work in the near future. To download the full report, click here (Cathy Hughes and Heike Bruton TLDF peer assessment report 2014 07 02). To try out a stand-alone demo version of WebPA, follow this link: http://webpaos.lboro.ac.uk/login.php

Cathy and Heike will be presenting their project in a TEL Showcase event in the spring term. Please check http://www.reading.ac.uk/cqsd/TandLEvents/cqsd-ComingSoon.aspx.

Group work: sure, but what about assessment? By Heike Bruton (a TLDF project)

Group work has many well-documented benefits for students, but it also presents considerable challenges. A frequent complaint from students is that differences in contribution are not recognised when everyone in the group receives the same mark – the freeloader issue. However, when students are working unsupervised, it is very difficult for the tutor to gauge who contributed to what extent. This is where peer assessment of group work can be a key part of the assessment framework.

What’s this project all about?
Cathy Hughes from Real Estate & Planning has developed and implemented her own online system of peer assessment of group work, and has given presentations about it at various T&L events. With the help of an award from the Teaching and Learning Development Fund, Cathy appointed me as Research Assistant. Our hope is to find a sustainable system for those colleagues who wish to use it. This may mean developing Cathy’s system further, or possibly adopting a different system.

What peer assessment systems are staff currently using?
The first step of the project was to find out what systems for peer assessment (PA) of group work tutors at the University of Reading are currently using. We conducted a number of interviews with colleagues who are currently using such systems, and we found a variety of systems in use (both paper-based and digital). Most systems seem to work well in increasing student satisfaction through the perception of fairer marking, and in encouraging reflection. However, all such systems require quite a lot of effort from those administering them. While lecturers are unanimous in their estimation that peer assessment of group work should be done for pedagogic reasons, unsurprisingly they also say that a less labour-intensive system than they are currently using would be highly desirable.

What peer assessment systems are out there?
Cathy and I investigated available peer assessment systems. After examining several digital tools, we identified one system which seems to tick all the boxes on the wish list for peer assessment of group work. This system is called WebPA. WebPA is an open source online peer assessment system which measures contribution to group work. It can be used via Blackboard and seems to be very flexible.

Where to go from here?
You can try out a stand-alone demo version here: http://webpaos.lboro.ac.uk/login.php. This site also contains links leading to further information about WebPA. We are currently putting our findings together in a report, and we will disseminate the results throughout the University.