Cathy Hughes and Heike Bruton, Henley Business School
catherine.hughes@reading.ac.uk

Overview

Online peer assessment systems were evaluated for their suitability as a platform for conducting peer assessment of group work.

Objectives

  • To establish the criteria against which peer assessment systems should be evaluated.
  • To evaluate the suitability of online systems of peer assessment.
  • To provide a way forward for Henley Business School to develop peer assessment for group work.

Context

There are many well-documented benefits of group work for students. However, members of a group may not contribute equally to a task, and it can be difficult for tutors to judge accurately the contributions made by individuals within a group. Peer assessment addresses this by allowing students to assess the process of group work. Within Henley Business School, Cathy Hughes has used peer assessment for group work in Real Estate and Planning, developing a bespoke web-based system to facilitate it. As this system was not sustainable, the project was funded to evaluate the suitability of other web-based peer assessment systems for use at the University.

Implementation

By first establishing how academics across the University use peer assessment in a range of subjects, it was possible to establish the criteria against which available online systems of peer assessment for group work could be evaluated. This was done through a series of interviews with academics who already used peer assessment, who volunteered after a call for respondents was made through the T&L distribution list. The eleven interviewees were drawn from seven departments. The interviews revealed that five separate peer assessment systems were in use across the University; with one exception, these had been in use for four years or fewer. Peer assessment at the University of Reading has been used at all Parts and for a range of group sizes (between three and ten, depending on the task). While the credits affected by peer assessment ranged from 1 to 20, no module used peer assessment to contribute 100% of the final mark, though in one case it contributed 90%.

With peer assessment of group work, students may be required to mark their peers against set criteria, or in a more holistic manner, whereby students award an overall mark to each of the other members of their group. Given the subjective nature of the marking process, peer assessment can be open to abuse, and so interviewees stressed the need to be able to check and moderate marks. All interviewees stated that they collated evidential material which could be referred to in case of dispute.

All systems in use generated numerical data on an individual’s performance in group work, but requirements for feedback differed. Some users of peer assessment used the numerical data to construct feedback for students, and in one case students provided their peers with anonymised feedback.

It was apparent from the interviews that performing peer assessment requires a large amount of staff support. Other than the system in use in Henley Business School and the Department of Chemistry, all systems had students fill out paper forms, with calculations then performed manually or requiring data to be entered into a spreadsheet for manipulation. This high workload underlined the need to disseminate online peer assessment, both to reduce the workload of those already conducting peer assessment and to lower the barrier to entry for others interested in peer assessment but unable to accept the increased workload.

With the input from interviewees, it was possible to put together criteria for evaluation of online peer assessment systems:

  1. Pedagogy:
    • Any system must provide a fair and valid method for distinguishing between contributions to group work.
  2. Flexibility:
    • Peer assessment is used in different settings for different types of group work. The methods used vary on several dimensions, such as:
      1. Whether holistic or criteria-based.
      2. The amount of adjustment to be made to the group mark.
      3. The nature of the grading required of students, such as use of a Likert scale, or splitting marks among the group.
      4. Whether written comments are required from students along with a numerical grading of their peers.
      5. The detail and nature of the feedback given to students, such as: a grade or comment on group performance as a whole; the student’s performance against individual criteria; further explanatory comments received from students or given by academics.
    • Therefore any system must be flexible and capable of adapting to these environments.
  3. Control:
    • Academics require some control over the resulting marks from peer assessment. While the online peer assessment tool will calculate marks, these must be visible to tutors, who must be able to moderate them.
  4. Ease of use:
    • Given the amount of work involved in running peer assessment of group work, any online system must be easy for staff to use and reduce their workload. Ease of use for students also matters: the current schemes may be work-intensive for staff, but they do have the benefit of being easy for students to use.
  5. Incorporation of evidence:
    • The collection of evidence to support and validate marks provided under peer assessment would ideally be part of any online system.
  6. Technical integration and support:
    • An online peer assessment system must be capable of being supported by the University in terms of IT and training.
  7. Security:
    • Given the nature of the data, the system must be secure.
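To make the flexibility dimensions concrete, one common point in that design space is a holistic "split the marks" scheme with a capped adjustment to the group mark. The sketch below is purely illustrative, with invented numbers and function names; it is not a description of any of the systems evaluated here:

```python
def adjusted_marks(group_mark, points_received, max_adjustment=10):
    """Holistic 'split the marks' scheme: peers distribute a pool of points
    among group members; a member's share relative to an equal split scales
    the group mark, capped at +/- max_adjustment marks (criterion 2)."""
    fair_share = sum(points_received.values()) / len(points_received)
    marks = {}
    for student, points in points_received.items():
        raw = group_mark * (points / fair_share)
        # Cap the adjustment so no individual mark moves more than
        # max_adjustment marks away from the group mark.
        delta = max(-max_adjustment, min(max_adjustment, raw - group_mark))
        marks[student] = round(group_mark + delta, 1)
    return marks

# Example: group mark of 60; total points each student received from peers.
print(adjusted_marks(60, {"A": 120, "B": 100, "C": 80}))
```

A criteria-based scheme would instead collect a rating per criterion per peer and aggregate before this step; the capping and scaling logic stays the same.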

Four online peer assessment systems were analysed against these criteria: iPeer, SPARKplus, WebPA, and the bespoke peer assessment system created for use in Real Estate and Planning.

Findings

A brief overview of the findings is as follows:

iPeer

While iPeer can be used to collect data for evaluation purposes, unlike the other systems evaluated it leaves the manipulation and interpretation of that data to the tutor, thus retaining some of the workload it was hoped would be avoided. While it was easy to use for both staff and students, there were limits to what could be achieved with iPeer, and supporting documentation was difficult to access.

SPARKplus

SPARKplus is a versatile tool for conducting online peer assessment, allowing students to be marked against specific criteria or in a more holistic manner, and generating a score based on their peer-assessed contribution to group work and the tutor’s assessment of what the group produces. There were, however, disadvantages: SPARKplus does not allow additional evidential material to be gathered, and at the time of the evaluation it was difficult to find information about the system. Moreover, although SPARKplus is an online system, it cannot be incorporated into Blackboard Learn, an integration that might have clarified its suitability.

WebPA

For WebPA a great deal of documentation was available, aiding its evaluation. It appeared to be easy to use, and can be incorporated into Blackboard Learn. The main disadvantages of WebPA were that it does not allow evidential data to be gathered and that written comments cannot be shared with students, as these are visible only to the tutor.
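WebPA's scoring approach is publicly documented: in outline, each assessor's raw scores are converted into fractions of that assessor's total, and the fractions a student receives are summed into a score where 1.0 represents an equal contribution; this score then scales the peer-assessed portion of the group mark. The sketch below is our own simplification of that published outline (omitting, for example, handling of non-submitters), not WebPA's actual code:

```python
def final_marks(ratings, group_mark, pa_weighting=1.0):
    """WebPA-style calculation: ratings[assessor][assessee] is the raw score
    awarded (self-assessment included). Each assessor's scores are normalised
    to fractions of their total; the fractions a student receives sum to a
    score (1.0 = equal contribution) that scales the peer-assessed portion
    (pa_weighting) of the group mark."""
    members = list(ratings)
    totals = {a: sum(scores.values()) for a, scores in ratings.items()}
    marks = {}
    for student in members:
        webpa_score = sum(ratings[a][student] / totals[a] for a in members)
        pa_part = group_mark * pa_weighting * webpa_score
        marks[student] = round(pa_part + group_mark * (1 - pa_weighting), 1)
    return marks

# Example: three students each rate everyone (including themselves) out of 5;
# half of the final mark is peer-moderated, with a group mark of 60.
ratings = {
    "A": {"A": 4, "B": 4, "C": 2},
    "B": {"A": 5, "B": 4, "C": 1},
    "C": {"A": 4, "B": 4, "C": 2},
}
print(final_marks(ratings, group_mark=60, pa_weighting=0.5))
```

Because each assessor's fractions sum to 1, the scores across a group of n students always sum to n, so the scheme redistributes rather than inflates the group's marks.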

Bespoke REP system

The bespoke online peer assessment system developed within Real Estate and Planning and also used in the Department of Chemistry is similar to WebPA in terms of the underpinning scoring algorithm, and has the added advantage of allowing the collection of evidential material. Its main disadvantage is that it is comparatively difficult to configure, requiring a reasonable level of competence with Microsoft Excel. Additionally, technical support for the system is reliant on the University of Reading Information Technology Services.

Reflections