Clinical skills development: using controlled condition assessment to develop behavioural competence aligned to Miller’s pyramid

Kat Hall, School of Chemistry, Food and Pharmacy


The Centre for Inter-Professional Postgraduate Education and Training (CIPPET) provides postgraduate taught (PGT) training for healthcare professionals through a flexible Master's programme built around blended learning modules alongside workplace-based learning and assessment.  This project aimed to evolve the department's approach to delivering one of our clinical skills workshops, which sits within a larger 60-credit module.  The impact was shown through positive student and staff feedback, as well as interest in developing a standalone module for continuing further learning in advanced clinical skills.


The aim of this project was to use controlled condition assessment approaches to develop behavioural competence at the higher levels of Miller's pyramid of clinical competence (Miller, 1990).

Miller’s Pyramid of Clinical Competence

The objectives included:

  1. engage students in enquiry by promoting competence at higher levels of Miller’s pyramid
  2. develop highly employable graduates by identifying appropriate skills to teach
  3. evolve the workshop design by using innovative methods
  4. recruit expert clinical practitioners to support academic staff


Health Education England is promoting a national strategy to increase the clinical skills training provided to pharmacists, so this project aimed to evolve the department's approach to delivering this workshop.  The existing module design contained a workshop on clinical skills, but it was loosely designed as a large-group exercise which was delivered slightly differently for each cohort.  This prevented students from fully embedding their learning through opportunities to practise skills alongside controlled formative assessment.


Equipment purchase: as part of this project, matched funding was received from the School to support the purchase of simulation equipment, which meant a range of clinical skills teaching tools could be used in the workshops.  This step was undertaken collaboratively with the physician associate programme to share learning and support meeting objective 2 across the School.

Workshop design: the workshops were redesigned by the module convenor, Sue Slade, to focus on specific aspects of clinical skills that small groups could work on with a facilitator.  The facilitators were supported to embed the clinical skills equipment within the activities, thereby engaging students in active learning.  The equipment gave students the opportunity to simulate the skills test to identify whether they could demonstrate competence at the 'Knows How' and 'Shows How' levels of Miller's pyramid of clinical competence.  Where possible, the workshop stations were facilitated by practising clinicians.  This step was focused on meeting objectives 1, 2, 3 and 4.

Workbook design: a workbook was produced that students could use to identify the core clinical skills required in their scope of practice, which they therefore needed to practise in the workshop and further in their workplace-based learning.  This scaffolding supported their transition to the 'Does' level of Miller's pyramid of clinical competence.  This step was focused on meeting objectives 1 and 3.


All four objectives were met and have since been mapped to the principles of the Curriculum Framework to provide evidence of their impact.

Mastery of the discipline / discipline based / contextual: this project has supported the academic team to redesign the workshop around the evolving baseline core knowledge and skills required of students.  Doing this collaboratively between programme teams ensures it is fit for purpose.

Personal effectiveness and self-awareness / diverse and inclusive: the positive staff and student feedback received reflects that the workshop provides a better environment for student learning, enabling them to reflect on their experiences and take their learning back to their workplace more easily.

Learning cycle: the student feedback has shown that they want more of this type of training and so the team have designed a new stand-alone module to facilitate extending the impact of increasingly advanced clinical skills training to a wider student cohort.


What went well? Purchasing the equipment and redesigning the workshop was a relatively simple task for an engaged team, and low effort for the potential return in improved experience.  Having one person lead the workshop, while another wrote the workbook and purchased the equipment, ensured that staff across the team could contribute as change champions.  Recruitment of an advanced nurse practitioner to support the team more broadly was completed quickly and provided support and guidance across the year.

What did not go as well?  Whilst the purchase of the equipment and workshop redesign was relatively simple, encouraging clinical practitioners to engage with the workshop proved much harder.  We were unable to recruit consistent clinical support which made it harder to fully embed the project aims in a routine approach to teaching the workshop.  We considered using the expertise of the physician associate programme team but, as anticipated, timetabling made it impossible to coordinate the staffing needs.

Reflections: The success of the project lay in having the School engaged in supporting the objectives and the programme team invested in improving the workshop.  Focusing this project on a small part of the module meant it remained achievable to complete one cycle of change to deliver initial positive outcomes whilst planning for the following cycles of change needed to fully embed the objectives into routine practice.

Follow up

In planning the next series of workshops, we plan to draw more widely on University alumni from the physician associate programme to continue the collaborative approach and to attract clinical practitioners who are willing to support us and less constrained by timetables and clinical activities.

Based on student and staff feedback, there is clearly a desire for more teaching and learning of this kind, and being able to launch a new standalone module in 2020 is a successful output of this project.

Links and References

Miller, G.E. (1990). The assessment of clinical skills/competence/performance. Acad Med, 65(9):S63-7.

Electronic Management of Assessment: Creation of an e-Portfolio for PWP training programmes

Tamara Wiehe, Charlotte Allard & Hayley Scott (PWP Clinical Educators)

Charlie Waller Institute; School of Psychology and Clinical Language


In line with the University’s transition to Electronic Management of Assessment (EMA), we set out to create an electronic Portfolio (e-Portfolio) for use on our Psychological Well-being Practitioner (PWP) training programmes to replace an existing hard-copy format. The project spanned almost a year (October 2018 – September 2019) as we took the time to consider the implications for students, supervisors in our IAPT NHS services, University administrators and markers. Working closely with the Technology Enhanced Learning (TEL) team led us to a viable solution that was launched with our new cohorts from September 2019.

Image of portfolio template cover sheet


  • Create an electronic Portfolio in line with EMA that overcomes existing issues and improves the experience for students, NHS supervisors, administrators and markers.
  • Work collaboratively with all our key stakeholders to ensure that the new format satisfies their various needs.


A national requirement for PWPs is to complete a competency-based assessment in the form of a Portfolio that spans the three modules of their training. Our students are employed by NHS services across the South of England and many live close to their service rather than the University.

The issue? The previous hard-copy format meant that students spent time and money printing their work and travelling to the University to submit/re-submit it. University administrators and markers reported issues with transporting the folders to markers and storing them, especially with the larger cohorts.

The solution… To resolve these issues by transitioning to an electronic version of the Portfolio.


  1. October 2018: An initial meeting with TEL was held in order to discuss the practicalities of an online Portfolio submission.
  2. October 2018 – March 2019: TEL created several prototypes of options for submission via Blackboard, including the use of the journal tool and a zip file. For practical reasons, the course team decided on a single-file Word document template.
  3. April – May 2019: Student focus groups were conducted with both programmes (undergraduate and postgraduate) in which the assessment sits, to gain their feedback on the potential solution we had created. Using the outcomes of the focus groups and staff meetings, it was unanimously agreed that the proposed solution was viable for use with our future cohorts.
  4. June 2019: TEL delivered a training session for staff and admin to become familiar with the process from both student and staff perspective. TEL also created a guidance document for administrators on how to set up the assignment on Blackboard.
  5. July – August 2019: Materials including the template and rubrics were amended and formatted to meet the requirements for online submission for both MSci and PWP courses. Resources were also created for students to access on Blackboard, such as screencasts on how to access, use and submit the Portfolio in the electronic format; the aim of this is to improve accessibility for all students participating on the course.
  6. September 2019: Our IAPT services were notified of the changes as the supervisors there are responsible for reviewing and ‘signing off’ on the student’s performance before the Portfolio is submitted to the University for a final check.

Image of 'how to' screen cast resources on Blackboard


Thus far, the project has achieved the objectives it set out to achieve. The template for submission is now available for students to complete throughout their training course. This will modernise the submission process and be less burdensome for students, supervisors, administrators and markers.

Image of the new portfolio process

The students in the focus group reported that this would significantly simplify the process and remove the barriers they often reported with completing and submitting the Portfolio. Currently, there have not been any unexpected outcomes from the development of the Portfolio. However, we aim to review the process with the first online Portfolio submission in June 2020.


Upon reflection, the development of the online Portfolio has so far been a success. Following student feedback, we listened to what would improve their experience of completing the Portfolio. From this we developed an online Portfolio that meets the requirements of two BPS-accredited courses and will be used for future cohorts of students.

Additionally, the collaboration between staff, students and the TEL team has led to improved communication across teams and the sharing of new ideas; this is something we have continued to incorporate into our teaching and learning projects.

An area to develop in future would be to use dedicated e-Portfolio software. Initially, we wanted to use the journal tool on Blackboard; however, it was not suitable for the needs of the course (most notably exporting the submission and mark sheet to external parties). We will continue to review these options and to gather feedback from future cohorts.


‘How did I do?’ Finding new ways to describe the standards of foreign language performance. A follow-up project on the redesign of two marking schemes (DLC)

Rita Balestrini and Elisabeth Koenigshofer, School of Literature and Languages, r.balestrini@reading


Working in collaboration with two Final Year students, we designed two ‘flexible’, ‘minimalist’ rubric templates usable and adaptable across different languages and levels, to provide a basis for the creation of level specific, and potentially task specific, marking schemes where sub-dimensions can be added to the main dimensions. The two marking templates are being piloted this year in the DLC. The project will feature in this year’s TEF submission.


Design, in partnership with two students, rubric templates for the evaluation and feedback of writing tasks and oral presentations in foreign languages which:

  • were adaptable across languages and levels of proficiency
  • provided a more inclusive and engaging form of feedback
  • responded to the analysis of student focus group discussions carried out for a previous TLDF-funded project


As a follow-up to a teacher-learner collaborative appraisal of rubrics used in MLES, now DLC, we designed two marking templates in partnership with two Final Year students, who had participated in the focus groups from a previous project and were employed through Campus Jobs. ‘Acknowledgement of effort’, ‘encouragement’, ‘use of non-evaluative language’, ‘need for and, at the same time, distrust of, objective marking’ were recurrent themes that had emerged from the analysis of the focus group discussions and clearly appeared to cause anxiety for students.


We organised a preliminary session to discuss these findings with the two student partners. We suggested some articles about ‘complexity theory’ as applied to second language learning (Kramsch, 2012; Larsen-Freeman, 2012; 2015a; 2015b; 2017), with the aim of making our theoretical perspective explicit and transparent to them. A second meeting was devoted to planning collaboratively the structure of two marking schemes for writing and presentations. The two students worked independently to produce examples of standard descriptors which avoided the use of evaluative language and emphasised achievement rather than shortcomings. At a third meeting they presented and discussed their proposals with us. In the final meetings, we continued working to finalise the templates and the two visual learning charts they had suggested. Finally, the two students wrote a blog post to recount their experience of this collaborative work.

The two students appreciated our theoretical approach, felt that it was in tune with their own point of view and that it could support the enhancement of the assessment and marking process. They also found resources on their own, which they shared with us, including rubrics from other universities. They made valuable suggestions, gave us feedback on our ideas and helped us to find alternative terms when we were struggling to avoid evaluative language in our descriptors. They also suggested making use of some visual elements in the marking and feedback schemes in order to increase immediacy and effectiveness.


The two marking templates are being piloted this year in the DLC. They were presented to colleagues over four sessions during which the ideas behind their design were explained and discussed. Further internal meetings are planned. These conversations, already begun with the previous TLDF-funded project on assessment and feedback, are contributing to the development of a shared discourse on assessment, which is informed by research and scholarship. The two templates have been designed in partnership with students to ensure accessibility and engagement with the assessment and feedback process. This is regarded as an outstanding practice in the ‘Assessment and feedback benchmarking tool’ produced by the National Union of Students and is likely to feature positively in this year’s TEF submission.


Rubrics have become mainstream, especially within certain university subjects like Foreign Languages. They have been introduced to ensure accountability and transparency in marking practices, but they have also created new problems of their own by promoting a false sense of objectivity in marking and grading. The openness and unpredictability of complex performance in foreign languages, and of the dynamic language learning process itself, are not adequately reflected in the detailed descriptors of the marking and feedback schemes commonly used for the objective numerical evaluation of performance-based assessment in foreign languages. As emerged from the analysis of focus group discussions conducted in the department in 2017, the lack of understanding of, and engagement with, the feedback provided by this type of rubric can generate frustration in students.

Working in partnership with students, rather than simply listening to their voices or seeing them as evaluators of their own experience, helped us to design minimalist and flexible marking templates, which make use of sensible and sensitive language, introduce visual elements to increase immediacy and effectiveness, leave a considerable amount of space for assessors to comment on different aspects of an individual performance and provide ‘feeding forward’ feedback. This type of ‘partnership’ can be challenging because it requires remaining open to unexpected outcomes. Whether it can bring about real change depends on how its outcomes interact with the educational ecosystems in which it is embedded.

Follow up

The next stage of the project will involve colleagues in the DLC who will be using the two templates to contribute to the creation of a ‘bank’ of descriptors by sharing the ones they will develop to tailor the templates for specific stages of language development, language objectives, language tasks, or dimensions of student performance. We also intend to encourage colleagues teaching culture modules to consider using the basic structure of the templates to start designing marking schemes for the assessment of student performance in their modules.


An account written by the two student partners involved in the project can be found here:

Working in partnership with our lecturers to redesign language marking schemes

The first stages of this ongoing project to enhance the process of assessing writing and speaking skills in the Department of Languages and Cultures (DLC, previously MLES) are described in the following blog entries:

National Union of Students 2017. The ‘Assessment and feedback benchmarking tool’ is available at:


Bloxham, S. 2013. Building ‘standard’ frameworks. The role of guidance and feedback in supporting the achievement of learners. In S. Merry et al. (eds.) 2013. Reconceptualising feedback in Higher Education. Abingdon: Routledge.

Bloxham, S. and Boyd, P. 2007. Developing effective assessment in Higher Education. A practical guide. Maidenhead: McGraw-Hill International.

Bloxham, S., Boyd, P. and Orr, S. 2011. Mark my words: the role of assessment criteria in UK higher education grading practices. Studies in Higher Education 36 (6): 655-670.

Bloxham, S., den-Outer, B., Hudson J. and Price M. 2016. Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assessment in Higher Education 41 (3): 466-481.

Brooks, V. 2012. Marking as judgement. Research Papers in Education. 27 (1): 63-80.

Gottlieb, D. and Moroye, C. M. 2016. The perceptive imperative: Connoisseurship and the temptation of rubrics. Journal of Curriculum and Pedagogy 13 (2): 104-120.

HEA 2012. A Marked Improvement. Transforming assessment in HE. York: The Higher Education Academy.

Healey, M., Flint, A. and Harrington K. 2014. Engagement through partnership: students as partners in learning and teaching in higher education. York: The Higher Education Academy.

Kramsch, C. 2012. Why is everyone so excited about complexity theory in applied linguistics? Mélanges 33: 9-24.

Larsen-Freeman, D. 2012. The emancipation of the language learner. Studies in Second Language Learning and Teaching. 2(3): 297-309.

Larsen-Freeman, D. 2015a. Saying what we mean: Making a case for ‘language acquisition’ to become ‘language development’. Language Teaching 48 (4): 491-505.

Larsen-Freeman, D. 2015b. Complexity Theory. In VanPatten, B. and Williams, J. (eds.) 2015. Theories in Second Language Acquisition: An Introduction. 2nd edition. New York: Routledge: 227-244.

Larsen-Freeman, D. 2017. Just learning. Language Teaching 50 (3): 425-437.

Merry, S., Price, M., Carless, D. and Taras, M. (eds.) 2013. Reconceptualising feedback in Higher Education. Abingdon: Routledge.

O’Donovan, B., Price, M. and Rust, C. 2004. Know what I mean? Enhancing student understanding of assessment standards and criteria. Teaching in Higher Education 9 (3): 325-335.

Price, M. 2005. Assessment standards: the role of communities of practice and the scholarship of assessment. Assessment & Evaluation in Higher Education 30 (3): 215-230.

Sadler, D. R. 2009. Indeterminacy in the use of preset criteria for assessment and grading. Assessment and evaluation in Higher Education 34 (2): 159-179.

Sadler, D. R. 2013. The futility of attempting to codify academic achievement standards. Higher Education 67 (3): 273-288.

Torrance, H. 2007. Assessment as learning? How the use of explicit learning objectives, assessment criteria and feedback in post-secondary education and training can come to dominate learning. Assessment in Education 14 (3): 281-294.


Yorke, M. 2011. Summative assessment: dealing with the ‘Measurement Fallacy’. Studies in Higher Education 36 (3): 251-273.

Capturing and Developing Students’ Assessment Literacy

Hilary Harris, Maria Danos, Natthapoj Vincent Trakulphadetkrai, Stephanie Sharp, Cathy Tissot, Anna Tsakalaki, Rowena Kasprowicz – Institute of Education


The Institute of Education’s (IoE) T&L Group on Assessment Literacy worked collaboratively with 300+ students to ascertain the clarity level of assessment criteria used in all programmes across the IoE.  The findings were used to develop a report containing key findings and recommendations, which were then shared with programme directors. The findings also fed into the development of a Glossary of Common Assessment Terms to help develop students’ assessment literacy. SDTLs and DDTLs of almost all UoR Schools and the Academic Director of the UoR Malaysia campus have now either had one-to-one meetings with us or contacted us to explore how our group’s work could be adopted and adapted in their own setting.


The aims of the activity were to:

  • Develop students’ assessment literacy, specifically in terms of their understanding of assessment criteria which are used in marking rubrics
  • Engage students in reviewing the clarity of assessment criteria and terms used in marking rubrics
  • Engage programme directors in reflecting on the construction of their marking rubrics
  • Develop an IoE-wide glossary of common assessment terms


The IoE has set up T&L Groups to enhance different aspects of our teaching and learning practices as part of the peer review process. The T&L Group on Assessment Literacy has been meeting since 2017, and is made up of seven academics from a wide range of undergraduate and postgraduate programmes.

As marking rubrics are now used for all summative assessments at the IoE (and to some extent across the University), ensuring that students have a good understanding of the embedded assessment terms matters as the criteria inform students of what is expected of them for a particular assessment. Moreover, the marking rubrics can also be used by students to develop their draft work before submission.


The Group asked 300+ students across all the IoE programmes to indicate the clarity level of their programme’s assessment criteria by circling any terms on the marking rubric that they were confused by. The Group collated the information and created a summary table for each programme, ranking assessment terms according to how often the terms were highlighted by the students.  Each group member then wrote a brief report for each programme with key findings and recommendations on clearer alternative assessment terms (e.g. replacing ‘recapitulation’ with ‘summary’, or ‘perceptive’ with ‘insightful’). In other cases, where the use of specific terminology is essential (e.g. scholarship or ethics), the Group’s advice is for module convenors to spend some time in the classroom explaining such terms and to refer students to the assessment glossary for further support and examples. Both the report and the Glossary were disseminated to programme directors and their teams, who were then able to use the evidence in the report to reflect on their programme’s assessment criteria and consider with their team any changes that would make the marking rubric more accessible and easier for students to understand.


At the IoE, the work has already made an impact in that programme directors have reflected on their assessment criteria alongside their teams and have acted on the Group’s recommendations (e.g. replacing problematic terms in their marking rubrics with terms that are easier for students to understand). The Glossary has been used by IoE programme directors and module convenors when introducing the assessment and their marking rubrics. The Glossary has also been uploaded onto Blackboard for students to consult independently. The feedback from students on the Glossary has also been very positive. For example, one student commented that “The definitions were useful and the examples provided were even more helpful for clarifying exactly what the terms mean. The glossary is laid out in a clear and easy to follow way for each key term”.

Beyond the IoE, impact is being generated. Specifically, SDTLs and DDTLs of almost all UoR Schools and the Academic Director of the UoR Malaysia campus have now either had one-to-one meetings with us or contacted us to explore how our group’s work could be adopted and adapted in their own setting. The Group has been invited to give talks on its work at CQSD events and the School of Law’s T&L seminar. The Group is also currently working with academic colleagues at other universities (nationally and internationally) to replicate this Group’s work and generate impact beyond the UoR.


The activity was very successful as:

  • The Group had a clear focus of what it wanted to achieve
  • The Group was given time to carry out its work
  • There was strong leadership of the team, with each member being allocated specific contributions to the project

The process of involving students in reviewing terms on marking rubrics has empowered them to treat the documents critically and start a conversation with their lecturers about the purpose of marking rubrics, as well as being involved as partners in making the marking rubric work for them.

There were some challenges to overcome, and ideas for improving the project:

  • When presented to colleagues at the Staff Day, some members of staff expressed the view that ‘tricky’ terms should be retained as developing an understanding of these terms is part of the transition to HE study. This was recognised in our report which suggests that technical terms (e.g. methodology) could be retained provided that they are explained to students.

Follow up

The Group plans to spend the 2019/2020 academic year generating and capturing the impact of its work across and beyond the UoR.

Improving assessment writing and grading skills through the use of a rubric – Dr Bolanle Adebola

Dr Bolanle Adebola is the Module Convenor and lecturer for the following modules on the LLM Programme (On campus and distance learning):

International Commercial Arbitration, Corporate Governance, and Corporate Finance. She is also a Lecturer for the LLB Research Placement Project.

Bolanle is also the Legal Practice Liaison Officer for the CCLFR.

A profile photo of Dr Adebola


For students:

• To make the assessment criteria more transparent and understandable.
• To improve assessment output and essay writing skills generally.

For the teacher:

• To facilitate assessment grading by setting clearly defined criteria.
• To facilitate the feedback process by creating a framework for dialogue which is understood both by the teacher and the student.


I faced a number of challenges in relation to the assessment process in my first year as a lecturer:

• My students had not performed as well as I would have liked them to in their assessments.

• It was my first time having to justify the grades I had awarded, and I found that I struggled to articulate clearly and consistently the reasons for some of them.

• I had been newly introduced to the step-marking framework for distinction grades as well as the requirement to make full use of the grading scale which I found challenging in view of the quality of some of the essays I had graded.

I spoke to several colleagues but came to understand that there were as many approaches as there were people. I also discussed the assessment process with several of my students and came to understand that many were both unsure and unclear about the criteria by which their assessments were graded across their modules.
I concluded that I needed to build a bridge between my approach to assessment grading and my students’ understanding of the assessment criteria. Ideally, the chosen method would facilitate consistency and the provision of feedback on my part, and improve the quality of essays on my students’ part.


I tend towards the constructivist approach to learning, which means that I structure my activities towards promoting student-led learning. For summative assessments, my students are required to demonstrate their understanding of, and ability to critically appraise, legal concepts that I have chosen from our sessions in class. Hence, the main output for all summative assessments on my modules is an essay. Wolf and Stevens (2007) assert that learning is best achieved where all the participants in the process are clear about the criteria for the performance and the levels at which it will be assessed. My goal therefore became to ensure that my students understood the elements I looked for in their essays, these being the criteria against which I graded the essays. They also had to understand how I decided the standards that their essays reflected. While the student handbook sets out the various standards that we apply in the University, I wanted to provide clearer direction on how they could meet those standards and how I determine that an essay meets them.

If the students were to understand the criteria I apply when grading their essays, then I would have to articulate them. Articulating the criteria for a well-written essay would benefit both myself and my students. For my students, in addition to a clearer understanding of the assessment criteria, it would enable them to self-evaluate which would improve the quality of their output. Improved quality would lead to improved grades and I could give effect to university policy. Articulating the criteria would benefit me because it would facilitate consistency. It would also enable me to give detailed and helpful feedback to students on the strengths and weaknesses of the essays being graded, as well as on their essay writing skills in general; with advice on how to improve different facets of their outputs going forward. Ultimately, my students would learn valuable skills which they could apply across board and after they graduate.
For assessments which require some form of performance, essays being an example, a rubric is an excellent evaluation tool because it fulfils all the requirements expressed above (Brookhart, 2013). Hence, I decided to present my grading criteria and standards in the form of a rubric.

The rubric is divided into 5 criteria which are set out in 5 rows:

  • Structure
  • Clarity
  • Research
  • Argument
  • Scholarship.

For each criterion, there are four performance levels, set out in columns: Poor, Good, Merit and Excellent. An essay is mapped along each row against these columns. The final mark depends on how the student has performed on each criterion, as well as on my perception of the output as a whole.
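To make the mapping concrete, the combination of criterion-level judgements into a mark can be sketched in code. This is a minimal, hypothetical illustration: the band midpoints and the equal weighting of the five criteria are my own assumptions for the sketch, not the actual scheme, which also includes a discretionary holistic element.

```python
# Illustrative sketch only: band midpoints and equal weighting are assumptions.
LEVEL_MARKS = {"Poor": 35, "Good": 55, "Merit": 65, "Excellent": 75}

CRITERIA = ["Structure", "Clarity", "Research", "Argument", "Scholarship"]

def indicative_mark(levels):
    """levels: dict mapping each criterion to a performance level."""
    missing = [c for c in CRITERIA if c not in levels]
    if missing:
        raise ValueError(f"No level given for: {missing}")
    # Equal weighting across the five criteria (an assumption).
    return sum(LEVEL_MARKS[levels[c]] for c in CRITERIA) / len(CRITERIA)

essay = {
    "Structure": "Merit",
    "Clarity": "Good",
    "Research": "Excellent",
    "Argument": "Merit",
    "Scholarship": "Good",
}
print(indicative_mark(essay))  # 63.0
```

In practice, as noted above, a weak showing on one criterion (such as structure) may pull the grade below what a simple average suggests, so any mechanical combination is only a starting point for the marker's discretion.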

Studies suggest that a rubric is most effective when produced in collaboration with the students (Andrade, Du and Mycek, 2010). When I created my rubric, however, I did not involve my students. I thought that would not be necessary given that my rubric was to be applied generally and with changing cohorts of students. Notwithstanding, I wanted students to engage with it. So, the document containing the rubric has an introduction addressed to the students, which explains the context in which the rubric has been created. It also explains how the rubric is applied and the relationship between the criteria. It states, for example, that ‘even where the essay has good arguments, poor structure may undermine its score’. It explains that the final grade combines an objective assessment against the criteria with a subjective evaluation of the output as a whole, which is based on the marker’s discretion.

To ensure that students are not confused about the standards set out in the rubric and the assessment standards set out in the students’ handbook, the performance levels set out in the rubric are mapped against the assessment standards set out in the student handbook. The document containing the rubric also contains links to the relevant handbook. Finally, the rubric gives the students an example of how it would be applied to an assessment. Thereafter, it sets out the manner in which feedback would be presented to the students. That helps me create a structure in which feedback would be provided and which both the students and I would understand clearly.


My students’ assessment outputs have been of much better quality, and have achieved better grades, since I introduced the rubric. In one of my modules, the average grade, as recorded in the module convenor’s report to the external examiner (MC’s Report), 2015/16, was 64.3%. 20% of the class attained distinctions, all in the 70-79 range. That year, I struggled to give feedback and was asked to provide additional feedback comments to a few students. In 2016/17, after I introduced the rubric, there was a slight dip in the average mark to 63.7%. The dip was caused by a fail mark within the cohort; if that fail mark is controlled for, the average had crept up from 2015/16. There was a clear increase in the percentage of distinctions, which rose to 25.8% from 20% in the previous year. The cross-over came from students who had been in the merit range. Clearly, some students had been able to use the rubric to improve the standards of their essays. I found the provision of feedback much easier in 2016/17 because I had clear direction from the rubric. When giving feedback I explained both the strengths and weaknesses of the essay in relation to each criterion. My hope was that students would apply the advice more generally across other modules, as the method of assessment is the same across the board. In 2017/18, the average mark for the same module went up to 68.84%. 38% of the class attained distinctions, with 3% attaining more than 80%. Hence, in my third year, I have also been able to utilise step-marking in the distinction grade, which has enabled me to meet the university’s policy.

When I introduced the rubric in 2016/17, I had a control module, by which I mean a module in which I neither provided the rubric nor spoke to the students about their assessments in detail. The quality of assessments from that module was much lower than in the others, where the students had been introduced to the rubric. In that year, the average grade for the control module was 60%, with 20% attaining a distinction and 20% failing. In 2017/18, while I did not provide the students with the rubric, I spoke to them about the assessments. The average grade for the control module was 61.2%, with 23% attaining a distinction. There was a reduction in the failure rate to 7.6%. The distinction grade also expanded, with 7.6% attaining a higher distinction grade. There was movement both from the failure grade and from the pass grade to the next standard/performance level. Though I did not provide the students with the rubric, I still provided feedback to them using the rubric as a guide. I have found that it has become ingrained in me and is a very useful tool for explaining the reasons for my grades to my students.

From my experience, I can assert, justifiably, that the rubric has played a very important role in improving the students’ essay outputs. It has also enabled me to improve my feedback skills immensely.


I have observed that, as studies in the field argue, it is insufficient merely to have a rubric. For the rubric to achieve the desired objectives, it is important that students actively engage with it. I must admit that I did not take a genuinely constructivist approach to the rubric. I wanted to explain myself to the students, but I did not really encourage the two-way conversation that the studies encourage, and I think this affected the effectiveness of the rubric.

In 2017/18, I decided to talk the students through the rubric, explaining how they can use it to improve performance. I led them through the rubric in the final or penultimate class. During the session, I explained how they might align their essays with the various performance levels/standards. I gave them insights into some of the essays I had assessed in the previous two years; highlighting which practices were poor and which were best. By the end of the autumn term, the first module in which I had both the rubric and an explanation of its application in class saw a huge improvement in student output as set out in the section above. The results have been the best I have ever had. As the standards have improved, so have the grades. As stated above, I have been able to achieve step-marking in the distinction grade while improving standards generally.

I have also noticed that even where a rubric is not used, but the teacher talks to the students about the assessments and their expectations of them, students perform better than where there is no conversation at all. In 2017/18, while I did not provide the rubric to the control module, I discussed the assessment with the students, explaining practices which they might find helpful. As demonstrated above, there was a lower failure rate and improvement generally across the board. I can conclude, therefore, that assessment criteria ought to be explained much better to students if their performance is to improve. However, I think that having a rubric, and student engagement with it, is the best option.

I have also noticed that many students tend to perform well, in the merit bracket, and would like to improve but are unable to decipher how to do so. These students, in particular, find the rubric very helpful.

In addition, Wolf and Stevens (2007) observe that rubrics are particularly helpful for international students, whose previous assessment systems may have been different from, though no less valid than, the system in which they have presently chosen to study. Such students may struggle to understand what is expected of them and so may fail to attain the best standards/performance levels that they could, for lack of understanding of the assessment practices. A large proportion of my students are international, and I think that they have benefitted from having the rubric, particularly when they are invited to engage with it actively.

Finally, the rubric has improved my feedback skills tremendously. I am able to express my observations and grades in terms well understood both by myself and my students. The provision of feedback is no longer a chore or a bore. It has actually become quite enjoyable for me.


On publishing the rubric to students:

I know that Blackboard gives the opportunity to embed a rubric within each module. So far, I have only uploaded copies of my rubric onto Blackboard for the students on each of my modules. I have decided to explore the embedding option to make the annual upload of the rubric more efficient. I will also see if Blackboard offers opportunities to improve on the rubric, which will be a couple of years old by the end of this academic year.

On the Implementation of the rubric:

I have noted, however, that it takes about half an hour to explain the rubric to students for each module, which eats into valuable teaching time. A more efficient method is required to provide good assessment insight to students. This summer, in my role as examination officer, I will liaise with my colleagues to discuss the provision of a best-practice session for our students in relation to their assessments. At the session, students will also be introduced to the rubric. The rubric can then be paired with actual illustrations, which the students can be encouraged to grade using its content. Such sessions will improve their ability to self-evaluate, which is crucial both to their learning and to the improvement of their outputs.


  • K Wolf and E Stevens (2007) 7(1) Journal of Effective Teaching, 3.
  • H Andrade, Y Du and K Mycek, ‘Rubric-Referenced Self-Assessment and Middle School Students’ Writing’ (2010) 17(2) Assessment in Education: Principles, Policy & Practice, 199.
  • S Brookhart, How to Create and Use Rubrics for Formative Assessment and Grading (Association for Supervision & Curriculum Development, ASCD, VA, 2013).
  • Turnitin, ‘Rubrics and Grading Forms’.
  • Blackboard, ‘Grade with Rubrics’.
  • Blackboard, ‘Import and Export Rubrics’.

An evaluation of online systems of peer assessment for group work

Cathy Hughes and Heike Bruton, Henley Business School


Online peer assessment systems were evaluated for their suitability as a platform for conducting peer assessment in the context of group work.


  • To establish the criteria against which peer assessment systems should be evaluated.
  • To evaluate the suitability of online systems of peer assessment.
  • To provide a way forward for Henley Business School to develop peer assessment for group work.


There are many well-documented benefits of group work for students. Given the recognised issue that members of a group may not contribute equally to a task, and that it can be difficult for tutors to accurately judge the contributions made by individuals within a group, this presents a context in which peer assessment can be utilised, allowing students to assess the process of group work. Within Henley Business School, Cathy Hughes has utilised peer assessment for group work in Real Estate and Planning, and developed a bespoke web-based system to facilitate this. As this system was not sustainable, the project was funded to evaluate the suitability of other web-based peer assessment systems for use at the University.


By first establishing how academics across the University use peer assessment in a range of subjects, it would be possible to establish the criteria against which available online systems of peer assessment for group work could be evaluated. This was done through a series of interviews with academics who already used peer assessment, who volunteered after a call for respondents was made through the T&L distribution list. The eleven interviewees were drawn from seven departments. The interviews revealed that five separate peer assessment systems were in use across the University. These systems had, with one exception, been in use for four years or fewer. Peer assessment at the University of Reading has been utilised at all Parts, for a range of group sizes (between three and ten, depending on the task being performed). While a range of credit weightings were affected by peer assessment (between 1 and 20 credits), no module used peer assessment to contribute 100% of the final mark, though in one case it contributed 90%.

With peer assessment of group work, students may be required to mark their peers against set criteria, or in a more holistic manner whereby students award an overall mark to each of the others in their group. Given the subjective nature of the marking process, peer assessment can be open to abuse, and so interviewees stressed the need to be able to check and moderate marks. All interviewees stated that they collated evidential material which could be referred to in case of dispute.

All systems which were in use generated numerical data on an individual’s performance in group work, but with regard to feedback there were differences in what users required. Some users of peer assessment used the numerical data to construct feedback for students, and in one case students provided their peers with anonymised feedback.

It was apparent from interviews that performing peer assessment requires a large amount of support to be provided by staff.  Other than the system that was in use in Henley Business School and the Department of Chemistry, all systems had students fill out paper forms, with calculations then being performed manually or requiring data to be input into a spreadsheet for manipulation.  This high workload reflected a need to disseminate online peer assessment, in order to reduce the workload of those already conducting peer assessment, and to attempt to lower the barrier to entry for others interested in peer assessment, but unable to accept the increased workload.

With the input from interviewees, it was possible to put together criteria for evaluation of online peer assessment systems:

  1. Pedagogy:
    • Any systems must provide a fair and valid method for distinguishing between contributions to group work.
  2. Flexibility:
    • Peer assessment is used in different settings for different types of group work. The methods used vary on several dimensions, such as:
      1. Whether holistic or criteria based.
      2. The amount of adjustment to be made to the group mark.
      3. The nature of the grading required of students, such as use of a Likert scale, or splitting marks between the group.
      4. Whether written comments are required from the students along with a numerical grading of their peers.
      5. The detail and nature of feedback that is given to students such as: grade or comment on group performance as a whole; the performance of the student against individual criteria; further explanatory comments received from students or given by academics.
    • Therefore any system must be flexible and capable of adapting to these environments.
  3. Control:
    • Academics require some control over the resulting marks from peer assessment. While the online peer assessment tool will calculate marks, these will have to be visible to tutors, and academics have to have the ability to moderate these.
  4. Ease of use:
    • Given the amount of work involved in running peer assessment of group work, it is necessary for any online system to be both easy to use by staff and reduce their workload. The other aspect of this is ease of use for the student. The current schemes in use may be work-intensive for staff, but they do have the benefit of providing ease of use for students.
  5. Incorporation of evidence:
    • The collection of evidence to support and validate marks provided under peer assessment would ideally be part of any online system.
  6. Technical integration and support:
    • An online peer assessment system must be capable of being supported by the University in terms of IT and training.
  7. Security:
    • Given the nature of the data, the system must be secure.

Four online peer assessment systems were analysed against these criteria: iPeer, SPARKplus, WebPA, and the bespoke peer assessment system created for use in Real Estate and Planning.


A brief overview of the findings is as follows:

iPeer
While iPeer can be used to collect data for the purposes of evaluation, unlike the other systems evaluated it leaves the manipulation and interpretation of that data to the tutor, thus maintaining some of the workload that it was hoped would be avoided. While its ease of use was good for both staff and students, there were limits to what it was possible to achieve using iPeer, and supporting documentation was difficult to access.

SPARKplus
SPARKplus is a versatile tool for the conduct of online peer assessment, allowing students to be marked against specific criteria or in a more holistic manner, and generating a score based upon their peer-assessed contribution to group work and the tutor’s assessment of what the group produces. There were, however, disadvantages: SPARKplus does not allow for the gathering of additional evidential material, and it was difficult at the time of the evidence gathering to find information about the system. In addition, although SPARKplus is an online system, it cannot be incorporated into Blackboard Learn, which might have clarified its suitability.

WebPA
For WebPA there was a great deal of documentation available, aiding its evaluation. It appeared to be easy to use, and can be incorporated into Blackboard Learn. The main disadvantages of using WebPA were that it does not allow evidential data to be gathered, and that there is no capacity for written comments to be shared with students, as these are only visible to the tutor.

Bespoke REP system

The bespoke online peer assessment system developed within Real Estate and Planning and also used in the Department of Chemistry is similar to WebPA in terms of the underpinning scoring algorithm, and has the added advantage of allowing the collection of evidential material. Its main disadvantage is that it is comparatively difficult to configure, requiring a reasonable level of competence with Microsoft Excel. Additionally, technical support for the system is reliant on the University of Reading Information Technology Services.
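To give a flavour of what such a scoring algorithm does, here is a sketch broadly in the spirit of WebPA’s published approach (not the actual code of either system): each assessor’s raw scores are normalized to fractions of their total, summed per recipient, and applied to the tutor’s group mark. The names and raw scores below are invented for illustration.

```python
def peer_weighted_marks(ratings, group_mark):
    """ratings: {assessor: {assessee: raw score}}; every member assesses
    every member, including themselves. Returns individual marks."""
    members = list(ratings)
    # Normalize each assessor's scores so they sum to 1, removing the
    # effect of generous or harsh raw scoring.
    fractions = {m: 0.0 for m in members}
    for given in ratings.values():
        total = sum(given.values())
        for assessee, score in given.items():
            fractions[assessee] += score / total
    # With n members all assessing, an equal contributor accumulates a
    # weighting of 1.0, so their mark equals the group mark.
    weights = {m: f * len(members) / len(ratings) for m, f in fractions.items()}
    return {m: round(group_mark * w, 1) for m, w in weights.items()}

ratings = {
    "Ann": {"Ann": 4, "Bob": 4, "Cal": 4},
    "Bob": {"Ann": 5, "Bob": 4, "Cal": 3},
    "Cal": {"Ann": 4, "Bob": 4, "Cal": 4},
}
print(peer_weighted_marks(ratings, 60))  # {'Ann': 65.0, 'Bob': 60.0, 'Cal': 55.0}
```

The normalization step is one reason such algorithms are preferred to averaging raw marks; moderation by the tutor, as the interviewees stressed, still sits on top of any automatic calculation.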


Reviewing assessment and feedback in Part One: getting assessment and feedback right with large classes

Dr Natasha Barrett, School of Biological Sciences
Year(s) of activity: 2010/11


  • Review the quantity, type and timing of assessments carried out in compulsory modules taken by students in the School of Biological Sciences.
  • Recommend better practices for assessment and feedback.


The massification and marketisation of Higher Education mean that it is increasingly important that the University of Reading perform well in terms of student satisfaction and academic results. The National Student Surveys between 2005 and 2011 and the Reading Student Survey of 2008 both indicated that assessment and feedback were areas in which the University of Reading and the School of Biological Sciences needed to improve.


Managing transition to the MPharm Degree

Dr John Brazier, Chemistry, Food and Pharmacy


The MPharm degree at the University of Reading has a diverse student cohort, in terms of both ethnicity and previous academic experience. During the most recent development of our programme, we have introduced a Part One assessment strategy that is focused on developing an independent learning approach.


  • To use a formative assessment strategy to encourage independent learning.
  • To use timetabling to ease the transition to higher education.
  • To reduce students’ fixation on their grades, and encourage them to instead focus on feedback.


It was clear from Part Two results that our students were not progressing from Part One with the necessary knowledge and skill set to succeed on the MPharm course. The ability to pass Part One modules while underperforming in exams was identified as a key issue. The students’ reliance on the standard information provided during lectures, and their inability to study beyond it, was impacting on their final grades.


When designing our programme, we introduced a requirement not only to pass each module at 40%, but also to pass each examination with a mark of at least 40%. It was felt that this would ensure that students in Part Two were equipped with the basic knowledge to succeed, and allow them to concentrate on developing the higher-level skills required for Parts Three and Four, rather than having to return to Part One material because of gaps in their knowledge. The requirement to pass the examination with a mark of at least 40% was a challenge; we therefore developed a formative/diagnostic assessment strategy to support the students throughout the year. In order to ease the transition from further education to university level, we designed a timetable that required students to attend teaching sessions intensively for the first five weeks, with contact time then reducing gradually over the following weeks and terms. This allowed us to direct their learning during the first few weeks of term, and then gave them time to develop their independence once familiar with university life. Diagnostic and formative assessment points were spaced throughout the two teaching terms, starting with in-class workshops, tutorials and online Blackboard tests. Towards the end of the Autumn term, the students were given an open-book mock examination, followed by an opportunity to mark their own work with direction from an academic. This approach continued in the Spring term, and culminated in a full two-hour mock examination at the end of the Spring term, which was marked and returned with feedback before the end of term.


As suspected, the level of progression at first attempt was considerably lower than desired, with a high number of students failing the examined component. With resits, the number that failed to progress was much lower, and attrition rates for this cohort at Part Two were substantially lower still. Forcing the students to gain a high baseline of knowledge and understanding in Part One put them in a better position for Part Two, and the high pass rate at Part One resits showed the students must have developed some independent learning skills, as they did not have access to direct teaching between the main exams and the resits.


The main issue now facing us is the high number of students failing to progress at first attempt. We believe this is due to a combination of poor attendance and engagement from the Part One students, along with a lack of understanding about developing independent study skills. Although we expect students to develop independence with their learning, it is clear that some do not understand what this means, or how to approach their studies. Once the students pass Part One they continue to do well at Parts Two and Three, but we need to address the issues with progression at Part One.

Follow up

In order to improve our pass rate at Part One, we plan to develop a more robust process to identify and support students who are failing to engage with the course. This will be through comprehensive attendance monitoring and follow up by personal tutors, along with clear communication about expectations and independence. Students will initially get guidance on what they should have covered during timetabled teaching sessions, along with suggested independent work. As the year progresses, this guidance will become less detailed in order to further promote independence.

Engaging Diverse Learning Communities in Partnership: A Case Study Involving Professional Practice Students in Re-designing an Assessment

Lucy Hart (student – trainee PWP)

Tamara Wiehe (staff – PWP Clinical Educator)

Charlie Waller Institute, School of Psychology and Clinical Language Sciences


This case study re-designed an assessment for two Higher Education programmes where students train to become Psychological Wellbeing Practitioners (PWPs) in the NHS. The use of remote methods engaged harder-to-reach students in the re-design of the assessment tool. The project demonstrates the effectiveness of partnership working across diverse learning communities, placing student views at the centre of decision making. In line with one of the University’s principles of partnership (2018) – shared responsibility for the process and outcome – this blog has been created by a student involved in the focus group and the member of teaching staff leading the project.


  • Improve the design of an assessment across the University’s PWP training programmes.
  • Involve students throughout the re-design process, ensuring student voices and experiences are acknowledged.
  • Implement the new assessment design with the next cohorts.


It was proposed by students in modular feedback and staff in a quarterly meeting that the design of an assessment on the PWP training programmes could be improved. These programmes are grounded in evidence-based, self-reflective and collaborative practice. Therefore, it was appropriate to maintain this style of working throughout the process. This was achieved through the students reflecting on their experiences when generating ideas and reviewing the re-designed assessment.


Traditional methods of partnership were not suitable for our students due to the nature of the PWP training programmes. Their week consists of one teaching day running from 9:30-4:30, a study day and three days practising clinically as a trainee PWP in an NHS service. Location was another factor as many of our students commute to University and live closer to their workplace. The use of technology and remote working enabled us to overcome these barriers and work in partnership with our students.

The partnership process followed these three steps:

When generating ideas and reviewing the proposed assessment, we, the professional practice students, considered the following points:

  • Assessment design – consistency in using vignettes throughout the course, meaning students will be familiar with this method of working. A word limit ensures concise responses.
  • Time frame – the release date of the essay in proportion to the examination date.
  • Feasibility – will there be enough study days to compensate for the change in design, allowing trainees to plan their essays?
  • Academic support – opportunities within the academic timetable to provide additional supervision-style sessions later in the module to support students.
  • Learning materials – accessibility of resources on Blackboard. Assigning study days to allow planning of the essay.


  • It was agreed that the original ICT would be replaced with written coursework based on a vignette and implemented with our next cohorts.
  • The assessment aligned with the module learning outcomes and student experiences were considered in a meaningful way.
  • Harder to reach students were able to engage in the re-design of the assessment through effective communication methods.


Student perspective:

“Being the expert of our experiences, it was refreshing to have our voices and experiences heard. We hope the re-design supports future cohorts and reduces anxieties around managing both university and service-based training. The focus group was a success due to the clear agenda setting and feasibility of remote online working. It can be proposed that a larger focus group would have been beneficial during the review stage to remove biases associated with a small sample size.”

Staff perspective:

“Student input allowed us to hear more about their experiences during the training and took a lot of pressure off of staff to always be the ones coming up with solutions. The outcomes have a far reaching impact beyond that of the students and staff on the programme in terms of engaging diverse learning communities in Higher Education and forming more connections between Universities and NHS services. Although inclusivity and diversity was considered throughout, more participants in the virtual focus group would improve this further. Students could also have more power over the creation of the assessment materials themselves. Both of these reflections will inform my professional practice going forwards.”