Improving assessment writing and grading skills through the use of a rubric – Dr Bolanle Adebola

Dr Bolanle Adebola is the Module Convenor and lecturer for the following modules on the LLM Programme (On campus and distance learning):

International Commercial Arbitration, Corporate Governance, and Corporate Finance. She is also a Lecturer for the LLB Research Placement Project.

Bolanle is also the Legal Practice Liaison Officer for the CCLFR.


OBJECTIVES

For students:

• To make the assessment criteria more transparent and understandable.
• To improve assessment output and essay writing skills generally.

For the teacher:

• To facilitate assessment grading by setting clearly defined criteria.
• To facilitate the feedback process by creating a framework for dialogue which is understood both by the teacher and the student.

CONTEXT

I faced a number of challenges in relation to the assessment process in my first year as a lecturer:

• My students had not performed as well as I would have liked them to in their assessments.

• It was my first time having to justify the grades I had awarded, and I found that I struggled to articulate clearly and consistently the reasons for some of them.

• I had been newly introduced to the step-marking framework for distinction grades, as well as the requirement to make full use of the grading scale, which I found challenging in view of the quality of some of the essays I had graded.

I spoke to several colleagues but came to understand that there were as many approaches as there were people. I also discussed the assessment process with several of my students and came to understand that many were unclear about the criteria by which their assessments were graded across their modules.
I concluded that I needed to build a bridge between my approach to assessment grading and my students’ understanding of the assessment criteria. Ideally, the chosen method would facilitate consistency and the provision of feedback on my part, and improve the quality of essays on my students’ part.

IMPLEMENTATION

I tend towards the constructivist approach to learning, which means that I structure my activities towards promoting student-led learning. For summative assessments, my students are required to demonstrate their understanding of, and ability to critically appraise, legal concepts that I have chosen from our sessions in class. Hence, the main output for all summative assessments on my modules is an essay. Wolf and Stevens (2007) assert that learning is best achieved where all the participants in the process are clear about the criteria for the performance and the levels at which it will be assessed. My goal therefore became to ensure that my students understood the elements I looked for in their essays, these being the criteria against which I graded them. They also had to understand how I decided the standards that their essays reflected. While the student handbook sets out the various standards that we apply in the University, I wanted to provide clearer direction on how an essay could meet, and how I determine that it meets, any of those standards.

If the students were to understand the criteria I apply when grading their essays, then I would have to articulate them. Articulating the criteria for a well-written essay would benefit both myself and my students. For my students, in addition to a clearer understanding of the assessment criteria, it would enable them to self-evaluate, which would improve the quality of their output. Improved quality would lead to improved grades, and I could give effect to university policy. Articulating the criteria would benefit me because it would facilitate consistency. It would also enable me to give detailed and helpful feedback to students on the strengths and weaknesses of the essays being graded, as well as on their essay writing skills in general, with advice on how to improve different facets of their outputs going forward. Ultimately, my students would learn valuable skills which they could apply across the board and after they graduate.
For assessments which require some form of performance, essays being an example, a rubric is an excellent evaluation tool because it fulfils all the requirements I have expressed above (Brookhart, 2013). Hence, I decided to present my grading criteria and standards in the form of a rubric.

The rubric is divided into 5 criteria which are set out in 5 rows:

  • Structure
  • Clarity
  • Research
  • Argument
  • Scholarship

For each criterion, there are 4 performance levels which are set out in columns: Poor, Good, Merit and Excellent. An essay will be mapped along each row and column. The final mark will depend on how the student has performed on each criterion, as well as on my perception of the output as a whole.
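As a rough illustration of how such a grid works, the rubric can be modelled as criteria mapped to performance levels. This is a minimal sketch in Python, assuming invented numeric bands and an invented discretionary adjustment; it is not the actual marking scheme described here:

```python
# A toy model of the rubric's structure. The five criteria and four
# performance levels come from the case study; the numeric bands and the
# discretionary adjustment are invented for illustration only.
CRITERIA = ["Structure", "Clarity", "Research", "Argument", "Scholarship"]
LEVEL_BANDS = {"Poor": 45, "Good": 58, "Merit": 65, "Excellent": 74}  # hypothetical midpoints

def grade_essay(levels_awarded: dict, holistic_adjustment: int = 0) -> float:
    """Map the essay along each criterion row, average the bands, then apply
    the marker's discretionary (holistic) adjustment."""
    per_criterion = [LEVEL_BANDS[levels_awarded[c]] for c in CRITERIA]
    return sum(per_criterion) / len(per_criterion) + holistic_adjustment

essay = {"Structure": "Good", "Clarity": "Merit", "Research": "Merit",
         "Argument": "Excellent", "Scholarship": "Good"}
print(grade_essay(essay, holistic_adjustment=2))  # 64.0 + 2 = 66.0
```

The sketch also captures the interaction noted in the rubric's introduction: a low band on one criterion (for example Structure) drags the averaged mark down even where Argument is excellent.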

Studies suggest that a rubric is most effective when produced in collaboration with the students (Andrade, Du and Mycek, 2010). However, I did not involve my students when I created my rubric. I thought that would not be necessary given that my rubric was to be applied generally and with changing cohorts of students. Notwithstanding, I wanted students to engage with it. So, the document containing the rubric has an introduction addressed to the students, which explains the context in which the rubric has been created. It also explains how the rubric is applied and the relationship between the criteria. It states, for example, that ‘even where the essay has good arguments, poor structure may undermine its score’. It explains that the final grade combines an objective assessment against the criteria with a subjective evaluation of the output as a whole, which is based on the marker’s discretion.

To ensure that students are not confused about the standards set out in the rubric and the assessment standards set out in the students’ handbook, the performance levels set out in the rubric are mapped against the assessment standards set out in the student handbook. The document containing the rubric also contains links to the relevant handbook. Finally, the rubric gives the students an example of how it would be applied to an assessment. Thereafter, it sets out the manner in which feedback would be presented to the students. That helps me provide feedback within a structure that both the students and I understand clearly.

IMPACT

My students’ assessment outputs have been of much better quality, and have therefore achieved better grades, since I introduced the rubric. In one of my modules, the average grade, as recorded in the module convenor’s report to the external examiner (MC’s Report) for 2015/16, was 64.3%. 20% of the class attained distinctions, all in the 70-79 range. That year, I struggled to give feedback and was asked to provide additional feedback comments to a few students. In 2016/17, after I introduced the rubric, there was a slight dip in the average mark to 63.7%. The dip was because of a fail mark amongst the cohort; if that fail mark is controlled for, the average had crept up from 2015/16. There was a clear increase in the percentage of distinctions, which had gone up to 25.8% from 20% in the previous year. The cross-over came from students who had been in the merit range. Clearly, some students had been able to use the rubric to improve the standards of their essays. I found the provision of feedback much easier in 2016/17 because I had clear direction from the rubric. When giving feedback I explained both the strengths and weaknesses of the essay in relation to each criterion. My hope was that students would apply the advice more generally across other modules, as the method of assessment is the same across the board. In 2017/18, the average mark for the same module went up to 68.84%. 38% of the class attained distinctions, with 3% attaining more than 80%. Hence, in my third year, I have also been able to utilise step-marking in the distinction grade, which has enabled me to meet the university’s policy.
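The effect of controlling for the fail mark is simple arithmetic. A minimal sketch, with an invented cohort (the real marks are not published in the case study), shows how a single fail can pull the mean below the previous year’s figure even when every other essay improved:

```python
# Hypothetical cohort: 30 improved essays plus one fail (all values invented).
marks = [65] * 30 + [25]

mean_with_fail = sum(marks) / len(marks)   # mean including the fail
mean_without_fail = sum(marks[:-1]) / 30   # mean with the fail controlled for

print(f"with fail: {mean_with_fail:.1f}%")      # ~63.7%
print(f"without fail: {mean_without_fail:.1f}%")  # 65.0%
```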

When I introduced the rubric in 2016/17, I had a control module, by which I mean a module in which I neither provided the rubric nor spoke to the students about their assessments in detail. The quality of assessments from that module was much lower than the others where the students had been introduced to the rubric. In that year, the average grade for the control module was 60%, with 20% attaining a distinction and 20% failing. In 2017/18, while I did not provide the students with the rubric, I spoke to them about the assessments. The average grade for the control module was 61.2%, with 23% attaining a distinction. There was a reduction in the failure rate to 7.6%. The distinction grade also expanded, with 7.6% attaining a higher distinction grade. There was movement both from the failure grade and the pass grade to the next standard/performance level. Though I did not provide the students with the rubric, I still provided feedback to the students using the rubric as a guide. I have found that it has become ingrained in me and is a very useful tool for explaining the reasons for my grades to my students.

From my experience, I can assert, justifiably, that the rubric has played a very important role in improving the students’ essay outputs. It has also enabled me to improve my feedback skills immensely.

REFLECTIONS

I have observed that, as the studies in the field argue, it is insufficient merely to have a rubric. For the rubric to achieve the desired objectives, it is important that students actively engage with it. I must admit that I did not take a genuinely constructivist approach to the rubric. I wanted to explain myself to the students. I did not really encourage a two-way conversation, as the studies recommend, and I think this affected the effectiveness of the rubric.

In 2017/18, I decided to talk the students through the rubric, explaining how they can use it to improve performance. I led them through the rubric in the final or penultimate class. During the session, I explained how they might align their essays with the various performance levels/standards. I gave them insights into some of the essays I had assessed in the previous two years; highlighting which practices were poor and which were best. By the end of the autumn term, the first module in which I had both the rubric and an explanation of its application in class saw a huge improvement in student output as set out in the section above. The results have been the best I have ever had. As the standards have improved, so have the grades. As stated above, I have been able to achieve step-marking in the distinction grade while improving standards generally.

I have also noticed that even where a rubric is not used, but the teacher talks to the students about the assessments and their expectations of them, students perform better than where there is no conversation at all. In 2017/18, while I did not provide the rubric to the control module, I discussed the assessment with the students, explaining practices which they might find helpful. As demonstrated above, there was a lower failure rate and improvement generally across the board. I can conclude, therefore, that assessment criteria ought to be explained much better to students if their performance is to improve. However, I think that having a rubric, and student engagement with it, is the best option.

I have also noticed that many students tend to perform well, in the merit bracket. These students would like to improve but are unable to work out how to do so. Such students, in particular, find the rubric very helpful.

In addition, Wolf and Stevens (2007) observe that rubrics are particularly helpful for international students whose assessment systems may have been different, though no less valid, from that of the system in which they have presently chosen to study. Such students struggle to understand what is expected of them and so, may fail to attain the best standards/performance levels that they could for lack of understanding of the assessment practices. A large proportion of my students are international, and I think that they have benefitted from having the rubric; particularly when they are invited to engage with it actively.

Finally, the rubric has improved my feedback skills tremendously. I am able to express my observations and grades in terms well understood both by myself and my students. The provision of feedback is no longer a chore or a bore. It has actually become quite enjoyable for me.

FOLLOW UP

On publishing the rubric to students:

I know that Blackboard gives the opportunity to embed a rubric within each module. So far, I have only uploaded copies of my rubric onto Blackboard for the students on each of my modules. I have decided to explore the Blackboard option to make the annual upload of the rubric more efficient. I will also see whether Blackboard offers opportunities to improve on the rubric, which will be a couple of years old by the end of this academic year.

On the Implementation of the rubric:

I have noted, however, that it takes about half an hour per module to explain the rubric to students, which eats into valuable teaching time. A more efficient method is required to provide good assessment insight to students. This summer, as the examinations officer, I will liaise with my colleagues to discuss the provision of a best practice session for our students in relation to their assessments. At the session, students will also be introduced to the rubric. The rubric can then be paired with actual illustrations, which the students can be encouraged to grade using its content. Such sessions will improve their ability to self-evaluate, which is crucial both to their learning and to the improvement of their outputs.

LINKS

• K Wolf and E Stevens (2007) 7(1) Journal of Effective Teaching 3. https://www.uncw.edu/jet/articles/vol7_1/Wolf.pdf
• H Andrade, Y Du and K Mycek, ‘Rubric-Referenced Self-Assessment and Middle School Students’ Writing’ (2010) 17(2) Assessment in Education: Principles, Policy & Practice 199. https://www.tandfonline.com/doi/pdf/10.1080/09695941003696172?needAccess=true
• S Brookhart, How to Create and Use Rubrics for Formative Assessment and Grading (Association for Supervision & Curriculum Development, ASCD, VA, 2013).
• Turnitin, ‘Rubrics and Grading Forms’ https://guides.turnitin.com/01_Manuals_and_Guides/Instructor_Guides/Turnitin_Classic_(Deprecated)/25_GradeMark/Rubrics_and_Grading_Forms
• Blackboard, ‘Grade with Rubrics’ https://help.blackboard.com/Learn/Instructor/Grade/Rubrics/Grade_with_Rubrics
• Blackboard, ‘Import and Export Rubrics’ https://help.blackboard.com/Learn/Instructor/Grade/Rubrics/Import_and_Export_Rubrics

Interdisciplinary teaching: Science in Culture

Professor Nick Battey, School of Biological Sciences
n.h.battey@reading.ac.uk

Overview

A module for Part Three students, Science in Culture, was created through a collaborative effort between the Department of English Literature, the Department of History, and the School of Biological Sciences (SBS). This module was well received by students, who found value in obtaining the perspective of disciplines other than their own, and in experiencing teaching and learning methods outside the norm of their previous study.

Objectives

  • Offer a truly interdisciplinary module allowing students from English Literature, History, and SBS to study alongside one another, learning through the diverse teaching methods of science and the humanities.
  • Develop in students a broader, critical understanding of the precepts of science.
  • Provide an integrated view of science (with emphasis on Biological Sciences) within culture.

Context

The development of this collaborative module grew out of an Arts and Humanities Research Council sponsored project which looked at the value of literary and historical study of biology to students of biological sciences. An element of this was a workshop, ‘Cultivating Common Ground’, which aimed to foster interdisciplinary discussion between biology and the humanities. One of the key findings of the scoping study was that it would be beneficial to develop at least one module that taught both biology and humanities students alongside one another in an interdisciplinary way.

Implementation

The module was developed over a number of years by staff from SBS, English and History. The module designers from the different disciplines were determined to ensure that what was developed was a truly interdisciplinary module, breaking down the perceived divide between the sciences and the humanities, and showing how the different approaches and bodies of knowledge bear on the same questions.

The module is taught over one term. Students receive lectures and partake in seminar discussions on a historical, literary, or scientific concept, and also conduct lab work on subjects related to those explored in the lectures. As an example, in lab work students will identify a mutated gene and explore the use of mutations for understanding how genes work. This topic of mutation can then be explored in its literary and historical contexts. The differences between the scientific, literary and historical approaches can then be explored as a cultural challenge. From the ‘Cultivating Common Ground’ workshop, a consensus had emerged that interdisciplinary learning and teaching needed to be ‘narrow and deep’. As a result, the module focuses on a defined set of ‘problems’, rather than ‘grand themes’, allowing a deeper exploration of each problem and its situation within the cultural dynamics and methods of science.

In order to ensure students experienced different ways of learning, they were given a variety of tasks, ranging from interpreting poems to discussing the history of a scientific process, which they recorded in a learning journal that was marked and given feedback by tutors each week. While the completion of this task over the course of the module was an aspect of the summative assessment, the weekly feedback provided regular formative feedback to students. A focus on formative assessment was recognised as being important by the scoping study, as students on such an interdisciplinary module would require greater opportunity to learn what was expected of them. Linking formative assessment to the summative assessment ensured that students would be motivated to engage and receive valuable feedback. Students taking the module as part of a History or English Literature degree, for whom the module was worth 20 credits rather than 10, also wrote a summative essay.

Impact

The project was successful in delivering a truly interdisciplinary module, with collaboration between the School of Biological Sciences, the Department of English Literature and the Department of History. The module was well received by students, who reported that they appreciated the value of getting different perspectives on their disciplines.

Reflections

The greatest challenge in creating this module was achieving interdisciplinarity, as the teaching and learning strategies best suited to the individual disciplines were not necessarily suited to the teaching of an interdisciplinary module. That the module was in development for a number of years reflects the difficulty of developing an interdisciplinary approach, a difficulty made greater by the paucity of existing literature on the topic from which to draw suitable practices. As a result, there had to be a number of iterative developments in order to create a module that could be delivered in a way which best achieved its learning outcomes.

Interdisciplinarity also posed a challenge with regard to the marking of assessments. As each discipline has different expectations, it was necessary for marking to be a collaborative process, with compromise being reached between assessors.

While the provision of multiple opportunities for formative assessment and feedback had value, given that it helped introduce students to the other disciplines, and encouraged deep learning, the process was strenuous, for both students and staff.

Because the module was interdisciplinary, students had to engage with topics and processes outside the norm of their previous academic study. As a result, despite their enjoyment and high attainment, students on the module did find it challenging.

Follow up

Following the successful running of the module during the 2014-15 academic year, the module has been offered again, with slight revisions. One of the revisions has been in assessment, with students producing a report at the end of the module, rather than creating a learning portfolio over the course of the module, thus somewhat reducing the workload of staff and students. A group presentation has also been introduced, providing a different type of assessment, and making interdisciplinary collaborative group work part of summative assessment.


Reviewing assessment and feedback in Part One: getting assessment and feedback right with large classes

Dr Natasha Barrett, School of Biological Sciences
n.e.barrett@reading.ac.uk
Year(s) of activity: 2010/11
Overview

Objectives

  • Review the quantity, type and timing of assessments carried out in compulsory modules taken by students in the School of Biological Sciences.
  • Recommend better practices for assessment and feedback.

Context

The massification and marketisation of Higher Education mean that it is increasingly important that the University of Reading perform well in terms of student satisfaction and academic results. The National Student Surveys between 2005 and 2011 and the Reading Student Survey of 2008 both indicated that assessment and feedback were areas in which the University of Reading and the School of Biological Sciences needed to improve.

Implementation

An evaluation of online systems of peer assessment for group work

Cathy Hughes and Heike Bruton, Henley Business School
catherine.hughes@reading.ac.uk

Overview

Online peer assessment systems were evaluated for their suitability in providing a platform to allow peer assessment to be conducted in the context of group work.

Objectives

  • To establish the criteria against which peer assessment systems should be evaluated.
  • To evaluate the suitability of online systems of peer assessment.
  • To provide a way forward for Henley Business School to develop peer assessment for group work.

Context

There are many well-documented benefits of group work for students. Given the recognised issue that members of a group may not contribute equally to a task, and that it can be difficult for tutors to accurately judge the contributions made by individuals within a group, this presents a context in which peer assessment can be utilised, allowing students to assess the process of group work. Within Henley Business School, Cathy Hughes has utilised peer assessment for group work in Real Estate and Planning, and developed a bespoke web-based system to facilitate this. As this system was not sustainable, the project was funded to evaluate the suitability of other web-based peer assessment systems for use at the University.

Implementation

By first establishing how academics across the University use peer assessment in a range of subjects, it would be possible to establish the criteria against which available online systems of peer assessment for group work could be evaluated. This was done by performing a series of interviews with academics who already used peer assessment, and who volunteered after a call for respondents was made through the T&L distribution list. The eleven interviewees were drawn from seven departments. The interviews revealed that five separate peer assessment systems were in use across the University. With one exception, these systems had been in use for four years or fewer. Peer assessment at the University of Reading has been utilised at all Parts, for a range of group sizes (between three and ten, depending on the task being performed). While a range of credits were affected by peer assessment (between 1 and 20), no module used peer assessment to contribute 100% of the final mark, though in one case it contributed 90%.

With peer assessment of group work, students may be required to mark their peers against set criteria, or in a more holistic manner whereby students award an overall mark to each of the others in their group. Given the subjective nature of the marking process, peer assessment can be open to abuse, and so interviewees stressed the need to be able to check and moderate marks. All interviewees stated that they collated evidential material which could be referred to in cases of dispute.

All systems which were in use generated numerical data on an individual’s performance in group work, but with regard to feedback there were differences in what users required. Some users of peer assessment used the numerical data to construct feedback for students, and in one case students provided their peers with anonymised feedback.

It was apparent from interviews that performing peer assessment requires a large amount of support to be provided by staff.  Other than the system that was in use in Henley Business School and the Department of Chemistry, all systems had students fill out paper forms, with calculations then being performed manually or requiring data to be input into a spreadsheet for manipulation.  This high workload reflected a need to disseminate online peer assessment, in order to reduce the workload of those already conducting peer assessment, and to attempt to lower the barrier to entry for others interested in peer assessment, but unable to accept the increased workload.

With the input from interviewees, it was possible to put together criteria for evaluation of online peer assessment systems:

  1. Pedagogy:
    • Any systems must provide a fair and valid method for distinguishing between contributions to group work.
  2. Flexibility:
    • Peer assessment is used in different settings for different types of group work. The methods used vary on several dimensions, such as:
      1. Whether holistic or criteria based.
      2. The amount of adjustment to be made to the group mark.
      3. The nature of the grading required by students, such as use of a Likert scale, or splitting marks between the group.
      4. Whether written comments are required from the students along with a numerical grading of their peers.
      5. The detail and nature of feedback that is given to students such as: grade or comment on group performance as a whole; the performance of the student against individual criteria; further explanatory comments received from students or given by academics.
    • Therefore any system must be flexible and capable of adapting to these environments.
  3. Control:
    • Academics require some control over the resulting marks from peer assessment. While the online peer assessment tool will calculate marks, these will have to be visible to tutors, and academics have to have the ability to moderate these.
  4. Ease of use:
    • Given the amount of work involved in running peer assessment of group work, it is necessary for any online system to be both easy to use by staff and reduce their workload. The other aspect of this is ease of use for the student. The current schemes in use may be work-intensive for staff, but they do have the benefit of providing ease of use for students.
  5. Incorporation of evidence:
    • The collection of evidence to support and validate marks provided under peer assessment would ideally be part of any online system.
  6. Technical integration and support:
    • An online peer assessment system must be capable of being supported by the University in terms of IT and training.
  7. Security:
    • Given the nature of the data, the system must be secure.

Four online peer assessment systems were analysed against these criteria: iPeer, SPARKplus, WebPA, and the bespoke peer assessment system created for use in Real Estate and Planning.
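Before the findings, it may help to see how a criteria-based comparison of this kind can be made systematic. The sketch below scores hypothetical systems against the seven criteria using a weighted matrix; every name, weight and score is an invented placeholder, not the project’s actual evaluation:

```python
# Hypothetical weighted scoring of candidate systems against the seven
# criteria listed above. All weights and scores are illustrative placeholders.
weights = {
    "pedagogy": 3, "flexibility": 2, "control": 2, "ease of use": 2,
    "incorporation of evidence": 1, "integration and support": 2, "security": 3,
}

candidates = {
    "system_a": {"pedagogy": 3, "flexibility": 2, "control": 2, "ease of use": 2,
                 "incorporation of evidence": 0, "integration and support": 3, "security": 2},
    "system_b": {"pedagogy": 3, "flexibility": 3, "control": 2, "ease of use": 3,
                 "incorporation of evidence": 2, "integration and support": 1, "security": 2},
}

# Total for each system = sum over criteria of (weight x score).
for name, scores in candidates.items():
    total = sum(weights[c] * s for c, s in scores.items())
    print(f"{name}: {total}")
```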

Findings

A brief overview of the findings is as follows:

iPeer

While iPeer can be used to collect data for the purposes of evaluation, unlike the other systems evaluated it leaves the manipulation and interpretation of that data to the tutor, thus maintaining some of the workload that it was hoped would be avoided. While its ease of use was good for staff and students, there were limits to what it was possible to achieve using iPeer, and supporting documentation was difficult to access.

SPARKplus

SPARKplus is a versatile tool for the conduct of online peer assessment, allowing students to be marked against specific criteria or in a more holistic manner, and generating a score based upon their peer-assessed contribution to group work and the tutor’s assessment of what the group produces. There were, however, disadvantages: SPARKplus does not allow for the gathering of additional evidential material, and it was difficult at the time of the evaluation to find information about the system. In addition, while SPARKplus is an online system, it cannot be incorporated into Blackboard Learn, which might otherwise have clarified its suitability.

WebPA

For WebPA there was a great deal of documentation available, aiding its evaluation. It appeared to be easy to use, and can be incorporated into Blackboard Learn. The main disadvantages of using WebPA were that it does not allow evidential data to be gathered, and that there is no capacity for written comments to be shared with students, as these are only visible to the tutor.

Bespoke REP system

The bespoke online peer assessment system developed within Real Estate and Planning and also used in the Department of Chemistry is similar to WebPA in terms of the underpinning scoring algorithm, and has the added advantage of allowing the collection of evidential material. Its main disadvantage is that it is comparatively difficult to configure, requiring a reasonable level of competence with Microsoft Excel. Additionally, technical support for the system is reliant on the University of Reading Information Technology Services.
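For readers unfamiliar with the scoring approach mentioned here, the sketch below shows a simplified WebPA-style calculation: each rater’s scores are normalised so that every rater carries equal influence, and the resulting weighting scales the group mark. This is a simplified illustration of the published WebPA idea, not the exact algorithm of either system:

```python
# Simplified WebPA-style peer weighting (illustrative only).
def webpa_marks(ratings: dict, group_mark: float) -> dict:
    """ratings[rater][member] is the score a rater awarded a member.
    Each rater's scores are normalised to sum to 1, so every rater has
    equal influence; a member's weighting is the sum of the normalised
    scores they receive, rescaled so the average weighting is 1."""
    members = {m for scores in ratings.values() for m in scores}
    weightings = {m: 0.0 for m in members}
    for scores in ratings.values():
        total = sum(scores.values())
        for member, score in scores.items():
            weightings[member] += score / total
    scale = len(members) / len(ratings)  # average weighting becomes 1
    return {m: round(group_mark * w * scale, 1) for m, w in weightings.items()}

# Three students rate everyone in their group (including themselves) out of 5.
ratings = {
    "ann": {"ann": 4, "bob": 4, "cat": 2},
    "bob": {"ann": 5, "bob": 4, "cat": 1},
    "cat": {"ann": 4, "bob": 4, "cat": 4},
}
print(webpa_marks(ratings, group_mark=60.0))
# {'ann': 74.0, 'bob': 68.0, 'cat': 38.0} - marks redistribute around 60
```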

Reflections

Developing the use of the interactive whiteboard for initial teacher trainees (2011-12)

Catherine Foley, Institute of Education
c.m.foley@reading.ac.uk

Overview

With interactive whiteboards having become a well-established feature of English primary school classrooms over the last decade, it is vital that the primary Post-Graduate Certificate of Education (PGCE) programme taught at the University of Reading’s Institute of Education prepares its graduates to be confident and competent in using interactive whiteboard technology in the classroom, including making pedagogically sound, informed decisions about when, when not, and how the interactive whiteboard can enhance learning.

Objectives

    • Explore how trainees can be supported to use the interactive whiteboard in their teaching of mathematics.
    • Gain an informed view of the entry- and exit-level interactive whiteboard skills and understanding of trainees to inform future programme planning.
    • Ensure that the trainee voice is incorporated into developmental planning.
    • Make recommendations regarding embedding the use of interactive whiteboard technology into our wider initial teacher training provision.

Implementation

Initial data collection was conducted through a questionnaire administered towards the end of the trainees’ first week on the programme. This questionnaire was used to gather data on skills and competencies with regard to interactive whiteboard technology.

The results of the initial questionnaire revealed that trainees on the programme generally had little or no experience of using interactive whiteboard technology, and that confidence levels for using the interactive whiteboard for general teaching and learning, and specifically within mathematics lessons, were low. The questionnaire had also asked trainees to rank statements indicating which forms of support would best meet their needs. The most preferred was support with the skills of how to use an interactive whiteboard. Second was that the use of the interactive whiteboard for teaching and learning be modelled within sessions.

On the basis of the questionnaire results, the following action plan was discussed and agreed with the programme director:

  1. Modelling of interactive whiteboard use throughout taught mathematics sessions. Where interactive whiteboard use was modelled, the ‘stepping out’ technique, as described in Lunenberg et al. (2007), was used explicitly to focus trainees’ attention on how the interactive whiteboard had been used and, more importantly, why and to what effect.
  2. Optional workshops during free time within the Autumn and Spring Terms. These aimed to ensure a basic level of skills, tied in with the interactive functions most likely to have an impact. The workshops were limited to 10 trainees, to allow greater access to the interactive whiteboard and less pressure on ‘getting it right’. The skills addressed during these workshops were based on a combination of student requests, the experience of the project leader, and those outlined in Beauchamp and Parkinson (2005).
  3. Provision for peer sharing of resources created on school experience later in the programme.  In workshops, trainees who had developed interactive whiteboard skills while on placement were invited to share their expertise with other trainees.
  4. Opportunities for peer modelling within starter activities.  Trainees were encouraged to use the interactive whiteboard where appropriate in the presentation of starter activities to their peers, which occurs on a rolling programme throughout the module.

At the end of the module a follow-up questionnaire was administered. This contained a mixture of identical questions to the initial questionnaire, to allow comparison with the results that were gained at the beginning of the programme, and items designed to evaluate the different forms of support that had been provided.

Reflections

Trainees had, by the conclusion of the module, improved their experience of and confidence in using interactive whiteboards, their preparedness to use interactive whiteboard technology for the teaching of mathematics, and the level of skill they possessed in writing, manipulating shapes or images, and inserting children’s work or photographs.

It was possible as a result of the project to make the following recommendations for the Institute of Education, which may be useful for related subjects across the University of Reading:

  1. If staff are expected to integrate modelling of appropriate use of interactive whiteboards into their practice, they will need both technical and peer support in order to develop their own confidence. This could be tackled through teaching and learning seminars, practical workshops, software provision and technician time, in much the same way as the project itself supported trainees.
  2. Some of the technical skills could be integrated into ICT modules, allowing subject modules to focus on the most effective pedagogy within their subject.
  3. Primary programmes could consider some kind of formative collaborative tasks to develop and review interactive whiteboard-based activities within subject areas.
  4. The interactive whiteboard provision in schools could be audited in order to ensure that the Institute of Education’s software and hardware provision is appropriately matched to what trainees will encounter, and a request could be incorporated for those supervising students to comment on their tutees’ interactive whiteboard use as a quality assurance check.
  5. Time and support should be provided so that trainees reach a basic level of confidence with the use of interactive whiteboard technology before their first school placement.

Links and Resources

Mieke Lunenberg, Fred Korthagen and Anja Swennen (2007): The teacher educator as role model. Teaching and Teacher Education, 23(5).
Gary Beauchamp and John Parkinson (2005): Beyond the ‘wow’ factor: developing interactivity with the interactive whiteboard. School Science Review, 86(316).

Managing transition to the MPharm Degree

Dr John Brazier, Chemistry, Food and Pharmacy
j.a.brazier@reading.ac.uk

Overview

The MPharm degree at the University of Reading has a diverse student cohort, in terms of both ethnicity and previous academic experience. During the most recent development of our programme, we have introduced a Part One assessment strategy that is focused on developing an independent learning approach.

Objectives

  • To use a formative assessment strategy to encourage independent learning.
  • To use timetabling to ease the transition to higher education.
  • To reduce students’ fixation on their grades, and encourage them to instead focus on feedback.

Context

It was clear from Part Two results that our students were not progressing from Part One with the necessary knowledge and skill set to succeed on the MPharm course. The ability to pass Part One modules while underperforming in exams was identified as a key issue. Students’ reliance on the standard information provided during lectures, and their inability to study outside of it, was impacting their final grades.

Implementation

When designing our programme, we introduced a requirement not only to pass each module at 40%, but also to pass each examination with a mark of at least 40%. It was felt that this would ensure that students in Part Two would be equipped with the basic knowledge to succeed, and allow them to concentrate on developing the higher-level skills required for Parts Three and Four, rather than having to return to Part One material due to a lack of knowledge. The requirement to pass the examination with a mark of at least 40% was a challenge; we therefore developed a formative/diagnostic assessment strategy to support the students throughout the year.

In order to ease the transition from further education to university level, we designed a timetable that initially required students to attend teaching sessions intensively for the first five weeks, reducing gradually over the following weeks and terms. This would allow us to direct their learning during the first few weeks of term, and then allow time for them to develop their independence once familiar with university life.

Diagnostic and formative assessment points were spaced throughout the two teaching terms, starting with in-class workshops and tutorials and online Blackboard tests. Towards the end of the Autumn term, the students were given an open-book mock examination, followed by an opportunity to mark their work with direction from an academic. This approach continued in the Spring term, and culminated in a full two-hour mock examination at the end of the Spring term, which was marked and returned with feedback before the end of the term.
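The progression rule is simple to state precisely. A minimal sketch follows; the 70/30 exam/coursework weighting is an assumed value for illustration, as the case study does not specify component weightings:

```python
# Part One progression rule as described above: the module must be passed
# at 40% AND the examination component itself must reach 40%.
# The 70/30 exam/coursework weighting is an assumed, illustrative value.
PASS_MARK = 40.0

def progresses(exam_mark: float, coursework_mark: float, exam_weight: float = 0.7) -> bool:
    module_mark = exam_weight * exam_mark + (1 - exam_weight) * coursework_mark
    return module_mark >= PASS_MARK and exam_mark >= PASS_MARK

print(progresses(exam_mark=35, coursework_mark=70))  # False: module mark 45.5, but exam below 40
print(progresses(exam_mark=45, coursework_mark=50))  # True: both thresholds met
```

The first example shows the loophole the rule closes: a student can reach 40% overall while underperforming in the exam, and under the new requirement would still not progress.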

Impact

As we had suspected, the level of progression at first attempt was considerably lower than desired, with a high number of students failing the examined component. With resits, the number that failed to progress was much lower, and attrition rates for this cohort at Part Two were substantially lower still. Requiring the students to gain a high baseline of knowledge and understanding in Part One put them in a better position for Part Two, and the high pass rate at Part One resits showed the students must have developed some independent learning skills, as they did not have access to direct teaching between the period of the main exams and the resits.

Reflections

The main issue now facing us is the high number of students failing to progress at first attempt. We believe this is due to a combination of poor attendance and engagement from the Part One students, along with a lack of understanding about developing independent study skills. Although we expect students to develop independence with their learning, it is clear that some do not understand what this means, or how to approach their studies. Once the students pass Part One they continue to do well at Parts Two and Three, but we need to address the issues with progression at Part One.

Follow up

In order to improve our pass rate at Part One, we plan to develop a more robust process to identify and support students who are failing to engage with the course. This will be through comprehensive attendance monitoring and follow up by personal tutors, along with clear communication about expectations and independence. Students will initially get guidance on what they should have covered during timetabled teaching sessions, along with suggested independent work. As the year progresses, this guidance will become less detailed in order to further promote independence.

Engaging Diverse Learning Communities in Partnership: A Case Study Involving Professional Practice Students in Re-designing an Assessment
Lucy Hart (student – trainee PWP) – l.hart@student.reading.ac.uk

Tamara Wiehe (staff – PWP Clinical Educator) – t.wiehe@reading.ac.uk

Charlie Waller Institute, School of Psychology and Clinical Language Sciences

Overview

This case study describes the re-design of an assessment for two Higher Education programmes on which students train to become Psychological Wellbeing Practitioners (PWPs) in the NHS. The use of remote methods engaged harder-to-reach students in the re-design of the assessment tool. The project demonstrates the effectiveness of partnership working across diverse learning communities, by placing student views at the centre of decision making. In line with one of the University’s principles of partnership (2018) – shared responsibility for the process and outcome – this blog has been created by a student involved in the focus group and the member of teaching staff leading the project.

Objectives

  • Improve the design of an assessment across the University’s PWP training programmes.
  • Involve students throughout the re-design process, ensuring student voices and experiences are acknowledged.
  • Implement the new assessment design with the next cohorts.

Context

It was proposed by students in modular feedback and staff in a quarterly meeting that the design of an assessment on the PWP training programmes could be improved. These programmes are grounded in evidence-based, self-reflective and collaborative practice. Therefore, it was appropriate to maintain this style of working throughout the process. This was achieved through the students reflecting on their experiences when generating ideas and reviewing the re-designed assessment.

Implementation

Traditional methods of partnership were not suitable for our students due to the nature of the PWP training programmes. Their week consists of one teaching day running from 9:30 to 4:30, a study day, and three days practising clinically as a trainee PWP in an NHS service. Location was another factor, as many of our students commute to University and live closer to their workplace. The use of technology and remote working enabled us to overcome these barriers and work in partnership with our students.

The partnership process followed these three steps:

Diagram showing the three steps of the partnership process
When generating ideas and reviewing the proposed assessment, we, the professional practice students, considered the following points:

  • Assessment design – consistency in using vignettes throughout the course means students will be familiar with this method of working; a word limit ensures concise responses.
  • Time frame – the release date of the essay in proportion to the examination date.
  • Feasibility – will there be enough study days to compensate for the change in design, allowing trainees to plan their essays?
  • Academic support – opportunities within the academic timetable to provide additional supervision-style sessions later in the module to support students.
  • Learning materials – accessibility of resources on Blackboard; assigning study days to allow planning of the essay.

Impact

  • It was agreed that the original ICT would be replaced with written coursework based on a vignette and implemented with our next cohorts.
  • The assessment aligned with the module learning outcomes and student experiences were considered in a meaningful way.
  • Harder to reach students were able to engage in the re-design of the assessment through effective communication methods.

Reflections

Student perspective:

“Being the experts of our own experiences, it was refreshing to have our voices and experiences heard. We hope the re-design supports future cohorts and reduces anxieties around managing both university and service-based training. The focus group was a success due to the clear agenda setting and the feasibility of remote online working. A larger focus group would have been beneficial during the review stage, to remove biases associated with a small sample size.”

Staff perspective:

“Student input allowed us to hear more about their experiences during the training and took a lot of pressure off of staff to always be the ones coming up with solutions. The outcomes have a far reaching impact beyond that of the students and staff on the programme in terms of engaging diverse learning communities in Higher Education and forming more connections between Universities and NHS services. Although inclusivity and diversity was considered throughout, more participants in the virtual focus group would improve this further. Students could also have more power over the creation of the assessment materials themselves. Both of these reflections will inform my professional practice going forwards.”

Using personal capture to support students to learn practical theory outside of the laboratory

Dr Geraldine (Jay) Mulley – School of Biological Sciences  

Overview

I produced four screen casts to encourage students to better prepare for practical classes and to reinforce practical theory taught in class. Approximately 45% of the cohort watched at least some of the video content, mainly in the few days leading up to the practical assessment. The students appreciated the extra resources, and there was a noticeable improvement in module satisfaction scores.

Objectives

  • To provide consistency in the delivery of teaching practical theory between groups led by different practical leaders
  • To provide students with engaging resources to use outside of the classroom, both as preparation tools for practical classes and as revision aids for the Blackboard-based practical assessment

Context

The Part 1 Bacteriology & Virology module includes 12 hours of practical classes designed to teach students key microbiological techniques and theory. I usually begin each practical with a short lecture-style introduction to explain what they need to do and why. The 3 hr classes are typically very busy, and I have observed that some students feel overwhelmed with “information overload” and find it hard to assimilate the theory whilst learning the new techniques. I have had to schedule multiple runs of practical classes to accommodate the large cohort, and my colleagues now teach some of the repeat sessions. My aim was to create a series of videos, accessible outside of the classroom, explaining the theoretical background in more detail. I hoped this would ensure consistency in what is taught to each group and give the students more time to focus on learning the techniques during the classes. I hoped that they would use the resources both to help prepare for the classes and as a revision aid for the practical assessment.

Implementation

I initially tried to record 4 videos by simply recording myself talking through my original PowerPoint presentations that I use in the practical class introductions (i.e. 4 individual videos to cover each of the 4 practical classes). Having started to make the videos, I realised that it was very difficult for me to explain the theory in this format, which was quite surprising given this is how I had been delivering the information up until that point! I therefore adapted the PowerPoint presentations to make videos focusing on each of the experimental themes, talking through what the students will do in the lab week-by-week, with an explanation of the theory at appropriate points. I recorded the video tutorials using the Mediasite “slideshow + audio” option and narrated free-style as I would do in a lecture (no script). When I made a mistake, I paused for a few seconds and then started the sentence again. After finishing the entire recording, I then used the editing feature to cut out the mistakes, which were easy to identify in the audio trace due to the long pauses. I was also able to move slides to the appropriate place if I had poorly timed the slide transitions. Editing each video took around 30 min to 1 hr. I found it relatively easy to record and edit the videos, and I became much more efficient after I had recorded the first few.

I would have liked to have asked students and other staff to help in the design and production of the videos, but the timing of the Pilot was not conducive to being able to collaborate at the time.

Impact

Mediasite analytics show 45% of the students in the cohort viewed at least some of the resources, and 17% of the cohort viewed each video more than once. Students watched the three shorter videos (3-4 min) in their entirety, but the longest video (18 min) showed a drop-off in the number of views after approx. 5 min (Figure 1), and so in future I will limit my videos to 5 min max.

Figure 1: Graph showing how students watched the video

Only a few students viewed videos prior to practical classes; almost all views were in the few days leading up to the practical assessment on Blackboard. This shows that students were using the videos as a revision aid rather than as a preparation tool. This is probably because I uploaded the videos midway through term: by that stage one of the three groups had already completed the 4 practical classes, and I did not want to disadvantage this group by promoting the videos as a preparation tool. It will be interesting to see whether I can encourage students to use them for this purpose next academic year. My expectation was that time spent viewing would directly correlate with practical assessment grades; however, there is not a clear linear correlation (Figure 2).

Figure 2: Graph showing use of videos and grades obtained

For some students, attending the practical classes and reading the handbook is enough to achieve a good grade. However, students that spent time viewing the videos did achieve a higher average than those that did not view any (Figure 3), although this probably reflects overall engagement with all the available learning resources. Responses to the student survey indicated that students felt the videos improved their understanding of the topic and supported them to revise what they had learnt in class at their own pace.

Figure 3: Graph showing video watching and grades obtained
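The checks behind Figures 2 and 3 can be reproduced in a few lines. The viewing times and marks below are invented stand-ins for the real Mediasite analytics; the point is the two summary statistics (a correlation coefficient, and viewer versus non-viewer means):

```python
# Invented viewing times (minutes) and assessment marks, standing in for
# the real Mediasite export, to show the two checks described above.
from statistics import mean, pstdev

minutes = [0, 0, 0, 3, 5, 8, 12, 20, 25]
marks   = [72, 48, 60, 58, 74, 61, 68, 80, 66]

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

print(f"correlation (cf. Figure 2): r = {pearson_r(minutes, marks):.2f}")

viewers = [m for t, m in zip(minutes, marks) if t > 0]
non_viewers = [m for t, m in zip(minutes, marks) if t == 0]
print(f"viewers mean: {mean(viewers):.1f}, non-viewers mean: {mean(non_viewers):.1f}")
```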

Reflections

The biggest challenge I faced was trying to recruit other colleagues to the Pilot during a very busy Autumn term, and finding the time to design the videos myself. It would have been helpful to see some examples of how to use personal capture before I started, but having participated in the Pilot, I now have more confidence. Once I had experimented with the Mediasite software, I found it quite easy to record the videos and publish to my Blackboard site (with guidance from the excellent support from the TEL team and Blackboard help web pages). I liked the editing tools, although I would very much like the ability to cut and paste different videos together. The analytics are very useful and much better than the “track users” function in Blackboard. They reinforced the suggestion that students are much more likely to finish watching short videos, and I would advise making videos 5 min maximum, ideally 3 min, in length. My experience of personal capture was incredibly positive, and I will certainly be making more resources for my students for all my modules.

Follow-up

Since making the recordings for the Pilot, I have teamed up with several colleagues in the School of Biological Sciences and will show them how to use Mediasite so that they can make resources for their modules over summer. I have also used the Mediasite software to record microscope training sessions and talks from open days.

Building bridges and smoothing edges

Patrick Finnegan – School of Economics, Politics & International Relations

Overview

My use of the personal capture scheme was intended to enhance our teaching methods within the department. My initial aim of building additional video capture material into the ongoing lecture series did not come to fruition, but I was able to use the capture package to engage my students more in the administration of a (then) overly complicated module.

Objectives

  • Initial plan centred on including personal capture on the Army Higher Education Pathway project – this was not possible due to software incompatibility with the Canvas platform used for the project
  • New objectives were based on a different module (The Study of Politics) and improving the student experience on that module
  • Improve the explanation of methods
  • Explain the supervisory choice system
  • Enhance lectures on complicated topics

Context

The module I focused on was Po2SOP (The Study of Politics), with 160 students. Personal capture was needed on this project as it allowed me, as convenor of our largest module, to communicate with all of my students in a more engaging way. We needed a way to bring the topic to life and ensure that the students took on board the lessons we needed them to. I wanted to include real examples of the methods in action and to use the screen casts to explain certain decisions that would be too difficult to explain via email.

Implementation

Unfortunately, the project began too late in the term to really affect the lectures on this module, which is co-taught between several staff members, often using pre-existing slides. However, I was able to use it to engage in discussion with students, to explain issues such as supervisor reallocation during the year and how our special event – the mini-conference – was to work. Rather than writing lengthy emails, I was able to quickly and visually explain to the students what was happening and to invite their responses, which some did. They did not engage with the capture material as such, but my use of it did encourage discussion about how they would like to see it used in future, and how they would like to receive feedback on assessments if audio/visual options were available. The recordings made by myself and my colleague were mainly PowerPoint voice-overs or direct-to-camera discussions. These allowed us to present the students with illustrations and ‘first-hand’ information. They required significant editing to make them suitable, but the final product was satisfactory.

Impact

Beyond ‘ease of life’ effects this year, there was not a great deal of impact, but this was expected given the start date (the largest number of views for a video was 86, and this was an administrative explanation video). However, planning for next year has already incorporated the different potential advantages provided by personal capture. For example, the same methods module will now incorporate tutorial videos made within the department, and will maintain some supervisor ‘adverts’ to allow students to better choose which member of staff they will seek to work with. Within other modules, some staff members will be taking the opportunity to build in some flipped-classroom-style teaching and other time-heavy elements that were not previously available to them.

Reflections

The time needed to organise and direct co-pilots within a teaching-heavy department was a lot greater than I originally planned. I was also not expecting to meet the level of resistance that I did from some more established staff, who were not interested in changing how they delivered the material they had prepared earlier. The major difference I would make going forward would be to focus on upcoming modules rather than pre-existing ones, as incorporating the material once a module had already started was too difficult.

Follow-up

I have started to prepare some videos on material I know will be needed in the future; this is relatively straightforward to do and will mimic the general practice to date. The main evolution will be seen in responding to student need during class, and in how screen casts can be made on demand and with consistent quality.