‘How did I do?’ Finding new ways to describe the standards of foreign language performance. A follow-up project on the redesign of two marking schemes (DLC)

Rita Balestrini and Elisabeth Koenigshofer, School of Literature and Languages, r.balestrini@reading.ac.uk; e.koenigshofer@reading.ac.uk

Overview

Working in collaboration with two Final Year students, we designed two ‘flexible’, ‘minimalist’ rubric templates, usable and adaptable across different languages and levels, to provide a basis for the creation of level-specific, and potentially task-specific, marking schemes in which sub-dimensions can be added to the main dimensions. The two marking templates are being piloted this year in the DLC. The project will feature in this year’s TEF submission.

Objectives

Design, in partnership with two students, rubric templates for the evaluation and feedback of writing tasks and oral presentations in foreign languages which:

  • were adaptable across languages and levels of proficiency
  • provided a more inclusive and engaging form of feedback
  • responded to the analysis of student focus group discussions carried out for a previous TLDF-funded project

Context

As a follow-up to a teacher-learner collaborative appraisal of the rubrics used in MLES, now DLC, we designed two marking templates in partnership with two Final Year students, who had participated in the focus groups for a previous project and were employed through Campus Jobs. ‘Acknowledgement of effort’, ‘encouragement’, ‘use of non-evaluative language’ and ‘the need for, yet distrust of, objective marking’ were recurrent themes in the analysis of the focus group discussions, and the issues behind them clearly caused anxiety for students.

Implementation

We organised a preliminary session to discuss these findings with the two student partners. We suggested some articles about ‘complexity theory’ as applied to second language learning (Kramsch, 2012; Larsen-Freeman, 2012; 2015a; 2015b; 2017), with the aim of making our theoretical perspective explicit and transparent to them. A second meeting was devoted to planning collaboratively the structure of two marking schemes, one for writing and one for presentations. The two students then worked independently to produce examples of standard descriptors which avoided evaluative language and emphasised achievement rather than shortcomings. At a third meeting they presented and discussed their proposals with us. In the final meetings, we continued working together to finalise the templates and the two visual learning charts they had suggested. Finally, the two students wrote a blog post recounting their experience of this collaborative work.

The two students appreciated our theoretical approach, felt that it was in tune with their own point of view and believed that it could support the enhancement of the assessment and marking process. They also found resources on their own, which they shared with us – including rubrics from other universities. They made valuable suggestions, gave us feedback on our ideas and helped us to find alternative terms when we were struggling to avoid evaluative language in our descriptors. They also suggested making use of some visual elements in the marking and feedback schemes in order to increase immediacy and effectiveness.

Impact

The two marking templates are being piloted this year in the DLC. They were presented to colleagues over four sessions during which the ideas behind their design were explained and discussed. Further internal meetings are planned. These conversations, already begun with the previous TLDF-funded project on assessment and feedback, are contributing to the development of a shared discourse on assessment, which is informed by research and scholarship. The two templates have been designed in partnership with students to ensure accessibility and engagement with the assessment and feedback process. This is regarded as an outstanding practice in the ‘Assessment and feedback benchmarking tool’ produced by the National Union of Students and is likely to feature positively in this year’s TEF submission.

Reflections

Rubrics have become mainstream, especially within certain university subjects such as Foreign Languages. They were introduced to ensure accountability and transparency in marking practices, but they have also created new problems of their own by promoting a false sense of objectivity in marking and grading. The openness and unpredictability of complex performance in foreign languages, and of the dynamic language learning process itself, are not adequately reflected in the detailed descriptors of the marking and feedback schemes commonly used for the objective numerical evaluation of performance-based assessment in foreign languages. As emerged from the analysis of focus group discussions conducted in the department in 2017, a lack of understanding of, and engagement with, the feedback provided by this type of rubric can generate frustration in students. Working in partnership with students, rather than simply listening to their voices or seeing them as evaluators of their own experience, helped us to design minimalist and flexible marking templates which use sensible and sensitive language, introduce visual elements to increase immediacy and effectiveness, leave a considerable amount of space for assessors to comment on different aspects of an individual performance, and provide ‘feeding forward’ feedback. This type of partnership can be challenging because it requires remaining open to unexpected outcomes. Whether it can bring about real change depends on how its outcomes interact with the educational ecosystems in which it is embedded.

Follow up

The next stage of the project will involve colleagues in the DLC who will be using the two templates to contribute to the creation of a ‘bank’ of descriptors by sharing the ones they will develop to tailor the templates for specific stages of language development, language objectives, language tasks, or dimensions of student performance. We also intend to encourage colleagues teaching culture modules to consider using the basic structure of the templates to start designing marking schemes for the assessment of student performance in their modules.

Links

An account written by the two student partners involved in the project can be found here:

Working in partnership with our lecturers to redesign language marking schemes

The first stages of this ongoing project to enhance the process of assessing writing and speaking skills in the Department of Languages and Cultures (DLC, previously MLES) are described in the following blog entries:

National Union of Students 2017. The ‘Assessment and feedback benchmarking tool’ is available at:

http://tsep.org.uk/wp-content/uploads/2017/07/Assessment-and-feedback-benchmarking-tool.pdf

References

Bloxham, S. 2013. Building ‘standard’ frameworks. The role of guidance and feedback in supporting the achievement of learners. In S. Merry et al. (eds.) 2013. Reconceptualising feedback in Higher Education. Abingdon: Routledge.

Bloxham, S. and Boyd, P. 2007. Developing effective assessment in Higher Education. A practical guide. Maidenhead: McGraw-Hill International.

Bloxham, S., Boyd, P. and Orr, S. 2011. Mark my words: the role of assessment criteria in UK higher education grading practices. Studies in Higher Education 36 (6): 655-670.

Bloxham, S., den-Outer, B., Hudson, J. and Price, M. 2016. Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assessment & Evaluation in Higher Education 41 (3): 466-481.

Brooks, V. 2012. Marking as judgement. Research Papers in Education 27 (1): 63-80.

Gottlieb, D. and Moroye, C. M. 2016. The perceptive imperative: Connoisseurship and the temptation of rubrics. Journal of Curriculum and Pedagogy 13 (2): 104-120.

HEA 2012. A Marked Improvement. Transforming assessment in HE. York: The Higher Education Academy.

Healey, M., Flint, A. and Harrington K. 2014. Engagement through partnership: students as partners in learning and teaching in higher education. York: The Higher Education Academy.

Kramsch, C. 2012. Why is everyone so excited about complexity theory in applied linguistics? Mélanges 33: 9-24.

Larsen-Freeman, D. 2012. The emancipation of the language learner. Studies in Second Language Learning and Teaching. 2(3): 297-309.

Larsen-Freeman, D. 2015a. Saying what we mean: Making a case for ‘language acquisition’ to become ‘language development’. Language Teaching 48 (4): 491-505.

Larsen-Freeman, D. 2015b. Complexity Theory. In VanPatten, B. and Williams, J. (eds.) Theories in Second Language Acquisition: An Introduction. 2nd edition. New York: Routledge: 227-244.

Larsen-Freeman, D. 2017. Just learning. Language Teaching 50 (3): 425-437.

Merry, S., Price, M., Carless, D. and Taras, M. (eds.) 2013. Reconceptualising feedback in Higher Education. Abingdon: Routledge.

O’Donovan, B., Price, M. and Rust, C. 2004. Know what I mean? Enhancing student understanding of assessment standards and criteria. Teaching in Higher Education 9 (3): 325-335.

Price, M. 2005. Assessment standards: the role of communities of practice and the scholarship of assessment. Assessment & Evaluation in Higher Education 30 (3): 215-230.

Sadler, D. R. 2009. Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education 34 (2): 159-179.

Sadler, D. R. 2013. The futility of attempting to codify academic achievement standards. Higher Education 67 (3): 273-288.

Torrance, H. 2007. Assessment as learning? How the use of explicit learning objectives, assessment criteria and feedback in post-secondary education and training can come to dominate learning. Assessment in Education 14 (3): 281-294.

Yorke, M. 2011. Summative assessment: dealing with the ‘measurement fallacy’. Studies in Higher Education 36 (3): 251-273.

Capturing and Developing Students’ Assessment Literacy

Hilary Harris, Maria Danos, Natthapoj Vincent Trakulphadetkrai, Stephanie Sharp, Cathy Tissot, Anna Tsakalaki, Rowena Kasprowicz – Institute of Education

hilary.a.harris@reading.ac.uk

Overview

The Institute of Education’s (IoE) T&L Group on Assessment Literacy worked collaboratively with 300+ students to ascertain the clarity level of assessment criteria used in all programmes across the IoE.  The findings were used to develop a report containing key findings and recommendations, which were then shared with programme directors. The findings also fed into the development of a Glossary of Common Assessment Terms to help develop students’ assessment literacy. SDTLs and DDTLs of almost all UoR Schools and the Academic Director of the UoR Malaysia campus have now either had one-to-one meetings with us or contacted us to explore how our group’s work could be adopted and adapted in their own setting.

Objectives

The aims of the activity were to:

  • Develop students’ assessment literacy, specifically in terms of their understanding of assessment criteria which are used in marking rubrics
  • Engage students in reviewing the clarity of assessment criteria and terms used in marking rubrics
  • Engage programme directors in reflecting on the construction of their marking rubrics
  • Develop an IoE-wide glossary of common assessment terms

Context

The IoE has set up T&L Groups to enhance different aspects of our teaching and learning practices as part of the peer review process. The T&L Group on Assessment Literacy has been meeting since 2017, and is made up of seven academics from a wide range of undergraduate and postgraduate programmes.

As marking rubrics are now used for all summative assessments at the IoE (and to some extent across the University), ensuring that students have a good understanding of the embedded assessment terms matters as the criteria inform students of what is expected of them for a particular assessment. Moreover, the marking rubrics can also be used by students to develop their draft work before submission.

Implementation

The Group asked 300+ students across all the IoE programmes to indicate how clear they found their programme’s assessment criteria by circling any terms on the marking rubric that they were confused by. The Group collated the information and created a summary table for each programme, ranking assessment terms according to how often they were highlighted by the students. Each group member then wrote a brief report for each programme with key findings and recommendations on clearer alternative assessment terms (e.g. replacing ‘recapitulation’ with ‘summary’ and ‘perceptive’ with ‘insightful’). In cases where the use of specific terminology is essential (e.g. scholarship or ethics), the Group’s advice is for module convenors to spend some time in class explaining such terms and to refer students to the assessment glossary for further support and examples. Both the report and the Glossary were disseminated to programme directors and their teams, who were then able to use the evidence in the report to reflect on their programme’s assessment criteria and consider with their team any changes that would make the marking rubric more accessible and easier for students to understand.
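
As a purely illustrative sketch of the collation step (the term lists, counts and Python code below are assumptions for illustration, not the Group’s actual procedure), the ranking amounts to a simple frequency count of the terms students circled:

from collections import Counter

# Hypothetical responses: each list holds the rubric terms one student circled as unclear.
responses = [
    ["recapitulation", "perceptive", "synthesis"],
    ["recapitulation", "criticality"],
    ["perceptive", "recapitulation"],
]

# Tally how often each term was circled across the cohort.
counts = Counter(term for student in responses for term in student)

# Rank terms from most to least frequently circled - the basis of each programme's summary table.
for term, n in counts.most_common():
    print(f"{term}: circled by {n} student(s)")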

Impact

At the IoE, the work has already made an impact in that programme directors have reflected on their assessment criteria alongside their teams and have acted on the Group’s recommendations (e.g. replacing problematic terms in their marking rubrics with terms that are easier for students to understand). The Glossary has been used by IoE programme directors and module convenors when introducing assessments and their marking rubrics. The Glossary has also been uploaded onto Blackboard for students to consult independently. The feedback from students on the Glossary has also been very positive. For example, one student commented that “The definitions were useful and the examples provided were even more helpful for clarifying exactly what the terms mean. The glossary is laid out in a clear and easy to follow way for each key term”.

Beyond the IoE, impact is being generated. Specifically, SDTLs and DDTLs of almost all UoR Schools and the Academic Director of the UoR Malaysia campus have now either had one-to-one meetings with us or contacted us to explore how our group’s work could be adopted and adapted in their own setting. The Group has been invited to give talks on its work at CQSD events and the School of Law’s T&L seminar. The Group is also currently working with academic colleagues at other universities (nationally and internationally) to replicate this Group’s work and generate impact beyond the UoR.

Reflections

The activity was very successful as:

  • The Group had a clear focus of what it wanted to achieve
  • The Group was given time to carry out its work
  • There was strong leadership of the team, with each member being allocated specific contributions to the project

The process of involving students in reviewing terms on marking rubrics has empowered them to treat the documents critically and start a conversation with their lecturers about the purpose of marking rubrics, as well as being involved as partners in making the marking rubrics work for them.

There were some challenges that needed to be overcome, and ideas for improving the project:

  • When presented to colleagues at the Staff Day, some members of staff expressed the view that ‘tricky’ terms should be retained as developing an understanding of these terms is part of the transition to HE study. This was recognised in our report which suggests that technical terms (e.g. methodology) could be retained provided that they are explained to students.

Follow up

The Group plans to spend the 2019/2020 academic year generating and capturing the impact of its work across and beyond the UoR.

Improving assessment writing and grading skills through the use of a rubric – Dr Bolanle Adebola

Dr Bolanle Adebola is the Module Convenor and lecturer for the following modules on the LLM Programme (On campus and distance learning):

International Commercial Arbitration, Corporate Governance, and Corporate Finance. She is also a Lecturer for the LLB Research Placement Project.

Bolanle is also the Legal Practice Liaison Officer for the CCLFR.

OBJECTIVES

For students:

• To make the assessment criteria more transparent and understandable.
• To improve assessment output and essay writing skills generally.

For the teacher:

• To facilitate assessment grading by setting clearly defined criteria.
• To facilitate the feedback process by creating a framework for dialogue which is understood both by the teacher and the student.

CONTEXT

I faced a number of challenges in relation to the assessment process in my first year as a lecturer:

• My students had not performed as well as I would have liked them to in their assessments.

• It was the first time I had had to justify the grades I had awarded, and I found that I struggled to articulate clearly and consistently the reasons for some of them.

• I had been newly introduced to the step-marking framework for distinction grades as well as the requirement to make full use of the grading scale which I found challenging in view of the quality of some of the essays I had graded.

I spoke to several colleagues but came to understand that there were as many approaches as there were people. I also discussed the assessment process with several of my students and came to understand that many were unclear about the criteria by which their assessments were graded across their modules.

I concluded that I needed to build a bridge between my approach to assessment grading and my students’ understanding of the assessment criteria. Ideally, the chosen method would facilitate consistency and the provision of feedback on my part, and improve the quality of essays on my students’ part.

IMPLEMENTATION

I tend towards the constructivist approach to learning, which means that I structure my activities towards promoting student-led learning. For summative assessments, my students are required to demonstrate their understanding of, and ability to critically appraise, legal concepts that I have chosen from our sessions in class. Hence, the main output for all summative assessments on my modules is an essay. Wolf and Stevens (2007) assert that learning is best achieved where all the participants in the process are clear about the criteria for the performance and the levels at which it will be assessed. My goal therefore became to ensure that my students understood the elements I looked for in their essays, these being the criteria against which I graded them. They also had to understand how I decided the standard that their essays reflected. While the student handbook sets out the various standards that we apply in the University, I wanted to provide clearer direction on how students could meet those standards and on how I determine that an essay has met them.

If the students were to understand the criteria I apply when grading their essays, then I would have to articulate them. Articulating the criteria for a well-written essay would benefit both me and my students. For my students, in addition to a clearer understanding of the assessment criteria, it would enable them to self-evaluate, which would improve the quality of their output. Improved quality would lead to improved grades, and I could give effect to university policy. Articulating the criteria would benefit me because it would facilitate consistency. It would also enable me to give detailed and helpful feedback to students on the strengths and weaknesses of the essays being graded, as well as on their essay writing skills in general, with advice on how to improve different facets of their outputs going forward. Ultimately, my students would learn valuable skills which they could apply across the board and after they graduate.

For assessments which require some form of performance, essays being an example, a rubric is an excellent evaluation tool because it fulfils all the requirements I have expressed above (Brookhart, 2013). Hence, I decided to present my grading criteria and standards in the form of a rubric.

The rubric is divided into 5 criteria which are set out in 5 rows:

  • Structure
  • Clarity
  • Research
  • Argument
  • Scholarship.

For each criterion, there are 4 performance levels which are set out in columns: Poor, Good, Merit and Excellent. An essay is mapped along each row and column. The final mark will depend on how the student has performed on each criterion, as well as my perception of the output as a whole.
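
Purely as an illustration, the grid can be thought of as a small mapping from each criterion to a performance level. The sketch below is a hypothetical Python rendering; the indicative marks per level, the unweighted averaging and the size of the discretionary adjustment are my assumptions, not the rubric’s actual scoring rules.

# Hypothetical representation of the five-criterion by four-level rubric grid.
CRITERIA = ["Structure", "Clarity", "Research", "Argument", "Scholarship"]
LEVEL_MARKS = {"Poor": 40, "Good": 55, "Merit": 65, "Excellent": 78}  # indicative marks (assumed)

def grade_essay(levels_awarded, holistic_adjustment=0):
    """Combine the level awarded on each criterion with a discretionary whole-essay adjustment."""
    criterion_marks = [LEVEL_MARKS[levels_awarded[c]] for c in CRITERIA]
    base = sum(criterion_marks) / len(criterion_marks)  # simple unweighted average (assumed)
    return base + holistic_adjustment

example = {"Structure": "Good", "Clarity": "Merit", "Research": "Merit",
           "Argument": "Excellent", "Scholarship": "Good"}
print(grade_essay(example, holistic_adjustment=2))  # 65.6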

Studies suggest that a rubric is most effective when produced in collaboration with the students (Andrade, Du and Mycek, 2010). When I created my rubric, however, I did not involve my students. I thought that would not be necessary given that my rubric was to be applied generally and with changing cohorts of students. Notwithstanding, I wanted students to engage with it. So, the document containing the rubric has an introduction addressed to the students, which explains the context in which the rubric has been created. It also explains how the rubric is applied and the relationship between the criteria. It states, for example, that ‘even where the essay has good arguments, poor structure may undermine its score’. It explains that the final grade combines an objective assessment against the criteria with a subjective evaluation of the output as a whole, which is based on the marker’s discretion.

To ensure that students are not confused about the standards set out in the rubric and the assessment standards set out in the students’ handbook, the performance levels set out in the rubric are mapped against the assessment standards set out in the student handbook. The document containing the rubric also contains links to the relevant handbook. Finally, the rubric gives the students an example of how it would be applied to an assessment. Thereafter, it sets out the manner in which feedback would be presented to the students. That helps me create a structure in which feedback would be provided and which both the students and I would understand clearly.

IMPACT

My students’ assessment outputs have been of much better quality, and they have achieved better grades, since I introduced the rubric. In one of my modules, the average grade, as recorded in the module convenor’s report to the external examiner (MC’s Report), 2015/16, was 64.3%. 20% of the class attained distinctions, all in the 70-79 range. That year, I struggled to give feedback and was asked to provide additional feedback comments to a few students. In 2016/17, after I introduced the rubric, there was a slight dip in the average mark to 63.7%. The dip was because of a fail mark amongst the cohort; if that fail mark is controlled for, the average mark had crept up from 2015/16. There was a clear increase in the percentage of distinctions, which had gone up to 25.8% from 20% in the previous year. The cross-over had been from students who had been in the merit range. Clearly, some students had been able to use the rubric to improve the standards of their essays. I found the provision of feedback much easier in 2016/17 because I had clear direction from the rubric. When giving feedback I explained both the strengths and weaknesses of the essay in relation to each criterion. My hope was that they would apply the advice more generally across other modules, as the method of assessment is the same across the board. In 2017/18, the average mark for the same module went up to 68.84%. 38% of the class attained distinctions, with 3% attaining more than 80%. Hence, in my third year, I have also been able to utilise step-marking in the distinction grade, which has enabled me to meet the university’s policy.

When I introduced the rubric in 2016/17, I had a control module, by which I mean a module in which I neither provided the rubric nor spoke to the students about their assessments in detail. The quality of assessments from that module was much lower than the others where the students had been introduced to the rubric. In that year, the average grade for the control module was 60%; with 20% attaining a distinction and 20% failing. In 2017/18, while I did not provide the students with the rubric, I spoke to them about the assessments. The average grade for the control module was 61.2%; with 23% attaining a distinction. There was a reduction in the failure rate to 7.6%. The distinction grade also expanded, with 7.6% attaining a higher distinction grade. There was movement both from the failure grade and the pass grade to the next standard/performance level. Though I did not provide the students with the rubric, I still provided feedback to the students using the rubric as a guide. I have found that it has become ingrained in me and is a very useful tool for explaining the reasons for my grades to my students.

From my experience, I can assert, justifiably, that the rubric has played a very important role in improving the students’ essay outputs. It has also enabled me to improve my feedback skills immensely.

REFLECTIONS

I have observed that, as studies in the field argue, it is insufficient merely to have a rubric. For the rubric to achieve the desired objectives, it is important that students actively engage with it. I must admit that I did not take a genuinely constructivist approach to the rubric. I wanted to explain myself to the students. I did not really encourage a two-way conversation, as the studies recommend, and I think this affected the effectiveness of the rubric.

In 2017/18, I decided to talk the students through the rubric, explaining how they can use it to improve performance. I led them through the rubric in the final or penultimate class. During the session, I explained how they might align their essays with the various performance levels/standards. I gave them insights into some of the essays I had assessed in the previous two years; highlighting which practices were poor and which were best. By the end of the autumn term, the first module in which I had both the rubric and an explanation of its application in class saw a huge improvement in student output as set out in the section above. The results have been the best I have ever had. As the standards have improved, so have the grades. As stated above, I have been able to achieve step-marking in the distinction grade while improving standards generally.

I have also noticed that even where a rubric is not used but the teacher talks to the students about the assessments and their expectations of them, students perform better than where there is no conversation at all. In 2017/18, while I did not provide the rubric to the control module, I discussed the assessment with the students, explaining practices which they might find helpful. As demonstrated above, there was a lower failure rate and improvement generally across the board. I can conclude, therefore, that assessment criteria ought to be explained much better to students if their performance is to improve. However, I think that having a rubric, and student engagement with it, is the best option.

I have also noticed that many students tend to perform well, in the merit bracket. These students would like to improve but are unable to decipher how to do so. These students, in particular, find the rubric very helpful.

In addition, Wolf and Stevens (2007) observe that rubrics are particularly helpful for international students whose assessment systems may have been different, though no less valid, from that of the system in which they have presently chosen to study. Such students struggle to understand what is expected of them and so, may fail to attain the best standards/performance levels that they could for lack of understanding of the assessment practices. A large proportion of my students are international, and I think that they have benefitted from having the rubric; particularly when they are invited to engage with it actively.

Finally, the rubric has improved my feedback skills tremendously. I am able to express my observations and grades in terms well understood both by myself and my students. The provision of feedback is no longer a chore or a bore. It has actually become quite enjoyable for me.

FOLLOW UP

On publishing the rubric to students:

I know that Blackboard gives the opportunity to embed a rubric within each module. So far I have only uploaded copies of my rubric onto Blackboard for the students on each of my modules. I have decided to explore the Blackboard option to make the annual upload of the rubric more efficient. I will also see whether Blackboard offers opportunities to improve on the rubric, which will be a couple of years old by the end of this academic year.

On the Implementation of the rubric:

I have noted, however, that it takes about half an hour to explain the rubric to students for each module, which eats into valuable teaching time. A more efficient method is required to provide good assessment insight to students. This summer, as the examination officer, I will liaise with my colleagues to discuss the provision of a best practice session for our students in relation to their assessments. At the session, students will also be introduced to the rubric. The rubric can then be paired with actual illustrations which the students can be encouraged to grade using its content. Such sessions will improve their ability to self-evaluate, which is crucial both to their learning and to the improvement of their outputs.

LINKS

• K. Wolf and E. Stevens (2007) 7(1) Journal of Effective Teaching, 3. https://www.uncw.edu/jet/articles/vol7_1/Wolf.pdf
• H Andrade, Y Du and K Mycek, ‘Rubric-Referenced Self-Assessment and Middle School Students’ Writing’ (2010) 17(2) Assessment in Education: Principles, Policy & Practice, 199 https://www.tandfonline.com/doi/pdf/10.1080/09695941003696172?needAccess=true
• S Brookhart, How to Create and Use Rubrics for Formative Assessment and Grading (Association for Supervision & Curriculum Development, ASCD, VA, 2013).
• Turnitin, ‘Rubrics and Grading Forms’ https://guides.turnitin.com/01_Manuals_and_Guides/Instructor_Guides/Turnitin_Classic_(Deprecated)/25_GradeMark/Rubrics_and_Grading_Forms
• Blackboard, ‘Grade with Rubrics’ https://help.blackboard.com/Learn/Instructor/Grade/Rubrics/Grade_with_Rubrics
• Blackboard, ‘Import and Export Rubrics’ https://help.blackboard.com/Learn/Instructor/Grade/Rubrics/Import_and_Export_Rubrics

An evaluation of online systems of peer assessment for group work

Cathy Hughes and Heike Bruton, Henley Business School
catherine.hughes@reading.ac.uk

Overview

Online peer assessment systems were evaluated for their suitability in providing a platform to allow peer assessment to be conducted in the context of group work.

Objectives

  • To establish the criteria against which peer assessment systems should be evaluated.
  • To evaluate the suitability of online systems of peer assessment.
  • To provide a way forward for Henley Business School to develop peer assessment for group work.

Context

There are many well-documented benefits of group work for students. Given the recognised issue that members of a group may not contribute equally to a task, and that it can be difficult for tutors to accurately judge the contributions made by individuals within a group, this presents a context in which peer assessment can be utilised, allowing students to assess the process of group work. Within Henley Business School, Cathy Hughes has utilised peer assessment for group work in Real Estate and Planning, and developed a bespoke web-based system to facilitate this. As this system was not sustainable, the project was funded to evaluate the suitability of other web-based peer assessment systems for use at the University.

Implementation

The first step was to establish how academics across the University use peer assessment in a range of subjects, in order to identify the criteria against which available online systems of peer assessment for group work could be evaluated. This was done through a series of interviews with academics who already used peer assessment and who volunteered after a call for respondents was made through the T&L distribution list. The eleven interviewees were drawn from across seven departments. The interviews revealed that five separate peer assessment systems were in use across the University. These systems had, with one exception, been in use for four years or fewer. Peer assessment at the University of Reading has been utilised at all Parts and for a range of group sizes (between three and ten, depending on the task being performed). While the number of credits affected by peer assessment ranged between 1 and 20, no module used peer assessment to contribute 100% of the final mark, though in one case it did contribute 90%.

With peer assessment of group work, students may be required to mark their peers against set criteria, or in a more holistic manner whereby students award an overall mark to each of the others in their group. Given the subjective nature of the marking process, peer assessment can be open to abuse, and so interviewees stressed the need to be able to check and moderate marks. All interviewees stated that they collated evidential material which could be referred to in case of dispute.

All systems which were in use generated numerical data on an individual’s performance in group work, but with regard to feedback there were differences in what users required. Some users of peer assessment used the numerical data to construct feedback for students, and in one case students provided their peers with anonymised feedback.

It was apparent from the interviews that performing peer assessment requires a large amount of support from staff. Other than the system in use in Henley Business School and the Department of Chemistry, all systems had students fill out paper forms, with calculations then being performed manually or requiring data to be input into a spreadsheet for manipulation. This high workload underlined the case for disseminating online peer assessment, in order to reduce the burden on those already conducting it and to lower the barrier to entry for others who are interested but unable to accept the increased workload.

With the input from interviewees, it was possible to put together criteria for evaluation of online peer assessment systems:

  1. Pedagogy:
    • Any systems must provide a fair and valid method for distinguishing between contributions to group work.
  2. Flexibility:
    • Peer assessment is used in different settings for different types of group work. The methods used vary on several dimensions, such as:
      1. Whether holistic or criteria based.
      2. The amount of adjustment to be made to the group mark.
      3. The nature of the grading required of students, such as use of a Likert scale, or splitting marks between the group.
      4. Whether written comments are required from the students along with a numerical grading of their peers.
      5. The detail and nature of feedback that is given to students such as: grade or comment on group performance as a whole; the performance of the student against individual criteria; further explanatory comments received from students or given by academics.
    • Therefore any system must be flexible and capable of adapting to these environments.
  3. Control:
    • Academics require some control over the marks resulting from peer assessment. While the online peer assessment tool will calculate marks, these must be visible to tutors, and academics must have the ability to moderate them.
  4. Ease of use:
    • Given the amount of work involved in running peer assessment of group work, it is necessary for any online system to be both easy to use by staff and reduce their workload. The other aspect of this is ease of use for the student. The current schemes in use may be work-intensive for staff, but they do have the benefit of providing ease of use for students.
  5. Incorporation of evidence:
    • The collection of evidence to support and validate marks provided under peer assessment would ideally be part of any online system.
  6. Technical integration and support:
    • An online peer assessment system must be capable of being supported by the University in terms of IT and training.
  7. Security:
    • Given the nature of the data, the system must be secure.

Four online peer assessment systems were analysed against these criteria: iPeer, SPARKplus, WebPA, and the bespoke peer assessment system created for use in Real Estate and Planning.

Findings

A brief overview of the findings is as follows:

iPeer

While iPeer can be used to collect data for the purposes of evaluation, unlike the other systems evaluated, the manipulation and interpretation of that data is left to the tutor, thus maintaining some of the workload that it was hoped would be avoided. While its ease of use was good for staff and students, there were limits to what could be achieved using iPeer, and supporting documentation was difficult to access.

SPARKplus

SPARKplus is a versatile tool for the conduct of online peer assessment, allowing students to be marked against specific criteria or in a more holistic manner, and generating a score based upon their peer-assessed contribution to group work and the tutor’s assessment of what the group produces. There were, however, disadvantages: SPARKplus does not allow for the gathering of additional evidential material, and it was difficult at the time of the evidence gathering to find information about the system. In addition, although SPARKplus is an online system, it is not possible to incorporate it into Blackboard Learn, which might have clarified its suitability.

WebPA

For WebPA there was a great deal of documentation available, aiding its evaluation. It appeared to be easy to use, and it can be incorporated into Blackboard Learn. The main disadvantages of using WebPA were that it does not allow evidential data to be gathered, and that there is no capacity for written comments to be shared with students, as these are only visible to the tutor.

Bespoke REP system

The bespoke online peer assessment system developed within Real Estate and Planning and also used in the Department of Chemistry is similar to WebPA in terms of the underpinning scoring algorithm, and has the added advantage of allowing the collection of evidential material. Its main disadvantage is that it is comparatively difficult to configure, requiring a reasonable level of competence with Microsoft Excel. Additionally, technical support for the system is reliant on the University of Reading Information Technology Services.
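
To give a sense of what such a scoring algorithm computes, the sketch below is a simplified, assumed version of a WebPA-style calculation (the ratings are invented, and real systems such as WebPA and SPARKplus add weighting and moderation options): each member’s share of the peer ratings is scaled against an equal-contribution baseline and applied to the group mark.

# Invented peer ratings: each rater awards a score (e.g. 1-5) to every member of their group.
ratings = {
    "Ann": {"Ann": 4, "Ben": 5, "Cat": 3},
    "Ben": {"Ann": 4, "Ben": 4, "Cat": 2},
    "Cat": {"Ann": 5, "Ben": 5, "Cat": 3},
}
group_mark = 65  # tutor's mark for the group product

members = list(ratings)
# Total score received by each member, summed over all raters.
received = {m: sum(given[m] for given in ratings.values()) for m in members}
mean_received = sum(received.values()) / len(members)

for m in members:
    factor = received[m] / mean_received  # 1.0 represents an average contribution
    print(m, round(factor, 2), round(group_mark * factor, 1))

A tutor would still check and moderate the resulting marks, which is why visibility and control over the calculation featured so prominently in the evaluation criteria above.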

Reflections

Reviewing assessment and feedback in Part One: getting assessment and feedback right with large classes

Dr Natasha Barrett, School of Biological Sciences
n.e.barrett@reading.ac.uk
Year(s) of activity: 2010/11
Overview

Objectives

  • Review the quantity, type and timing of assessments carried out in compulsory modules taken by students in the School of Biological Sciences.
  • Recommend better practices for assessment and feedback.

Context

The massification and marketisation of Higher Education mean that it is increasingly important that the University of Reading performs well in terms of student satisfaction and academic results. The National Student Surveys between 2005 and 2011 and the Reading Student Survey of 2008 both indicated that assessment and feedback were areas in which the University of Reading and the School of Biological Sciences needed to improve.

Implementation

Managing transition to the MPharm Degree

Dr John Brazier, Chemistry, Food and Pharmacy
j.a.brazier@reading.ac.uk

Overview

The MPharm degree at the University of Reading has a diverse student cohort, in terms of both ethnicity and previous academic experience. During the most recent development of our programme, we have introduced a Part One assessment strategy that is focused on developing an independent learning approach.

Objectives

  • To use a formative assessment strategy to encourage independent learning.
  • To use timetabling to ease the transition to higher education.
  • To reduce students’ fixation on their grades, and encourage them to instead focus on feedback.

Context

It was clear from Part Two results that our students were not progressing from Part One with the necessary knowledge and skill set to succeed on the MPharm course. The ability to pass Part One modules while underperforming in exams was identified as a key issue. Students’ reliance on the standard information provided during lectures, and their inability to study beyond it, was affecting their final grades.

Implementation

When designing our programme, we introduced a requirement to not only pass each module at 40%, but also to pass each examination with a mark of at least 40%. It was felt that this would ensure that students in Part Two would be equipped with the basic knowledge to succeed, and allow them to concentrate on developing the higher level skills required for Parts Three and Four, rather than having to return to Part One material due to their lack of knowledge. The requirement to pass the examination with a mark of at least 40% was a challenge; therefore we developed a formative/diagnostic assessment strategy to support the students throughout the year. In order to ease the transition from further education to university level, we designed a timetable that initially required students to attend teaching sessions intensively for the first five weeks, but then reduced gradually over the following four weeks and terms. This would allow us to direct their learning during the first few weeks of term, and then allow time for them to develop their independence once familiar with university life. Diagnostic and formative assessment points were spaced throughout the two teaching terms, starting with in-class workshops and tutorials and online Blackboard tests. Towards the end of the Autumn term, the students were given an open book mock examination followed by an opportunity to mark their work with direction from an academic. This approach continued in the Spring term, and culminated in a full two-hour mock examination at the end of the Spring term which was marked and returned with feedback before the end of the term.

Impact

As suspected, the level of progression at first attempt was considerably lower than desired, with a high number of students failing the examined component. With resits, the number that failed to progress was much lower, and attrition rates for this cohort at Part Two were substantially lower still. Forcing the students to gain a high baseline of knowledge and understanding in Part One put them in a better position for Part Two, and the high pass rate at Part One resits showed the students must have developed some independent learning skills, as they did not have access to direct teaching between the period of the main exams and the resits.

Reflections

The main issue now facing us is the high number of students failing to progress at first attempt. We believe this is due to a combination of poor attendance and engagement from the Part One students, along with a lack of understanding about developing independent study skills. Although we expect students to develop independence with their learning, it is clear that some do not understand what this means, or how to approach their studies. Once the students pass Part One they continue to do well at Parts Two and Three, but we need to address the issues with progression at Part One.

Follow up

In order to improve our pass rate at Part One, we plan to develop a more robust process to identify and support students who are failing to engage with the course. This will be through comprehensive attendance monitoring and follow up by personal tutors, along with clear communication about expectations and independence. Students will initially get guidance on what they should have covered during timetabled teaching sessions, along with suggested independent work. As the year progresses, this guidance will become less detailed in order to further promote independence.

Engaging Diverse Learning Communities in Partnership: A Case Study Involving Professional Practice Students in Re-designing an Assessment

Lucy Hart (student – trainee PWP)- l.hart@student.reading.ac.uk 

Tamara Wiehe (staff – PWP Clinical Educator)- t.wiehe@reading.ac.uk

Charlie Waller Institute, School of Psychology and Clinical Language Sciences

Overview

This case study describes the re-design of an assessment for two Higher Education programmes on which students train to become Psychological Wellbeing Practitioners (PWPs) in the NHS. The use of remote methods engaged harder-to-reach students in the re-design of the assessment tool. The project promotes the effectiveness of partnership working across diverse learning communities by placing student views at the centre of decision making. In line with one of the University’s principles of partnership (2018) – shared responsibility for the process and outcome – this blog post has been created by a student involved in the focus group and the member of teaching staff leading the project.

Objectives

  • Improve the design of an assessment across the University’s PWP training programmes.
  • Involve students throughout the re-design process, ensuring student voices and experiences are acknowledged.
  • Implement the new assessment design with the next cohorts.

Context

It was proposed by students in modular feedback and staff in a quarterly meeting that the design of an assessment on the PWP training programmes could be improved. These programmes are grounded in evidence-based, self-reflective and collaborative practice. Therefore, it was appropriate to maintain this style of working throughout the process. This was achieved through the students reflecting on their experiences when generating ideas and reviewing the re-designed assessment.

Implementation

Traditional methods of partnership were not suitable for our students due to the nature of the PWP training programmes. Their week consists of one teaching day running from 9:30-4:30, a study day and three days practising clinically as a trainee PWP in an NHS service. Location was another factor as many of our students commute to University and live closer to their workplace. The use of technology and remote working enabled us to overcome these barriers and work in partnership with our students.

The partnership process followed these three steps:

[Figure: the three-step partnership process]

When generating ideas and reviewing the proposed assessment, we, the professional practice students, considered the following points:

  • Assessment design – consistency in using vignettes throughout the course, meaning students will be familiar with this method of working; a word limit ensures concise responses.
  • Time frame – the release date of the essay in relation to the examination date.
  • Feasibility – whether there will be enough study days to compensate for the change in design, allowing trainees to plan their essays.
  • Academic support – opportunities within the academic timetable to provide additional supervision-style sessions later in the module to support students.
  • Learning materials – accessibility of resources on Blackboard; assigning study days to allow planning of the essay.

Impact

  • It was agreed that the original ICT would be replaced with written coursework based on a vignette and implemented with our next cohorts.
  • The assessment aligned with the module learning outcomes and student experiences were considered in a meaningful way.
  • Harder to reach students were able to engage in the re-design of the assessment through effective communication methods.

Reflections

Student perspective:

“Being the expert of our experiences, it was refreshing to have our voices and experiences heard. We hope the re-design supports future cohorts and reduces anxieties around managing both university and service-based training. The focus group was a success due to the clear agenda setting and feasibility of remote online working. It can be proposed that a larger focus group would have been beneficial during the review stage to remove biases associated with a small sample size.”

Staff perspective:

“Student input allowed us to hear more about their experiences during the training and took a lot of pressure off of staff to always be the ones coming up with solutions. The outcomes have a far reaching impact beyond that of the students and staff on the programme in terms of engaging diverse learning communities in Higher Education and forming more connections between Universities and NHS services. Although inclusivity and diversity was considered throughout, more participants in the virtual focus group would improve this further. Students could also have more power over the creation of the assessment materials themselves. Both of these reflections will inform my professional practice going forwards.”

Making full use of Grademark in Geography and Environmental Science – Professor Andrew Wade

 

Professor Andrew Wade is responsible for research in hydrology, focused on water pollution, and for undergraduate and postgraduate teaching, including Hydrological Processes.

OBJECTIVES

Colleagues within the School of Archaeology, Geography and Environmental Sciences (SAGES) have been aware of the University’s broader ambition to move towards online submission, feedback and grading where possible. Many had already made the change from paper-based to online practices and others felt that they would like the opportunity to explore new ways of providing marks and feedback, to see if handling the process online led to a better experience for both staff and students.

CONTEXT

In Summer 2017 it was agreed that SAGES would become one of the Early Adopter Schools working with the EMA Programme. This meant that the eSubmission, Feedback and Grading workstream within the Programme worked very closely with both academic and professional colleagues within the School from June 2017 onwards, in order to support all aspects of a change from offline to online marking, and broader processes, for all coursework except where there was a clear practical reason not to (for example, field notebooks).

I had started marking online in 2016-17, so was familiar with some aspects of the marking tools and some of the broader processes.

IMPLEMENTATION

My Part 2 module, GV2HY Hydrological Processes, involves students producing a report containing two sections. Part A focuses on a series of short answers based on practical-class experiences and Part B requires students to write a short essay. I was keen to use all of the functionality of Grademark/Turnitin during the marking process, so I spent time creating my own personalised QuickMark bank so that I could simply pull across commonly used feedback phrases and marks against each specific question. This function was particularly useful when marking Part A: I could pull across QuickMarks showing the mark and then, in the same comment, explain why the question received, for example, 2 out of a possible 4 marks. It was especially helpful that my School sent around a discipline-specific set of QuickMarks created by a colleague. We could then pull the whole set, or just particular QuickMarks, into our own personalised set if we wanted to. This reduced the time spent on personalising and meant that the quality of my own set was improved further.

I also wanted to explore the usefulness of rubric grids as one way to provide feedback on the essay content in Part B of the assignment. A discipline-specific example rubric grid was created by the School and sent around to colleagues as a starting point. We could then amend this rubric to fit our specific assessment or, more generally, our modules and programmes. The personalised rubrics were attached to assignments using a simple process led by administrative colleagues. When marking, I would highlight the level of performance achieved by each student against each criterion by simply highlighting the relevant box in blue. This rubric grid was used alongside both QuickMarks and in-text comments in the essay. More specific comments were given in the blank free-text box to the right of the screen.

IMPACT

Unfortunately, module evaluation questionnaires were distributed and completed before students received feedback on their assignments, so the student reaction to online feedback using QuickMarks, in-text comments, free-text comments and rubrics was not captured.

In terms of the impact on the marker experience, after spending some initial time getting my personal QuickMarks library right and amending the rubric example to fit with my module, I found marking online easier and quicker than marking on paper.

In addition to this, I also found that the use of rubrics helped to ensure standardisation. I felt comfortable that my students were receiving similar amounts of feedback and that this feedback was consistent across the cohort, including when I returned to marking the coursework after a break. When moderating coursework, I tend to find more consistent marking when colleagues have used a rubric.

I also felt that students received more feedback than they usually might, but I am conscious of the risk that they drown in the detail. I try to use the free-text boxes to provide a useful overall summary and to avoid overuse of QuickMarks.

I don’t worry now about carrying large amounts of paper around or securing the work when I take assignments home. I also don’t need to worry about whether the work I’m marking has been submitted after the deadline – under the new processes established in SAGES, Support Centre colleagues deduct marks for late submission.

I do tend to provide my cohorts with a short piece of generic feedback, including an indicator of how the group performed, showing the percentage of students who attained a mark in each class. I could easily access this information from Grademark/Turnitin.
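
A minimal sketch of producing such a class-by-class breakdown from a list of marks is shown below; the marks and band boundaries are illustrative only, and in practice the figures come straight from Grademark/Turnitin.

# Illustrative marks for a cohort (not real data).
marks = [72, 68, 55, 61, 48, 75, 66, 39, 58, 63]

# Standard UK classification bands: name, lower bound (inclusive), upper bound (exclusive).
bands = [("First", 70, 101), ("2:1", 60, 70), ("2:2", 50, 60), ("Third", 40, 50), ("Fail", 0, 40)]

for name, lo, hi in bands:
    n = sum(lo <= m < hi for m in marks)
    print(f"{name}: {100 * n / len(marks):.0f}% of the cohort")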

I’m also still able to work through the feedback received by my Personal Tutees. I arrange individual sessions with them, they access ‘My Grades’ on Blackboard during this meeting and we work through the feedback together.

One issue was that, because the settings were configured in a particular way, students could access their feedback as soon as we had finished writing it. This issue was identified quickly and the settings were changed.

REFLECTIONS

My use of online marking has been successful and straightforward, but my experience has been helped very significantly by the availability of two screens in my office. These had already been provided by the School but became absolutely essential. Although I largely mark in my office on campus, when I mark from home I set up two laptops next to each other to replicate having two screens. This set-up allows me to check the student’s work on one screen whilst keeping their coursework on the other.

One further area of note is that the process of actually creating a rubric prompted a degree of reflection over what we actually want to see from students against each criterion and at different levels. This was particularly true around the grade classification boundaries: what is the difference between a high 2:2 and a low 2:1 in terms of each of the criteria we mark against, and how can we describe these differences in the descriptor boxes in a rubric grid so that students can understand them?

This process of trying to make full use of all of the functions within our marking tools has led to some reflection surrounding criteria, what we want to see and how we might describe this to students.

LINKS

For more information on the creation and use of rubrics within Grademark/Turnitin please see the Technology Enhanced Learning Blog pages here:
http://blogs.reading.ac.uk/tel/support-blackboard/blackboard-support-staff-assessment/blackboard-support-staff-turnitin/turnitin-rubrics/

Introducing online assessment in IFP modules – Dr Dawn Clarke

OBJECTIVES

Colleagues within the IFP wanted to improve the student assessment experience. In particular, we wanted to make the end-to-end process quicker and easier and reduce printing costs for students. We also wanted to offer some consistency with undergraduate programmes. This was particularly important for those students who stay in Reading after their foundation year to undertake an undergraduate degree. We were also keen to discover whether there would be any additional benefits or challenges which we had not anticipated.

CONTEXT

No IFP modules had adopted online submission, grading and feedback until Spring 2015. We were aware of a number of departments successfully running online assessment within the University and of the broader move towards electronic management of assessment within the sector as a whole. We introduced online assessment for all written assignments, including work containing pictures and diagrams, onto the IFP modules ‘Politics’ (PO0POL) and ‘Sociology’ (PO0SOC) in 2015.

IMPLEMENTATION

We made the decision very early in the process that we would use Turnitin Grademark within Blackboard Gradecenter. This was consistent with existing use in the Department of Politics.

We created a set of bespoke instructions for students to follow when submitting their work and when viewing their feedback. These instructions were based on those provided by the Technology Enhanced Learning Team but adjusted to fit our specific audience. These were distributed in hard copy and we spent some time in class reviewing the process well before the first submission date.

Submission areas in Blackboard and standard feedback rubric sections were created by the Departmental Administrator who was already highly experienced.

IMPACT

Overall, the end-to-end assessment process did become easier for students. They didn’t have to travel to campus to submit their assignments and they enjoyed instant access to Turnitin.

Turnitin itself became a very useful learning tool for pre-degree foundation students. It not only provided initial feedback on their work but prompted a dialogue with the marker before work was finally submitted. For students right at the start of their university experience this was extremely useful.

It was equally useful to automate deadlines. Students very clearly understood the exact time of the deadline. The marker was external to this process, allowing them to adopt a more neutral position. This was more transparent than manual systems and ensured a visibly consistent experience for all students.

In addition to this, because students did not have to print out their assignments, they became much more likely to include pictures and diagrams to illustrate their work. This often improved the quality of submission.

All students uploaded their essays without any additional help. A small number also wanted to upload their own PowerPoint presentations of their in class presentations at the same time which meant that we needed to work through the difficulty of uploading two files under one submission point.

Moving to online assessment presented a number of further challenges. In particular, we became aware that not all students were accessing their feedback. Arranging online access for external examiners in order to moderate the work presented a final challenge. We then worked to address both of these issues.

REFLECTIONS

It would be really helpful to explore the student experience in more depth. One way to do this would be to include a section specifically focused on feedback within IFP module evaluation forms.

In the future we would like to make use of the audio feedback tool within Gradecenter. This will maximise the experience of international students and their chances of developing language skills.