‘How did I do?’ Finding new ways to describe the standards of foreign language performance. A follow-up project on the redesign of two marking schemes (DLC)

Rita Balestrini and Elisabeth Koenigshofer, School of Literature and Languages, r.balestrini@reading.ac.uk; e.koenigshofer@reading.ac.uk

Overview

Working in collaboration with two Final Year students, we designed two ‘flexible’, ‘minimalist’ rubric templates, usable and adaptable across different languages and levels, to provide a basis for the creation of level-specific, and potentially task-specific, marking schemes in which sub-dimensions can be added to the main dimensions. The two marking templates are being piloted this year in the DLC. The project will feature in this year’s TEF submission.

Objectives

Design, in partnership with two students, rubric templates for the evaluation and feedback of writing tasks and oral presentations in foreign languages which:

  • were adaptable across languages and levels of proficiency
  • provided a more inclusive and engaging form of feedback
  • responded to the analysis of student focus group discussions carried out for a previous TLDF-funded project

Context

As a follow-up to a teacher-learner collaborative appraisal of rubrics used in MLES, now DLC, we designed two marking templates in partnership with two Final Year students, who had participated in the focus groups from a previous project and were employed through Campus Jobs. ‘Acknowledgement of effort’, ‘encouragement’, ‘use of non-evaluative language’, ‘need for and, at the same time, distrust of, objective marking’ were recurrent themes that had emerged from the analysis of the focus group discussions and clearly appeared to cause anxiety for students.

Implementation

We organised a preliminary session to discuss these findings with the two student partners. We suggested some articles about ‘complexity theory’ as applied to second language learning (Kramsch, 2012; Larsen-Freeman, 2012; 2015a; 2015b; 2017), with the aim of making our theoretical perspective explicit and transparent to them. A second meeting was devoted to planning collaboratively the structure of two marking schemes for writing and presentations. The two students worked independently to produce examples of standard descriptors which avoided the use of evaluative language and emphasised achievement rather than shortcomings. At a third meeting they presented and discussed their proposals with us. In the final meetings, we continued working to finalise the templates and the two visual learning charts the students had suggested. Finally, the two students wrote a blog post recounting their experience of this collaborative work.

The two students appreciated our theoretical approach, felt that it was in tune with their own point of view and that it could support the enhancement of the assessment and marking process. They also found resources on their own, which they shared with us – including rubrics from other universities. They made valuable suggestions, gave us feedback on our ideas and helped us to find alternative terms when we were struggling to avoid evaluative language in our descriptors. They also suggested making use of some visual elements in the marking and feedback schemes in order to increase immediacy and effectiveness.

Impact

The two marking templates are being piloted this year in the DLC. They were presented to colleagues over four sessions during which the ideas behind their design were explained and discussed. Further internal meetings are planned. These conversations, already begun with the previous TLDF-funded project on assessment and feedback, are contributing to the development of a shared discourse on assessment, which is informed by research and scholarship. The two templates have been designed in partnership with students to ensure accessibility and engagement with the assessment and feedback process. This is regarded as an outstanding practice in the ‘Assessment and feedback benchmarking tool’ produced by the National Union of Students and is likely to feature positively in this year’s TEF submission.

Reflections

Rubrics have become mainstream, especially within certain university subjects such as Foreign Languages. They have been introduced to ensure accountability and transparency in marking practices, but they have also created new problems of their own by promoting a false sense of objectivity in marking and grading. The openness and unpredictability of complex performance in foreign languages, and of the dynamic language learning process itself, are not adequately reflected in the detailed descriptors of the marking and feedback schemes commonly used for the objective numerical evaluation of performance-based assessment in foreign languages. As emerged from the analysis of focus group discussions conducted in the department in 2017, a lack of understanding of, and engagement with, the feedback provided by this type of rubric can generate frustration in students. Working in partnership with students, rather than simply listening to their voices or seeing them as evaluators of their own experience, helped us to design minimalist and flexible marking templates. These make use of sensible and sensitive language, introduce visual elements to increase immediacy and effectiveness, leave a considerable amount of space for assessors to comment on different aspects of an individual performance, and provide ‘feeding forward’ feedback. This type of ‘partnership’ can be challenging because it requires remaining open to unexpected outcomes. Whether it can bring about real change depends on how its outcomes interact with the educational ecosystems in which they are embedded.

Follow up

The next stage of the project will involve colleagues in the DLC who will be using the two templates. They will contribute to the creation of a ‘bank’ of descriptors by sharing those they develop to tailor the templates for specific stages of language development, language objectives, language tasks, or dimensions of student performance. We also intend to encourage colleagues teaching culture modules to consider using the basic structure of the templates to start designing marking schemes for the assessment of student performance in their modules.

Links

An account written by the two student partners involved in the project can be found here:

Working in partnership with our lecturers to redesign language marking schemes

The first stages of this ongoing project to enhance the process of assessing writing and speaking skills in the Department of Languages and Cultures (DLC, previously MLES) are described in the following blog entries:

National Union of Students 2017. The ‘Assessment and feedback benchmarking tool’ is available at:

http://tsep.org.uk/wp-content/uploads/2017/07/Assessment-and-feedback-benchmarking-tool.pdf

References

Bloxham, S. 2013. Building ‘standard’ frameworks. The role of guidance and feedback in supporting the achievement of learners. In S. Merry et al. (eds.) 2013. Reconceptualising feedback in Higher Education. Abingdon: Routledge.

Bloxham, S. and Boyd, P. 2007. Developing effective assessment in Higher Education. A practical guide. Maidenhead: McGraw-Hill International.

Bloxham, S., Boyd, P. and Orr, S. 2011. Mark my words: the role of assessment criteria in UK higher education grading practices. Studies in Higher Education 36 (6): 655-670.

Bloxham, S., den-Outer, B., Hudson J. and Price M. 2016. Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assessment & Evaluation in Higher Education 41 (3): 466-481.

Brooks, V. 2012. Marking as judgement. Research Papers in Education. 27 (1): 63-80.

Gottlieb, D. and Moroye, C. M. 2016. The perceptive imperative: Connoisseurship and the temptation of rubrics. Journal of Curriculum and Pedagogy 13 (2): 104-120.

HEA 2012. A Marked Improvement. Transforming assessment in HE. York: The Higher Education Academy.

Healey, M., Flint, A. and Harrington K. 2014. Engagement through partnership: students as partners in learning and teaching in higher education. York: The Higher Education Academy.

Kramsch, C. 2012. Why is everyone so excited about complexity theory in applied linguistics? Mélanges 33: 9-24.

Larsen-Freeman, D. 2012. The emancipation of the language learner. Studies in Second Language Learning and Teaching. 2(3): 297-309.

Larsen-Freeman, D. 2015a. Saying what we mean: Making a case for ‘language acquisition’ to become ‘language development’. Language Teaching 48 (4): 491-505.

Larsen-Freeman, D. 2015b. Complexity Theory. In VanPatten, B. and Williams, J. (eds.) 2015. Theories in Second Language Acquisition. An Introduction. 2nd edition. New York: Routledge: 227-244.

Larsen-Freeman, D. 2017. Just learning. Language Teaching 50 (3): 425-437.

Merry, S., Price, M., Carless, D. and Taras, M. (eds.) 2013. Reconceptualising feedback in Higher Education. Abingdon: Routledge.

O’Donovan, B., Price, M. and Rust, C. 2004. Know what I mean? Enhancing student understanding of assessment standards and criteria. Teaching in Higher Education 9 (3): 325-335.

Price, M. 2005. Assessment standards: the role of communities of practice and the scholarship of assessment. Assessment & Evaluation in Higher Education 30 (3): 215-230.

Sadler, D. R. 2009. Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education 34 (2): 159-179.

Sadler, D. R. 2013. The futility of attempting to codify academic achievement standards. Higher Education 67 (3): 273-288.

Torrance, H. 2007. Assessment as learning? How the use of explicit learning objectives, assessment criteria and feedback in post-secondary education and training can come to dominate learning. Assessment in Education 14 (3): 281-294.


Yorke, M. 2011. Summative assessment: dealing with the ‘Measurement Fallacy’. Studies in Higher Education 36 (3): 251-273.

Stories of Our Studies

Simon Floodgate, Institute of Education, s.floodgate@reading.ac.uk

Overview

A form of interactive, reflective practice for students in which Playback Theatre (an improvisatory form) is used to ‘play back’ individual stories of students’ experiences regarding all aspects of their studies. This process can support emotional literacy and well-being and promote professionalism in students at all levels of study.

Objectives

  • To develop students’ ability to both express and assert themselves in the world and to support them to be more successful within their studies. (TLDF Priority 2.2)
  • To support students to feel valued, gain greater awareness of their skills and articulate these to better address the challenges they face in the field of education and the workplace (TLDF Priority 2.3)

Context

The project addresses concerns regarding student well-being and emotional literacy highlighted nationally, within the University, and within the IOE, where workload and pressures have specifically impacted initial teacher training (ITT) students transitioning into teaching professionals.

Implementation

The pilot year, within the IOE, focussed upon the training of a student performance group, with a couple of performance-workshops undertaken with Secondary ITT students and IOE staff. Both sessions were evaluated, and the students involved in the performance team were also asked to evaluate the benefits to them of engagement in the project. The project enters a second year (2019-20), with further funding, to adapt the contact sessions. This will lead to two different versions of Stories of Our Studies. A full-length, two-hour version will incorporate a full Playback Theatre performance of one and a half hours’ duration in a more public setting. A second, shorter version will align the performance elements with discursive and written aspects focussed upon critical incident analysis (Lister and Crisp, 2007). This will blend the elements for more captive audiences within module teaching sessions.

Impact

As a pilot project, Stories of Our Studies achieved its objectives.  A student team was trained to deliver the contact sessions alongside the project leader.  The project was presented to both PGCE Secondary ITT students and IOE staff, enabling feedback from different perspectives.  Staff were able to appreciate the potential impact upon student well-being.  The PGCE students were able to effectively reflect upon their learning, in particular focussing upon their school teaching placements. They were able to subjectively reflect upon how these experiences felt to them but also objectively appreciate what occurred, how their experiences were similar or different to others and to be able to consider themselves as professional teachers soon to embark upon their chosen profession. The TLDF priorities 2.2 and 2.3 were both met.

Reflections

The enthusiasm and willingness of the UG students who trained in the form were exceptional, and their empathy and artistry were commented upon following both performance-workshops. This was a major factor in the pilot’s success. The structure of the session, with the main performance aspect following some Morenian sociometry, facilitated a relaxed and intimate atmosphere, enabling audience members to share openly. The use of the form – Playback Theatre – was vital to the success of the pilot.

Although participants gained a lot from their engagement in the session, there is a further need to develop the sustainability of the reflective process. To this end the project will be developed into longer and shorter iterations (as mentioned above). There remains some difficulty in encouraging students to attend extra-curricular sessions and, for many, to attend events in which drama/theatre are mentioned. This makes it difficult to attract both student-performers and audience members. Word of mouth will help, and growing momentum will attract more interest and more students to engage with the project.

Follow up

See above. The project has entered a second year with further TL enhancement (mini) funding.  It is evolving with the incorporation of critical incident analysis and a further blending of the performance and written reflection elements.

We already have more performance-workshops booked in the diary for 2019-20 than for last year, including presentation at the University’s T&L conference in January 2020.

Contact has been made with the RUSU society, Open Minds, to investigate the potential of some performances to a larger student audience outside of timetabled teaching.

The performance-workshop, photographed last year, will be filmed to create a marketing online clip to promote the project.  Recruitment of new student-performer members has already begun.

Photo of Playback Theatre in action

Capturing and Developing Students’ Assessment Literacy

Hilary Harris, Maria Danos, Natthapoj Vincent Trakulphadetkrai, Stephanie Sharp, Cathy Tissot, Anna Tsakalaki, Rowena Kasprowicz – Institute of Education

hilary.a.harris@reading.ac.uk

Overview

The Institute of Education’s (IoE) T&L Group on Assessment Literacy worked collaboratively with 300+ students to ascertain the clarity level of assessment criteria used in all programmes across the IoE.  The findings were used to develop a report containing key findings and recommendations, which were then shared with programme directors. The findings also fed into the development of a Glossary of Common Assessment Terms to help develop students’ assessment literacy. SDTLs and DDTLs of almost all UoR Schools and the Academic Director of the UoR Malaysia campus have now either had one-to-one meetings with us or contacted us to explore how our group’s work could be adopted and adapted in their own setting.

Objectives

The aims of the activity were to:

  • Develop students’ assessment literacy, specifically in terms of their understanding of assessment criteria which are used in marking rubrics
  • Engage students in reviewing the clarity of assessment criteria and terms used in marking rubrics
  • Engage programme directors in reflecting on the construction of their marking rubrics
  • Develop an IoE-wide glossary of common assessment terms

Context

The IoE has set up T&L Groups to enhance different aspects of our teaching and learning practices as part of the peer review process. The T&L Group on Assessment Literacy has been meeting since 2017, and is made up of seven academics from a wide range of undergraduate and postgraduate programmes.

As marking rubrics are now used for all summative assessments at the IoE (and to some extent across the University), ensuring that students have a good understanding of the embedded assessment terms matters as the criteria inform students of what is expected of them for a particular assessment. Moreover, the marking rubrics can also be used by students to develop their draft work before submission.

Implementation

The Group asked 300+ students across all the IoE programmes to indicate the clarity level of their programme’s assessment criteria by circling any terms on the marking rubric that they were confused by. The Group collated the information and created a summary table for each programme, ranking assessment terms according to how often the terms were highlighted by the students. Each group member then wrote a brief report for each programme with key findings and recommendations on alternative assessment terms that are clearer (e.g. replacing ‘recapitulation’ with ‘summary’, or ‘perceptive’ with ‘insightful’). In other cases, where the use of specific terminology is essential (e.g. scholarship or ethics), the Group’s advice is for module convenors to spend some time in class explaining such terms to students and to refer students to the assessment glossary for further support and examples. Both the Report and the Glossary were disseminated to programme directors and their teams, who were then able to use the evidence in the report to reflect on their programme’s assessment criteria and consider with their team any changes that would make the marking rubric more accessible and easier for students to understand.
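The collation and ranking step described above can be sketched in a few lines of code. This is a hypothetical illustration only: the term names and responses are invented, not the Group’s actual survey data.

```python
from collections import Counter

# Each student's response is the list of rubric terms they circled as
# confusing (invented example data, not the Group's actual results).
student_responses = [
    ["recapitulation", "perceptive"],
    ["recapitulation", "scholarship"],
    ["perceptive", "recapitulation"],
]

# Tally how often each term was circled across all responses.
term_counts = Counter(term for response in student_responses for term in response)

# Rank terms from most to least frequently circled, as in the summary tables.
ranking = term_counts.most_common()
print(ranking)  # [('recapitulation', 3), ('perceptive', 2), ('scholarship', 1)]
```

The same tally, grouped per programme, would produce one summary table per programme as described above.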

Impact

At the IoE, the work has already made an impact in that programme directors have reflected on their assessment criteria alongside their teams and have acted on the Group’s recommendations (e.g. replacing problematic terms in their marking rubrics with terms that are easier for students to understand). The Glossary has been used by IoE programme directors and module convenors when introducing the assessment and their marking rubrics. The Glossary has also been uploaded onto Blackboard for students to consult independently. The feedback from students on the Glossary has also been very positive. For example, one student commented that “The definitions were useful and the examples provided were even more helpful for clarifying exactly what the terms mean. The glossary is laid out in a clear and easy to follow way for each key term”.

Beyond the IoE, impact is being generated. Specifically, SDTLs and DDTLs of almost all UoR Schools and the Academic Director of the UoR Malaysia campus have now either had one-to-one meetings with us or contacted us to explore how our group’s work could be adopted and adapted in their own setting. The Group has been invited to give talks on its work at CQSD events and the School of Law’s T&L seminar. The Group is also currently working with academic colleagues at other universities (nationally and internationally) to replicate this Group’s work and generate impact beyond the UoR.

Reflections

The activity was very successful as:

  • The Group had a clear focus of what it wanted to achieve
  • The Group was given time to carry out its work
  • There was strong leadership of the team, with each member being allocated specific contributions to the project

The process of involving students in reviewing terms on marking rubrics has empowered them to treat the documents critically and to start a conversation with their lecturers about the purpose of marking rubrics, as well as to be involved as partners in making the marking rubric work for them.

There were some challenges that needed to be overcome, and ideas for improving the project:

  • When presented to colleagues at the Staff Day, some members of staff expressed the view that ‘tricky’ terms should be retained as developing an understanding of these terms is part of the transition to HE study. This was recognised in our report which suggests that technical terms (e.g. methodology) could be retained provided that they are explained to students.

Follow up

The Group plans to spend the 2019/2020 academic year generating and capturing the impact of its work across and beyond the UoR.

Reframing Identity 360

Kate Allen, Department of Art, k.allen@reading.ac.uk

Overview

An investigative artwork that explores identity using 360° cameras, developed through practical, alumni-led workshops and socially engaged art with current art students, school groups and the general public. It formed part of ‘ArtLab Movement’ at Tate Exchange (TEx) at Tate Modern in March 2019 and will be archived on the ArtLab website.

Objectives

  • Contribute to a live art event/outreach work experience led by alumni at Tate Exchange 1-3 March 2019
  • Explore identity capture with 360° cameras
  • Experiment with 360° cameras, including designing, capturing, printing and editing
  • Create portraits with purpleSTARS, people with learning disabilities and children from Widening Participation schools in Reading

Context

Reframing Identity explored self-portraits shot in 360°, developed as a response to Tania Bruguera’s Turbine Hall commission concerning institutional power, borders and migration. Can 360° self-portraits raise awareness of how interconnected we are, when no person is ever behind the 360° camera and everyone is included?

Implementation

Alumna and virtual reality artist Kassie Headon researched ideas in response to Tania Bruguera’s installation at Tate Modern, inspired by Bruguera’s ideas on inclusion and connecting to Kate Allen’s research with purpleSTARS, a group of people with and without learning disabilities who aim to make museums more inclusive. Kassie demonstrated to students and purpleSTARS how to use the GoPro Fusion camera and the app to edit 360° content. Activities were developed to share the 360° self-portrait concept with visitors, including drawing cylindrical self-portraits which they could then wear on their heads for a 360° selfie. Students facilitated the Reframing Identity 360 workshop as part of ArtLab Movement at TEx. Using 360° cameras was a new experience and concept for our students and for most people visiting the TEx. The 360° self-portraits were exhibited via a live video stream from the 360° cameras on an iPad displayed at the Tate, letting participants explore the views, which they could manipulate and distort to create the desired effect. Participants’ 360° self-portraits were also printed or sent to the visitor’s phone.

Impact

Reframing Identity 360 created access and inclusion with new technologies for students and the public. Experiencing the live video stream frequently gave visitors an ‘Oh wow’ moment. TEx gave an opportunity for research-led teaching, with Dr Allen, purpleSTARS, alumna Kassie Headon and current BA students exploring the concept of 360° self-portraits, gaining professional practice experience facilitating the workshops, and developing technical skills working with the 360° camera. The 360° cameras are now part of the digital equipment available to students, with a core team of ArtLab students now familiar with their potential and how to use them.

Reflections

Working with new technologies in collaboration with alumni, ArtLab students and purpleSTARS led to new perspectives on ideas of inclusion and self-portraiture. The experimental research occurred in response to work at the Tate and in collaboration with visitors to TEx. The project built capacity and awareness of the new technology being introduced into the Art Department, with students learning through research and practical experience its potential to create artworks and inclusive engagements.

Follow up

Kassie Headon continued to work with the 360° camera, collaborating with widening participation schools during the ArtLab summer workshops 2019, exploring spaces and manipulating 2D versions of 3D space.

We are developing further research collaborations and research led teaching opportunities for ideas exploring inclusion in museums and immersive virtual reality artworks/experiences using Oculus Rift technology.

Links and References

We created a 360 recording of our Reframing Identity event at the Tate https://www.thinglink.com/mediacard/1158753748827242499?autoplay=0&autorotate=0&displaytitle=1&rel=1

ArtLab documents the workshop

https://readingartlab.com/2019/04/25/artlab-tate-exchange-visual-diary-2nd-and-3rd-march-2019/

purpleSTARS web documentation

https://purplestars.org.uk/2017/11/12/purplestars-at-tate-gallery-2018/

Tate Exchange webpage

https://www.tate.org.uk/whats-on/tate-modern/tate-exchange/workshop/reading-assembly-movement

A ‘Sherlock’ Approach to Physician Associate Learning: Using Workshops to Promote Critical Thought

Dr Sarah Greenwood, Lecturer, Physician Associate Programme, School of Chemistry, Food and Pharmacy, s.l.greenwood@reading.ac.uk

Physician Associate (PA) students are talented life-sciences postgraduates who must quickly develop critical thinking skills in relation to medicine. Our PA programme focuses on the core skill of applying bioscientific and medical theory to the skills of history taking, clinical examination, investigation, diagnosis and treatment in order to produce safe, competent practitioners within two years.

Our student numbers have doubled in the five years since the programme began, and as we strive to accommodate higher numbers, we witness greater diversity in learning styles. We recognised the need to promote advanced critical thinking amongst all our students in creative ways.

Firstly, funding secured access for all our students to McGraw Hill’s ‘Connect Online’ (which included an anatomy and physiology e-book, histology slides, media files, assessment tests and a cadaver dissection) for students to work through system by system. This online package proved very popular with the students: the overall average grade across 18 assignments was 94.47%. Students’ engagement could be regularly monitored by the lead lecturer, and areas of difficulty were successfully addressed.

Secondly, funding enabled us to develop in-house ‘PA workshop investigation packs’, which were used by groups of PA students in our clinical skills suite and online. The packs were themed according to body systems and consisted of a series of workstations containing instructions and various learning materials. Our PA students worked together to tackle core practical and theoretical concepts, working out solutions in a systematic manner – hence using a ‘Sherlock’ detective approach to their learning! The funding covered the cost of all our workshop materials, in particular laminated displays/charts, questions and visual guides; these are particularly valued because they are reusable for future cohorts of PA students.

The learning processes aimed to mirror the role of the Physician Associate in practice. As such, the learning packs provided engaging, challenging and motivational learning to develop essential skills safely and effectively.

The effectiveness of the workshops became apparent early on – as evidenced by the number of students passing their formative practical examinations at first attempt (shown below).

Graphs comparing formative results without workshops and with workshops, showing improved results following the workshops

In the summative end-of-year objective structured clinical examinations (OSCEs), 28% of our workshop students achieved over 80% in these practical exams, with 5 students achieving 90% or above. This exceeded the previous cohort’s results, where only 8% of students scored over 80% and none scored 90% or above. There was an overall improvement in mean performance from 66% to 70%.

Graph showing improved results following workshop


The student evaluations were very positive; all students were able to articulate what they had gained from the experience:

“Examination station was useful because I was able to practice examination skills in an -almost- clinical environment, with the help of teachers. Another station I found useful was the BNF station. It gave me an understanding of how to use the BNF in a given time frame, and find what I am looking for. The BNF station also helped me identify a lot of drugs for certain conditions, which I would not have known otherwise”.

“The upper and lower neurological examinations were very useful. This is because I found the overlap and structure similar and reinforce the other. I also found the breast examination very useful because I am less likely to get patient experience with this as a male student”.

“Listening to the heart murmurs station with questions on hypertension – allowed us to work through different case examples”

The lecturers and students all recognised the value of the workshops, and this fun, interactive and relaxed teaching approach has now been formally integrated into the curriculum. We are most grateful for the support of the University’s teaching and learning enhancement scheme which funded this intervention.

Improving assessment writing and grading skills through the use of a rubric – Dr Bolanle Adebola

Dr Bolanle Adebola is the Module Convenor and lecturer for the following modules on the LLM Programme (on campus and distance learning): International Commercial Arbitration, Corporate Governance, and Corporate Finance. She is also a Lecturer for the LLB Research Placement Project.

Bolanle is also the Legal Practice Liaison Officer for the CCLFR.

A profile photo of Dr Adebola

OBJECTIVES

For students:

• To make the assessment criteria more transparent and understandable.
• To improve assessment output and essay writing skills generally.

For the teacher:

• To facilitate assessment grading by setting clearly defined criteria.
• To facilitate the feedback process by creating a framework for dialogue which is understood both by the teacher and the student.

CONTEXT

I faced a number of challenges in relation to the assessment process in my first year as a lecturer:

• My students had not performed as well as I would have liked them to in their assessments.

• It was my first time having to justify the grades I had awarded, and I found that I struggled to articulate clearly and consistently the reasons for some of them.

• I had been newly introduced to the step-marking framework for distinction grades, as well as the requirement to make full use of the grading scale, which I found challenging in view of the quality of some of the essays I had graded.

I spoke to several colleagues but came to understand that there were as many approaches as there were people. I also discussed the assessment process with several of my students and came to understand that many were both unsure and unclear about the criteria by which their assessments were graded across their modules.
I concluded that I needed to build a bridge between my approach to assessment grading and my students’ understanding of the assessment criteria. Ideally, the chosen method would facilitate consistency and the provision of feedback on my part, and improve the quality of essays on my students’ part.

IMPLEMENTATION

I tend towards the constructivist approach to learning, which means that I structure my activities towards promoting student-led learning. For summative assessments, my students are required to demonstrate their understanding of, and ability to critically appraise, legal concepts that I have chosen from our sessions in class. Hence, the main output for all summative assessments on my modules is an essay. Wolf and Stevens (2007) assert that learning is best achieved where all the participants in the process are clear about the criteria for the performance and the levels at which it will be assessed. My goal therefore became to ensure that my students understood the elements I looked for in their essays, these being the criteria against which I graded the essays. They also had to understand how I decided the standards that their essays reflected. While the student handbook sets out the various standards that we apply in the University, I wanted to provide clearer direction on how students could meet those standards, and on how I determine that an essay meets any of them.

If the students were to understand the criteria I apply when grading their essays, then I would have to articulate them. Articulating the criteria for a well-written essay would benefit both myself and my students. For my students, in addition to a clearer understanding of the assessment criteria, it would enable them to self-evaluate, which would improve the quality of their output. Improved quality would lead to improved grades, and I could give effect to university policy. Articulating the criteria would benefit me because it would facilitate consistency. It would also enable me to give detailed and helpful feedback to students on the strengths and weaknesses of the essays being graded, as well as on their essay writing skills in general, with advice on how to improve different facets of their outputs going forward. Ultimately, my students would learn valuable skills which they could apply across the board and after they graduate.
For assessments which require some form of performance, essays being an example, a rubric is an excellent evaluation tool because it fulfils all the requirements I have expressed above (Brookhart, 2013). Hence, I decided to present my grading criteria and standards in the form of a rubric.

The rubric is divided into five criteria, set out in five rows:

  • Structure
  • Clarity
  • Research
  • Argument
  • Scholarship.

For each criterion, there are four performance levels, set out in columns: Poor, Good, Merit and Excellent. An essay is mapped along each row and column. The final mark depends on how the student has performed on each criterion, as well as my perception of the output as a whole.
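The grid structure described above can be sketched in code. This is purely illustrative: the criterion and level names come from the text, but the numeric band values and the holistic-adjustment mechanism are hypothetical assumptions, not the author's actual marking arithmetic.

```python
# Illustrative sketch of the rubric as a grid of criteria x performance levels.
# Criterion and level names are from the case study; the band values and the
# holistic adjustment are hypothetical, for demonstration only.

CRITERIA = ["Structure", "Clarity", "Research", "Argument", "Scholarship"]
LEVELS = {"Poor": 45, "Good": 58, "Merit": 65, "Excellent": 75}  # hypothetical band midpoints

def grade(essay_levels, holistic_adjustment=0):
    """Map an essay along each criterion's row, then combine the per-criterion
    marks with the marker's holistic judgement of the output as a whole."""
    assert set(essay_levels) == set(CRITERIA), "every criterion must be rated"
    per_criterion = [LEVELS[essay_levels[c]] for c in CRITERIA]
    return round(sum(per_criterion) / len(per_criterion) + holistic_adjustment, 1)

example = {
    "Structure": "Merit", "Clarity": "Good", "Research": "Excellent",
    "Argument": "Merit", "Scholarship": "Good",
}
print(grade(example))  # → 64.2 with these hypothetical band values
```

The holistic adjustment reflects the case study's point that the final grade is not purely mechanical: the per-criterion mapping is combined with the marker's discretionary view of the essay as a whole.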

Studies suggest that a rubric is most effective when produced in collaboration with the students (Andrade, Du and Mycek, 2010). However, when I created my rubric I did not involve my students; I thought that would not be necessary given that the rubric was to be applied generally and with changing cohorts of students. Notwithstanding, I wanted students to engage with it. So, the document containing the rubric has an introduction addressed to the students, which explains the context in which the rubric has been created. It also explains how the rubric is applied and the relationship between the criteria. It states, for example, that ‘even where the essay has good arguments, poor structure may undermine its score’. It explains that the final grade combines an objective assessment against the criteria with a subjective evaluation of the output as a whole, based on the marker’s discretion.

To ensure that students are not confused about the standards set out in the rubric and the assessment standards set out in the students’ handbook, the performance levels in the rubric are mapped against the assessment standards in the student handbook, and the document containing the rubric includes links to the relevant handbook. Finally, the rubric gives the students an example of how it would be applied to an assessment. Thereafter, it sets out the manner in which feedback will be presented to the students. That gives feedback a structure which both the students and I understand clearly.

IMPACT

My students’ assessment outputs have been of much better quality, and so have achieved better grades, since I introduced the rubric. In one of my modules, the average grade, as recorded in the module convenor’s report to the external examiner (MC’s Report) for 2015/16, was 64.3%. 20% of the class attained distinctions, all in the 70–79 range. That year, I struggled to give feedback and was asked to provide additional feedback comments to a few students. In 2016/17, after I introduced the rubric, there was a slight dip in the average mark to 63.7%. The dip was caused by a fail mark amongst the cohort; if that fail mark is controlled for, the average had crept up from 2015/16. There was a clear increase in the percentage of distinctions, which rose to 25.8% from 20% in the previous year. The cross-over came from students who had been in the merit range. Clearly, some students had been able to use the rubric to improve the standard of their essays. I found the provision of feedback much easier in 2016/17 because I had clear direction from the rubric. When giving feedback I explained both the strengths and weaknesses of the essay in relation to each criterion, in the hope that students would apply the advice more generally, as the method of assessment is the same across their other modules. In 2017/18, the average mark for the same module went up to 68.84%. 38% of the class attained distinctions, with 3% attaining more than 80%. Hence, in my third year, I have also been able to utilise step-marking in the distinction grade, which has enabled me to meet the University’s policy.

When I introduced the rubric in 2016/17, I had a control module, by which I mean a module in which I neither provided the rubric nor spoke to the students about their assessments in detail. The quality of assessments from that module was much lower than in the modules where the students had been introduced to the rubric. In that year, the average grade for the control module was 60%, with 20% attaining a distinction and 20% failing. In 2017/18, while I did not provide the students with the rubric, I spoke to them about the assessments. The average grade for the control module was 61.2%, with 23% attaining a distinction. The failure rate fell to 7.6%. The distinction band also expanded, with 7.6% attaining a higher distinction grade. There was movement from both the failure grade and the pass grade to the next standard/performance level. Though I did not provide the students with the rubric, I still provided feedback using the rubric as a guide. I have found that it has become ingrained in me and is a very useful tool for explaining the reasons for my grades to my students.

From my experience, I can assert, justifiably, that the rubric has played a very important role in improving the students’ essay outputs. It has also enabled me to improve my feedback skills immensely.

REFLECTIONS

I have observed that, as the studies in the field argue, it is insufficient merely to have a rubric. For the rubric to achieve the desired objectives, it is important that students actively engage with it. I must admit that I did not take a genuinely constructivist approach to the rubric: I wanted to explain myself to the students, and did not really encourage the two-way conversation the studies recommend. I think this limited the effectiveness of the rubric.

In 2017/18, I decided to talk the students through the rubric, explaining how they could use it to improve their performance. I led them through the rubric in the final or penultimate class, explaining how they might align their essays with the various performance levels/standards, and gave them insights into some of the essays I had assessed in the previous two years, highlighting which practices were poor and which were best. By the end of the autumn term, the first module in which I had provided both the rubric and an in-class explanation of its application saw a huge improvement in student output, as set out in the section above. The results have been the best I have ever had. As the standards have improved, so have the grades, and, as stated above, I have been able to achieve step-marking in the distinction grade while improving standards generally.

I have also noticed that even where a rubric is not used but the teacher talks to the students about the assessments and their expectations of them, students perform better than where there is no conversation at all. In 2017/18, while I did not provide the rubric to the control module, I discussed the assessment with the students, explaining practices which they might find helpful. As demonstrated above, there was a lower failure rate and an improvement generally across the board. I conclude, therefore, that assessment criteria ought to be explained much better to students if their performance is to improve; however, having a rubric, and student engagement with it, remains the best option.

I have also noticed that many students perform well, in the merit bracket, and would like to improve but are unable to work out how to do so. These students, in particular, find the rubric very helpful.

In addition, Wolf and Stevens (2007) observe that rubrics are particularly helpful for international students, whose previous assessment systems may have been different from, though no less valid than, the system in which they have chosen to study. Such students can struggle to understand what is expected of them, and so may fail to attain the best standards/performance levels they could, simply for lack of familiarity with the assessment practices. A large proportion of my students are international, and I think they have benefitted from having the rubric, particularly when invited to engage with it actively.

Finally, the rubric has improved my feedback skills tremendously. I am able to express my observations and grades in terms well understood both by myself and my students. The provision of feedback is no longer a chore or a bore. It has actually become quite enjoyable for me.

FOLLOW UP

On publishing the rubric to students:

I know that Blackboard gives the opportunity to embed a rubric within each module. So far, I have only uploaded copies of my rubric onto Blackboard for the students on each of my modules. I have decided to explore the embedding option to make the annual upload of the rubric more efficient. I will also see whether Blackboard offers opportunities to improve the rubric, which will be a couple of years old by the end of this academic year.

On the Implementation of the rubric:

I have noted that it takes about half an hour to explain the rubric to students for each module, which eats into valuable teaching time. A more efficient method is required to give students good insight into their assessments. This summer, as examination officer, I will liaise with my colleagues to discuss the provision of a best practice session for our students in relation to their assessments. At the session, students will be introduced to the rubric, which can then be paired with actual examples that the students are encouraged to grade using its content. Such sessions will improve their ability to self-evaluate, which is crucial both to their learning and to the improvement of their outputs.

LINKS

• K. Wolf and E. Stevens, ‘The Role of Rubrics in Advancing and Assessing Student Learning’ (2007) 7(1) Journal of Effective Teaching 3. https://www.uncw.edu/jet/articles/vol7_1/Wolf.pdf
• H. Andrade, Y. Du and K. Mycek, ‘Rubric-Referenced Self-Assessment and Middle School Students’ Writing’ (2010) 17(2) Assessment in Education: Principles, Policy & Practice 199. https://www.tandfonline.com/doi/pdf/10.1080/09695941003696172?needAccess=true
• S. Brookhart, How to Create and Use Rubrics for Formative Assessment and Grading (Association for Supervision & Curriculum Development, ASCD, VA, 2013).
• Turnitin, ‘Rubrics and Grading Forms’ https://guides.turnitin.com/01_Manuals_and_Guides/Instructor_Guides/Turnitin_Classic_(Deprecated)/25_GradeMark/Rubrics_and_Grading_Forms
• Blackboard, ‘Grade with Rubrics’ https://help.blackboard.com/Learn/Instructor/Grade/Rubrics/Grade_with_Rubrics
• Blackboard, ‘Import and Export Rubrics’ https://help.blackboard.com/Learn/Instructor/Grade/Rubrics/Import_and_Export_Rubrics

Interdisciplinary teaching: Science in Culture

Professor Nick Battey, School of Biological Sciences
n.h.battey@reading.ac.uk

Overview

A module for Part Three students, Science in Culture, was created through a collaborative effort between the Department of English Literature, the Department of History, and the School of Biological Sciences (SBS). The module was well-received by students, who found value in gaining the perspective of disciplines other than their own, and in experiencing teaching and learning methods outside the norm of their previous study.

Objectives

  • Offer a truly interdisciplinary module allowing students from English Literature, History, and SBS to study alongside one another, learning through the diverse teaching methods of science and the humanities.
  • Develop in students a broader, critical understanding of the precepts of science.
  • Provide an integrated view of science (with emphasis on Biological Sciences) within culture.

Context

The development of this collaborative module grew out of an Arts and Humanities Research Council sponsored project which looked at the value of literary and historical study of biology to students of biological sciences. An element of this was a workshop, ‘Cultivating Common Ground’, which aimed to foster interdisciplinary discussion between biology and the humanities. One of the key findings of the scoping study was that it would be beneficial to develop at least one module that taught both biology and humanities students alongside one another in an interdisciplinary way.

Implementation

The module was developed over a number of years by staff from SBS, English and History. The module designers from the different disciplines were determined to ensure that what was developed was a truly interdisciplinary module, breaking down the perceived divide between the sciences and the humanities, and showing how the different approaches and bodies of knowledge bear on the same questions.

The module is taught over one term. Students receive lectures and partake in seminar discussions on a historical, literary, or scientific concept, and also conduct lab work on subjects related to those explored in the lectures. For example, in lab work students identify a mutated gene and explore the use of mutations for understanding how genes work; the topic of mutation is then explored in its literary and historical contexts. The differences between the scientific, literary and historical approaches can then be explored as a cultural challenge. From the ‘Cultivating Common Ground’ workshop, a consensus had emerged that interdisciplinary learning and teaching needed to be ‘narrow and deep’. As a result, the module focuses on a defined set of ‘problems’, rather than ‘grand themes’, allowing a deeper exploration of these problems and their situation within the cultural dynamics and methods of science.

In order to ensure students experienced different ways of learning, they were given a variety of tasks, ranging from interpreting poems to discussing the history of a scientific process, which they recorded in a learning journal that was marked, with feedback from tutors, each week. While the completion of this journal over the course of the module formed part of the summative assessment, the weekly marking provided regular formative feedback to students. A focus on formative assessment was recognised as being important by the scoping study, as students on such an interdisciplinary module would require greater opportunity to learn what was expected of them. Linking formative assessment to the summative assessment ensured that students would be motivated to engage and receive valuable feedback. Students taking the module as part of a History or English Literature degree, for whom the module was worth 20 credits rather than 10, also wrote a summative essay.

Impact

The project was successful in delivering a truly interdisciplinary module, with collaboration between the School of Biological Sciences, the Department of English and the Department of History. The module was well-received by students, who reported that they appreciated the value of getting different perspectives on their disciplines.

Reflections

The greatest challenge in creating this module was achieving interdisciplinarity, as the teaching and learning strategies best suited to the individual disciplines were not necessarily suited to the teaching of an interdisciplinary module. That the module was in development for a number of years reflects the difficulty of developing an interdisciplinary approach, made more difficult by the paucity of existing literature on the topic from which to draw suitable practices. As a result, a number of iterative developments were needed to create a module that could be delivered in a way which best achieved its learning outcomes.

Interdisciplinarity also posed a challenge with regard to the marking of assessments. As each discipline has different expectations, marking had to be a collaborative process, with compromise reached between assessors.

While the provision of multiple opportunities for formative assessment and feedback had value, given that it helped introduce students to the other disciplines, and encouraged deep learning, the process was strenuous, for both students and staff.

As the module was interdisciplinary, this meant that students had to engage with topics and processes outside the norm of their previous academic study. As a result, despite their enjoyment and high attainment, students on the module did find it challenging.

Follow up

Following the successful running of the module during the 2014-15 academic year, the module has been offered again, with slight revisions. One of the revisions has been in assessment, with students producing a report at the end of the module, rather than creating a learning portfolio over the course of the module, thus somewhat reducing the workload of staff and students. A group presentation has also been introduced, providing a different type of assessment, and making interdisciplinary collaborative group work part of summative assessment.

Links

Reviewing assessment and feedback in Part One: getting assessment and feedback right with large classes

Dr Natasha Barrett, School of Biological Sciences
n.e.barrett@reading.ac.uk
Year(s) of activity: 2010/11
Overview

Objectives

  • Review the quantity, type and timing of assessments carried out in compulsory modules taken by students in the School of Biological Sciences.
  • Recommend better practices for assessment and feedback.

Context

The massification and marketisation of Higher Education mean that it is increasingly important for the University of Reading to perform well in terms of student satisfaction and academic results. The National Student Surveys between 2005 and 2011 and the Reading Student Survey of 2008 both indicated that assessment and feedback were areas in which the University of Reading and the School of Biological Sciences needed to improve.

Implementation