Stories of Our Studies

Simon Floodgate, Institute of Education, s.floodgate@reading.ac.uk

Overview

A form of interactive, reflective practice for students in which Playback Theatre (an improvisatory form) is used to ‘play back’ individual stories of students’ experiences regarding all aspects of their studies. This process can support emotional literacy and well-being and promote professionalism in students at all levels of study.

Objectives

  • To develop students’ ability to both express and assert themselves in the world and to support them to be more successful within their studies. (TLDF Priority 2.2)
  • To support students to feel valued, gain greater awareness of their skills and articulate these to better address the challenges they face in the field of education and the workplace (TLDF Priority 2.3)

Context

The project addresses concerns regarding student well-being and emotional literacy, highlighted nationally, within the University and within the IOE, where workload and pressures have particularly impacted initial teacher training (ITT) students transitioning into the teaching profession.

Implementation

The pilot year, within the IOE, focussed upon the training of a student performance group, with two performance-workshops undertaken with Secondary ITT students and IOE staff. Both sessions were evaluated, and the students involved as the performance team were also asked to evaluate the benefits to them of engagement in the project. The project enters a second year (2019-20), with further funding, to adapt the contact sessions. This will lead to two different versions of Stories of Our Studies. A full-length, two-hour version will incorporate a full Playback Theatre performance of 1½ hours’ duration in a more public setting. A second, shorter version will align the performance elements with discursive and written aspects focussed upon critical incident analysis (Lister and Crisp, 2007), blending the elements for more captive audiences within module teaching sessions.

Impact

As a pilot project, Stories of Our Studies achieved its objectives. A student team was trained to deliver the contact sessions alongside the project leader. The project was presented to both PGCE Secondary ITT students and IOE staff, enabling feedback from different perspectives. Staff were able to appreciate the potential impact upon student well-being. The PGCE students were able to reflect effectively upon their learning, in particular focussing upon their school teaching placements. They were able to reflect subjectively upon how these experiences felt to them, but also to appreciate objectively what occurred, how their experiences were similar to or different from others’, and to consider themselves as professional teachers soon to embark upon their chosen profession. TLDF priorities 2.2 and 2.3 were both met.

Reflections

The enthusiasm and willingness of the UG students who trained in the form were exceptional, and their empathy and artistry were commented upon following both performance-workshops. This was a major factor in the pilot’s success. The structure of the session, with the main performance aspect following some Morenian sociometry, facilitated a relaxed and intimate atmosphere, enabling audience members to share openly. The use of the form – Playback Theatre – was vital to the success of the pilot.

Although participants gained a lot from their engagement in the session, there is a further need to develop the sustainability of the reflective process. To this end the project will be developed into longer and shorter iterations (as mentioned above). There remains some difficulty in encouraging students to attend extra-curricular sessions and, for many, to attend events in which drama/theatre are mentioned; this makes it hard to attract both student-performers and audience members. Word of mouth will help, and as momentum gathers it will attract more interest and more students to engage.

Follow up

See above. The project has entered a second year with further TL enhancement (mini) funding.  It is evolving with the incorporation of critical incident analysis and a further blending of the performance and written reflection elements.

We already have more performance-workshops booked in the diary for 2019-20 than for last year, including presentation at the University’s T&L conference in January 2020.

Contact has been made with the RUSU society, Open Minds, to investigate the potential of some performances to a larger student audience outside of timetabled teaching.

The performance-workshop, photographed last year, will be filmed to create a marketing online clip to promote the project.  Recruitment of new student-performer members has already begun.

Photo of Playback Theatre in action

Capturing and Developing Students’ Assessment Literacy

Hilary Harris, Maria Danos, Natthapoj Vincent Trakulphadetkrai, Stephanie Sharp, Cathy Tissot, Anna Tsakalaki, Rowena Kasprowicz – Institute of Education

hilary.a.harris@reading.ac.uk

Overview

The Institute of Education’s (IoE) T&L Group on Assessment Literacy worked collaboratively with 300+ students to ascertain the clarity level of assessment criteria used in all programmes across the IoE.  The findings were used to develop a report containing key findings and recommendations, which were then shared with programme directors. The findings also fed into the development of a Glossary of Common Assessment Terms to help develop students’ assessment literacy. SDTLs and DDTLs of almost all UoR Schools and the Academic Director of the UoR Malaysia campus have now either had one-to-one meetings with us or contacted us to explore how our group’s work could be adopted and adapted in their own setting.

Objectives

The aims of the activity were to:

  • Develop students’ assessment literacy, specifically in terms of their understanding of assessment criteria which are used in marking rubrics
  • Engage students in reviewing the clarity of assessment criteria and terms used in marking rubrics
  • Engage programme directors in reflecting on the construction of their marking rubrics
  • Develop an IoE-wide glossary of common assessment terms

Context

The IoE has set up T&L Groups to enhance different aspects of our teaching and learning practices as part of the peer review process. The T&L Group on Assessment Literacy has been meeting since 2017, and is made up of seven academics from a wide range of undergraduate and postgraduate programmes.

As marking rubrics are now used for all summative assessments at the IoE (and to some extent across the University), ensuring that students have a good understanding of the embedded assessment terms matters as the criteria inform students of what is expected of them for a particular assessment. Moreover, the marking rubrics can also be used by students to develop their draft work before submission.

Implementation

The Group asked 300+ students across all the IoE programmes to indicate the clarity level of their programme’s assessment criteria by circling any terms on the marking rubric that they were confused by. The Group collated the information and created a summary table for each programme, ranking assessment terms according to how often the terms were highlighted by the students. Each group member then wrote a brief report for each programme with key findings and recommendations on clearer alternative assessment terms (e.g. replacing ‘recapitulation’ with ‘summary’, or ‘perceptive’ with ‘insightful’). In other cases, where the use of specific terminology is essential (e.g. scholarship or ethics), the Group’s advice is for module convenors to spend some time in class explaining such terms to students and to refer students to the assessment glossary for further support and examples. Both the Report and the Glossary were disseminated to programme directors and their teams, who were then able to use the evidence in the report to reflect on their programme’s assessment criteria and to consider with their team any changes that would make the marking rubric more accessible and easier for students to understand.
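The collation step described above (tallying how often each term was circled, then ranking terms per programme) can be sketched in a few lines of Python. The student responses and term names below are purely illustrative, not the Group’s actual data.

```python
from collections import Counter

# Hypothetical sample: each inner list holds the terms one student
# circled as confusing on their programme's marking rubric.
circled_by_student = [
    ["recapitulation", "perceptive", "synthesis"],
    ["recapitulation", "criticality"],
    ["perceptive", "recapitulation"],
]

# Tally every circled term and rank by how often students highlighted it,
# mirroring the summary table the Group produced for each programme.
tally = Counter(term for student in circled_by_student for term in student)
ranked = tally.most_common()

for term, count in ranked:
    print(f"{term}: circled by {count} student(s)")
```

A ranking like this makes it immediately visible which terms (here, ‘recapitulation’) most need replacing or explaining.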

Impact

At the IoE, the work has already made an impact in that programme directors have reflected on their assessment criteria alongside their teams and have acted on the Group’s recommendations (e.g. replacing problematic terms in their marking rubrics with terms that are easier for students to understand). The Glossary has been used by IoE programme directors and module convenors when introducing the assessment and their marking rubrics. The Glossary has also been uploaded onto Blackboard for students to consult independently. The feedback from students on the Glossary has also been very positive. For example, one student commented that “The definitions were useful and the examples provided were even more helpful for clarifying exactly what the terms mean. The glossary is laid out in a clear and easy to follow way for each key term”.

Beyond the IoE, impact is being generated. Specifically, SDTLs and DDTLs of almost all UoR Schools and the Academic Director of the UoR Malaysia campus have now either had one-to-one meetings with us or contacted us to explore how our group’s work could be adopted and adapted in their own setting. The Group has been invited to give talks on its work at CQSD events and the School of Law’s T&L seminar. The Group is also currently working with academic colleagues at other universities (nationally and internationally) to replicate this Group’s work and generate impact beyond the UoR.

Reflections

The activity was very successful as:

  • The Group had a clear focus of what it wanted to achieve
  • The Group was given time to carry out its work
  • There was strong leadership of the team, with each member being allocated specific contributions to the project

The process of involving students in reviewing terms on marking rubrics has empowered them to treat the documents critically and to start a conversation with their lecturers about the purpose of marking rubrics, as well as being involved as partners in making the marking rubric work for them.

There were some challenges that needed to be overcome, and ideas for improving the project:

  • When presented to colleagues at the Staff Day, some members of staff expressed the view that ‘tricky’ terms should be retained as developing an understanding of these terms is part of the transition to HE study. This was recognised in our report which suggests that technical terms (e.g. methodology) could be retained provided that they are explained to students.

Follow up

The Group plans to spend the 2019/2020 academic year generating and capturing the impact of its work across and beyond the UoR.

Reframing Identity 360

Kate Allen, Department of Art, k.allen@reading.ac.uk

Overview

An investigative artwork that explores identity using 360 cameras, developed through practical, alumni-led workshops and socially engaged art with current art students, school groups and the general public. It formed part of ‘ArtLab Movement’ at Tate Exchange (TEx) 2019 at Tate Modern in March and will be archived on the ArtLab website.

Objectives

- Contribute to a live art event/outreach work experience led by alumni at Tate Exchange, 1-3 March 2019

- Explore identity capture with 360 cameras

- Experiment with 360 cameras, including designing, capturing, printing and editing

- Create portraits with purpleSTARS, people with learning disabilities, and children from Widening Participation schools in Reading

Context

Reframing Identity explored self-portraits shot in 360, developed as a response to Tania Bruguera’s Turbine Hall commission concerning institutional power, borders and migration. Can 360 self-portraits raise awareness of how interconnected we are? With a 360 camera no person is ever behind the lens: everyone is included.

Implementation

Alumna and virtual reality artist Kassie Headon researched ideas in response to Tania Bruguera’s installation at Tate Modern, inspired by Bruguera’s ideas on inclusion and connecting to Kate Allen’s research with purpleSTARS, a group of people with and without learning disabilities who aim to make museums more inclusive. Kassie demonstrated to students and purpleSTARS how to use the GoPro Fusion camera and the app to edit 360 content. Activities to share the 360 self-portrait concept with visitors were developed, including drawing cylindrical self-portraits which they could then wear on their heads for a 360 selfie. Students facilitated the Reframing Identity 360 workshop as part of ArtLab Movement at TEx. Using 360 cameras was a new experience and concept for our students and for most people visiting the TEx. The 360 self-portraits were exhibited via a live video stream from the 360 cameras on an iPad displayed at the Tate, letting participants explore the views, which they could manipulate and distort to create the desired effect. Participants’ 360 self-portraits were also printed or sent to the visitor’s phone.

Impact

Reframing Identity 360 created access and inclusion with new technologies for students and the public. Experiencing the live video stream frequently gave visitors an ‘Oh wow’ moment. TEx provided an opportunity for research-led teaching: Dr Allen, purpleSTARS, alumna Kassie Headon and current BA students explored the concept of 360 self-portraits, gained professional practice experience facilitating the workshops, and developed technical skills working with the 360 camera. The 360 cameras are now part of the digital equipment available to students, with a core team of ArtLab students now familiar with their potential and how to use them.

Reflections

Working with new technologies in collaboration with alumni, ArtLab students and purpleSTARS led to new perspectives on ideas of inclusion and self-portraiture. The experimental research occurred in response to work at the Tate and in collaboration with visitors to TEx. The project built capacity and awareness of the new technology being introduced into the Art Department, learning through research and practical experience about its potential to create artworks and inclusive engagements.

Follow up

Kassie Headon continued to work with the 360 camera, collaborating with widening participation schools during the ArtLab summer workshops 2019, exploring spaces and manipulating 2D versions of 3D space.

We are developing further research collaborations and research led teaching opportunities for ideas exploring inclusion in museums and immersive virtual reality artworks/experiences using Oculus Rift technology.

Links and References

We created a 360 recording of our Reframing Identity event at the Tate https://www.thinglink.com/mediacard/1158753748827242499?autoplay=0&autorotate=0&displaytitle=1&rel=1

ArtLab documents the workshop

https://readingartlab.com/2019/04/25/artlab-tate-exchange-visual-diary-2nd-and-3rd-march-2019/

purpleSTARS web documentation

https://purplestars.org.uk/2017/11/12/purplestars-at-tate-gallery-2018/

Tate Exchange webpage

https://www.tate.org.uk/whats-on/tate-modern/tate-exchange/workshop/reading-assembly-movement

A ‘Sherlock’ Approach to Physician Associate Learning: Using Workshops to Promote Critical Thought

Dr Sarah Greenwood, Lecturer, Physician Associate Programme, School of Chemistry, Food and Pharmacy, s.l.greenwood@reading.ac.uk

Physician Associate (PA) students are talented life-sciences postgraduates who must quickly develop critical thinking skills in relation to medicine. Our PA programme focuses on the core skill of applying bioscientific and medical theory to the skills of history taking, clinical examination, investigation, diagnosis and treatment in order to produce safe, competent practitioners within two years.

Our student numbers have doubled in the five years since the programme began, and so as we strive to accommodate higher numbers, we witness greater diversity in learning styles. We recognised the need to promote advanced critical thinking amongst all our students in creative ways.

Firstly, funding secured access for all our students to McGraw Hill’s ‘Connect Online’ (which included an anatomy and physiology e-book, histology slides, media files, assessment tests and a cadaver dissection) for students to work through system by system. This online package proved very popular with the students, with an overall average grade across 18 assignments of 94.47%. Students’ engagement could be regularly monitored by the lead lecturer, and areas of difficulty were successfully addressed.

Secondly, funding enabled us to develop in-house ‘PA workshop investigation packs’, which were used by groups of PA students in our clinical skills suite and online. The packs were themed according to body systems, and consisted of a series of work stations containing instructions and various learning materials. Our PA students worked together to tackle core practical and theoretical concepts, working out solutions together in a systematic manner – hence using a ‘Sherlock’ detective approach to their learning!  The funding covered the cost of all our workshop materials, in particular laminated displays/charts, questions and visual guides; these are particularly valued because they are reusable for future cohorts of PA students.

The learning processes aimed to mirror the role of the Physician Associate in practice. As such, the learning packs provided engaging, challenging and motivational learning to develop essential skills safely and effectively.

The effectiveness of the workshops became apparent early on – as evidenced by the number of students passing their formative practical examinations at first attempt (shown below).

Graphs comparing formative results without workshops and with workshops, showing improved results following the workshops

In the summative end-of-year objective structured clinical examinations (OSCEs), 28% of our workshop students achieved >80% in these practical exams, with 5 students achieving 90% or above. This exceeded the previous cohort’s results, where only 8% of students scored over 80% and none scored 90% or above. There was an overall improvement in mean performance from 66% to 70%.

Graph showing improved results following workshop

 

The student evaluations were very positive; all students were able to articulate what they had gained from the experience:

“Examination station was useful because I was able to practice examination skills in an -almost- clinical environment, with the help of teachers. Another station I found useful was the BNF station. It gave me an understanding of how to use the BNF in a given time frame, and find what I am looking for. The BNF station also helped me identify a lot of drugs for certain conditions, which I would not have known otherwise.”

“The upper and lower neurological examinations were very useful. This is because I found the overlap and structure similar and reinforce the other. I also found the breast examination very useful because I am less likely to get patient experience with this as a male student”.

“Listening to the heart murmurs station with questions on hypertension – allowed us to work through different case examples”

The lecturers and students all recognised the value of the workshops, and this fun, interactive and relaxed teaching approach has now been formally integrated into the curriculum. We are most grateful for the support of the University’s teaching and learning enhancement scheme which funded this intervention.

Improving assessment writing and grading skills through the use of a rubric – Dr Bolanle Adebola

Dr Bolanle Adebola is the Module Convenor and lecturer for the following modules on the LLM Programme (On campus and distance learning):

International Commercial Arbitration, Corporate Governance, and Corporate Finance. She is also a Lecturer for the LLB Research Placement Project.

Bolanle is also the Legal Practice Liaison Officer for the CCLFR.

A profile photo of Dr Adebola

OBJECTIVES

For students:

• To make the assessment criteria more transparent and understandable.
• To improve assessment output and essay writing skills generally.

For the teacher:

• To facilitate assessment grading by setting clearly defined criteria.
• To facilitate the feedback process by creating a framework for dialogue which is understood both by the teacher and the student.

CONTEXT

I faced a number of challenges in relation to the assessment process in my first year as a lecturer:

• My students had not performed as well as I would have liked them to in their assessments.

• It was my first time having to justify the grades I had awarded, and I found that I struggled to articulate clearly and consistently the reasons for some of them.

• I had been newly introduced to the step-marking framework for distinction grades as well as the requirement to make full use of the grading scale which I found challenging in view of the quality of some of the essays I had graded.

I spoke to several colleagues but came to understand that there were as many approaches as there were people. I also discussed the assessment process with several of my students and came to understand that many were both unsure and unclear about the criteria by which their assessments were graded across their modules.
I concluded that I needed to build a bridge between my approach to assessment grading and my students’ understanding of the assessment criteria. Ideally, the chosen method would facilitate consistency and the provision of feedback on my part, and improve the quality of essays on my students’ part.

IMPLEMENTATION

I tend towards the constructivist approach to learning, which means that I structure my activities towards promoting student-led learning. For summative assessments, my students are required to demonstrate their understanding and ability to critically appraise legal concepts that I have chosen from our sessions in class. Hence, the main output for all summative assessments on my modules is an essay. Wolf and Stevens (2007) assert that learning is best achieved where all the participants in the process are clear about the criteria for the performance and the levels at which it will be assessed. My goal therefore became to ensure that my students understood the elements I looked for in their essays, these being the criteria against which I graded the essays. They also had to understand how I decided the standards that their essays reflected. While the student handbook sets out the various standards that we apply in the University, I wanted to provide clearer direction on how an essay could meet, and how I determine that it meets, any of those standards.

If the students were to understand the criteria I apply when grading their essays, then I would have to articulate them. Articulating the criteria for a well-written essay would benefit both myself and my students. For my students, in addition to a clearer understanding of the assessment criteria, it would enable them to self-evaluate, which would improve the quality of their output. Improved quality would lead to improved grades, and I could give effect to university policy. Articulating the criteria would benefit me because it would facilitate consistency. It would also enable me to give detailed and helpful feedback to students on the strengths and weaknesses of the essays being graded, as well as on their essay writing skills in general, with advice on how to improve different facets of their outputs going forward. Ultimately, my students would learn valuable skills which they could apply across the board and after they graduate.
For assessments which require some form of performance, essays being an example, a rubric is an excellent evaluation tool because it fulfils all the requirements I have expressed above (Brookhart, 2013). Hence, I decided to present my grading criteria and standards in the form of a rubric.

The rubric is divided into 5 criteria which are set out in 5 rows:

  • Structure
  • Clarity
  • Research
  • Argument
  • Scholarship

For each criterion, there are 4 performance levels which are set out in columns: Poor, Good, Merit and Excellent. An essay will be mapped along each row and column. The final marks will depend on how the student has performed on each criterion, as well as my perception of the output as a whole.
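The grid structure described above (five criteria in rows, four performance levels in columns) can be represented as a simple mapping. The scoring below is a hypothetical sketch only: it averages level indices for illustration, whereas the actual final mark also depends on the marker’s holistic perception of the essay, which no formula captures.

```python
# Hypothetical sketch of the rubric grid: five criteria, four levels.
LEVELS = ["Poor", "Good", "Merit", "Excellent"]
CRITERIA = ["Structure", "Clarity", "Research", "Argument", "Scholarship"]

def band(essay_levels: dict) -> str:
    """Map an essay's per-criterion levels to an indicative overall band.

    essay_levels maps each criterion to one of LEVELS. A plain average of
    level indices is used here; the real rubric combines the per-criterion
    mapping with the marker's discretionary judgement of the whole essay.
    """
    scores = [LEVELS.index(essay_levels[criterion]) for criterion in CRITERIA]
    return LEVELS[round(sum(scores) / len(scores))]

essay = {
    "Structure": "Merit", "Clarity": "Good", "Research": "Excellent",
    "Argument": "Merit", "Scholarship": "Merit",
}
print(band(essay))  # indices 2,1,3,2,2 average to 2.0, i.e. "Merit"
```

Laying the rubric out this way also makes the document’s caveat concrete: an essay strong on argument can still be pulled down by a poor showing on structure, because every criterion feeds the overall judgement.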

Studies suggest that a rubric is most effective when produced in collaboration with the students (Andrade, Du and Mycek, 2010). When I created my rubric, however, I did not involve my students. I thought that would not be necessary given that my rubric was to be applied generally and with changing cohorts of students. Notwithstanding, I wanted students to engage with it. So, the document containing the rubric has an introduction addressed to the students, which explains the context in which the rubric has been created. It also explains how the rubric is applied and the relationship between the criteria. It states, for example, that ‘even where the essay has good arguments, poor structure may undermine its score’. It explains that the final grade combines both objective assessment and a subjective evaluation of the output as a whole, which is based on the marker’s discretion.

To ensure that students are not confused about the standards set out in the rubric and the assessment standards set out in the students’ handbook, the performance levels set out in the rubric are mapped against the assessment standards set out in the student handbook. The document containing the rubric also contains links to the relevant handbook. Finally, the rubric gives the students an example of how it would be applied to an assessment. Thereafter, it sets out the manner in which feedback would be presented to the students. That helps me create a structure in which feedback would be provided and which both the students and I would understand clearly.

IMPACT

My students’ assessment outputs have been of much better quality, and so have achieved better grades, since I introduced the rubric. In one of my modules, the average grade, as recorded in the module convenor’s report to the external examiner (MC’s Report), 2015/16, was 64.3%. 20% of the class attained distinctions, all in the 70-79 range. That year, I struggled to give feedback and was asked to provide additional feedback comments to a few students. In 2016/17, after I introduced the rubric, there was a slight dip in the average mark to 63.7%. The dip was because of a fail mark amongst the cohort; if that fail mark is controlled for, the average percentage had crept up from 2015/16. There was a clear increase in the percentage of distinctions, which had gone up to 25.8% from 20% in the previous year. The cross-over had been from the students who had been in the merit range. Clearly, some students had been able to use the rubric to improve the standards of their essays. I found the provision of feedback much easier in 2016/17 because I had clear direction from the rubric. When giving feedback I explained both the strengths and weaknesses of the essay in relation to each criterion. My hope was that they would apply the advice more generally across other modules, as the method of assessment is the same across the board. In 2017/18, the average mark for the same module went up to 68.84%, and 38% of the class attained distinctions, with 3% attaining more than 80%. Hence, in my third year, I have also been able to utilise step-marking in the distinction grade, which has enabled me to meet the university’s policy.

When I introduced the rubric in 2016/17, I had a control module, by which I mean a module in which I neither provided the rubric nor spoke to the students about their assessments in detail. The quality of assessments from that module was much lower than the others where the students had been introduced to the rubric. In that year, the average grade for the control module was 60%; with 20% attaining a distinction and 20% failing. In 2017/18, while I did not provide the students with the rubric, I spoke to them about the assessments. The average grade for the control module was 61.2%; with 23% attaining a distinction. There was a reduction in the failure rate to 7.6%. The distinction grade also expanded, with 7.6% attaining a higher distinction grade. There was movement both from the failure grade and the pass grade to the next standard/performance level. Though I did not provide the students with the rubric, I still provided feedback to the students using the rubric as a guide. I have found that it has become ingrained in me and is a very useful tool for explaining the reasons for my grades to my students.

From my experience, I can assert, justifiably, that the rubric has played a very important role in improving the students’ essay outputs. It has also enabled me to improve my feedback skills immensely.

REFLECTIONS

I have observed that, as the studies in the field argue, it is insufficient merely to have a rubric. For the rubric to achieve the desired objectives, it is important that students actively engage with it. I must admit that I did not take a genuinely constructivist approach to the rubric. I wanted to explain myself to the students. I did not really encourage a two-way conversation as the studies encourage, and I think this affected the effectiveness of the rubric.

In 2017/18, I decided to talk the students through the rubric, explaining how they can use it to improve performance. I led them through the rubric in the final or penultimate class. During the session, I explained how they might align their essays with the various performance levels/standards. I gave them insights into some of the essays I had assessed in the previous two years; highlighting which practices were poor and which were best. By the end of the autumn term, the first module in which I had both the rubric and an explanation of its application in class saw a huge improvement in student output as set out in the section above. The results have been the best I have ever had. As the standards have improved, so have the grades. As stated above, I have been able to achieve step-marking in the distinction grade while improving standards generally.

I have also noticed that even where a rubric is not used but the teacher talks to the students about the assessments and their expectations of them, students perform better than where there is no conversation at all. In 2017/18, while I did not provide the rubric to the control module, I discussed the assessment with the students, explaining practices which they might find helpful. As demonstrated above, there was a lower failure rate and improvement generally across the board. I can conclude therefore that assessment criteria ought to be explained much better to students if their performance is to improve. However, I think that having a rubric, and student engagement with it, is the best option.

I have also noticed that many students tend to perform well, in the merit bracket. These students would like to improve but are unable to decipher how to do so. They, in particular, find the rubric very helpful.

In addition, Wolf and Stevens (2007) observe that rubrics are particularly helpful for international students whose assessment systems may have been different, though no less valid, from that of the system in which they have presently chosen to study. Such students struggle to understand what is expected of them and so, may fail to attain the best standards/performance levels that they could for lack of understanding of the assessment practices. A large proportion of my students are international, and I think that they have benefitted from having the rubric; particularly when they are invited to engage with it actively.

Finally, the rubric has improved my feedback skills tremendously. I am able to express my observations and grades in terms well understood both by myself and my students. The provision of feedback is no longer a chore or a bore. It has actually become quite enjoyable for me.

FOLLOW UP

On publishing the rubric to students:

I know that Blackboard gives the opportunity to embed a rubric within each module. So far, I have only uploaded copies of my rubric onto Blackboard for the students on each of my modules. I have decided to explore the embedding option to make the annual upload of the rubric more efficient. I will also see whether Blackboard offers opportunities to improve the rubric, which will be a couple of years old by the end of this academic year.

On the Implementation of the rubric:

I have noted, however, that it takes about half an hour per module to explain the rubric to students, which eats into valuable teaching time. A more efficient method is required to provide good assessment insight to students. This summer, as examination officer, I will liaise with my colleagues to discuss the provision of a best practice session for our students in relation to their assessments. At the session, students will also be introduced to the rubric. The rubric can then be paired with actual illustrations, which the students can be encouraged to grade using its content. Such sessions will improve their ability to self-evaluate, which is crucial both to their learning and to the improvement of their outputs.

LINKS

• K. Wolf and E. Stevens, ‘The Role of Rubrics in Advancing and Assessing Student Learning’ (2007) 7(1) Journal of Effective Teaching 3. https://www.uncw.edu/jet/articles/vol7_1/Wolf.pdf
• H. Andrade, Y. Du and K. Mycek, ‘Rubric-Referenced Self-Assessment and Middle School Students’ Writing’ (2010) 17(2) Assessment in Education: Principles, Policy & Practice 199. https://www.tandfonline.com/doi/pdf/10.1080/09695941003696172?needAccess=true
• S. Brookhart, How to Create and Use Rubrics for Formative Assessment and Grading (Association for Supervision & Curriculum Development, ASCD, VA, 2013).
• Turnitin, ‘Rubrics and Grading Forms’ https://guides.turnitin.com/01_Manuals_and_Guides/Instructor_Guides/Turnitin_Classic_(Deprecated)/25_GradeMark/Rubrics_and_Grading_Forms
• Blackboard, ‘Grade with Rubrics’ https://help.blackboard.com/Learn/Instructor/Grade/Rubrics/Grade_with_Rubrics
• Blackboard, ‘Import and Export Rubrics’ https://help.blackboard.com/Learn/Instructor/Grade/Rubrics/Import_and_Export_Rubrics

Interdisciplinary teaching: Science in Culture

Professor Nick Battey, School of Biological Sciences
n.h.battey@reading.ac.uk

Overview

A module for Part Three students, Science in Culture, was created through a collaborative effort between the Department of English Literature, the Department of History, and the School of Biological Sciences (SBS). The module was well received by students, who found value in gaining the perspective of disciplines other than their own, and in experiencing teaching and learning methods outside the norm of their previous study.

Objectives

  • Offer a truly interdisciplinary module allowing students from English Literature, History, and SBS to study alongside one another, learning through the diverse teaching methods of science and the humanities.
  • Develop in students a broader, critical understanding of the precepts of science.
  • Provide an integrated view of science (with emphasis on Biological Sciences) within culture.

Context

The development of this collaborative module grew out of an Arts and Humanities Research Council sponsored project which looked at the value of literary and historical study of biology to students of biological sciences. An element of this was a workshop, ‘Cultivating Common Ground’, which aimed to foster interdisciplinary discussion between biology and the humanities. One of the key findings of the scoping study was that it would be beneficial to develop at least one module that taught both biology and humanities students alongside one another in an interdisciplinary way.

Implementation

The module was developed over a number of years by staff from SBS, English and History. The module designers from the different disciplines were determined to ensure that what was developed was a truly interdisciplinary module, breaking down the perceived divide between the sciences and the humanities, and showing how the different approaches and bodies of knowledge bear on the same questions.

The module is taught over one term. Students receive lectures and partake in seminar discussions on a historical, literary, or scientific concept, and also conduct lab work on subjects related to those explored in the lectures. For example, in lab work students identify a mutated gene and explore the use of mutations for understanding how genes work; this topic of mutation can then be explored in its literary and historical contexts, and the differences between the scientific, literary and historical approaches explored as a cultural challenge. From the ‘Cultivating Common Ground’ workshop, a consensus had emerged that interdisciplinary learning and teaching needed to be ‘narrow and deep’. As a result, the module focuses on a defined set of ‘problems’, rather than ‘grand themes’, allowing these to be explored in greater depth and situated within the cultural dynamics and methods of science.

In order to ensure students experienced different ways of learning, they were given a variety of tasks, ranging from interpreting poems to discussing the history of a scientific process, which they recorded in a learning journal that was marked, with feedback from tutors, each week. While the completion of this journal over the course of the module formed part of the summative assessment, the weekly marking provided regular formative feedback to students. A focus on formative assessment was recognised as important by the scoping study, as students on such an interdisciplinary module would require greater opportunity to learn what was expected of them. Linking the formative assessment to the summative assessment ensured that students would be motivated to engage and to receive valuable feedback. Students taking the module as part of a History or English Literature degree, for whom the module was worth 20 credits rather than 10, also wrote a summative essay.

Impact

The project was successful in delivering a truly interdisciplinary module, with collaboration between the School of Biological Sciences, the Department of English and the Department of History. The module was well received by students, who reported that they appreciated the value of getting different perspectives on their disciplines.

Reflections

The greatest challenge in creating this module was achieving interdisciplinarity, as the teaching and learning strategies best suited to the individual disciplines were not necessarily suited to the teaching of an interdisciplinary module. That the module was in development for a number of years reflects the difficulty of developing an interdisciplinary approach, a difficulty compounded by the paucity of existing literature on the topic from which to draw suitable practices. As a result, a number of iterative developments were needed in order to create a module that could be delivered in a way which best achieved its learning outcomes.

Interdisciplinarity also presented a challenge with regard to the marking of assessments. As each discipline has different expectations, it was necessary for marking to be a collaborative process, with compromise being reached between assessors.

While the provision of multiple opportunities for formative assessment and feedback had value, given that it helped introduce students to the other disciplines, and encouraged deep learning, the process was strenuous, for both students and staff.

As the module was interdisciplinary, this meant that students had to engage with topics and processes outside the norm of their previous academic study. As a result, despite their enjoyment and high attainment, students on the module did find it challenging.

Follow up

Following the successful running of the module during the 2014-15 academic year, the module has been offered again, with slight revisions. One of the revisions has been in assessment, with students producing a report at the end of the module, rather than creating a learning portfolio over the course of the module, thus somewhat reducing the workload of staff and students. A group presentation has also been introduced, providing a different type of assessment, and making interdisciplinary collaborative group work part of summative assessment.

Links

Reviewing assessment and feedback in Part One: getting assessment and feedback right with large classes

Dr Natasha Barrett, School of Biological Sciences
n.e.barrett@reading.ac.uk
Year(s) of activity: 2010/11
Overview

Objectives

  • Review the quantity, type and timing of assessments carried out in compulsory modules taken by students in the School of Biological Sciences.
  • Recommend better practices for assessment and feedback.

Context

The massification and marketisation of Higher Education mean that it is increasingly important that the University of Reading perform well in terms of student satisfaction and academic results. The National Student Surveys between 2005 and 2011 and the Reading Student Survey of 2008 both indicated that assessment and feedback were areas in which the University of Reading, and the School of Biological Sciences, needed to improve.

Implementation

An evaluation of online systems of peer assessment for group work

Cathy Hughes and Heike Bruton, Henley Business School
catherine.hughes@reading.ac.uk

Overview

Online peer assessment systems were evaluated for their suitability in providing a platform to allow peer assessment to be conducted in the context of group work.

Objectives

  • To establish the criteria against which peer assessment systems should be evaluated.
  • To evaluate the suitability of online systems of peer assessment.
  • To provide a way forward for Henley Business School to develop peer assessment for group work.

Context

There are many well-documented benefits of group work for students. Given the recognised issue that members of a group may not contribute equally to a task, and that it can be difficult for tutors to accurately judge the contributions made by individuals within a group, this presents a context in which peer assessment can be utilised, allowing students to assess the process of group work. Within Henley Business School, Cathy Hughes has utilised peer assessment for group work in Real Estate and Planning, and developed a bespoke web-based system to facilitate this. As this system was not sustainable, the project was funded to evaluate the suitability of other web-based peer assessment systems for use at the University.

Implementation

By first establishing how academics across the University use peer assessment in a range of subjects, it was possible to establish the criteria against which available online systems of peer assessment for group work could be evaluated. This was done through a series of interviews with academics who already used peer assessment, and who volunteered after a call for respondents was made through the T&L distribution list. The eleven interviewees were drawn from seven departments. The interviews revealed that five separate peer assessment systems were in use across the University; with one exception, these had been in use for four years or fewer. Peer assessment at the University of Reading has been utilised at all Parts, for a range of group sizes (between three and ten, depending on the task being performed). While a range of credits were affected by peer assessment (between 1 and 20), no module used peer assessment to contribute 100% of the final mark, though in one case it contributed 90%.

With peer assessment of group work, students may be required to mark their peers against set criteria, or in a more holistic manner whereby students award an overall mark to each of the others in their group. Given the subjective nature of the marking process, peer assessment can be open to abuse, and so interviewees stressed the need to be able to check and moderate marks. All interviewees stated that they collated evidential material which could be referred to in case of dispute.

All systems which were in use generated numerical data on an individual’s performance in group work, but with regard to feedback there were differences in what users required. Some users of peer assessment used the numerical data to construct feedback for students, and in one case students provided their peers with anonymised feedback.

It was apparent from interviews that performing peer assessment requires a large amount of support to be provided by staff.  Other than the system that was in use in Henley Business School and the Department of Chemistry, all systems had students fill out paper forms, with calculations then being performed manually or requiring data to be input into a spreadsheet for manipulation.  This high workload reflected a need to disseminate online peer assessment, in order to reduce the workload of those already conducting peer assessment, and to attempt to lower the barrier to entry for others interested in peer assessment, but unable to accept the increased workload.

With the input from interviewees, it was possible to put together criteria for evaluation of online peer assessment systems:

  1. Pedagogy:
    • Any systems must provide a fair and valid method for distinguishing between contributions to group work.
  2. Flexibility:
    • Peer assessment is used in different settings for different types of group work. The methods used vary on several dimensions, such as:
      1. Whether holistic or criteria based.
      2. The amount of adjustment to be made to the group mark.
      3. The nature of the grading required of students, such as use of a Likert scale, or splitting marks between the group.
      4. Whether written comments are required from the students along with a numerical grading of their peers.
      5. The detail and nature of feedback that is given to students such as: grade or comment on group performance as a whole; the performance of the student against individual criteria; further explanatory comments received from students or given by academics.
    • Therefore any system must be flexible and capable of adapting to these environments.
  3. Control:
    • Academics require some control over the resulting marks from peer assessment. While the online peer assessment tool will calculate marks, these will have to be visible to tutors, and academics have to have the ability to moderate these.
  4. Ease of use:
    • Given the amount of work involved in running peer assessment of group work, it is necessary for any online system to be both easy to use by staff and reduce their workload. The other aspect of this is ease of use for the student. The current schemes in use may be work-intensive for staff, but they do have the benefit of providing ease of use for students.
  5. Incorporation of evidence:
    • The collection of evidence to support and validate marks provided under peer assessment would ideally be part of any online system.
  6. Technical integration and support:
    • An online peer assessment system must be capable of being supported by the University in terms of IT and training.
  7. Security:
    • Given the nature of the data, the system must be secure.

Four online peer assessment systems were analysed against these criteria: iPeer, SPARKplus, WebPA, and the bespoke peer assessment system created for use in Real Estate and Planning.

Findings

A brief overview of the findings is as follows:

iPeer

While iPeer can be used to collect data for the purposes of evaluation, unlike the other systems evaluated the manipulation and interpretation of that data is left to the tutor, thus maintaining some of the workload that it was hoped would be avoided. While its ease of use was good for both staff and students, there were limits to what it was possible to achieve using iPeer, and supporting documentation was difficult to access.

SPARKplus

SPARKplus is a versatile tool for the conduct of online peer assessment, allowing students to be marked against specific criteria or in a more holistic manner, and generating a score based upon their peer-assessed contribution to group work and the tutor’s assessment of what the group produces. There were, however, disadvantages: SPARKplus does not allow for the gathering of additional evidential material, and at the time of the evaluation it was difficult to find information about the system. In addition, while SPARKplus is an online system, it cannot be incorporated into Blackboard Learn, an integration that might have clarified its suitability.

WebPA

For WebPA there was a great deal of documentation available, aiding its evaluation. It appeared to be easy to use, and is able to be incorporated into Blackboard Learn. The main disadvantages of using WebPA were that it does not allow evidential data to be gathered, and that there is no capacity for written comments to be shared with students, as these are only visible to the tutor.

Bespoke REP system

The bespoke online peer assessment system developed within Real Estate and Planning and also used in the Department of Chemistry is similar to WebPA in terms of the underpinning scoring algorithm, and has the added advantage of allowing the collection of evidential material. Its main disadvantage is that it is comparatively difficult to configure, requiring a reasonable level of competence with Microsoft Excel. Additionally, technical support for the system is reliant on the University of Reading Information Technology Services.
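The case study does not spell out the shared scoring algorithm, but WebPA-style moderation typically normalises each rater’s scores so every rater carries equal weight, then scales the tutor’s group mark by each member’s share of the peer ratings. The sketch below is purely illustrative: the function name, the rating structure and the optional damping `weighting` parameter are assumptions for this example, not details taken from WebPA or the REP system.

```python
def peer_weighted_marks(group_mark, ratings, weighting=1.0):
    """Illustrative WebPA-style peer moderation.

    ratings: dict mapping each rater to a dict {ratee: score} covering
    every group member. Each rater's scores are normalised to sum to 1,
    so a member who received an exactly average share of the ratings
    ends up with a total share of 1.0 and keeps the group mark.
    """
    members = list(ratings)
    shares = {m: 0.0 for m in members}
    for rater, scores in ratings.items():
        total = sum(scores.values())
        for ratee, score in scores.items():
            shares[ratee] += score / total
    marks = {}
    for m in members:
        # weighting < 1.0 would damp the adjustment towards the group mark.
        factor = 1.0 + weighting * (shares[m] - 1.0)
        marks[m] = round(group_mark * factor, 1)
    return marks
```

With equal ratings every member simply receives the group mark; a member rated twice as highly as the others is moved proportionally above it. A real system would also cap marks at 100 and handle missing or self-serving ratings, which this sketch omits.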

Reflections

Developing the use of the interactive whiteboard for initial teacher trainees (2011-12)

Catherine Foley, Institute of Education
c.m.foley@reading.ac.uk

Overview

With interactive whiteboards having become a well-established feature of English primary school classrooms over the last decade, it is vital that the primary Post-Graduate Certificate of Education (PGCE) programme taught at the University of Reading’s Institute of Education prepares its graduates to be confident and competent in using interactive whiteboard technology in the classroom, including making pedagogically sound, informed decisions about when, when not, and how the interactive whiteboard can enhance learning.

Objectives

    • Explore how trainees can be supported to use the interactive whiteboard in their teaching of mathematics.
    • Gain an informed view of the entry- and exit-level interactive whiteboard skills and understanding of trainees to inform future programme planning.
    • Ensure that the trainee voice is incorporated into developmental planning.
    • Make recommendations regarding embedding the use of interactive whiteboard technology into our wider initial teacher training provision.

Implementation

Initial data collection was conducted through a questionnaire, which was administered towards the end of the trainees’ first week on the programme. This questionnaire was used to gather data on skills and competencies with regard to interactive whiteboard technology.

The results of the initial questionnaire revealed that trainees on the programme generally had little or no experience of using interactive whiteboard technology, and that confidence levels for using the interactive whiteboard for general teaching and learning, and specifically within mathematics lessons, were low. The questionnaire had also asked trainees to rank statements in order of how important they were to meeting their needs. The most preferred statement was that trainees would like support with the skills of using an interactive whiteboard; second was that the use of the interactive whiteboard for teaching and learning be modelled within sessions.

On the basis of the questionnaire results, the following action plan was discussed and agreed with the programme director:

  1. Modelling of interactive whiteboard use throughout taught mathematics sessions. Where interactive whiteboard use was modelled, the ‘stepping out’ technique, as described in Lunenberg et al. (2007), was used explicitly to focus trainees’ attention on how the interactive whiteboard had been used and, more importantly, why and to what effect.
  2. Optional workshops during free time within the Autumn and Spring Terms.  These aimed to ensure a basic level of skills, tied in with the interactive functions most likely to have an impact.  The workshops were limited to 10 trainees, to allow greater access to the interactive whiteboard and less pressure on ‘getting it right’.  The skills addressed during these workshops were based on a combination of student requests, the experience of the project leader, and those outlined in Beauchamp and Parkinson (2005).
  3. Provision for peer sharing of resources created on school experience later in the programme.  In workshops, trainees who had developed interactive whiteboard skills while on placement were invited to share their expertise with other trainees.
  4. Opportunities for peer modelling within starter activities.  Trainees were encouraged to use the interactive whiteboard where appropriate in the presentation of starter activities to their peers, which occurs on a rolling programme throughout the module.

At the end of the module a follow-up questionnaire was administered. This contained a mixture of identical questions to the initial questionnaire, to allow comparison with the results that were gained at the beginning of the programme, and items designed to evaluate the different forms of support that had been provided.

Reflections

Trainees had, by the conclusion of the module, gained experience in the use of interactive whiteboards, grown in confidence, become better prepared to use interactive whiteboard technology for the teaching of mathematics, and increased their skill in writing, manipulating shapes or images, and inserting children’s work or photographs.

It was possible as a result of the project to make the following recommendations for the Institute of Education, which may be useful for related subjects across the University of Reading:

  1. If staff are expected to integrate modelling of appropriate use of interactive whiteboards into their practice, they will need both technical and peer support in order to develop their own confidence. This could be tackled through teaching and learning seminars, practical workshops, software provision and technician time, in much the same way as the project itself supported trainees.
  2. Some of the technical skills could be integrated into ICT modules, allowing subject modules to focus on the most effective pedagogy within their subject.
  3. Primary programmes could consider some kind of formative collaborative tasks to develop and review interactive whiteboard-based activities within subject areas.
  4. The interactive whiteboard provision in schools could be audited in order to ensure that the Institute of Education’s software and hardware provision is appropriately matched to what trainees will encounter, and a request could be incorporated for those supervising students to comment on their tutees’ interactive whiteboard use as a quality assurance check.
  5. Time and support could be provided so that trainees reach a basic level of confidence with interactive whiteboard technology before their first school placement.

Links and Resources

Mieke Lunenberg, Fred Korthagen and Anja Swennen (2007) ‘The teacher educator as role model’, Teaching and Teacher Education, 23(5).
Gary Beauchamp and John Parkinson (2005) ‘Beyond the “wow” factor: developing interactivity with the interactive whiteboard’, School Science Review, 86(316).