Piloting General Practice (GP) experiential learning for MPharm Year 3 students

Catherine Langran, Lecturer in Pharmacy Practice, School of Pharmacy

Daniel Mercer & Selen Morelle, MPharm Part 4 students, School of Pharmacy

Background

Throughout the Master of Pharmacy (MPharm) degree, students undertake experiential learning in hospital and community pharmacies. Experiential learning through placements is an important approach to teaching and learning: it provides a safe learning environment for students, bridges the gap between theory and practice, and encourages independent learning and reflective practice.

In 2016, the National Health Service (NHS) launched the “Building the General Practice Workforce” programme, creating a new career pathway for pharmacists performing clinical tasks in a primary care setting. Over the past three years a steadily increasing number of pharmacists have pursued this career option, and it is now a graduate opportunity for our MPharm students. It is therefore crucial that Reading School of Pharmacy gives undergraduate students an opportunity to experience this new role, so that they gain more insight into their career options, develop professionally and personally, and boost their employability.

This collaborative partnership project piloted placements within GP practices for Part 3 pharmacy students to assess the students’ perceptions and evaluate the benefits and practicality of the placements.

Method

Fifty-nine Part 3 students (46% of the cohort) attended a voluntary session in November 2018, prior to submission of the PLanT application. This session demonstrated a high level of student interest in the placement opportunity and included discussion of the practicalities (e.g. placement length, positioning within the timetable, location) and the perceived advantages of offering GP placements.

Following a successful bid to the PLanT fund, a second voluntary session was attended by 22 students, who worked collaboratively with the project lead to determine the process for recruiting students and allocating them to placements, define the placement learning outcomes and activities, and agree how the placements would be evaluated and feedback collected. Subsequently, the two student project leads worked with the lead academic to construct an online application process, review student applications, finalise the student handbook and evaluate the student feedback.

The main objectives of this project were:

  • To evaluate the benefits of undertaking the GP placements for MPharm students.
  • To evaluate the placement provider’s feedback on the acceptability, practicality and scalability of providing placements for students.

Five GP practices, located in Reading and London, were recruited to take part in the pilot. From April to June 2019, a total of 37 Part 3 MPharm students completed a half-day to one-day placement in one of the five practices. Students predominantly shadowed the GP pharmacist in a clinic environment, and some also had the opportunity to shadow GPs, nurses, physician associates and reception teams, giving a greater understanding of how general practices function as businesses.

Data were collected via online questionnaires completed by students before and after the GP placements, comparing their:

  • Understanding of the role of GP pharmacists and of how GP surgeries work (0 = no knowledge to 10 = complete knowledge)
  • Confidence in building rapport and showing empathy when talking to patients (0 = no confidence to 10 = fully confident)

Students also decided that they would like to prepare and deliver a short five-minute presentation to their peers and the project group, sharing experiences and insights from their GP placement.

We also collected feedback from placement providers after completion of the placements.

Results

Thirty-seven students completed the pre-placement questionnaire and 30 completed the post-placement questionnaire. Analysis of the data shows that students who undertook the placement reported a significant improvement in their understanding of the GP pharmacist role and of the structure and running of a GP practice. A moderate increase in confidence in building rapport and showing empathy was also seen.
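The report does not state how the pre/post questionnaire scores were compared. As a purely illustrative sketch, paired 0-10 self-ratings of the kind described above could be analysed with a Wilcoxon signed-rank test, which suits ordinal rating scales; the student scores and pairing below are hypothetical, not the project's data.

```python
# Illustrative only: the scores are invented and the report does not state
# which statistical test was actually used in the evaluation.
from scipy.stats import wilcoxon

# Hypothetical paired 0-10 ratings of "understanding of the GP pharmacist role",
# matched for students who completed both the pre- and post-placement questionnaires.
pre_scores = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
post_scores = [7, 8, 6, 8, 7, 9, 7, 6, 8, 7]

# Wilcoxon signed-rank test: a paired, non-parametric comparison suitable
# for ordinal self-rating scales such as these.
stat, p_value = wilcoxon(pre_scores, post_scores)
print(f"Wilcoxon statistic = {stat}, p = {p_value:.4f}")
```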

Students’ evaluations of the GP placements were overwhelmingly positive, highlighting improved knowledge of the role of GP pharmacists and insight gained into their potential career choices.

 

In their peer presentations, students described key learning points:

– An understanding of how the skills of different healthcare professionals can be combined to offer the best care to patients

– The value of observing pharmacist consultations with patients, and reflecting on how treatment decisions are made

– An increased understanding of the options available to them after graduation, enabling them to make a more informed career choice.

Feedback from placement providers showed that they found hosting the placements enjoyable and rewarding, felt the students were enthusiastic, and considered the organisation and communication from the university excellent.

Limitations

Whilst the cohort of students who attended the placement days appear to have improved their understanding of GP pharmacy, we are aware that the students undertook the placements voluntarily. These students had a desire to explore the role of GP pharmacists, which implies that they already had an interest in the area before undertaking the placement; their opinions may therefore be biased in favour of the role.

Impact

The student co-design element ensured this pilot delivered an authentic and valuable experience, with high levels of student engagement.

As a result of this pilot, funding has been secured from our Head of Department to implement GP placements for all Part 3 students (cohort size 106) from December 2019. Working partnerships have been established with the five GP practices, and provision has now been expanded to 16 GP practices for 2019/2020. Embedding GP placements for our students will have a positive impact on the MPharm re-accreditation by our regulator, the General Pharmaceutical Council, in March 2020.

There is the potential for this project to have a long-term impact on NSS results and employability, which will be explored in June 2020. Offering these placements sets us apart from other Schools of Pharmacy and is a key selling point in our new UCAS brochure.

‘How did I do?’ Finding new ways to describe the standards of foreign language performance. A follow-up project on the redesign of two marking schemes (DLC)

Rita Balestrini and Elisabeth Koenigshofer, School of Literature and Languages, r.balestrini@reading.ac.uk; e.koenigshofer@reading.ac.uk

Overview

Working in collaboration with two Final Year students, we designed two ‘flexible’, ‘minimalist’ rubric templates, usable and adaptable across different languages and levels, to provide a basis for the creation of level-specific, and potentially task-specific, marking schemes in which sub-dimensions can be added to the main dimensions. The two marking templates are being piloted this year in the DLC. The project will feature in this year’s TEF submission.

Objectives

Design, in partnership with two students, rubric templates for the evaluation and feedback of writing tasks and oral presentations in foreign languages which:

  • were adaptable across languages and levels of proficiency
  • provided a more inclusive and engaging form of feedback
  • responded to the analysis of student focus group discussions carried out for a previous TLDF-funded project

Context

As a follow-up to a teacher-learner collaborative appraisal of rubrics used in MLES, now DLC, we designed two marking templates in partnership with two Final Year students, who had participated in the focus groups from a previous project and were employed through Campus Jobs. ‘Acknowledgement of effort’, ‘encouragement’, ‘use of non-evaluative language’, ‘need for and, at the same time, distrust of, objective marking’ were recurrent themes that had emerged from the analysis of the focus group discussions and clearly appeared to cause anxiety for students.

Implementation

We organised a preliminary session to discuss these findings with the two student partners. We suggested some articles about ‘complexity theory’ as applied to second language learning (Kramsch, 2012; Larsen-Freeman, 2012; 2015a; 2015b; 2017), with the aim of making our theoretical perspective explicit and transparent to them. A second meeting was devoted to planning collaboratively the structure of two marking schemes, for writing and for presentations. The two students worked independently to produce examples of standard descriptors which avoided evaluative language and emphasised achievement rather than shortcomings. At a third meeting they presented and discussed their proposals with us. In the final meetings, we continued working together to finalise the templates and the two visual learning charts they had suggested. Finally, the two students wrote a blog post recounting their experience of this collaborative work.

The two students appreciated our theoretical approach, felt that it was in tune with their own point of view and believed it could support the enhancement of the assessment and marking process. They also found resources on their own, which they shared with us, including rubrics from other universities. They made valuable suggestions, gave us feedback on our ideas and helped us to find alternative terms when we were struggling to avoid evaluative language in our descriptors. They also suggested making use of some visual elements in the marking and feedback schemes in order to increase immediacy and effectiveness.

Impact

The two marking templates are being piloted this year in the DLC. They were presented to colleagues over four sessions during which the ideas behind their design were explained and discussed. Further internal meetings are planned. These conversations, already begun with the previous TLDF-funded project on assessment and feedback, are contributing to the development of a shared discourse on assessment, which is informed by research and scholarship. The two templates have been designed in partnership with students to ensure accessibility and engagement with the assessment and feedback process. This is regarded as an outstanding practice in the ‘Assessment and feedback benchmarking tool’ produced by the National Union of Students and is likely to feature positively in this year’s TEF submission.

Reflections

Rubrics have become mainstream, especially in certain university subjects such as Foreign Languages. They have been introduced to ensure accountability and transparency in marking practices, but they have also created new problems of their own by promoting a false sense of objectivity in marking and grading. The openness and unpredictability of complex performance in foreign languages, and of the dynamic language learning process itself, are not adequately reflected in the detailed descriptors of the marking and feedback schemes commonly used for the objective numerical evaluation of performance-based assessment in foreign languages. As emerged from the analysis of focus group discussions conducted in the department in 2017, the lack of understanding of, and engagement with, the feedback provided by this type of rubric can generate frustration in students. Working in partnership with students, rather than simply listening to their voices or seeing them as evaluators of their own experience, helped us to design minimalist and flexible marking templates which use sensible and sensitive language, introduce visual elements to increase immediacy and effectiveness, leave a considerable amount of space for assessors to comment on different aspects of an individual performance, and provide ‘feeding forward’ feedback. This type of ‘partnership’ can be challenging because it requires remaining open to unexpected outcomes. Whether it can bring about real change depends on how its outcomes interact with the educational ecosystems in which it is embedded.

Follow up

The next stage of the project will involve colleagues in the DLC who will be using the two templates. They will contribute to the creation of a ‘bank’ of descriptors by sharing those they develop to tailor the templates to specific stages of language development, language objectives, tasks, or dimensions of student performance. We also intend to encourage colleagues teaching culture modules to consider using the basic structure of the templates as a starting point for designing marking schemes for the assessment of student performance in their modules.

Links

An account written by the two student partners involved in the project can be found here:

Working in partnership with our lecturers to redesign language marking schemes

The first stages of this ongoing project to enhance the process of assessing writing and speaking skills in the Department of Languages and Cultures (DLC, previously MLES) are described in the following blog entries:

National Union of Students 2017. The ‘Assessment and feedback benchmarking tool’ is available at:

http://tsep.org.uk/wp-content/uploads/2017/07/Assessment-and-feedback-benchmarking-tool.pdf

References

Bloxham, S. 2013. Building ‘standard’ frameworks. The role of guidance and feedback in supporting the achievement of learners. In S. Merry et al. (eds.) 2013. Reconceptualising feedback in Higher Education. Abingdon: Routledge.

Bloxham, S. and Boyd, P. 2007. Developing effective assessment in Higher Education. A practical guide. Maidenhead: McGraw-Hill International.

Bloxham, S., Boyd, P. and Orr, S. 2011. Mark my words: the role of assessment criteria in UK higher education grading practices. Studies in Higher Education 36 (6): 655-670.

Bloxham, S., den-Outer, B., Hudson, J. and Price, M. 2016. Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assessment & Evaluation in Higher Education 41 (3): 466-481.

Brooks, V. 2012. Marking as judgement. Research Papers in Education. 27 (1): 63-80.

Gottlieb, D. and Moroye, C. M. 2016. The perceptive imperative: Connoisseurship and the temptation of rubrics. Journal of Curriculum and Pedagogy 13 (2): 104-120.

HEA 2012. A Marked Improvement. Transforming assessment in HE. York: The Higher Education Academy.

Healey, M., Flint, A. and Harrington K. 2014. Engagement through partnership: students as partners in learning and teaching in higher education. York: The Higher Education Academy.

Kramsch, C. 2012. Why is everyone so excited about complexity theory in applied linguistics? Mélanges 33: 9-24.

Larsen-Freeman, D. 2012. The emancipation of the language learner. Studies in Second Language Learning and Teaching. 2(3): 297-309.

Larsen-Freeman, D. 2015a. Saying what we mean: Making a case for ‘language acquisition’ to become ‘language development’. Language Teaching 48 (4): 491-505.

Larsen-Freeman, D. 2015b. Complexity Theory. In VanPatten, B. and Williams, J. (eds.) 2015. Theories in Second Language Acquisition: An Introduction. New York: Routledge: 227-244.

Larsen-Freeman, D. 2017. Just learning. Language Teaching 50 (3): 425-437.

Merry, S., Price, M., Carless, D. and Taras, M. (eds.) 2013. Reconceptualising feedback in Higher Education. Abingdon: Routledge.

O’Donovan, B., Price, M. and Rust, C. 2004. Know what I mean? Enhancing student understanding of assessment standards and criteria. Teaching in Higher Education 9 (3): 325-335.

Price, M. 2005. Assessment standards: the role of communities of practice and the scholarship of assessment. Assessment & Evaluation in Higher Education 30 (3): 215-230.

Sadler, D. R. 2009. Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education 34 (2): 159-179.

Sadler, D. R. 2013. The futility of attempting to codify academic achievement standards. Higher Education 67 (3): 273-288.

Torrance, H. 2007. Assessment as learning? How the use of explicit learning objectives, assessment criteria and feedback in post-secondary education and training can come to dominate learning. Assessment in Education 14 (3): 281-294.

VanPatten, B. and Williams, J. (eds.) 2015. Theories in Second Language Acquisition: An Introduction. 2nd edition. New York: Routledge.

Yorke, M. 2011. Summative assessment: dealing with the ‘Measurement Fallacy’. Studies in Higher Education 36 (3): 251-273.

Sharing the ‘secrets’: Involving students in the use (and design?) of marking schemes

Rita Balestrini, School of Literature and Languages, r.balestrini@reading.ac.uk

Overview

Between 2016 and 2018, I led a project aiming to enhance the process of assessing foreign language skills in the Department of Modern Languages and European Studies (MLES). The project was supported by the Teaching and Learning Development Fund. Its scope involved two levels of intervention: a pilot within one Part I language module (Beginners Italian Language) and other activities involving colleagues in all language sections and students from each year of study. The project enabled the start of a bank of exemplars for the assessment of a Part I language module; promoted discussion on marking and marking schemes within the department; and made possible a teacher-learner collaborative appraisal of rubrics.

Objectives

  • To enhance Beginners Italian Language students’ understanding of rubrics and their assessment literacy
  • To increase their engagement with the assessment process and their uptake of feedback
  • To engage MLES students as agents of change in the assessment culture of the department
  • To stimulate innovation in the design of rubrics within the MLES Language Team and contribute to developing a shared discourse on assessment criteria and standards informed by the scholarship of assessment

Context

In recent years, there has been an increasing demand to articulate the standards of assessment explicitly and to make them transparent in marking schemes in the form of rubrics, especially in Foreign Languages. It is widely held that the use of rubrics increases the reliability of assessment and fosters autonomy and self-regulation in students. However, it is not uncommon for students not to engage with the feedback that rubrics are supposed to provide. In 2016, the language team of the Department of Modern Languages and European Studies started to work on the standardisation and enhancement of the marking schemes used to assess language skills. The aim of this multi-layered project was to make a positive contribution to this process and to pilot a series of activities for the enhancement of foreign language assessment.

Implementation

  • Review of research literature and scholarly articles on the use of standards-based assessment, assessment rubrics, and student-derived marking criteria.
  • Presentation of some of the issues that emerged from the review at a School T&L Away Day on assessment attended by the MLES language team (April 2017) and at a meeting of the Language Teaching Community of Practice (November 2017).
  • Organisation of a ‘professional conversation’ on language assessment, evaluation and marking schemes as a peer review activity in the School of Literature and Languages (SLL). The meeting was attended by colleagues from MLES and CQSD (February 2018).
  • 2016-17 – Two groups of students on the Beginners Italian Language module were asked for permission to use exemplars of their written and oral work for pedagogic practice and research. Ten students gave their informed consent.
  • Collection of written and oral work, double-marked by a colleague teaching one of the groups.
  • 2017-2018 – Organisation of two two-hour workshops on assessment for a new cohort of students. Aim: to clarify the link between marking criteria, learning outcomes and the definitions of standards of achievement for the module. An anonymised selection of the exemplars collected the previous year was used a) ‘to show’ the quality of the standards described in the marking schemes and b) for marking exercises.
  • 2017 – Organisation of three focus groups with students – one for each year of study – to gain insights into their perspectives on the assessment process and understanding of marking criteria. The discussions were recorded and fully transcribed.
  • The transcriptions were analysed by using a discourse analysis framework.
  • Several issues emerged from the analysis: the atomistic approach of rubrics; the vagueness of the standards; the subjectivity of the evaluation; the problematic measuring of different aspects of achievement; and the anchoring of rating scales (for a more comprehensive account of the focus groups see the Engage in T&L blog post Involving students in the appraisal of rubrics for performance-based assessment in Foreign Languages).
  • Development, in collaboration with three students from the focus groups, of a questionnaire on the use of rubrics, intended to gather future students’ views on marking schemes and their use.

Impact

This multi-layered project contributed to enhancing the process of assessing foreign language skills in MLES in different ways.

  • The collection of exemplars for the Beginners Italian Language module proved to be a useful resource that can also be used with future cohorts. The workshops were not attended by all students, but those who did attend engaged in the activities proposed and asked several interesting questions about the standards of achievement described in the marking schemes (e.g. grade definitions; use of terms and phrases).
  • The systematic analysis of the focus groups provided valuable insights into students’ disengagement with marking schemes. It also brought to light some issues that would need to be addressed before designing new rubrics.
  • The literature review provided research and critical perspectives on marking schemes as a tool of evaluation and a tool for learning. It suggested new ways of thinking about marking and rubrics and provided a scholarly basis for potential wider projects. The discussion it stimulated, however different the opinions, was an important starting point for the development of a shared discourse on assessment.

Reflections

The fuzziness of marking students’ complex performance cannot be overcome by simply linking numerical marks to qualitative standard descriptors. As noted in an HEA document, even the most detailed rubrics cannot capture all aspects of ‘quality’ (HEA, 2012), and standards can be better communicated by discussing exemplars. There is also an issue with fixing the boundaries between grades on a linear scale (Sadler, 2013), and, as Race warns, the dialogue between learners and assessors (Race, HEA) can easily break down because of the evaluative terms typically used to pin down different standards of achievement. Despite all these pitfalls, in the current HE context rubrics can benefit learning and teaching if they are constructed thoughtfully and with the involvement of all stakeholders.

By offering opportunities to discuss criteria and standards with students, rubrics can help to build a common understanding of how marks are assigned and so foster students’ assessment literacy, especially if their use is supported by relevant exemplars.

The belief that rubrics need to be standardised across modules, levels and years of study makes designing rubrics particularly difficult for ‘foreign languages’. Cultural changes require time and the involvement of all stakeholders, especially where the changes concern key issues that are difficult to address without a shared view on language, language learning and assessment. A thorough discussion of rubrics can provide chances to share ideas on marking, assessment and language development not only between students and staff but also within a team of assessors.

I have tried to engage students in the appraisal of rubrics and to avoid a market-research approach to focus groups. It is clear that, if we are committed to making every assessment experience a learning experience and to avoiding the potential uneasiness that rubrics can cause students, we need to explore new ways of defining the standards of achievement in foreign languages. Establishing pedagogical partnerships with students seems a good way to start.

Follow up

I will encourage a differentiation of rubrics based on level of language proficiency and a collection of exemplars for other language modules. The natural follow up to this project would be to continue enhancing the rubrics used for evaluation and feedback in languages in the light of the analysis of the focus group discussions and the review of the literature on assessment, ideally with the collaboration of students. Possible connections between the marking schemes used to assess language modules and cultural modules will be explored.

References

HEA, 2012. A Marked Improvement. Transforming assessment in HE. York: Higher Education Academy.

Race, P. Using feedback to help students to learn [online] Available at https://www.heacademy.ac.uk/knowledge-hub/using-feedback-help-students-learn   [accessed on 15/8/2018]

Sadler, D. R. 2013. The futility of attempting to codify academic achievement standards. Higher Education 67 (3): 273-288.

 

Links to related posts

‘How did I do?’ Finding new ways to describe the standards of foreign language performance. A follow-up project on the redesign of two marking schemes (DLC)

Working in partnership with our lecturers to redesign language marking schemes 

Involving students in the appraisal of rubrics for performance-based assessment in Foreign Languages By Dott. Rita Balestrini

Redesigning postgraduate curricula on commercial law through student-engaging, research-informed and multidisciplinary pathway programmes

Professor Stavroula Karapapa, School of Law, s.karapapa@reading.ac.uk

Overview

In 2015/2016, we substantially redeveloped our postgraduate provision in commercial law through the introduction of a pioneering, student-engaging, research-informed and multidisciplinary set of postgraduate pathway programmes. In contrast to the programmes previously in place, the new curriculum is unique in its pathway design, allowing students to develop a breadth of commercial law expertise whilst also specialising in their area of interest (for a full list of programmes see here). The project on which this entry reflects has resulted in an innovative curriculum that shaped the identity of the Centre for Commercial Law and Financial Regulation (CCLFR) as a centre of excellence on cutting-edge themes of commercial law.

Objectives

  • To redevelop our postgraduate curriculum in commercial law through the introduction of cutting-edge themes of study based on the principles of research-informed and multidisciplinary teaching.
  • To empower student learning, improve student experience, and foster the development of a learning community.
  • To hear the student voice in the design of the curriculum and to engage ‘students as partners’ proactively and directly in the development and evaluation of the core module for the new programmes.

Context

As often happens in Higher Education, the postgraduate programmes in Commercial Law previously in place were the result of the work of independent colleagues at various points in time, starting in 2011. Modular options reflected this dynamic, and they were also impacted by continuous staffing changes over the years. The pathway programmes are the result of collective effort within the School of Law, effective consultation with students and evaluation of their feedback, and constructive collaboration with colleagues from various Schools and services across the University (including marketing, careers, conversions etc.).

Implementation

Following a review of our PGT provision, we redeveloped our commercial law curriculum on the basis of three pillars:

  1. student feedback (module evaluation forms and ‘graduation’ forms collected since 2011) concerning suggestions for improvement, informal comments from students enrolled in 2015/2016 on ideas for new modules/programmes and engagement of ‘students as partners’ in the development of the core module for the new pathway programmes;
  2. extensive market study carried out by marketing and the (then) PGT Director regarding areas worth expanding on;
  3. expansion of our module offerings through the valuable contribution of numerous colleagues in the School of Law and consultation with various Schools across the University that agreed to open up relevant modules, effectively enhancing multidisciplinarity in our programmes.

Instead of offering numerous programmes with no clear link to each other, we introduced a set of pathway programmes (including five new PGT programmes and a redesign of the existing ones) whereby all programmes are centred around one core legal field, International Commercial Law, and students have the option to follow a pathway in a specialist area designed around our research strengths as a School and as a University, essentially building on research-informed teaching. Part of this redesign was the revision of the compulsory module for all pathways, LWMTAI-Advanced Issues in Commercial Law, which was based on the engagement of students as partners, drawing on a UoR small-scale research project initiated in June 2016, accounts of which are available here and here.

Impact

The collective effort of numerous colleagues in the School of Law and the support from various Schools and services across the University resulted in the development of a pioneering set of pathway programmes, centred around the values of research-informed teaching and multidisciplinarity and developed on the basis of student feedback. The project enhanced student engagement, taking on board student views on the learning design. The redrafting of a core module (LWMTAI) had a direct impact on student learning, enabling students to proactively review their own learning process and to develop an increased sense of leadership and motivation. There was also a positive correlation between the introduction of new pathways (especially Information Technology and Commerce; Energy Law and Natural Resources) and PGT recruitment. Anecdotal evidence indicates that the conversion rate of existing PGT students to our PGR programme has also increased. Importantly, the redevelopment of our programmes created a distinctive identity for CCLFR as a centre of excellence on cutting-edge themes of commercial law.

Reflections

The success of the redevelopment of our PGT curriculum was based on three pillars:

  1. Collective effort: The redevelopment of the programmes required the engagement of various colleagues from the School of Law who met on numerous occasions to reflect on the programmes and introduced new modules on cutting-edge themes to meet the needs of the new pathway design. This effort exceeded business as usual. An example of such collective effort is the redesign of the core module of the pathway programmes which followed the ‘student as partners’ approach and was implemented with the collaboration of various members of staff from the School of Law.
  2. Student engagement: Unlike what usually happens in higher education with ex-post student feedback, the pathway design used that feedback constructively in designing new programmes, taking into consideration student comments in evaluation forms and also engaging students in the programme design process. Importantly, it was students themselves who proactively informed the curriculum of the core module for all pathway programmes, with their voice being heard even before the completion of the taught component.
  3. Cutting-edge themes and research-informed teaching: At the heart of student feedback was the desire to increase the number of modular offerings from other Schools and Departments, effectively enhancing the multidisciplinary approach that was already in place. Introducing more modules from other Schools into our curriculum on the basis of their relevance and appropriateness to our pathways has been a learning process for us as educators, resulting in dynamic synergies and an innovative curriculum as the end result of the exercise.

Links

Details on our new pathway programmes are available here: http://www.reading.ac.uk/law/pg-taught/law-pgt-courses.aspx

Involving students in the appraisal of rubrics for performance-based assessment in Foreign Languages By Dott. Rita Balestrini

Context

In 2016, the Department of Modern Languages and European Studies (MLES) decided that the marking schemes used to assess writing and speaking skills needed to be revised and standardised in order to ensure transparency and consistency of evaluation across different languages and levels. A number of colleagues teaching language modules had a preliminary meeting to discuss what changes had to be made, what criteria to include in the new rubrics and whether the new marking schemes would apply to all levels. While addressing these questions, I developed a project with the support of the Teaching and Learning Development Fund. The project, now in its final stage, aims to enhance the process of assessing writing and speaking skills across the languages taught in the department. It intends to make assessment more transparent, understandable and useful for students; to foster their active participation in the process; and to increase their uptake of feedback.

The first stage of the project involved:

  • a literature review on the use of standards-based assessment, assessment rubrics and exemplars in higher education;
  • the organization of three focus groups, one for each year of study;
  • the development of a questionnaire, in collaboration with three students, based on the initial findings from the focus groups;
  • the collection of exemplars of written and oral work to be piloted for one Beginners language module.

I had a few opportunities to disseminate some key ideas that emerged from the literature review – at the School of Literature and Languages’ assessment and feedback away day, the CQSD showcase and the autumn meeting of the Language Teaching Community of Practice. Having only touched upon the focus groups at the CQSD showcase, I will describe here how they were organised, run and analysed, and will summarise some of the insights gained.

Organising and running the focus groups

Focus groups are a method of qualitative research that has become increasingly popular and is often used to inform policies and improve the provision of services. However, the data generated by a focus group are not generalisable to a population group as a whole (Barbour, 2007; Howitt, 2016).

After attending the People Development session on ‘Conducting Focus Groups’, I realised that the logistics of their organisation, the transcription of the discussions and the analysis of the data they generate require a considerable amount of time and detailed planning. Nonetheless, I decided to use them to gain insights into students’ perspectives on the assessment process and into their understanding of marking criteria.

The recruitment of participants was not a quick task. It involved sending several emails to students studying at least one language in the department and visiting classrooms to advertise the project. In the end, I managed to recruit twenty-two volunteers: eight for Part I, six for Part II and eight for Part III. I obtained their consent to record the discussions and use the data generated by the analysis. As a ‘thank you’ for participating, students received a £10 Amazon voucher.

Each focus group lasted one hour; the discussions were recorded in full and were based on the same topic guide and stimulus material. To open the discussion, I used visual stimuli and asked the following question:

  • In your opinion, what is the aim of assessment?

In all three groups, this triggered some initial interaction directly with me. I then started picking up on differences between participants’ perspectives, asking for clarification and using their insights. Slowly, a relaxed and non-threatening atmosphere developed and led to more spontaneous and natural group conversation, which followed different dynamics in each group. I then began to draw on some core questions I had prepared to elicit students’ perspectives. During each session, I took notes on turn-taking and some relevant contextual clues.

I ended all three focus group sessions by asking participants to carry out a task in groups of three or four. I gave each group a copy of the marking criteria currently used in the department and an empty grid reproducing the structure of the marking schemes. I asked them the following question:

  • If you were given the chance to generate your own marking criteria, what aspects of writing/speaking/translating would you add or eliminate?

I then invited them to discuss their views and use the empty grid to write down the main ideas shared by the members of their group. The most desired criteria were effort, commitment, and participation.

Transcribing and analysing the focus groups’ discussions

Focus groups, as a qualitative method, are not tied to any specific analytical framework, but qualitative researchers warn us not to take the discourse data at face value (Barbour, 2007:21). Bearing this in mind, I transcribed the recorded discussions and chose discourse analysis as an analytical framework to identify the discursive patterns emerging from students’ spoken interactions.

The focus of the analysis was more on ‘words’ and ‘ideas’ rather than on the process of interaction. I read and listened to the discussions many times and, as I identified recurrent themes, I started coding some excerpts. I then moved back and forth between the coding frame and the transcripts, adding or removing themes, renaming them, reallocating excerpts to different ‘themes’.

Spoken discourse lends itself to multiple levels of analysis, but since my focus was on students’ perspectives on the assessment process and their understanding of marking criteria, I concentrated on those themes that seemed to offer more insights into these specific aspects. Relating one theme to the other helped me to shed new light on some familiar issues and to reflect on them in a new way.

Some insights into students’ perspectives

As language learners, students gain personal experience of the complexity of language and language learning, but the analysis suggests that they draw on the theme of complexity to articulate their unease with the atomistic approach to evaluation embodied in rubrics and, at times, also to contest the descriptors of the standard for a first-class mark. This made me reflect on whether the achievement of almost native-like abilities is actually the standard on which we want to base our evaluation. Larsen-Freeman’s (2015) and Kramsch’s (2008) approach to language development as a ‘complex system’ helped me to shed light on the ideas of ‘complexity’ and ‘non-linear relations’ in the context of language learning which emerged from the analysis.

The second theme I identified is the ambiguity and vagueness of the standards for each criterion. Students draw on this theme not so much to communicate their lack of understanding of the marking scheme, but to question the reliability of a process of evaluation that matches performances to numerical values by using opaque descriptors.

The third theme that runs through the discussions is the tension between the promise of objectivity in the marking schemes and the fact that their use inevitably involves an element of subjectivity. There is also a tension between the desire for an objective counting of errors and the feeling that ‘errors’ need to be ‘weighted’ in relation to a specific learning context and an individual learning path. On the one hand, there is the unpredictable and infinite variety of complex performances that cannot easily be broken down into parts in order to be evaluated objectively; on the other hand, there is the expectation that the sum of the parts, when adequately mapped to clear marking schemes, results in an objective mark.

Rubrics in general seem to be part of a double discourse. They are described as unreliable, discouraging and disheartening as an instructional tool. The feedback they provide is seen as having no effect on language development, unlike the complex and personalised feedback that teachers provide: effective and engaging feedback is always associated with the expert knowledge of a teacher, not with rubrics. However, the need for rubrics as a tool of evaluation is not questioned in itself.

The idea of using exemplars to pin down standards and make the process of evaluation more objective emerged from the Part III focus group discussion. Students considered the pros and cons of using exemplars, drawing on the same rationales that are debated in scholarly articles. Listening to, and reading systematically through, students’ discourses was quite revealing and brought to light some questionable views on language and language assessment that most marking schemes measuring achievement in foreign languages contribute to promoting.

Conclusion

The insights into students’ perspectives gained from the analysis of the focus groups suggest that rubrics can easily create false expectations in students and foster an assessment ‘culture’ based on an idea of learning as a steady increase in skills. We need to ask ourselves how we could design marking schemes that communicate a more realistic view of language development. Could we create marking schemes that students do not find disheartening, or ineffective in helping them understand how to progress? Rather than just evaluation tools, rubrics should be learning tools that describe different levels of performance and avoid evaluative language.

However, the issues of ‘transparency’ and ‘reliability’ cannot be solved by designing clearer, more detailed or student-friendly rubrics. These issues can only be addressed by sharing our expert knowledge of ‘criteria’ and ‘standards’ with students, which can be achieved through dialogue, practice, observation and imitation. Engaging students in marking exercises and involving them in the construction of marking schemes – for example by asking them how they would measure commonly desired criteria like effort and commitment – offers us a way forward.

References:

Barbour, R. 2007. Doing focus groups. London: Sage.

Howitt, D. 2016. Qualitative Research Methods in Psychology. Harlow: Pearson.

Kramsch, C. 2008. Ecological perspectives on foreign language education. Language Teaching 41 (3): 389-408.

Larsen-Freeman, D. 2015. Saying what we mean: Making a case for ‘language acquisition’ to become ‘language development’. Language Teaching 48 (4): 491-505.

Potter, J. and Wetherell, M. 1987. Discourse and social psychology: Beyond attitudes and behaviour. London: Sage.

 

Links to related posts

‘How did I do?’ Finding new ways to describe the standards of foreign language performance. A follow-up project on the redesign of two marking schemes (DLC)

Working in partnership with our lecturers to redesign language marking schemes 

Sharing the ‘secrets’: Involving students in the use (and design?) of marking schemes

Engaging students as partners in the redesign of an existing course curriculum

Dr Despoina Mantzari, School of Law
d.mantzari@reading.ac.uk

Overview

In June 2016 I was awarded a small University of Reading Teaching and Learning grant with the objective of involving a group of ten postgraduate taught students from the School of Law as partners in the process of redesigning the curriculum of a core postgraduate taught module. This entry reflects on the process of engaging students as partners in the redesign of an existing course curriculum. It discusses how insights from the burgeoning literature on students as partners in higher education informed the process and assesses its outcomes for improving and supporting teaching and learning.

Objectives

  • To listen to the ‘student voice’ before course delivery, by proactively engaging students as partners in the redesign of the module.
  • To co-create learning experiences in collaboration with students in ways that go beyond student satisfaction surveys and other ex-post forms of evaluation.
  • To redesign a module so that it is both engaging and empowering.

Context

The module Advanced International Commercial Law Issues (LWMTAI), being a core compulsory module of the new LLM, had to be redesigned so as to fit into the new programme requirements. In doing so, I wanted to listen to the ‘student voice’ before course delivery, by proactively engaging students as partners in the redesign of the module. This exercise departs from current practice in higher education, where ‘student voice’ is largely heard following the completion of the taught component of the module on a Module Evaluation Form.

Implementation

Guided by the values of inclusion and partnership, I first emailed all students enrolled on the module in its pre-revised form (2015-16), introduced the project and its aims, and invited expressions of interest. In order to further test the module’s renewed approach to the theoretical framework and other relevant components, I also invited a group of five students who had never been enrolled on the module to participate in the project. In selecting this latter group, I was guided by considerations of diversity, both in terms of ethnic and cultural background and in terms of prior exposure to commercial law. Inviting all LLM students who had never enrolled on the module would have been inappropriate for the aims of the project and would have made it difficult to manage. Both the previously used (prior to 2015-16) and the revised (to be introduced in 2016-17) module description forms were circulated to both groups along with a questionnaire. All students involved were asked to reflect on the strengths and weaknesses of the module, as reflected in the module description forms, and to share any other concerns or recommendations. These were discussed at a two-hour event, open to all students participating in the project and to School of Law staff involved in postgraduate taught and undergraduate Law teaching.

Impact

The project enhanced student motivation and engagement, and fostered the development of a learning community within the School of Law. Students enjoyed their participation in the project and in particular their contribution to the event that followed. They were fascinated by their collaboration with staff and by their active role in critically reviewing the course curriculum.

The project also helped students to review their own learning process and allowed them to develop an increased sense of leadership and motivation. It also increased their confidence to express their views in academic settings. Student involvement facilitated the design of the module in ways that significantly improved it.

The project had a transformative effect on the way I perceive my role as an educator and the boundaries thereof.

Reflections

Three key factors contributed to the project’s success:

First, the fact that I ‘institutionalised’ the project by applying for a University of Reading Teaching and Learning Small Research Grant not only allowed me to fund the activities, but also raised the profile of the project in the eyes of both students and staff.

Second, the careful selection of those elements of the curriculum redesign that would be part of the student-staff partnership. I opted for a model of interaction in which students are given limited choice and influence. The reason for this related to the nature of the project, which concerned the redesign of an existing module in its entirety. When engaging students as partners, reciprocity cannot always be fulfilled, as high-stakes elements of a module redesign, such as the theoretical framework or the methods of assessment, cannot be entirely handed over to students. Students may find themselves confused if a tutor hands over total control of such an important element without preparation or guidance, and such practice may jeopardise the gatekeeper function of the educator.

The third element went to the heart of students-as-partners practice: how many students to involve in the project, and by which means. The literature suggests that working with students as partners can involve individuals or small groups, and situations in which students are invited, elected or selected to become partners. While the literature has drawn attention to the potential benefits of whole-cohort approaches, it may be difficult, impossible or even undesirable in some contexts to involve all students at all times. In this case, a whole-cohort approach could not be adopted, as some students enrolled on the module in its pre-revised form had already left the University. Furthermore, selecting students could potentially undermine the values of inclusion, respect and responsibility that underpin the students-as-partners approach. Meaningful partnership requires a high level of equality and contribution from partners, and that would be jeopardised by an approach that invited to the project only students whom the module convenor deemed suitable to participate.