How should MSc Placements be assessed? Gathering the views of students to inform assessment

 

By: Paul Jenkins, School of Psychology & Clinical Language Sciences, p.jenkins@reading.ac.uk
Image: two women sitting on stairs having a conversation. Image credit: Buro Millennial on Pexels.

Overview

The School of Psychology and Clinical Language Sciences (PCLS) offers several postgraduate degree programmes, nearly all of which include a placement element.  Getting the assessment right is essential to evaluating students on placement fairly.  As part of an ongoing review of programmes with placement components, a piece of work was commissioned to look at how placements should be assessed within PCLS.

Objectives

The primary aims of this project were to explore:

  • What elements of placements MSc students felt were important to assess; and
  • How MSc students felt these elements should be assessed.

Context

As placements form a ‘taught’ component of the course, any assessment needs to be carefully planned and contribute “directly to learning and skill development” (UoR, 2023).  Student feedback indicated that the current method of assessing placements, which comprises a written report of what was done and learned on placement, was unsatisfactory.  For instance, students felt that it did not reflect the amount of work put in over the course of the placement and that the final grade was too reliant on one piece of written work.

It was felt that gaining insight into current students’ views would be helpful to inform future changes to the way(s) in which MSc placements might be assessed, making this process proportionate and more useful for students.

Implementation

In February 2023, a grant from the UoR T&L Initiatives Fund was awarded to address the question of how MSc placements should be assessed.

A focus group discussion was conducted in June 2023, with participants recruited from PCLS MSc students. The focus group lasted around 45 minutes.  In addition, a 1:1 interview was held in July 2023 with another individual who wanted to share their views on the subject, and this is included to add detail to the data obtained from the focus group.

To frame the focus group and interview, open-ended questions were developed to explore participants’ experiences, opinions, and thoughts regarding placement and its assessment. The facilitator (a member of staff within PCLS) was present to encourage a relaxed atmosphere and to supplement prepared questions with prompts to gather participants’ views and pursue themes relevant to the research questions.  The following is a sample of the questions asked during the sessions:

  • What are the important elements of an MSc placement to be assessed?
  • How do you think MSc placements should be assessed?

The focus group was audio-recorded and the facilitator also kept notes to help keep track of themes and provide a more holistic picture of the discussion (Kornbluh, 2023).  The students were also given a document on different types of assessment and an exemplar of how a placement might be assessed to act as ‘stimulus material’ to prompt detailed discussion of their views on assessment.

Impact

The findings of the discussions provided insight into how students think placements should be assessed.  In terms of what students considered important to assess, several themes emerged:

  1. Assessing what was learned

Students talked about the importance of assessing what was learned, as opposed to a more cursory assessment of the time or activities spent on placement; for instance, if “technically, you put in the work but you didn’t actually apply it to anything”.  They reflected on the different environments and services within which placements took place, such as some being online and others being conducted in person, and the importance of asking students “to prove” that they have engaged with placement.  The importance of certain skills learned on placement (e.g., teamworking, presentation skills) was highlighted, as was how such skills relate to students’ futures.

  2. Reflecting on one’s own development

Several students commented on how they have developed over the course of placement, and how this could be included in the assessment.  For instance, one student suggested that assessments could cover “what skills are we learning and how much are we able to apply it… and how we’re changing”.  Another noted discussions they have with their supervisors, whereby they “don’t just talk about what I do… [but] also some sort of reflection,” and that this brings in “reflection of how you see yourself”.

  3. Capturing diversity of experiences

The discussion also covered the reality that students will have different experiences of placement and how it can be “a very subjective experience,” including different types and levels of supervision.  For instance, one student commented that “the difference between person to person doesn’t always end in […] what they’re doing but also where they started from, because we also came into the programme with very different experiences”.  Students also highlighted differences in effort put in by those on placement, sharing the perception that there were some students “who are doing everything they possibly can” and others who “slowly move to the back… waiting for things to be handed to them”.

As part of the project, students also discussed how these skills and elements of placement should be assessed and, again, several themes arose:

  1. Continuous assessment

Students discussed having the opportunity to reflect ‘as they go’ and potential problems with a unitary, retrospective assessment.  Whilst they felt that having a reflective piece is “a nice idea,” one student commented that many experiences gained on placement are difficult to recall at the time of submission.  They were also wary of having too much overlap between pieces of assessment, such as between a reflective report and a report of activities, and one student suggested being “forced to keep track of what you have been doing… in a detailed manner”.

Having been offered a list of potential assessment types to review in the focus group, one student felt that Reflective Diaries could be a better approach, perhaps used alongside an hours log.  Another suggested that Learning Logs with “certain points to learn about” could be helpful, perhaps covering “small reports on small things”.  Another suggested a “spaced out diary… or some form of input from our supervisor” could be of use, although also stated that they were unsure “how feasible that would be”.  It was also suggested that a website (or blog) could be used to help students log experiences and remain accountable.  Of note, some students chose to do this independently, with one saying: “I keep a log for myself”.

  2. Oral presentations

Many students mentioned advantages of an oral presentation over written work, including being “better able to express what I’m doing when I speak”.  Another commented that “when you write, you downplay” what was done on placement and that an oral form of assessment can be less constrained by “academic rules”.  Another student agreed, saying that a presentation would “let someone express [their experience] much better” and another concluded: “I think just talking would be better [than a written assignment]”.

Students suggested that oral presentations offer a chance to “talk through your experience” and also to field questions (e.g., “What do you think you specifically learned?”), which “makes you reflect a lot more”.  They also commented on the advantages of having other individuals present.  A student noted that presenting in a group means that you “get to see what other people have been doing [and] how they’ve developed their skills” which could even “change your perspective”.  It was commented that this approach can be “helpful to your peers as well, not just you”.

In a similar vein, one student suggested a viva voce (a one-to-one oral examination) whereby students “talk to our supervisors… and have that discussion” about their experiences.

  3. Assessing the thoroughness of the experience

One student suggested that having written assignments can limit introspection, and get one “writing it for the sake of having a reflective piece to submit” rather than discussing “how much have I grown”.  By contrast, they suggested that, in oral presentations, “flow is better – easier – and it really gives you cause to think about how you have developed”.  Further reflecting on oral presentations, one student commented that “it’s up to you how you present it and how you convey how much you’ve learned, what you’ve learned, how much you’ve grown” and “how you justify what you’ve done in your placement hours”.

Reflections

The insight gained from this work has proved invaluable when formulating assessment for the coming academic year.  Students’ views on the possibility of interpersonal assessment have informed the structure of oral presentations, in which students are given the opportunity to discuss an aspect of placement in front of their peers.  The marking criteria have been developed to incorporate some of this feedback, such as the inclusion of autonomy, personal development, and demonstration of relevant skills.

Whilst this was only a small study, some practical suggestions can be proposed.  For instance, when evidencing and discussing their placement experiences, students were clear that oral presentation offers several advantages over written methods (a more common approach to work-based assessment; Ferns & Moore, 2012).  The importance of assessing skills development over time was also highlighted, which could be considered when setting and providing structure for both formal and informal assessment (e.g., Bates et al., 2013).  Finally, it is perhaps also important for educators to keep in mind that students begin placement with different experiences, variation which has the potential to impact both their learning and their achievement.

Follow up

The summer of 2024 will be the first time oral presentations have run for several ‘placement’ modules.  We shall continue to refine the assessment itself (and the marking criteria) based on further feedback and look into whether concerns about the written reflective piece remain; if so, an assessment that relies more on continuous engagement could be considered.

References

  • Bates, J. et al. (2013).  Student perceptions of assessment and feedback in longitudinal integrated clerkships.  Medical Education, 47, 362–374. https://doi.org/10.1111/medu.12087
  • Ferns, S., & Moore, K. (2012).  Assessing student outcomes in fieldwork placements: An overview of current practice.  Asia-Pacific Journal of Cooperative Education, 13(4), 207–224.
  • Kornbluh, M. (2023).  Facilitation strategies for conducting focus groups attending to issues of power.  Qualitative Research in Psychology, 20, 1–20. https://doi.org/10.1080/14780887.2022.2066036
  • University of Reading. (2023, December).  Assessment and the Curriculum Framework. https://sites.reading.ac.uk/curriculum-framework/assessment/

Involving students in the appraisal of rubrics for performance-based assessment in Foreign Languages

By: Dott. Rita Balestrini

Context

In 2016, in the Department of Modern Languages and European Studies (DMLES), it was decided that the marking schemes used to assess writing and speaking skills needed to be revised and standardised in order to ensure transparency and consistency of evaluation across different languages and levels. A number of colleagues teaching language modules had a preliminary meeting to discuss what changes had to be made, what criteria to include in the new rubrics and whether the new marking schemes would apply to all levels. While addressing these questions, I developed a project with the support of the Teaching and Learning Development Fund. The project, now in its final stage, aims to enhance the process of assessing writing and speaking skills across the languages taught in the department. It intends to make assessment more transparent, understandable and useful for students; foster their active participation in the process; and increase their uptake of feedback.

The first stage of the project involved:

  • a literature review on the use of standard-based assessment, assessment rubrics and exemplars in higher education;
  • the organization of three focus groups, one for each year of study;
  • the development of a questionnaire, in collaboration with three students, based on the initial findings from the focus groups;
  • the collection of exemplars of written and oral work to be piloted for one Beginners language module.

I had a few opportunities to disseminate some key ideas that emerged from the literature review – at the School of Literature and Languages’ assessment and feedback away day, the CQSD showcase, and the autumn meeting of the Language Teaching Community of Practice. Having only touched upon the focus groups at the CQSD showcase, I will describe here how they were organised, run and analysed, and will summarise some of the insights gained.

Organising and running the focus groups

Focus groups are a method of qualitative research that has become increasingly popular and is often used to inform policies and improve the provision of services. However, the data generated by a focus group are not generalisable to a population group as a whole (Barbour, 2007; Howitt, 2016).

After attending the People Development session on ‘Conducting Focus groups’, I realised that the logistics of their organization, the transcription of the discussions and the analysis of the data they generate require a considerable amount of time and detailed planning. Nonetheless, I decided to use them to gain insights into students’ perspectives on the assessment process and into their understanding of marking criteria.

The recruitment of participants was not a quick task. It involved sending several emails to students studying at least one language in the department and visiting classrooms to advertise the project. In the end, I managed to recruit twenty-two volunteers: eight for Part I, six for Part II and eight for Part III. I obtained their consent to record the discussions and use the data generated by the analysis. As a ‘thank you’ for participating, students received a £10 Amazon voucher.

Each focus group lasted one hour; the discussions were recorded in full and were based on the same topic guide and stimulus material. To open discussion, I used visual stimuli and asked the following question:

  • In your opinion, what is the aim of assessment?

In all three groups, this triggered some initial interaction directly with me. I then started picking up on differences between participants’ perspectives, asking for clarification and using their insights. Slowly, a relaxed and non-threatening atmosphere developed and led to more spontaneous and natural group conversation, which followed different dynamics in each group. I then began to draw on some core questions I had prepared to elicit students’ perspectives. During each session, I took notes on turn-taking and some relevant contextual clues.

I ended all three focus group sessions by asking participants to carry out a task in groups of 3 or 4. I gave each group a copy of the marking criteria currently used in the department and one empty grid reproducing the structure of the marking schemes. I asked them the following question:

  • If you were given the chance to generate your own marking criteria, what aspects of writing/speaking/translating would you add or eliminate?

I then invited them to discuss their views and use the empty grid to write down the main ideas shared by the members of their group. The most desired criteria were effort, commitment, and participation.

Transcribing and analysing the focus groups’ discussions

Focus groups, as a qualitative method, are not tied to any specific analytical framework, but qualitative researchers warn us not to take the discourse data at face value (Barbour, 2007:21). Bearing this in mind, I transcribed the recorded discussions and chose discourse analysis as an analytical framework to identify the discursive patterns emerging from students’ spoken interactions.

The focus of the analysis was more on ‘words’ and ‘ideas’ than on the process of interaction. I read and listened to the discussions many times and, as I identified recurrent themes, I started coding some excerpts. I then moved back and forth between the coding frame and the transcripts, adding or removing themes, renaming them, and reallocating excerpts to different ‘themes’.

Spoken discourse lends itself to multiple levels of analysis, but since my focus was on students’ perspectives on the assessment process and their understanding of marking criteria, I concentrated on those themes that seemed to offer more insights into these specific aspects. Relating one theme to the other helped me to shed new light on some familiar issues and to reflect on them in a new way.

Some insights into students’ perspectives

As language learners, students gain personal experience of the complexity of language and language learning, but the analysis suggests that they draw on the theme of complexity to articulate their unease with the atomistic approach to evaluation embodied in rubrics and, at times, also to contest the descriptors of the standard for a first level class. This made me reflect on whether the achievement of almost native-like abilities is actually the standard on which we want to base our evaluation. Larsen-Freeman’s (2015) and Kramsch’s (2008) approach to language development as a ‘complex system’ helped me to shed light on the ideas of ‘complexity’ and ‘non-linear relations’ in the context of language learning which emerged from the analysis.

The second theme I identified is the ambiguity and vagueness of the standards for each criterion. Students draw on this theme not so much to communicate their lack of understanding of the marking scheme, but to question the reliability of a process of evaluation that matches performances to numerical values by using opaque descriptors.

The third theme that runs through the discussions is the tension between the promise of objectivity of the marking schemes and the fact that their use inevitably involves an element of subjectivity. There is also a tension between the desire for an objective counting of errors and the feeling that ‘errors’ need to be ‘weighted’ in relation to a specific learning context and an individual learning path. On the one hand, there is the unpredictable and infinite variety of complex performances that cannot easily be broken down into parts in order to be evaluated objectively; on the other hand, there is the expectation that the sum of the parts, when adequately mapped to clear marking schemes, results in an objective mark.

Rubrics in general seem to be part of a double discourse. They are described as unreliable, discouraging and disheartening as an instructional tool. The feedback they provide is seen as having none of the effect on language development that the complex and personalised feedback provided by teachers has. Effective and engaging feedback is always associated with the expert knowledge of a teacher, not with rubrics. However, the need for rubrics as a tool of evaluation is not questioned in itself.

The idea of using exemplars to pin down standards and make the process of evaluation more objective emerged from the Part III focus group discussion. Students considered the pros and cons of using exemplars, drawing on the same rationales that are debated in scholarly articles. Listening to, and reading systematically through, students’ discourses was quite revealing and brought to light some questionable views on language and language assessment that most marking schemes measuring achievement in foreign languages contribute to promoting.

Conclusion

The insights into students’ perspectives gained from the analysis of the focus groups suggest that rubrics can easily create false expectations in students and foster an assessment ‘culture’ based on an idea of learning as a steady increase in skills. We need to ask ourselves how we could design marking schemes that communicate a more realistic view of language development. Could we create marking schemes that students do not find disheartening, or ineffective in helping them understand how to progress? Rather than just evaluation tools, rubrics should be learning tools that describe different levels of performance and avoid evaluative language.

However, the issues of ‘transparency’ and ‘reliability’ cannot be solved by designing clearer, more detailed or student-friendly rubrics. These issues can only be addressed by sharing our expert knowledge of ‘criteria’ and ‘standards’ with students, which can be achieved through dialogue, practice, observation and imitation. Engaging students in marking exercises and involving them in the construction of marking schemes – for example by asking them how they would measure commonly desired criteria like effort and commitment – offers us a way forward.

References:

Barbour, R. 2007. Doing focus groups. London: Sage.

Howitt, D. 2016. Qualitative Research Methods in Psychology. Harlow: Pearson.

Kramsch, C. 2008. Ecological perspectives on foreign language education. Language Teaching 41 (3): 389-408.

Larsen-Freeman, D. 2015. Saying what we mean: Making a case for ‘language acquisition’ to become ‘language development’. Language Teaching 48 (4): 491-505.

Potter, M. and M. Wetherell. 1987. Discourse and social psychology. Beyond attitudes and behaviours. London: Sage.

 

Links to related posts

‘How did I do?’ Finding new ways to describe the standards of foreign language performance. A follow-up project on the redesign of two marking schemes (DLC)

Working in partnership with our lecturers to redesign language marking schemes 

Sharing the ‘secrets’: Involving students in the use (and design?) of marking schemes