Using student feedback to make university-directed learning on placement more engaging

Anjali Mehta Chandar: a.m.chandar@reading.ac.uk

Charlie Waller Institute, School of Psychology and Clinical Language Sciences

 

Overview

Our vocational postgraduate courses in Cognitive Behavioural Therapy include University Directed Learning (UDL) days that are completed within the placement setting (e.g. the student's NHS trust). A qualitative student feedback survey allowed us to adapt this format collaboratively, with favourable outcomes in how interesting, enjoyable and useful the students found the days.

Objectives

Our objectives were as follows:

- To ascertain how interesting, enjoyable and useful the UDL days were, as perceived by the students, based on pedagogical findings that students engage best, and are most satisfied, when these characteristics are met (e.g. Ramsden, 2003).

- To make improvements to the UDL days based on qualitative student feedback.

- To ascertain whether our improvements had made the UDL days more interesting, enjoyable and useful, as perceived by the next cohort of students.

Context

The Educational Mental Health Practitioner (EMHP) and Children’s Wellbeing Practitioner (CWP) programmes are one-year vocational postgraduate courses. The students are employed by an NHS trust, local authority or charity, and study at UoR to become qualified mental health practitioners.

UDL days make up a small proportion of the teaching days. They are self-guided teaching days, usually containing elements of e-learning, designed to complement and consolidate face-to-face teaching (live or remote). A combination of learning methods, including e-learning, has been shown to be effective in improving clinical skills (e.g. Sheikhaboumasoudi et al., 2018).

UDL days had been poorly received by our two 2019/2020 cohorts, according to feedback in the student rep meetings and the Mentimeter feedback collected after each UDL day; comments included: ‘there was too much [content] for one day’, ‘I felt pressured to fill [the form] out rather than focussing on the readings themselves’ and ‘[the reflective form] was too long and too detailed’. Whilst this gave us some ideas about changes to make, I was aware of the low completion rates for the Mentimeter feedback. Therefore, to hear from more voices, we decided to create a specific feedback survey about the UDLs to help us make amendments collaboratively.

Implementation

We started by creating a survey for the current students to ascertain their views on how interesting, enjoyable and useful the UDL days were. We also included qualitative questions about what they liked and disliked, and their ideas for specific improvements.

I then led a meeting with my course team to explore the key findings. We agreed to make several changes based on the specific feedback, such as:

– a variety of activities (not purely e-learning, but also roleplays, videos, self-practice/self-reflection tasks, group seminars run by lecturers, etc., to provide a more engaging day)
– fewer activities (we aimed for one main activity for every 1-1.5 hours to manage workload demands)
– an option either to complete the programme’s reflective form (redesigned to be simpler, by asking students to provide their own notes on each task) or to provide their notes in a format of their choice (e.g. mindmaps, vlogs, etc.), to increase accessibility
– the option to share these reflections on a discussion board for other students and the lecturer to comment on.

We were unable to implement these changes for the current cohort, as they had already completed all the UDL days in their timetable, so we made the changes for the 2020/2021 cohorts instead.

We then sought feedback from these new cohorts via a further survey to ascertain their views on how interesting, enjoyable and useful the UDLs were, with additional questions relating specifically to the new elements.

Impact

After the changes to the UDL format, the survey results for the newer cohorts were much more positive than those for the original cohort.

There was a significant increase in how interesting, enjoyable and useful the students found the days.

The trainees also largely agreed that the UDLs had an appropriate workload, e.g. one task per 1-1.5 hours.

They also largely agreed that UDLs included interactive and varied tasks. This adds some nuance to the literature cited above on the importance of e-learning: while e-learning can be effective, relying on it exclusively can be less engaging for trainees.

The students also praised the simplified reflective form as a helpful tool, and many appreciated the option to submit notes in their own preferred way.

Although we did not explore the role of lecturer feedback in the new UDL survey, research shows that lecturer interaction makes for a more engaging e-learning session (Dixson, 2010), which may help to explain why the UDLs were now received more favourably.

Moreover, the process of collecting data from the students via a feedback form seemed effective, in that we used feedback to adapt the specific teaching method, thus improving student satisfaction. Pedagogical research shows the importance of using qualitative questions (instead of, or as well as, quantitative methods) to elicit student feedback (Steyn et al., 2019).

Reflection

Overall, this redesign was successful, which may be attributable to our use of the student voice to make meaningful changes. This is in line with Flodén’s (2017) finding that student feedback can help to improve courses.

Furthermore, the changes we have made are in line with effective practice on other courses and at other universities, e.g. appropriate workload (Ginns et al., 2007), student choice of discussion format (Lin & Overbaugh, 2007), accessibility of resources (Mahmood et al., 2012) and lecturer interaction (Dixson, 2010).

There is a possible limitation to this case study, in that our more recent cohorts are generally happier on the course and may therefore be more positive about the UDLs. In future projects, it would be useful to identify themes from module evaluations and student rep meetings earlier, so that we can elicit specific survey feedback earlier in the course and make amendments sooner, allowing us to gather follow-up feedback from the same cohort.

In future variations of the survey, I would also wish to ask explicitly how trainees find sharing reflections on the Blackboard discussion boards, as this is the one change we had not elicited feedback on.

Follow Ups

We have continued to use these changes to the UDL format with subsequent cohorts, e.g. reduced workload, a variety of activities, simplified forms, choice of discussion format and lecturer interaction. Since the original cohort, we have received no further concerns about these days in the student rep meetings. The Mentimeter feedback at the end of each UDL is generally positive, with one person recently commenting: ‘this was a very engaging day’.

References


Dixson, M. D. (2010). Creating effective student engagement in online courses: What do students find engaging? Journal of the Scholarship of Teaching and Learning, 1-13.

Flodén, J. (2017). The impact of student feedback on teaching in higher education. Assessment & Evaluation in Higher Education, 42(7), 1054-1068.

Ginns, P., Prosser, M., & Barrie, S. (2007). Students’ perceptions of teaching quality in higher education: The perspective of currently enrolled students. Studies in Higher Education, 32(5), 603-615.

Lin, S. Y., & Overbaugh, R. C. (2007). The effect of student choice of online discussion format on tiered achievement and student satisfaction. Journal of Research on Technology in Education, 39(4), 399-415.

Mahmood, A., Mahmood, S. T., & Malik, A. B. (2012). A comparative study of student satisfaction level in distance learning and live classroom at higher education level. Turkish Online Journal of Distance Education, 13(1), 128-136.

Ramsden, P. (2003). Learning to teach in higher education. Routledge.

Sheikhaboumasoudi, R., Bagheri, M., Hosseini, S. A., Ashouri, E., & Elahi, N. (2018). Improving nursing students’ learning outcomes in fundamentals of nursing course through combination of traditional and e-learning methods. Iranian Journal of Nursing and Midwifery Research, 23(3), 217.

Steyn, C., Davies, C., & Sambo, A. (2019). Eliciting student feedback for course development: the application of a qualitative course evaluation tool among business research students. Assessment & Evaluation in Higher Education, 44(1), 11-24.

Links

CWI website: https://sites.reading.ac.uk/charlie-waller-institute/

Connecting with the Curriculum Framework: Using focus groups to diversify assessment (Part 2)

Dr Madeleine Davies and Michael Lyons, School of Literature and Languages

Overview

The Department of English Literature (DEL) has run two student focus groups and two whole-cohort surveys as part of our Teaching and Learning Development Fund ‘Diversifying Assessments’ project. This is the second of two T&L Exchange entries on this topic. Click here for the first entry, which outlines how the feedback received from students indicates that their module selection is informed by the assessment models used by individual modules. Underpinning these decisions is an attempt to avoid the ‘stress and anxiety’ that students associate with exams. The surprise of this second round of focus groups and surveys is the extent to which this avoidance appears to dominate students’ teaching and learning choices.

Objectives

  • The focus groups and surveys are used to gain feedback from DEL students about possible alternative forms of summative assessment to our standard assessed essay + exam model. This connects with the Curriculum Framework in its emphasis on Programme Review and also with the aims of the Assessment Project.
  • These conversations are designed to discover student views on the problems with existing assessment patterns and methods, as well as their reasons for preferring alternatives to them.
  • The conversations are also being used to explore the extent to which electronic methods of assessment can address identified assessment problems.

Context

Having used focus groups and surveys to provide initial qualitative data on our assessment practices, we noticed a widespread preference for alternatives to traditional exams (particularly the Learning Journal), and decided to investigate the reasons for this further. The second focus group and subsequent survey sought to identify why the Learning Journal in particular is so favoured by students, and we were keen to explore whether teaching and learning aims were perceived by students to be better achieved via this method than by the traditional exam. We also took the opportunity to ask students what they value most in feedback: the first focus group and survey had touched on this, but we decided this time to give students the opportunity to select four elements of feedback which they could rank in order of priority. This produced more nuanced data.

Implementation

  • A second focus group was convened to gather more detailed views on the negative attitudes towards exams, and to debate alternatives to this traditional assessment method.
  • A series of questions was asked to generate data and dialogue.
  • A Survey Monkey was circulated to all DEL students with the same series of questions as those used for the focus group in order to determine whether the focus group’s responses were representative of the wider cohort.
  • The Survey Monkey results are presented below. The numbers refer to student responses to each category (e.g. in graphic 1, 50 students selected option (b)); graphics 2 and 5 allowed students to rank their responses in order of priority.

Results

  • Whilst only 17% in the focus group preferred to keep to the traditional exam + assessed essay method, the survey found the aversion to exams to be more prominent. 88% of students preferred the Learning Journal over the exam, and 88% cited the likelihood of reducing stress and anxiety as a reason for this preference.
  • Furthermore, none of the survey respondents wanted to retain the traditional exam + assessed essay method, and 52% were in favour of a three-way split between types of assessment; this reflects a desire for significant diversity in assessment methods.
  • We find it helpful to know precisely what students want in terms of feedback: ‘a clear indication of errors and potential solutions’ was the overwhelming response. ‘Feedback that intersects with the Module Rubric’ was the second highest scorer (presumably a connection between the two was identified by students).
  • The students in the focus group mentioned a desire to choose assessment methods within modules on an individual basis. This may be one issue where student choice and pedagogy are not entirely compatible (see below).
  • Assessed Essay method: the results seem to indicate that replacing an exam with a second assessed essay is favoured across the Programme rather than being pinned to one Part.

Reflections

The results in the ‘Feedback’ sections are valuable for DEL: they indicate that clarity, diagnosis, and solutions-focused comments are key. In addressing our feedback conventions and practices, this input will help us to reflect on what we are doing when we give students feedback on their work.

The results of the focus group and of the subsequent survey do, however, raise some concerns about the potential conflict between ‘student choice’ and pedagogical practice. Students indicate that they not only want to avoid exams because of ‘stress’, but that they would also like to be able to select assessment methods within modules. This poses problems because marks are in part produced ‘against’ the rest of the batch: if the ‘base-line’ is removed by allowing students to choose assessment models, we would lack one of the main indicators of level.

In addition, the aims of some modules are best measured using exams. Convenors need to consider whether a student’s work can be assessed in non-exam formats but, if an exam is the best test of teaching and learning, it should be retained, regardless of student choice.

If, however, students overwhelmingly choose non-exam-based modules, those that retain an exam would be left in a vulnerable position: the project aims to diversify our assessments, but doing so could leave modules with traditional assessment patterns exposed to students deselecting them. This may have implications for benchmarking.

It may also be the case that the attempt to avoid ‘stress’ is not necessarily in students’ best interests. The workplace is not a stress-free zone and it is part of the university’s mission to produce resilient, employable graduates. Removing all ‘stress’ triggers may not be the best way to achieve this.

Follow up

  • DEL will convene a third focus group meeting in the Spring Term.
  • The co-leaders of the ‘Diversifying Assessments’ project will present the findings of the focus groups and surveys to DEL. We will outline the results of our work and call on colleagues to reflect on the assessment models used on their modules, with a view to volunteering to adopt different models where they consider this appropriate to their modules’ teaching and learning aims.
  • This should produce an overall assessment landscape that corresponds to students’ request for ‘three-way’ (at least) diversification of assessment.
  • The new landscape will be presented to the third focus group for final feedback.

Links

With thanks to Lauren McCann of TEL for sending me the first link, which includes a summary of students’ responses to various types of ‘new’ assessment formats.

https://www.facultyfocus.com/articles/online-education/assessment-strategies-students-prefer/

Conclusions (May 2018)

The ‘Diversifying Assessment in DEL’ TLDF Mini-Project revealed several compelling reasons for reflecting upon assessment practice within a traditional Humanities discipline (English Literature):

  1. Diversified cohort: HEIs are recruiting students from a wide variety of socio-cultural, economic and educational backgrounds and assessment practice needs to accommodate this newly diversified cohort.
  2. Employability: DEL students have always acquired advanced skills in formal essay-writing but graduates need to be flexible in terms of their writing competencies. Diversifying assessment to include formats involving blog-writing, report-writing, presentation preparation, persuasive writing, and creative writing produces agile students who are comfortable working within a variety of communication formats.
  3. Module-specific attainment: the assessment conventions in DEL, particularly at Part 2, follow a standardised format (33% assessed essay and 67% exam). The ‘Diversifying Assessment’ project revealed the extent to which module leaders need to reflect on the intended learning outcomes of their modules and to design assessments best suited to attaining them.
  4. Feedback: the student focus groups convened for the ‘Diversifying Assessment’ project returned repeatedly to the issue of feedback. Conversations about feedback will continue in DEL, particularly in relation to discussions around the Curriculum Framework.
  5. Digitalisation: eSFG (via EMA) has increased the visibility of a variety of potential digital assessment formats (for example, Blackboard Learning Journals, Wikis and Blogs). This supports diversification of assessment and it also supports our students’ digital skills (essential for employability).
  6. Student satisfaction: while colleagues should not feel pressured by student choice (which is not always modelled on academic considerations), there is clearly a desire among our students for more varied methods of assessment. One Focus Group student argued that fees had changed the way students view exams: students’ significant financial investment in their degrees has caused exams to be considered unacceptably ‘high risk’. The project revealed the extent to which Schools need to reflect on the many differences made by the new fees landscape, most of which are invisible to us.
  7. Focus Groups: the Project demonstrated the value of convening student focus groups and of listening to students’ attitudes and responses.
  8. Impact: one Part 2 module has moved away from an exam and towards a Learning Journal as a result of the project and it is hoped that more Part 2 module convenors will similarly decide to reflect on their assessment formats. The DEL project will be rolled out School-wide in the next session to encourage further conversations about assessment, feedback and diversification. It is hoped that these actions will contribute to Curriculum Framework activity in DEL and that they will generate a more diversified assessment landscape in the School.

Feedback via audio files in the Department of English Literature – Professor Cindy Becker


Cindy Becker is the Director of Teaching and Learning for the School of Literature and Languages and also teaches in the Department of English Literature. She is a Senior Fellow of the Higher Education Academy and has been awarded a University of Reading Teaching Fellowship. She is an enthusiastic member of several University Communities of Practice: Placement Tutors, University Teaching Fellows, Technology Enhanced Learning, and Student Engagement Champions.

Cindy is a member of Senate and has sat on university steering committees and working parties; she is also a member of the Management Committee for the School of Literature and Languages and chairs the School Board for Teaching and Learning. She is the convenor of Packaging Literature and Shakespeare on Film.

In September 2015 she started to trial the use of the audio feedback function within Turnitin’s online marking tool (GradeMark). This innovative approach did present some initial challenges but, overall, it proved to be a great success for both Cindy and her students.

OBJECTIVES

GradeMark was introduced to the University in the Summer of 2015. I wanted to use this new marking tool to explore different ways of providing feedback for students. In particular, I wanted to adopt a more personal approach and provide more in-depth feedback without significantly increasing the time I spend marking each essay.

CONTEXT

GradeMark allows you to produce typewritten feedback for assessed work, and this is what most of us are used to. However, it also lets you click on an icon and record an audio file of up to three minutes of spoken feedback instead.

IMPLEMENTATION

I started off by making notes as I marked the essay and then talking through them on the audio file. This did not work very well: my feedback became stilted, took longer than three minutes and was time-consuming to prepare. I think I lacked confidence at the outset.

Now I take a more relaxed approach. I make no more than a couple of notes (and often not even that) and then I simply press the record button. As I talk to the student I scroll down the assignment on the split screen, and this is enough to jog my memory as to what I want to say. Taking a methodical approach has helped me: I always begin with an overview, then work through specific challenges or praiseworthy elements, then end with a brief comment summing up my thoughts. If it goes wrong, I simply scrap the recording and begin again. I save time by setting the file to upload and then beginning work on the next assignment; this saves the frustration of staring at an upload symbol for ages when you want to get on with it.

IMPACT

It is worth the effort.

For now, students love it. I asked students to let me know whether they would prefer written or audio file feedback, and those who responded voted for audio files. The novelty factor might wear off, but I think at the moment it is a useful way to engage students with our assessment criteria and module learning aims, in class and beyond.

For now, I love it. It is a pleasant change; it is quicker and fuller than written feedback. It seems to allow me to range more widely and be more personally responsive to students through their assignments. Because I am ‘talking to them’ I have found myself more ready to suggest other modules they might like, or some further reading that they might enjoy.

REFLECTIONS

It can take a few attempts to ensure that your headphones are working within the system; this is usually a problem with GradeMark or Blackboard more generally, and restarting Blackboard or even your computer will fix it. You might not have headphones to hand, and that sounds like another investment of time and money, but it is a good idea to buy a cheap set – they cost around £20 from a supermarket and are perfectly adequate for the job. You feel like a twit talking to your computer. Of course you do – who wouldn’t? After your first few audio files it will feel perfectly natural.

For the future, I can see it having an impact on assignment tutorials. I believe I can have an equal impact via a tutorial or a three-minute audio file, and everyone actually listens to their audio file. I am going to have to decide what to do with the extra ‘spare’ contact time this might give me…

Engaging students in assessment design

Dr Maria Kambouri-Danos, Institute of Education

m.kambouridanos@reading.ac.uk

Year of activity 2016/17

Overview

This entry shares the experience of re-designing and evaluating assessment in collaboration with students. It explains the need for the new assessment design and then discusses the process of implementing it and evaluating its appropriateness. Finally, it reflects on the impact of using MCQ tests to assess students in higher education (HE), and on the importance of engaging students as partners in the development of new assessment tools.

Objectives

  • To re-design assessment and remove a high-stakes assessment element.
  • To proactively engage ‘students as partners’ in the development and evaluation of the new assessment tool.
  • To identify the appropriateness of the new design and its impact on both students and staff.

Context

Child Development (ED3FCD) is the core module for the BA in Children’s Development and Learning (BACDL), meaning that a pass grade must be achieved on the first submission to gain a BA Honours degree classification (failing leads to an ordinary degree). The assessment needed to be redesigned because it placed the entire weight of the students’ mark on a single essay. As programme director, I wanted to engage the students in the re-design process and evaluate the impact of the new design on both students and staff.

Implementation

After attending a session on ‘Effective Feedback: Ensuring Assessment and Feedback works for both Students and Staff Across a Programme’, I decided to explore further the idea of using multiple-choice question (MCQ) tests. To do so, I attended a session on ‘Team Based Learning (TBL)’ and another on ‘MCQ: More than just a Test of Information Recall’, to gather targeted knowledge about designing effective MCQs.

I realised that MCQ tests can help assess students’ understanding and knowledge and also stimulate students’ active and self-managed learning. Guided by the idea of ‘assessment for learning’, I proposed the use of an MCQ test at a steering group meeting (employees and alumni) and at a Board of Studies (BoS) meeting, which 2nd-year Foundation Degree as well as BACDL student representatives attended. The idea met with some initial resistance, as MCQ tests are not traditionally used in HE education departments. However, after different options had been explored and the advantages of MCQ tests highlighted, agreement was unanimous. At the last BoS meeting (2016), students and staff finalised the proposal for the new design, proposing to use the MCQ test for 20% of the overall mark and keeping the essay for the remaining 80%.

At the beginning of 2017, I invited all BACDL students to post their thoughts and concerns about the new design (and the MCQ test) anonymously on Padlet. Based on these comments, I then worked closely with the programme’s student representatives, holding regular meetings to discuss, plan and finalise the assessment design. We decided how to calculate the final mark (as the test was completed first individually and then in a group), as well as the total number of questions, the duration of the test, and so on. A pilot study was then conducted, during which a sample MCQ test was shared with all the students, who were asked to practise with it and then provide feedback. This helped us to decide the style of the questions used for the final test, an example of which is given below:

There are now more than one million learners in UK schools who speak English as an additional language (EAL). This represents a considerable proportion of the school population, well above 15 per cent. To help EAL children develop their English, teachers should do all the following, except…

a. use more pictures and photographs to help children make sense of new information.

b. use drama and role play to make learning memorable and encourage empathy.

c. maintain and develop the child’s first language alongside improving their English.

d. get children to work individually because getting them into groups will confuse them and make them feel bad for not understanding.

e. provide opportunities to talk before writing and use drills to help children memorise new language.
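
For readers curious about the mark calculation mentioned above, a purely illustrative sketch is given here. The 20%/80% split between the MCQ test and the essay is the weighting described in this entry; the 75%/25% split between the individual and group attempts at the test is an assumed figure for illustration only, not the programme’s actual weighting.

% Illustrative weighting only: the 0.75/0.25 individual/group split is an
% assumption; the 0.2/0.8 module split is the one agreed at the BoS.
\[ \text{MCQ mark} = 0.75 \times \text{individual score} + 0.25 \times \text{group score} \]
\[ \text{module mark} = 0.2 \times \text{MCQ mark} + 0.8 \times \text{essay mark} \]

Under these assumed weights, a student scoring 80 individually and 90 in the group round, with 65 on the essay, would receive 0.2 × (0.75 × 80 + 0.25 × 90) + 0.8 × 65 = 68.5 overall.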

Impact

Students were highly engaged in the process of developing the new design, and the staff-student collaboration encouraged the development of bonds within the group. The students were excited by the opportunity to actively develop their own course, and the experience empowered them to take ownership of their own learning. All of them agreed that they felt important and, as one student representative put it, that “their voices were heard”.

The new design encouraged students to take the time to gauge what they already know and identify their strengths and weaknesses. Students themselves noted that the MCQ test helped them to develop their learning as it was an additional study opportunity. One of them commented that “…writing notes was a good preparation for the exam. The examination was a good learning experience.” Staff also agreed that the test enabled students to (re)evaluate their own performance and enhance their learning. One of the team members noted that the “…test was highly appropriate for the module as it offered an opportunity for students to demonstrate their proficiency against all of the learning outcomes”.

Reflections

The new assessment design was implemented successfully because listening to the students’ voice and responding to their feedback was an essential part of the design process. Providing opportunities for both students and staff to offer their views and opinions, and clearly recognising and responding to their needs, were essential, as these measures empowered them and helped them to take ownership of their learning.

The BACDL experience suggests that MCQ tests can be adapted and used for different subject areas as well as to measure a great variety of educational objectives. Their flexibility means that they can be used for different levels of study or learning outcomes, from simple recall of knowledge to more complex levels, such as the student’s ability to analyse phenomena or apply principles to new situations.

However, good MCQ tests take time to develop. It is hoped that next year the process of developing the test will be less time-consuming, as we already have a bank of questions that we can draw on. This will also enable randomisation of questions, which will help to avoid misconduct. We are also investigating options that would allow the test to be administered online, meaning that feedback could be offered immediately, reducing even further the time and effort required to mark the test.

Follow up

MCQ tests are not a panacea; like any other assessment tool, they have both advantages and limitations. This project has confirmed that MCQ tests are adaptable and can be used across different subject areas, as well as to measure a wide variety of educational objectives. The evaluation of the assessment design will continue next year, and further feedback will be collected from the cohort and from next year’s student representatives.