Addressing the perception gap: Helping international students to succeed in the lecture-only Macroeconomics 2 course
Author: Jan Čapek, Masaryk University
Keywords:
continuous learning, internationalising the curriculum, (peer) assessment, economics
Summary:
The need for internationalisation in Macroeconomics 2 is related to earlier students’ statements that there were not enough links between the course and the realities of their worlds/lives. In addition, many students were not satisfied with the course completion requirements. An innovation was implemented in the course to address these challenges: first, to internationalise the course curriculum, students were required to write a paper that combined their knowledge of their home country with the course material. Second, new and more varied assessment methods were introduced to encourage continuous learning. Using mixed research methods, the chapter reveals that internationalisation and the emphasis on continuous preparation did lead to higher-level learning, but a small subset of students who resisted the changes failed to benefit from them. Similarly, many students appreciated the opportunity the semester-long writing project offered to integrate their newly acquired knowledge with what they knew about the economies of their home countries, but quite a few of them avoided investing time and effort in the paper, and thus their learning suffered. The chapter concludes by suggesting ways to encourage better learning outcomes, such as including peer learning and co-opting students into the new approach to assessment.
As Hussein and Schiffelbein (2020) show, international students face many difficulties when coming to study from different academic, cultural, and linguistic backgrounds. Although the difficulties are likely curriculum-wide (or even broader), this chapter focuses on internationalisation in a single course. The need for internationalisation in Macroeconomics 2 arose from students’ feeling that there were few links between the course and the realities of their worlds/lives. In addition, many students were not satisfied with the course completion requirements.
Two innovations were implemented in the course in order to address the above challenges. One made explicit use of students’ existing knowledge (Cosh 2000) by combining the course material with information about their home countries. The other ensured that assessment aided learning (Brown 2004) by introducing multiple and varied assessment methods. The chapter investigates if these changes improved student learning and satisfaction with the assessment methods. Although the teaching and learning literature predicts that the implemented changes would impact student learning and satisfaction positively, the analysis presented in this chapter uncovered a more complex situation. While the innovation did not result in an improvement in overall student performance compared to earlier years, students who studied continuously during the semester did perform better than their peers. The concluding section of the chapter considers the potential reasons for this outcome and recommends areas for future improvement.
Over the years of teaching, two teaching challenges linked to internationalisation emerged. First, not all students identified with the course completion requirements. They expressed dissatisfaction that the course concluded with a single written exam in the form of a test. Students’ reactions indicated at least two reasons for this dissatisfaction. Since there were no complementary completion requirements, everything depended on one test, which was stressful. Additionally, since there was no required work during the semester, some students deferred learning until the end of the semester, resulting in an unmanageable amount of study during the examination period.
Second, students felt that there were few macroeconomics links to the realities of their worlds/lives. Although the textbook contains many examples, they primarily focus on the US or are cast in a broader international context. The course evaluation forms and discussions with students indicated that (some) students did not see connections between Macroeconomics 2 and their life or their perspective of their world.
Both teaching challenges suggested the need for, in the words of Hussein and Schiffelbein (2020: 66), ‘addressing the perception gap’. Students expected something different based on their earlier experiences and also found the topics and examples that the course covered distant from their realities. This frustrated them because ‘international students are often stranded as they seek to earn high grades but lack the necessary academic, cultural, and linguistic skills to be successful’ (Hussein and Schiffelbein 2020: 66).
To address students’ issues formulated in the first teaching challenge, the revised completion requirements added two new forms of assessment: short weekly online quizzes and a project on a macroeconomic issue in the student’s country. In the case of the weekly quizzes, students received summative feedback, in the form of points received for correct answers, and formative written feedback summarising and explaining the most problematic aspects of each quiz. The project featured two submissions with feedback: formative feedback for the first draft, and summative feedback – a grade – given after the final draft by the teaching assistant. In the new grading scheme, the final exam test made up only 50% of the grade, another 30% corresponded to the weekly online quizzes, and the final 20% was awarded for the project.
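As an illustration, the new weighting can be expressed as a simple weighted sum. The sketch below is illustrative only – the function name and the percentage-based inputs are my assumptions for the example, not the faculty’s actual grading code:

```python
def course_score(exam_pct: float, quiz_pct: float, project_pct: float) -> float:
    """Combine the three assessment components into a final course score.

    Each argument is the percentage (0-100) earned on that component.
    The weights mirror the revised scheme: 50% final exam,
    30% weekly quizzes, 20% project.
    """
    return 0.5 * exam_pct + 0.3 * quiz_pct + 0.2 * project_pct

# For example, a student with 60% on the exam, 80% on the quizzes
# and 70% on the project scores 0.5*60 + 0.3*80 + 0.2*70 = 68.
```

Expressed this way, the scheme makes clear that half of the grade still depends on the final exam, while the other half rewards continuous work during the semester.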
According to Wiliam and Thompson (2007), establishing where the learners are in their learning and what needs to be done to get them to achieve the course’s learning goals are key processes in learning and teaching. Consequently, formative assessment can be conceptualised as providing feedback that moves learners forward, such as the comments the students received as their first feedback on their draft projects in Macroeconomics 2. Moreover, even the summative feedback for the weekly quizzes can be understood as formative assessment, that is, a formative use of summative tests: students learn about their knowledge during the semester, which allows them to adjust their study techniques to achieve better grades (e.g., Wiliam 2000). Hence, I expected that introducing more complementary completion requirements during the semester would increase students’ continuous work and effort and ultimately increase the course success rate, final exam scores, and grade average (H1).
To address the second teaching challenge, the innovated course offered learning tools that guided students to consider macroeconomic issues in their countries. Knight (2003: 2) defines internationalisation as ‘the process of integrating an international, intercultural or global dimension into the purpose, functions or delivery of post-secondary education’. Most often, as is also the case in many other chapters of this volume, it is understood in practical terms as bringing international students to campus, mixing international and home students, or expanding the education of home students from the domestic perspective by adding foreign and/or international perspectives.
In Macroeconomics 2, international students were taught in isolation from home – that is, Czech – students, a condition which I had no authority to change, and learned with the help of an American textbook. This meant that it was a single foreign country’s perspective – rather than that of the home country – that needed to be augmented with the experience and knowledge of students about their home country. Thus, international students had the opportunity to reflect on the theory being presented in class in the context of the economies they lived in by doing a short project on a macroeconomic issue in their home countries, expanding the American/international focus of the textbook. Additionally, the newly added online weekly quizzes featured open-ended questions on four occasions during the semester addressing the student’s home country and linked to the topic covered in class. With 12 weeks per semester, this corresponds to roughly one open question every three weeks. The instructions explained that the response should be brief and apposite, limited to 500 characters. Therefore, I expected that introducing cases from students’ countries would improve the overall satisfaction with the course and identification with the course completion requirements (H2).
Macroeconomics 2 is taught at the Faculty of Economics and Administration at Masaryk University as part of the Finance and Business Management Master’s programmes. The course is always taught in the spring semester and the innovation detailed below was implemented in spring 2021. Macroeconomics 2 is a foundational theoretical course, which is obligatory for all Master’s students enrolled in the respective programmes. Students typically take the course in their first or second semester. The course is part of a self-financed, English-language programme offered to international students only. There were 44 students enrolled, mostly from African and Asian countries.
Macroeconomics 2 follows an international world-leading textbook (Mankiw 2019). The lecture slides are only slightly adapted versions of the presentations supplied as instructor resources for the textbook. The course is regularly taught face to face, but it had to switch to online delivery due to COVID-19-related restrictions. A teaching assistant helped provide feedback to students, including holding consultations for the semester project and its subsequent assessment. The course is scheduled for 12 or 13 weeks of teaching and, according to the study and examination regulations, it must be completed with an exam and a grade. The course’s original (pre-innovation) structure featured only one completion requirement: a written exam in the form of a multiple-choice test.
After the teaching was concluded, the teaching assistant collected two kinds of qualitative data. The first was the teaching assistant’s feedback on student projects. The second type of data came from focus group interviews that the teaching assistant conducted about the applicability and usefulness of the knowledge gained during the course, the contribution of the knowledge to the overall understanding of the macroeconomic environment in the students’ home countries, and the composition of the total grade. Twenty-nine students took part in one of the six three-to-six-member focus groups. Due to COVID-19 restrictions, the focus group interviews were conducted online via MS Teams and, on average, they lasted for 26 minutes. In addition, qualitative information given in response to an open-ended question on the end-of-semester course evaluation form by ten students was used to enrich the evaluation of the innovation.
After the course ended, several types of quantitative data were collected. First, I looked at student performance measures in four different ways. (1) The results of the 14 weekly quizzes were utilised: there were ten multiple-choice quizzes, as two out of twelve weeks were cancelled due to bank holidays or sickness, and the four open-question quizzes were treated as separate quizzes in the analysis. Thirty (out of 43, which is almost 70%) students participated in all weekly quizzes, and the number increases to 36 (over 83%) if we allow for two (out of 14) absences. (2) The final exam grades were based on a test. The test contained 24 multiple-choice questions – each with four available options – which students had to answer in 36 minutes (1.5 minutes per question). The test was administered online because of COVID-19 restrictions, but unlike in 2020, the test bank questions were reworded so that they were not directly searchable on the Internet. To pass the test, students needed to answer at least half of the questions correctly. (3) The grade average reports the mean of all the students’ final grades. Letter grades were converted to numerical values in the following way: Excellent (A) = 6, Very good (B) = 5, Good (C) = 4, Satisfactory (D) = 3, Sufficient (E) = 2, and Failed (F) = 1. (4) Finally, the course success rate provided data on the ratio of students who passed the course.
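The grade-average and success-rate measures described in points (3) and (4) can be sketched as follows. The snippet is illustrative – the function names and the grade list are made up for the example, not the actual cohort data:

```python
# Numeric values assigned to letter grades, as described in the text.
GRADE_VALUES = {"A": 6, "B": 5, "C": 4, "D": 3, "E": 2, "F": 1}

def grade_average(letter_grades):
    """Mean of the numeric equivalents of the students' final grades."""
    values = [GRADE_VALUES[g] for g in letter_grades]
    return sum(values) / len(values)

def success_rate(letter_grades):
    """Share of students who passed, i.e. received any grade other than F."""
    passed = sum(1 for g in letter_grades if g != "F")
    return passed / len(letter_grades)

# Hypothetical cohort of five students:
grades = ["A", "C", "C", "E", "F"]
# grade_average(grades) -> (6 + 4 + 4 + 2 + 1) / 5 = 3.4
# success_rate(grades)  -> 4 / 5 = 0.8
```

Note that with this conversion a higher grade average is better, which is why the averages reported below hover between Satisfactory (3) and Very good (5).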
Second, I used students’ attendance records based on online video-watching logs. The data included the number of views and focused on the 18 topics of the videos rather than the ten class sessions. Some of the videos had been recorded in the previous semester, and for these, only additional views in the treatment semester were reported. For the new topics covered in the treatment semester, the data comprised the number of students streaming the lectures live and the number of students subsequently watching the recordings.
Third, I relied on quantitative data based on four relevant questions from Masaryk University’s course evaluation form: Q1: The subject has an educational value for me, it enriches me; Q3: The teacher’s explanation was always clear and comprehensible; Q4: The teacher always came to the class well prepared; and Q5: The teacher has clearly stated what knowledge and skills will be assessed. These were based on six-point Likert scale, where the score of six was the best and one was the worst. Nineteen students filled out the course evaluation form.
Finally, to make the comparison of improvement possible, I used all available corresponding data for all the above measures from the previous three runs of the course (2018, 2019, and 2020). I included information from 2018 and 2019 because the 2020 data was problematic for comparison. In 2020, the exam had to be changed to online format at short notice. Using the regular test bank for online assessment led to a very substantial increase in scores, probably related to the availability of correct answers online. Yet, for the sake of transparency, I decided not to exclude data from 2020.
There is only partial evidence in support of the hypothesis that more varied completion requirements compelled students to work harder during the semester and thus positively influenced their performance (H1). The little evidence that exists speaks to sustained efforts during the course rather than increasing performance. As I pointed out above, a notable portion (83%) of students missed no more than two quizzes. Similarly, regarding the project, about 79% of the students sent their papers for the first feedback, and about 25% asked for feedback a second time.
As for attending the online lectures live or watching the recordings at home, the declining height of the bars in Figure 1 suggests that both the level of participation and the number of views of the recordings decreased during the semester. Even more notably, on average, the pre-recorded videos from 2020 were watched much less than those broadcast and recorded during the course. This anomaly in the trend may be best explained by one student’s remark in the evaluation form: ‘I personally don’t prefer the self-study from tutor video’. It seems students favour real-time class sessions even when the course is delivered online.
Figure 1. Views of streamed or recorded lectures
When it comes to student performance as measured by grade average and success rate, there was no increase in the success rate or improvement in the grade average compared to previous years (Table 1). While 2020 is an outlier, there were slightly better results in the control year 2019 and no notable differences between 2021 and 2018. In 2019-2020, the grade averages were roughly at the Good (C) grade, while the years 2018 and 2021 had worse grade averages – approximately halfway between Good (C) and Satisfactory (D). The data show that except for the year 2020, with its irregularities, the course success rate was typically between 0.80 and 0.90. The fact that students’ performance was not negatively affected by moving to online delivery is encouraging, though it fell short of my expectations. However, it is precisely the differences in the methods of delivery that make comparison with earlier iterations of the course difficult.
Table 1. Course statistics
| Year          | 2021   | 2020   | 2019 (control) | 2018 (control) |
|---------------|--------|--------|----------------|----------------|
| Enrolled      | 44     | 54     | 26             | 37             |
| Success rate  | 0.84   | 0.93   | 0.89           | 0.84           |
| Grade average | 3.63   | 4.14   | 4.03           | 3.63           |
| Examination   | Online | Online | In person      | In person      |
Nonetheless, the results were closer to the expected outcomes when we look at the performance of the most active subgroups of students in 2021. There was a positive correlation between continuous preparation with a focus on internationalisation and both grade average and success rate. The subgroup of students who participated in all the quizzes had a grade average of 4.20, while the subgroup who handed in the project for feedback featured a grade average of 3.74. In both of the selected examples, the results were better than the overall average of 3.63 for the whole class. Similarly, active students had better course success rates than the class average. Students participating in all the quizzes had a course success rate of 0.97, while students who handed in their projects for feedback had an average course success rate of 0.85.
Concentrating on the final exam test as a separate part of the completion requirements, there were also notable differences in the performance of active students. Students participating in all the quizzes on average answered 64% of test questions correctly, while students who did not participate in all the quizzes had only 54% correct answers. On the other hand, the results for the final exam test were very similar for students who handed in their projects for feedback (60% correct answers) and those who did not (59% correct answers). These results show that the innovation was fairly successful among those students who, as expected, studied continuously throughout the semester.
Nonetheless, the teaching assistant’s observations regarding the student projects shed light not only on why the new approach to assessment might not have been as influential on student learning as suggested in the literature (Brown 2004: 85), but also revealed an unanticipated problem. On the positive side, the final versions of the projects were much more advanced than the initial ones. Around 20% of the projects were of high quality, 50% were adequate, and 30% were submitted probably just to get some points. The teaching assistant noticed that many projects were written in the same style and used the same models, so she carefully checked the students’ understanding of the models during consultations. The most challenging part for the students was to make some logical interconnection between the models covered in the course and the project topic. This led the teaching assistant to estimate that at least 20% of the projects were written by third parties. Students who do not invest their time and work into completing such an assignment are unlikely to benefit from internationalisation and varied assessment exercises.
The results were similar when it came to students’ satisfaction with the course and identification with the course completion requirements (H2). While the quantitative data was rather discouraging, there are several interesting takeaways from Table 2, which shows students’ responses to the questions on the evaluation form. First, the responses to question 3 – on the clarity of the teacher’s explanations – and question 4 – on the teacher’s preparedness – were mostly unchanged in the treatment period compared to earlier years, even after disregarding the year with irregularities (2020). Second, the scores for the questions on the course’s educational value to the students (Q1) and the clarity of the knowledge and skills tested in assessment exercises (Q5) were slightly lower in 2021 than in the other years. An insightful comparison can be made with 2018 because the course success rate and the grade average in that year were quite similar to 2021. Even in this comparison, students rated the educational value of the course lower after the innovation and felt that the knowledge to be assessed was less clearly stated. It therefore seems that, in the eyes of students, more completion requirements carry the burden of less clear rules about what will be assessed and how. As for the educational value of the course, the quantitative data did not help answer why students perceived the innovated course as less valuable.
Table 2. The results from the course evaluation form
| Year      | 2021 | 2020 | 2019 (control) | 2018 (control) |
|-----------|------|------|----------------|----------------|
| Enrolled  | 44   | 54   | 26             | 37             |
| Responded | 19   | 31   | 8              | 19             |
| Q1        | 3.7  | 3.1  | 4.1            | 4.1            |
| Q3        | 4    | 3.5  | 4              | 4              |
| Q4        | 4.2  | 3.8  | 4.2            | 4.3            |
| Q5        | 3.9  | 3.2  | 4.2            | 4              |
The qualitative data revealed that student attitudes toward the innovation were mixed. On the positive side, the teaching assistant felt that the students were quite open to comments and video consultations. During the focus group interviews, students said the semester-long project helped put the material into practice; the feedback in particular was valuable for improving their work. Students also felt that the new composition of the final grade was more student-friendly. They found it reasonable that the final score for the course consisted of grades for several components, and that there was a possibility of diversification and gradual accumulation of points. The questions on the weekly quizzes helped consolidate and revise the material covered in the lectures, whereas the open questions were considered too basic, with answers easy to find on the Internet, but at the same time a good way to earn points. Besides, the quizzes made students read the course material every week. Otherwise, they said, they would have started studying only right before the exam.
However, not all students appreciated the innovation. The following student response is quoted in full[1] to illustrate the difficulties in making students work continuously during the semester and internationalising the curriculum:
‘This subject is hard and to make it harder on us it wasn’t good approach. We are master students I am not bachelor anymore to be tested in this way or weekly reports or even the project was useless for me. From every class I understood 50% and less; how will I be able to put this in the project to use macroeconomic? This way of teaching needs to improve make it useful for once to the students let them love any economic subject’.
Thus, the student felt that the innovations made the course harder to pass rather than helpful for their learning and did not see weekly tests as appropriate to Master’s-level studies. Both sentiments showed their disapproval of the innovation.
[1] Similar to Felten and Lambert (2020: 9), we corrected capitalisation in several places to enhance the clarity of the text.
Even though this innovation focused on the needs of international students, the changes in assessment methods presented in this chapter can be successfully applied to situations in which students face similar challenges. Continuous assessment could be beneficial to students regardless of their country of origin or the class composition. Changes related to internationalisation, and particularly the sharing of experiences between students, are more amenable to a classroom where at least some international students are present. However, careful preparation by the instructors – for example, by bringing in case studies from other countries – could successfully make up for the absence of international students in the class. More widespread use of peer learning methods in the classroom utilising examples from several countries may also be a useful approach. While peer learning is more practical in smaller classes, it can be applied successfully to classes of this size, though with some trade-offs, most likely in the number or depth of concepts covered during classroom time. Nonetheless, it could be a beneficial extension of or alternative to the current innovation.
The present analysis has shown that continuous preparation based on internationalisation leads to higher-level learning and has uncovered a number of valuable insights. First, students prefer live delivery to pre-recorded lectures. Second, more varied completion requirements may make it less clear which knowledge and skills will be assessed when students’ previous learning experiences do not include such assessment methods. Third, the semester-long project, which carried most of the burden of internationalisation in this innovation, needs changing, because extending what students learnt in class and from the textbook to their home countries in a written paper was not motivating enough for many students. To make the assignment more effective, more interaction with and among students is necessary, including a shift from a lecture-only course toward seminars. Peer learning could be an avenue to that end, which would also allow for analysing the impact of engagement with peers – rather than engagement with the material – on student learning.
Finally, this analysis also revealed that the kind of innovations implemented in Macroeconomics 2 were not welcomed by all the students equally despite the fact that the changes were inspired by earlier students’ wishes and complaints. In the future, it would be important to co-opt the kind of students who proved apprehensive toward the new methods of assessment and the different types of preparations required. It is necessary to influence student attitudes so that they focus on learning rather than just on passing the course. While putting an extra effort into explaining the assignment requirements could enhance student performance, for their learning to improve, students should embrace the idea that continuous learning is effective.
- Brown, S. (2004) ‘Assessment for learning’, Learning and Teaching in Higher Education 1:1, pp. 81-89.
- Cosh, J. (2000) ‘Supporting the learning of international students in large group teaching’, in G. Wisker (ed.) Good practice working with international students, Birmingham, UK: SEDA, pp. 29-34.
- Felten, P. and Lambert, L.M. (2020) Relationship-rich education: How human connections drive success in college, Baltimore, MD: Johns Hopkins University Press.
- Hussein, I. and Schiffelbein, K. (2020) ‘University professors’ perceptions of international student needs’, Journal of Applied Learning and Teaching 3:1, pp. 65-70.
- Knight, J. (2003) ‘Updated internationalization definition’, International Higher Education 33, pp. 2-3.
- Mankiw, N.G. (2019) Macroeconomics, 10th edition, New York: Macmillan.
- Wiliam, D. (2000) ‘Formative assessment in mathematics part 3: The learner’s role’, Equals: Mathematics and Special Educational Needs 6:1, pp. 19-22.
- Wiliam, D. and Thompson, M. (2007) ‘Integrating assessment with instruction: What will it take to make it work?’, in C.A. Dwyer (ed.) The future of assessment: Shaping teaching and learning, Mahwah, NJ: Erlbaum, pp. 53-82.