• Academic speaking: does the construct exist, and if so, how do we test it?

      Inoue, Chihiro; Nakatsuhara, Fumiyo; Lam, Daniel M. K.; Taylor, Lynda; University of Bedfordshire (2018-03-14)
    • Contriving authentic interaction: task implementation and engagement in school-based speaking assessment in Hong Kong

      Lam, Daniel M. K.; Yu, Guoxing; Jin, Yan; University of Bedfordshire; University of Bristol; Shanghai Jiaotong University (Palgrave Macmillan, 2016-01-01)
This chapter examines the validity of the Group Interaction task in a school-based speaking assessment in Hong Kong from the perspectives of task implementation and authenticity of engagement. The new format is intended to offer a more valid assessment than the external examination by eliciting ‘authentic oral language use’ (HKEAA, 2009, p.7) in ‘low-stress conditions’ (p.3), and emphasizes the importance of flexibility and sensitivity to students’ needs in its implementation. This policy has since been translated into diverse assessment practices, with considerable variation in the amount of preparation time given to students. The present study draws on three types of data, namely 1) students’ discourse in the assessed interactions, 2) stimulated recall with students and teachers, and 3) a mock assessment in which the group interaction task, the preparation time, and the post-interview were all video-recorded. Results show that while the test discourse exhibits some features that ostensibly suggest authentic interaction, a closer examination of students’ pre-task planning activities reveals the contrived and pre-scripted nature of the interaction. Implications for the assessment of students’ interactional competence and recommendations for task implementation are discussed.
    • Developing tools for learning oriented assessment of interactional competence: bridging theory and practice

      May, Lyn; Nakatsuhara, Fumiyo; Lam, Daniel M. K.; Galaczi, Evelina D. (SAGE Publications, 2019-10-01)
In this paper we report on a project in which we developed tools to support the classroom assessment of learners’ interactional competence (IC) and provided learning-oriented feedback in the context of preparation for a high-stakes face-to-face speaking test. Six trained examiners provided stimulated verbal reports (n=72) on 12 paired interactions, focusing on interactional features of candidates’ performance. We thematically analyzed the verbal reports to inform a draft checklist and materials, which were then trialled by four experienced teachers. Informed by both data sources, the final product comprised (a) a detailed IC checklist with nine main categories and over 50 sub-categories, with accompanying detailed descriptions of each area and feedback to learners, which teachers can adapt to suit their teaching and testing contexts, and (b) a concise IC checklist with four categories and bite-sized feedback for real-time classroom assessment. IC, a key aspect of face-to-face communication, is under-researched and under-explored in second/foreign language teaching, learning, and assessment contexts. This in-depth treatment of it, therefore, stands to contribute to learning contexts through raising teachers’ and learners’ awareness of micro-level features of the construct, and to assessment contexts through developing a more comprehensive understanding of the construct.
    • Development of empirically driven checklists for learners’ interactional competence

      Nakatsuhara, Fumiyo; May, Lyn; Lam, Daniel M. K.; Galaczi, Evelina D.; University of Bedfordshire; Queensland University of Technology; Cambridge Assessment English (2019-03-27)
    • Don't turn a deaf ear: a case for assessing interactive listening

      Lam, Daniel M. K.; University of Bedfordshire (Oxford University Press, 2021-01-11)
The reciprocal nature of spoken interaction means that participants constantly alternate between speaker and listener roles. However, listener or recipient actions – also known as interactive listening (IL) – are somewhat underrepresented in language tests. In conventional listening tests, they are not directly assessed. In speaking tests, they have often been overshadowed by an emphasis on production features or subsumed under broader constructs such as interactional competence. This paper is an effort to represent the rich IL phenomena that can be found in peer interactive speaking assessments, where the candidate-candidate format and discussion task offer opportunities to elicit and assess IL. Taking a close look at candidate discourse and non-verbal actions through a conversation analytic approach, the paper focuses on three IL features: 1) listenership displays, 2) contingent responses, and 3) collaborative completions, and unpacks their relative strength in evidencing listener understanding. It concludes by making a case for revisiting the role of interactive listening, calling for more explicit inclusion of IL in L2 assessment as well as pedagogy.
    • The effects of extended planning time on candidates’ performance, processes and strategy use in the lecture listening-into-speaking tasks of the TOEFL iBT Test

      Inoue, Chihiro; Lam, Daniel M. K.; Educational Testing Service (Wiley, 2021-06-21)
This study investigated the effects of two different planning time conditions (i.e., operational [20 s] and extended length [90 s]) for the lecture listening-into-speaking tasks of the TOEFL iBT® test for candidates at different proficiency levels. Seventy international students based in universities and language schools in the United Kingdom (35 at a lower level; 35 at a higher level) participated in the study. The effects of different lengths of planning time were examined in terms of (a) the scores given by ETS-certified raters; (b) the quality of the speaking performances characterized by accurately reproduced idea units and the measures of complexity, accuracy, and fluency; and (c) self-reported use of cognitive and metacognitive processes and strategies during listening, planning, and speaking. The results showed neither a statistically significant main effect of the length of planning time nor an interaction between planning time and proficiency on the scores or on the quality of the speaking performance. For several cognitive and metacognitive processes and strategies, significantly more engagement was reported under the extended planning time, which suggests enhanced cognitive validity of the task. However, the increased engagement in planning did not lead to any measurable improvement in the scores. Therefore, in the interest of practicality, the results of this study provide justification for the operational length of planning time for the lecture listening-into-speaking tasks in the speaking section of the TOEFL iBT test.
    • The IELTS Speaking Test: what can we learn from examiner voices?

      Inoue, Chihiro; Khabbazbashi, Nahal; Lam, Daniel M. K.; Nakatsuhara, Fumiyo; University of Bedfordshire (2018-11-25)
    • Interactional competence with and without extended planning time in a group oral assessment

      Lam, Daniel M. K. (Routledge, Taylor & Francis Group, 2019-05-02)
Linking one’s contribution to those of others is a salient feature demonstrating interactional competence in paired/group speaking assessments. Although such responses are meant to be constructed spontaneously during real-time interaction, the amount and nature of pre-task preparation in paired/group speaking assessments may influence how such an ability (or lack thereof) manifests in learners’ interactional performance. Little previous research has examined the effect of planning time on interactional aspects of paired/group speaking task performance. Within the context of school-based assessment in Hong Kong, this paper analyzes the discourse of two group interactions performed by the same four student-candidates under two conditions: (a) with extended planning time (4–5 hours), and (b) without extended planning time (10 minutes), with the aim of exploring any differences in student-candidates’ performance of interactional competence in this assessment task. The analysis provides qualitative discourse evidence that extended planning time may impede the assessment task’s capacity to discriminate between stronger and weaker candidates in their ability to spontaneously produce responses contingent on previous speaker contribution. Implications for the implementation of preparation time for the group interaction task are discussed.
    • Learning oriented feedback in the development and assessment of interactional competence

      Nakatsuhara, Fumiyo; May, Lyn; Lam, Daniel M. K.; Galaczi, Evelina D.; Cambridge Assessment English; University of Bedfordshire; Queensland University of Technology (Cambridge Assessment English, 2018-01-01)
This project developed practical tools to support the classroom assessment of learners’ interactional competence (IC) and provide learning-oriented feedback in the context of Cambridge English: First (now known as B2 First). To develop a checklist, accompanying descriptions and recommendations for teachers to use in providing feedback on learners’ interactional skills, 72 stimulated verbal reports were elicited from six trained examiners who were also experienced teachers. They produced verbal reports on 12 paired interactions with high, mid, and low interactive communication scores. The examiners were asked to comment on features of the interaction that influenced their rating of candidates’ IC and, based on the features of the performance they noted, provide feedback to candidates. The verbal reports were thematically analysed using NVivo 11 to inform a draft checklist and materials, which were then trialled by four experienced teachers in order to further refine these resources. The final product comprises (a) a full IC checklist with nine main categories and over 50 sub-categories which further specify positive and negative aspects, with accompanying detailed descriptions of each area and feedback to learners, and (b) a concise version of the IC checklist with fewer categories and ‘bite-sized’ feedback to learners, to support use by teachers and learners in real time. As such, this research addressed the area of meaningful feedback to learners on IC, which is an essential component of communicative language ability and yet cannot be effectively addressed via digital technologies, and therefore needs substantial teacher involvement. This study, in line with the Cambridge English Learning Oriented Assessment (LOA) approach (e.g. Hamp-Lyons and Green 2014, Jones and Saville 2014, 2016), took a first step towards offering teachers practical tools for feedback on learners’ interactional skills. Additionally, these tools have the potential to be integrated into the learning management system of the Empower course, aligning classroom and standardised assessment.
    • Towards new avenues for the IELTS Speaking Test: insights from examiners’ voices

      Inoue, Chihiro; Khabbazbashi, Nahal; Lam, Daniel M. K.; Nakatsuhara, Fumiyo (IELTS Partners, 2021-02-19)
This study investigated examiners’ views on all aspects of the IELTS Speaking Test, namely, the test tasks, topics, format, interlocutor frame, examiner guidelines, test administration, rating, training and standardisation, and test use. The overall trends in examiners’ views of these aspects of the test were captured by a large-scale online questionnaire, to which a total of 1203 examiners responded. Based on the questionnaire responses, 36 examiners were carefully selected for subsequent interviews to explore the reasons behind their views in depth. The 36 examiners were representative of a number of differing geographical regions and a range of views and experiences in examining and giving examiner training. While the questionnaire responses exhibited generally positive views from examiners on the current IELTS Speaking Test, the interview responses uncovered various issues that the examiners had experienced and suggested potentially beneficial modifications. Many of the issues (e.g. potentially unsuitable topics, rigidity of interlocutor frames) were attributable to the huge candidature of the IELTS Speaking Test, which has vastly expanded since the test’s last revision in 2001, perhaps beyond the initial expectations of the IELTS Partners. This study synthesised the voices of examiners and insights from the relevant literature, and incorporated the guideline checks we submitted to the IELTS Partners. The report concludes with a number of suggestions for potential changes to the current IELTS Speaking Test, so as to enhance its validity and accessibility in today’s ever-globalising world.
    • What counts as ‘responding’? Contingency on previous speaker contribution as a feature of interactional competence

      Lam, Daniel M. K. (Sage, 2018-05-10)
The ability to interact with others has gained recognition as part of the L2 speaking construct in the assessment literature and in high- and low-stakes speaking assessments. This paper first presents a review of the literature on interactional competence (IC) in L2 learning and assessment. It then discusses a particular feature – producing responses contingent on previous speaker contribution – that emerged as a de facto construct feature of IC oriented to by both candidates and examiners within the school-based group speaking assessment in the Hong Kong Diploma of Secondary Education (HKDSE) English Language Examination. Previous studies have, similarly, argued for the importance of ‘responding to’ or linking one’s own talk to previous speakers’ contributions as a way of demonstrating comprehension of co-participants’ talk. However, what counts as such a response has yet to be explored systematically. This paper presents a conversation analytic study of the candidate discourse in the assessed group interactions, identifying three conversational actions through which student-candidates construct contingent responses to co-participants. The thick description of the nature of contingent responses lays the groundwork for further empirical investigation into the relevance of this IC feature and its proficiency implications.