The recently updated ILTA Bibliography of PhDs in Language Testing lists a PhD dissertation on Second Life tasks for aviation English tests. Unfortunately, I could not find the document online. Let us hope Dr. Park will publish it later, whether on the university website or as a book.
Park, M. (2015). Development and validation of virtual interactive tasks for an aviation English assessment (Unpublished doctoral dissertation). Iowa State University, Ames.
In response to growing concerns over aviation safety stemming from the limited command of aviation English among non-native English-speaking practitioners, this study aimed to demonstrate the development process of aviation English test tasks in a virtual environment and to investigate the validity of a task-based aviation English performance assessment in the context of Korean Army Aviation. The development and validation of the virtual interactive tasks for aviation English assessment were based on four inferences – domain description, evaluation, generalization, and explanation – and their underlying assumptions in an interpretive argument, developed with reference to argument-based validity (Chapelle et al., 2008; Kane, 2006), evidence-centered design (Mislevy et al., 2002, 2003, 2006), target language use (TLU) situation analysis for test development in language for specific purposes (Douglas, 2001), and task-based language assessment (Long & Norris, 2001; Norris et al., 1998). Adopting a mixed-methods triangulation design, the study collected qualitative and quantitative evidence to support the inferences and strengthen the validity argument.
Based on a task-based needs analysis with 81 military air traffic controllers covering the required aviation English knowledge, skills, processes, target tasks, and task procedures in the TLU situations, virtual interactive aviation English tasks were developed in Second Life. A total of 20 controllers completed the prototype virtual interactive tasks for aviation English assessment, and their output was then rated by two rater groups, one performing task-centered rating and the other language-centered rating. Data included 20 task-based performance assessment sample audio files; 19 follow-up test-taker interviews and online survey questionnaires; three language-centered raters’ post-rating questionnaire responses; two task-centered raters’ post-rating interview transcripts; military aviation English training manuals and references; and coded transcripts of 12 test takers’ stimulated recalls and their actual task performance. The validity evidence collected in the various phases of test development and validation serves as backing for the four inferences in the interpretive argument and provides invaluable resources for revising the prototype virtual interactive tasks for aviation English assessment.