Of all organized educational activities, testing procedures are perhaps the most consequential for learners. Assessing pupils’ performance both verifies the fulfillment of educational objectives and affects learners’ grades and subsequent access to higher education. So-called high-stakes tests, that is, tests with important consequences for test-takers, include standardized national tests in core school subjects.
Aside from playing a central role in upholding educational standards across schools and regions, national tests strongly influence final grades in particular subjects. Objections to standardized testing in compulsory education include, among other things, that knowledge and skills are difficult to measure accurately in a single test, that students experience test anxiety that can affect performance negatively, and that the consequences of failing a particular test are high. In Sweden, the Swedish Schools Inspectorate (e.g. 2011; 2013) has criticized the core subject national tests and their assessment, as inspections have revealed that test instructions leave too much room for interpretation and that first and second assessments of the same tests sometimes differ substantially. In standardized testing of oral proficiency in a second/foreign language, the setup and the topics assigned for a test naturally play a central role in learner performance, as second language scholars have pointed out. Factors such as interlocutor proficiency, topic choice, co-participant and examiner conduct, and learners’ task understandings all influence interaction in second language speaking tests. Furthermore, in contrast to oral proficiency interviews (OPIs) with one candidate and one examiner, performance in a dyadic test is a joint product of the two test-takers and, possibly, a teacher present, and the individual assessment of such a collaborative and interactive product presents additional dilemmas.
In the Testing Talk project, the overarching aim was to provide a comprehensive description of interaction in paired second language speaking tests, and to relate linguistic and interactional patterns to subsequent test assessments and grades. Our empirical base was the National English Speaking Test (NEST) in year 9 of compulsory school. The project aimed for a detailed qualitative description of the relationship between patterns of test interaction and subsequent assessment through comparative analysis of test interaction, assessment criteria, and grades. Using Conversation Analysis (CA) (Sacks, Schegloff & Jefferson, 1974; Sidnell & Stivers, 2013), the various sub-studies treat test accomplishment as a joint and inherently social process with certain institutional particulars. As such, the project aimed to contribute to research on the design and assessment of oral proficiency tests and tasks, and to the growing interplay between second language acquisition (SLA) research and CA. Our project design also encompasses teacher interviews, a national web-based survey, and the development of a workshop model for language teachers’ in-service training.