Types of language assessment instruments
Designing and writing a quiz or test requires that we consider just what it is we want to measure and why. One way to describe language assessment instruments is according to their function or purpose – that is, for administrative, instructional, or research purposes (Jacobs, Zingraf, Wormuth, Hartfiel, & Hughey, 1981). In fact, the same test could conceivably be used for twelve different purposes: five administrative purposes (assessment, placement, exemption, certification, promotion), four instructional purposes (diagnosis, evidence of progress, feedback to the respondent, evaluation of teaching or curriculum), and three research purposes (evaluation, experimentation, knowledge about language learning and language use). The average test, however, will probably be used for only one or perhaps two purposes by a given individual. Sometimes teachers, administrators, and researchers will use the same test for their respective purposes. More information on the purposes of assessment can be found in the Why Assess section.
Criteria for describing assessments
When describing assessments, a distinction is often made between proficiency tests, which are intended for administrative purposes, and achievement tests, which are intended to assess instructional results.
Administrative, instructional, and research purposes are represented in the graphic below.
Norm-referenced and Criterion-referenced
A distinction is also made between norm-referenced and criterion-referenced assessment. A test can be used, for example, to compare a student with other students, whether locally (e.g., within a class), regionally, or nationally, as with the SAT or ACT. Classroom, regional, or national norms may be established to interpret just how one student compares with another. Sometimes teachers speak of grading on a “curve,” which simply means that they evaluate a student’s performance in comparison with that of other students in the same class or in other classes.
A test can also be used to determine whether a respondent has met certain instructional objectives or criteria; for this reason, such a test is referred to as “criterion-referenced” assessment.
Communicative and Strategic Competence
The now-seminal effort by Canale and Swain (1980; Canale, 1983) to define communicative competence provided another set of criteria for describing tests. Tests are seen as tapping one or more of the four components that make up communicative competence:
Grammatical competence was seen to encompass “knowledge of lexical items and of rules of morphology, syntax, sentence-grammar semantics, and phonology” (Canale and Swain, 1980, p. 29).
Discourse competence was defined as the ability to connect sentences in stretches of discourse and to form a meaningful whole out of a series of utterances.
Sociolinguistic competence was defined as involving knowledge of the sociocultural rules of language and of discourse.
Strategic competence was seen to refer to “the verbal and nonverbal communication strategies that may be called into action to compensate for breakdowns in communication due to performance variables or due to insufficient competence” (Canale and Swain, 1980, p. 30).
While Canale and Swain's strategic competence puts the emphasis on "compensatory" strategies – that is, strategies used to compensate for a lack in some language area – the term has come to take on a broader meaning. Bachman (1990) provided a more elaborate theoretical model of strategic competence by separating it into three components. Later, Bachman and Palmer (1996) refined the Bachman (1990) categories for strategic competence to include four components:
Assessment – Respondents (in our case, language test takers) assess which communicative goals are achievable and what linguistic resources are needed;
Goal-setting – Respondents identify the specific tasks to be performed;
Planning – Respondents retrieve the relevant items from their language knowledge and plan their use;
Execution – Respondents implement the plan.
Hence, this latest framework for strategic competence is broad and includes test-taking strategies within it.
Sociocultural and sociolinguistic competence
Over the last several decades, the Canale and Swain model has undergone other modifications as well. For example, sociolinguistic competence is now seen as encompassing two relatively distinct components:
Sociocultural component – assesses the appropriateness of the strategies selected for language performance in a given context, taking into account (1) the culture involved, (2) the age and sex of the speakers, (3) their social class and occupations, and (4) their roles and status in the interaction.
For example, whereas in some cultures (such as the U.S.) it may be appropriate for speakers to suggest to a boss a time by which they will get a report in after having missed a deadline, in other cultures (in Israel, for example) such a repair strategy might be considered out of place, in that it would most likely be the boss who determines what happens next.
The scale for sociocultural ability also rates what is said in terms of the amount of information required in the given situation, and the relevance and clarity of the information provided.
Sociolinguistic component – assesses the use of linguistic forms in language performance.
For example, when a student bumps into a professor, spilling her coffee on the professor’s dress, “Sorry!” would probably constitute an inadequate apology. This category assesses the speakers’ control over the actual language forms used to realize the speech function, in this case referred to as a speech act (such as “sorry,” “excuse me,” “very sorry,” or “really sorry”), as well as their control over the register or formality of the utterance, from most intimate to most formal language.