Citation
Cheng, B. H., Ructtinger, L., Fujii, R., & Mislevy, R. (2010). Assessing Systems Thinking and Complexity in Science (Large-Scale Assessment Technical Report 7). Menlo Park, CA: SRI International.
Introduction
Many scientific phenomena can be conceptualized as a system of interdependent processes and parts. The motion of the planets in our solar system, the balance of predator and prey populations in ecosystems, and the collective behavior of ant colonies are all systems that are well described and taught as a group of interacting components as well as macro phenomena that are the collective outcome of those interactions. The language of systems gives scientists and science students a means to analyze and communicate about phenomena; it is a useful tool in the scientific enterprise because it enables recognizing how multiple factors interact and predicting patterns of change over time. Because understanding particular phenomena as systems, and the general characteristics of systems more broadly, are important competencies that students at all grade levels are expected to develop, it is important to be able to assess these proficiencies.
Three challenges, however, face assessment designers in this domain. First is the task of recognizing and carefully tracking task demands. While this is a challenge in all assessment design, the topic area (systems thinking and complexity, especially at the higher grade levels) presents a particularly difficult set of ideas for both students and designers, and the background knowledge a designer needs to construct a task in this area is substantial and spread across many domains. A second challenge is understanding the complex relationships between systems thinking and the content or context in which a task is situated; managing knowledge that is required and necessary, but not focal, can prove difficult. Finally, designers must identify age- or grade-appropriate competencies in this domain.
Addressing these three challenges, this technical report provides support for designing tasks that assess systems thinking, in the form of a design pattern. Design patterns are used in architecture and software engineering to characterize recurring problems and approaches for solving them, such as Workspace Enclosure for house plans (Alexander, Ishikawa, & Silverstein, 1977) and Interpreter for object-oriented programming (Gamma, Helm, Johnson, & Vlissides, 1994). Design patterns for assessment likewise help domain experts and assessment specialists “fill in the slots” of an assessment argument built around recurring themes in learning (Mislevy, Steinberg, & Almond, 2003). The particular form of design pattern presented here was developed in the “An Application of Evidence-Centered Design to State Large-Scale Assessments” project funded by the National Science Foundation’s DR K-12 initiative.