Running head: INSTRUCTIONAL CONCEPTS
Platform Plank IV: Purpose of the Curriculum: Curricular Mode
Theme V: Instructional Concepts and Methods
Jeffrey B. Romanczuk
Sevier County School System
Abstract
This plank addresses how teacher accountability and student achievement should be gauged, and which approaches to standards-based education and instructional methods serve students best.
Teacher Accountability
There needs to be a way to assess how well teachers are teaching and how well students are learning. Intuitively, it makes sense that the two assessments be linked. But what is the right way to “do” student assessment, and how closely should it be linked to teacher accountability? Tyack (1974) notes that the two were linked in early American education only in that the standardization inherent in bureaucracy drove both. Ornstein (as cited in Ornstein, Behar-Horenstein, & Pajak, 2003) acknowledges that there are too many variables in standards-based education to credit teacher excellence (or blame poor teaching) for how students do. The point was made during our class discussion of Ornstein (April 1, 2004 class notes) that the way teacher accountability is handled in most states does not make teachers or administrators work any harder (or smarter). In that same discussion, an opinion was offered that Tennessee’s system is overburdened and tends toward the superficial as a result. The state’s teacher assessment does have some good future-growth and long-range planning components, but its creators assumed each supervisor would track no more than six teachers. Because hardly any administrator has that few teachers to work with, the result is a lot of pencil whipping of tracking forms that deserve much more care, thought, planning, and attention. On the bright side, the long-range career planning components have at least added an “organism” angle to what was previously pure “machine” (Morgan, 1997; Curriculum class notes, April 1, 2004).
The way teacher accountability and student assessment are done now tends toward a summative view for students and a snapshot view for teachers. The linkages between the two ignore much more than they account for. In truth, only the end-of-course (Gateway) testing is summative; the annual Tennessee Comprehensive Assessment Program (TCAP) testing may or may not be summative, depending on how closely the curriculum framework matches the test bank and how closely teachers follow the script day to day. For its part, teacher accountability emphasizes this kind of lesson planning more than student learning, yet it tends to base a year of tenure evaluation on three half-hour supervisor observations more than on any other kind of assessment.
Ornstein’s (as cited in Ornstein, Behar-Horenstein, & Pajak, 2003) twelve requirements for improving education and assessing teacher performance mostly call for local control of both processes. Ironically enough, Tyack (1974) notes that this local ownership of the school system is what America had in its earliest country-schoolhouse days but has since gotten away from, trading its benefits for bureaucratic efficiency.
Our April 1 class notes summarize Eisner’s article as saying that standards are not the answer; leadership is. Of course, school leaders would be more likely to say that more time is the answer. However, Dr. Norris noted that the mission of the school is the most often overlooked of standards. Portfolio assessment for both teachers and students could go a long way toward satisfying this local link. The national certification tests for teachers require something much closer to a portfolio than what Tennessee requires of teachers for state certification, or even for tenure. On its website, the National Board for Professional Teaching Standards, http://www.nbpts.org/, calls this a “standards-based performance assessment.” The Tennessee Value-Added Assessment System (TVAAS) attempts to link teacher effectiveness with student progress, but as mentioned, Ornstein (as cited in Ornstein, Behar-Horenstein, & Pajak, 2003) admits there are too many variables in this equation. TVAAS, however, is required in Tennessee, whereas the national certification is optional; moreover, Tennessee teachers have no incentive to put in the time and expense involved in earning the national credential.
Student Achievement
Even in special education in Tennessee and several other states, the portfolio is done poorly, but it is still a better indicator of ability and progress than the normative standardized tests it replaces for the lowest functioning students. The TCAP Alternative portfolios are heavily teacher-driven and their subject matter links can be contrived; still, they manage to be a good summative assessment of the student or, at worst, a good series of formative assessments. Portfolio assessment aligns best with the brain metaphor: everything is done with the end in mind, but the communication relays are more fluid and less hierarchical than in the organism model (April 1 class notes; Morgan, 1997).
What is not done in student assessment is a needs analysis prior to instruction to gauge where students’ (or even whole classes’) knowledge stands, so that the instruction they need can be identified. Ornstein and Hunkins (2004) suggest good ways of getting teachers, parents, students, administrators, and the community involved in the needs assessment, analyzing the data to identify curricular gaps, and prioritizing ways of filling those gaps. Later in their discussion of curriculum development, Ornstein and Hunkins admit that student “interest” is important, especially to “learner-centered design” (p. 219).
In addition to interest, the “weight” given to each aspect of curricular design and to each subject area (Ornstein & Hunkins, 2004, pp. 245-247) has to be balanced once the needs are identified. Ornstein and Hunkins’ table on p. 266 makes clear that curricular design is where the centering (subject, learner, or problem), the emphasis, and the underlying educational philosophy have the biggest impact. The essentialists and perennialists tend to be subject-centered, while the progressives and reconstructionists are either learner- or problem-centered.
While I tend toward subject-centered curricular design, the curriculum development approach I favor is “technical-scientific” (Ornstein & Hunkins, 2004, p. 215). The state’s curriculum framework, even the less jargon-intensive bluebook version (http://www.state.tn.us/education/ci/cistandards2001/cibluebklamath.pdf), attempts to identify and list a distinct progression by subject and grade level. What I am getting at, though, is an even more deconstructed curriculum, with the distinct parts of each task listed and student progress tracked against them. Maybe it is the special education influence. In 2001, I completed Dr. Hannum’s summer institute to qualify as a teacher of the severely disabled. That course relies heavily on task analysis description sheets (TADS; Hannum, 2001; Snell & Brown, 2000). Even Ornstein and Hunkins (pp. 204-205) offer a version of a TADS that gets at the higher levels of Bloom’s Taxonomy. Of course, using TADS is intensive in terms of mastery learning, one-to-one tutoring, and reinforcement, so it should be no surprise that TADS can accommodate the span of Bloom’s Taxonomy.
Standards-Based Education
In their article on fitting the promise of standards-based education into the school day, Schmoker and Marzano (as cited in Ornstein, Behar-Horenstein, & Pajak, 2003) favor curriculum guides over standards. The coauthors credit team teaching, carried out in connection with each school system’s sense of “what the students needed to learn” (p. 263), for the “promise” of standards-based education evident in the five school systems they describe. Those systems succeeded despite state standards, not because of them.
Among his twelve ways to improve standards-based education and teacher assessment (Table 22.1, p. 259 of Ornstein, Behar-Horenstein, & Pajak, 2003), Ornstein pushes for local, state, and national alignment of the curriculum and for alignment of student assessment with teacher performance assessment. It is not clear from this table whether he prefers narrower standards over broader ones so much as better matches among the standards emphasized at the local, state, and national levels. Ideally, the standards and the curriculum guides could be used interchangeably.
States tend to overpack standards while paying lip service to teaching less but teaching it more fully (Schmoker & Marzano, as cited in Ornstein, Behar-Horenstein, & Pajak, 2003). The last of Schmoker and Marzano’s three steps for reversing this trend cautions against adding more topics than can be taught and assessed reasonably and effectively (p. 266). But it is tough to know ahead of time how much is too much, and it is even tougher to get rid of standards once they are put in print.
Bravmann (2004) adds that the emphasis on “adequate yearly progress” (inherently a summative evaluation) is the main problem with the “No Child Left Behind” Act. Not only are the AYP goals set so progressively high that every school will “fall short” (p. 56), but the law also ignores diagnostic and placement assessments and the kind that occurs every day in schools: formative assessments. Bravmann’s main point is that schools and teachers need to use the information gleaned from each kind of assessment (formative, summative, diagnostic, and placement) to adjust instruction.
Instructional Methods
Bloom (as cited in Ornstein, Behar-Horenstein, & Pajak, 2003) observes that if summative assessment is going to be stressed, then mastery learning, or better yet one-to-one tutoring, is the best matching instructional method. In truth, “whatever works” student to student and class to class is the best kind of instruction to use. This kind of methodological versatility best matches the “brain” metaphor (Morgan, 1997), because the teacher has to learn from individual students and classes what works best and what does not work at all, and adjust instruction accordingly. But this does not work when the material to be covered is overpacked. Schmoker and Marzano’s advice for reversing that trend is easier said than done.
In his own search for appropriate methods of instruction, Bloom (as cited in Ornstein, Behar-Horenstein, & Pajak, 2003, p. 214) found that summative achievement levels were higher for one-to-one tutoring (90%) than for one-to-thirty mastery learning (70%) or conventional instruction (20%). Bloom also found that, even more than the materials used, the peer group, the home and school environment, and teacher interaction influence whether students attend to, participate in, and understand the lesson (pp. 219-223).
Summary
No matter what pedagogical obfuscation is added with educational jargon, the best instructional method always comes down to “whatever works.” The best student assessment is always formative (see the task analysis description sheet discussion above). The best way to measure student achievement (and teacher achievement, for that matter) is by portfolio. Better still, teacher accountability would be evident in the contrast between an early needs analysis and a span of formative assessments for each student served over the school year; in short, a portfolio.
References
Bravmann, S. L. (2004, March 17). Assessment’s ‘fab four’: They work together, not solo. Education Week, p. 56.
Hannum, M. (2001). Effective preparation for teaching in a comprehensive classroom (certification internship course binder). Knoxville: University of Tennessee.
Morgan, G. (1997). Images of organization. Thousand Oaks: Sage.
Ornstein, A. C., Behar-Horenstein, L. S., & Pajak, E. F. (2003). Contemporary issues in curriculum (3rd ed.). Boston: Allyn and Bacon.
Ornstein, A. C., & Hunkins, F. (2004). Curriculum foundations: Principles and theory (4th ed.). Boston: Allyn and Bacon.
Snell, M. E., & Brown, F. (2000). Instruction of students with severe disabilities (5th ed.). Upper Saddle River: Merrill.
Tyack, D. B. (1974). The one best system: A history of American urban education. Cambridge: Harvard University Press.