Welcome to the testmaster site for
Student Assessment Research and discussion


Higher Level Thinking Opportunities

The Current Situation.

The generally accepted format for an MCQ test item is a closed question followed by four choices: one correct answer and three 'distractors'. One distractor is close to correct and two are completely incorrect. Marking records only a correct or incorrect response.
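The standard item just described can be sketched as a simple data structure. This is a minimal illustration with invented names, not the design of any real test engine: one closed question, four options (one correct, one near-miss, two clearly wrong), marked simply right or wrong.

```python
from dataclasses import dataclass

@dataclass
class MCQItem:
    stem: str            # the closed question
    options: list[str]   # four choices: one correct, three distractors
    correct_index: int   # index of the single correct answer

    def mark(self, response_index: int) -> int:
        """Binary marking: 1 for the correct choice, 0 otherwise."""
        return 1 if response_index == self.correct_index else 0

# Illustrative item (content invented for the example):
item = MCQItem(
    stem="Which gas do plants absorb during photosynthesis?",
    options=["Carbon dioxide", "Carbon monoxide", "Nitrogen", "Helium"],
    correct_index=0,  # "Carbon monoxide" is the near-miss distractor
)

print(item.mark(0))  # correct response scores 1
print(item.mark(1))  # any distractor scores 0
```

Note how little the structure can express: the near-miss distractor carries no partial credit, and nothing about the candidate's reasoning survives the marking.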

The MCQ test item can be criticised as often being a test of simple recall, or perhaps of low-level 'understanding'. Many teachers still think of higher-order thinking skills in terms of Bloom's 'Taxonomy', although its utility has been discredited (Wood, R. 1993, p. 48). Wood suggests that the failure to substantiate taxonomies of skills "may not matter providing a penetrating analysis of what students ought to be able to do is carried out".

The notion of a 'penetrating analysis of what students ought to be able to do' interests me. I have little doubt that such an analysis can be done in a way that would eventually produce a definitive guide to the skills required in particular situations, and across a defined 'range' of contexts.

The concept of 'what a student ought to be able to do' covers three broad categories:

  • Cognitive Skills (thinking)
  • Affective Skills (attitude and motivation)
  • Motor Skills (physical activity)

It is my belief (based upon many years of teaching) that specifying all the requirements covering these three essential elements is an enormous job, even for a trivial task. For instance, try doing this for something as simple as taking a cash payment and giving change.

The implied suggestion that groups of people, presumably both skilled and unskilled in conducting such an analysis and writing such statements, can sit down and do this work without very considerable ability, training and constant guidance is complete nonsense. To suggest that someone will be prepared to pay them to do it is even more fanciful.

The belief that written statements, however well done, can specify skills, context and level of achievement to all teachers and students without the need for constant interpretation is touching in its naivety.

The current examples of such 'analysis' involve large numbers of outcome and range statements, the completion of which is supposed to indicate competence. Documenting the completion of these statements is a huge task: one hairdressing course in the UK, with 25 students, requires the lecturer to complete 250,000 tick boxes (10,000 per student) and make innumerable 'written statements' to verify competence.

The number of trees required to document this, and the forests needed to produce the forms teachers must complete in order to generate 'evidence', is crazy! And consider what happens when someone changes something or discovers something new!

The advocacy of this 'analytical approach' implies that we can and should produce ISO 9000-style statements for each skill and knowledge item.

The philosophical argument behind this may be familiar to you, so I will not elaborate; but think of the danger of believing that you know, and have specified, all the right answers! Who will dare not to be 'right'?

The production of a 'penetrating analysis' does not, of itself, guarantee to identify 'higher-order thinking skills'. There may be none. And what is meant by 'higher-order thinking skills' anyway? I feel Wood's criticism of Bloom's 'arbitrary' levels of thinking is unfair. Arbitrary or not, those levels provide a tool of more practical use to the average teacher than, for example, 16 ill-defined performance criteria and 10 vague range statements. Two or three extra range statements are often used to separate 'level 2' from 'level 3' performance - surely an equally 'arbitrary' device?

There is, therefore, little possibility of expanding the scope of the paper-based MCQ item because of:

  • the nature of the paper-based test;
  • the right/wrong nature of the marking;
  • difficulties in writing suitable question items;
  • difficulties in writing suitable distractors;
  • the need to match the candidate's reading comprehension level.

  

Perhaps…

Most reasonable, skilled people, 'seeing' the performance of a skill by a student under a particular constraint, and within a given context, would be able to grade the performance. A skilled teacher, practitioner or assessor does not need every aspect of the assessment documented in order to judge performance! Only the unskilled observer may claim to need this - in which case, why are they assessing?

Instead of producing tons of paper, why not use the richer media of 'pictures' or virtual reality to define the context or range, and judge the performance within that context? A computer-based video of a scenario, a picture, an animated story line, etc. could reduce the complexity of paper-based statements by an order of magnitude or three. Pictures, video, animation, virtual reality or even speech provide a 'richness' to the assessment situation that could never be matched on paper, and one which most people can assimilate and relate to with greater ease than text.

Although frowned upon in a paper-based test, items that are linked to form a chain, perhaps branching or dependent upon each other, may offer a way of testing more than the recall of a 'correct' answer. It may allow valid testing of the candidate's ability to 'reason', to weigh the advantages and disadvantages of each decision, and to use informed judgement to form a response that contributes to a known, pre-defined 'aim'. I maintain that this would form the basis of an assessment of higher-order thinking skills, and the format I am suggesting may seem familiar. Yes, you have it - a type of interactive computer game!
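The branching chain of items can be sketched as a small graph. This is an illustrative sketch only - the scenario, node names and point values are all invented - showing how each choice leads to a follow-on item and how the path taken, rather than a single right/wrong response, is scored against the pre-defined 'aim'.

```python
# Each node holds a prompt and choices; each choice maps to the next
# node (None = end of chain) and the points it contributes to the aim.
SCENARIO = {
    "start": {
        "prompt": "A customer pays £10 for a £7.35 item. What first?",
        "choices": {
            "count the change aloud": ("give_change", 2),
            "open the till and guess": ("give_change", 0),
        },
    },
    "give_change": {
        "prompt": "How much change do you hand over?",
        "choices": {
            "£2.65": (None, 3),
            "£3.65": (None, 0),
        },
    },
}

def run_chain(scenario, responses):
    """Walk the branching items, accumulating a graded score."""
    node, score = "start", 0
    for response in responses:
        next_node, points = scenario[node]["choices"][response]
        score += points
        if next_node is None:
            break
        node = next_node
    return score

print(run_chain(SCENARIO, ["count the change aloud", "£2.65"]))   # 5
print(run_chain(SCENARIO, ["open the till and guess", "£3.65"]))  # 0
```

The point of the structure is that intermediate decisions carry marks, so two candidates who reach the same final answer by different routes can legitimately receive different grades.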

NOTE:
For an example of software that goes some way towards this concept, see the Quandary product from Halfbakedsoftware.com, who appear to be moving down this route.
 

Educational theorists of the behavioural school insist that the only way to test knowledge and skill is to have it demonstrated: if it is not demonstrated, it does not exist. Trying to define the nature of every skill and piece of knowledge is madness, potentially very dangerous, and an obvious failure. The bureaucracy is HORRENDOUS!

So, do not document the minutiae; build them into a graphically based 'scenario'. Instead of questions, write a story line that requires the candidate to demonstrate simple knowledge with simple responses. Require them to prove higher levels of competence by presenting situations that demand the application of many knowledge items and the ability to prioritise decisions and responses, or that invite previously untaught answers by requiring a synthesis of many different factors.
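Grading such a scenario can be sketched in a few lines. This is a hedged illustration with wholly invented content (a generic emergency-response scenario, not any real syllabus): competence is judged by how many knowledge items the candidate applies and whether their actions are correctly prioritised, rather than by ticking each item off on a form.

```python
# Knowledge items and the correct priority order (invented example).
REQUIRED_ITEMS = {"raise alarm", "isolate power", "evacuate area"}
CORRECT_PRIORITY = ["raise alarm", "isolate power", "evacuate area"]

def grade(actions):
    """Return a coarse grade from the candidate's ordered actions."""
    applied = REQUIRED_ITEMS & set(actions)
    in_order = [a for a in actions if a in REQUIRED_ITEMS]
    if applied == REQUIRED_ITEMS and in_order == CORRECT_PRIORITY:
        return "competent: all items applied, correctly prioritised"
    if applied == REQUIRED_ITEMS:
        return "partial: all items applied, priorities wrong"
    return "not yet competent: knowledge items missing"

print(grade(["raise alarm", "isolate power", "evacuate area"]))
print(grade(["evacuate area", "raise alarm", "isolate power"]))
print(grade(["raise alarm"]))
```

One short function here replaces a page of outcome and range statements: the scenario supplies the context, and the grading captures both coverage and judgement.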

Potentially at least, there is the possibility of developing multi-user assessment situations, in which a candidate may need to demonstrate higher-order thinking skills by using judgement and skill to overcome an 'opponent', or to cope with that opponent's errors and the results of their decisions.

I believe that these ideas have real potential to deliver the truly effective, valid, reliable, bias-free and bureaucracy-free assessment system that we need for the next millennium.
