
This is the second article in a series written to develop a way of talking about HCI that would clear up some of the many confusions and allow some theoretical development to take place. Essentially, this work has been a part-time effort for me and, as there is so much to be done, I can't see me getting through a tenth of what I'd like to cover. Still, it's the same with most things in life.

This paper appeared in the journal 'Interacting with Computers' in 1994. I was one of the founders of this journal and served on its first editorial board. It is still going strong despite this and still spreading the word about good user interface design. You can find out more about it at the British Computer Society site.

 

 

A Conceptualisation of Multi-Party Interaction

 

Graham Storrs

Abstract

An ontology is presented for the field of Human-Computer Interaction. This amplifies and extends an earlier version of the conceptualisation (Storrs, 1989). The paper argues that such a conceptualisation is a necessary step in the development of theory in HCI and discusses the need for and the nature of such a theory. It is argued that no adequate theory of HCI exists at present and this paper does not attempt to offer one. The model proposed is based on the idea that an interaction is an exchange of information between participating agents through sets of information channels (interfaces) for the purpose of altering their states. These notions are defined and the paper pays particular attention to the concepts of participant, interaction and purpose, describing several different types of participant and the different rôles they may play as well as various dimensions and elements of interactions. Finally, the strong and weak points of the conceptualisation are discussed in an attempt to assess its value.

Introduction

The Purpose of This Paper

This paper extends the conceptualisation of human-computer interaction presented in Storrs (1989). The main purpose is to clarify the concepts further and to ensure that the model works for multi-party interactions. While most human-computer interaction currently involves only two parties, there is the possibility that this will change in the future as computer-supported co-operative working (CSCW) becomes more common. Even if the technology does not become more widespread, the conceptual tools for dealing with such interactions need to be available.

The present paper does not present a theory of HCI but, in developing this model, the intention is to help us move towards one. The first part of the paper discusses the need for such a theory and the next explains what kind of thing might serve as a theory in this discipline. The core of the paper, however, is the description of the proposed conceptualisation. Finally, an attempt is made to evaluate the model's adequacy and a few concluding remarks are made.

Why Bother With Theory?

Is HCI a "discipline" in its own right? Can there be a theory of HCI that is distinct from theories in Psychology or Computer Science? These are perennial questions and the subject of many panel sessions at HCI conferences each year. Practitioners in the field are fairly optimistic that there could be a theory of HCI but are far from sure about what such a thing might be.

HCI is not wholly lacking in theory. It is simply that the theories that exist are vague and weak. They are not precise or powerful enough to make what we know about users interacting with systems into a coherent whole. If we look at Norman's characterisation of human-computer interaction as an example of one of the better attempts (Norman 1986) we find that it cannot tell us much about why the details of observed interactions are the way they are. Much of what we observe taking place when users work with computers can only loosely be fitted to this model. More recently, Carroll's Task-Artefact model (Carroll and Campbell, 1988; Carroll, 1989) represents a considerable shying away from theory towards a view of HCI as, almost, a craft. The reasons are compelling—the need for "infinite detail" and the difficulty of "emulation"—but artefacts are not theories and they will not serve as theories.

Having a theory of HCI would definitely lend a degree of credibility to the subject which, certainly as a scientific discipline, it is currently lacking. But having a theory does a lot more than just this.

Theories Give Coherence to Data

Theories organise data, they say how particular data relate to others, they reveal the patterns—particularly the patterns of causality—that underlie the data and they say which data are in and which are out of scope. It is common, even in very mature sciences, for there to be not one theory but several theories, each covering some specific part of the domain. Such a state can be thought of as an intermediate stage between no theory and a grand, unifying theory. As we move from the present state of affairs in HCI towards the (perhaps unattainable) state of having a single, unified theory, we might expect small "local" theories to appear which each make sense of some part of the domain and for these "local" theories to be swallowed into theories of larger scope as our understanding improves.

Theories Predict Data

To be of practical use in the engineering disciplines (such as software engineering), theories should predict existing findings and they should predict new data not yet found. If the theory does neither, then, at best, we can say it is not very useful. More probably, we would say it was wholly inadequate or not a theory at all.

This point is of vital importance to the practice of designing user interactions. If we do not have a theory that says that an interaction to suit this task and these users in these physical and social environments under these constraints should have these precise properties, we are unable to say that we know how to design usable systems. At the moment, practitioners work with a mixture of personal experience, rules of thumb (guidelines), psychological theory (occasionally) and personal taste. Sometimes this is successful in some degree. We know that we can design usable systems but we do not know how we do it.

Theories 'Explain' Data

Explanation is itself a vague and slippery notion but it is to do with the subjective impression of understanding. An explanation of something is an utterance which gives one the feeling that one has gained some understanding of that something. Theories can stand as explanations for the data within their scope. The subjective nature of this might seem to imply that it is a frivolous use of a theory but I suspect that it is one of the most valuable uses that theories are ever put to.

Theories Develop

Data merely accrue. Each new user interaction in use represents a mass of data that can be observed and noted. To the very limited extent that this has been done, we have acquired data. We certainly need to acquire more but if we had minutely observed every user who ever used any computer, we would still not find ourselves making any progress without theories with which to interpret our observations. In effect, any theory will do because the process of finding adequate theories is one of criticising and repairing existing, inadequate theories (Popper 1959, Kuhn 1962, Lakatos 1976). The brave pioneers such as Norman and Carroll (cited earlier) are doomed to be criticised by those who follow them but their contribution will be considerable because they are seeding the process that will eventually lead to genuinely useful theory. In the words of Carl Rogers:

"There will be no apology for this "one-sided" presentation. It appears to the writer that the somewhat critical attitude that is usually held towards anything which may be defined as a "school of thought" grows out of a lack of appreciation of the way in which science grows. In a new field of investigation which is being opened up to objective study, the school of thought is a necessary cultural step. " (Rogers, 1951, p8)

 

What Does a Theory Need?

We have discussed why HCI needs a theory. We must now say more clearly what we think a theory is.

A Domain

Firstly, a theory must be about something. That is, it must have a domain. The relationship between a theory and its domain is not a simple one as part of the job of the theory is to say what phenomena it does and does not cover. In developing a theory of HCI, for instance, we may end up with a new definition of the field. For the purposes of this paper, I have taken the domain to be human interactions which involve computers but, as you will see, this is refined considerably and may not end up matching some people's conception of what the domain of a theory of HCI should be.

A Conceptualisation

A conceptualisation expresses a view of what are the essential entities of a theory's domain (its ontology) and how these entities are related. Parsimony is an important principle to observe in developing a conceptualisation because if we are working with a set of concepts that is larger than it needs to be, the theory we build will be more complicated than it needs to be and hence more difficult to use and to develop. Precision is another principle of considerable importance. If we are not as precise as we can be at the level of our conceptualisation, the whole theory will be flawed and imprecise, making development difficult because the interpretation of observation is difficult.

Facts

So far, I have been using the word 'data' in a rather loose way. Let us now say that data are the (as far as possible) uninterpreted results of observation—the number of keystrokes the user made, the width in pixels of a scroll bar, the verbatim transcript of a talking-aloud protocol session, etc. Facts, we will say, represent a relationship between the concepts in the conceptualisation and the data we have observed. Facts, therefore, are not neutral; they are creatures of our theory, a way of talking about our observations in terms of the concepts we have decided are appropriate. We might, for instance, have a conceptualisation of the use of software that involves concepts such as "launching" and "quitting" "applications". Under this conceptualisation, it would be a fact that the user chose to quit the application. It would be data that she or he exerted a certain pressure with a certain finger on the mouse button while the screen bitmap was in a certain state. This is not to say that fingers and bitmaps are not also concepts, just that they are not, in this case, concepts drawn from the conceptualisation we are using to help us understand the user's behaviour. Interpreting our data into facts helps us to interpret the whole interaction. We must, nevertheless, bear in mind that the whole edifice of our theory and its value is based on such mappings between observations and interpretations.

Laws

Laws are the final ingredient in our theory. They express regularities among our facts. If the laws we derive are predictive and quantitative, we have succeeded in building a useful theory. If they cover a large proportion of our domain, we will think it is an adequate theory. If it helps us extend the range of our knowledge, we will say it is a good theory. At the moment, apart from the handful of psychophysiological laws we borrow from psychology and ergonomics, HCI has no theory with laws of this sort.

A Proposal

I do not propose to present a theory of HCI. Instead, I hope to do the much easier and earlier step of developing a conceptualisation of HCI. Looking back at my earlier definition, this involves expressing "a view of what are the essential entities of a theory's domain … and how these entities are related".

The Shape of the Model

Following an earlier paper (Storrs, op. cit.), we say that the most important concepts in the domain are those of participant, interaction, purpose and interface. Our top-level statement of the relationships between these entities is:

Any interaction takes place through one or more interfaces and involves two or more participants who each have one or more purposes for the interaction.
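
This top-level statement can be read as a set of cardinality constraints, and it may help to see it in that form. The following sketch (in TypeScript) is purely illustrative; the type names and fields are mine, not part of the model:

    // Illustrative only: the top-level entities and their cardinalities.
    interface Purpose { description: string }

    interface Interface { id: string }   // a set of channels; refined later in the paper

    interface Participant {
      name: string;
      purposes: Purpose[];               // one or more purposes for the interaction
    }

    interface Interaction {
      participants: Participant[];       // two or more participants
      interfaces: Interface[];           // one or more interfaces
    }

    // The statement above, as a well-formedness check:
    function isWellFormed(i: Interaction): boolean {
      return i.interfaces.length >= 1 &&
             i.participants.length >= 2 &&
             i.participants.every(p => p.purposes.length >= 1);
    }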

Having said, in general terms, what the conceptualisation is, we can now go deeper into its various aspects.

Participants

It is not enough to say that the participants of a human-computer interaction must be humans and computers. For a start, there is a very wide variety in both classes. For another thing, we must be aware of the rôles that each is playing.

Kinds of Participant

There are two important orthogonal classifications of potential participants in a human-computer interaction. Each has two sub-classes, giving us a two-by-two matrix:

                    Natural          Artefactual
    Autonomous      People           perhaps none
    Automatic       perhaps none     Computers

A natural participant is one which was not built by any other agent. An artefactual participant is one which was built by another agent (and an agent is just a superclass of participant which includes all things with the power of agency). All artefactual participants therefore have creators who built them. Clearly, people are natural and computers are artefactual. An autonomous participant is one that can generate its own top-level purposes whereas an automatic participant is one which cannot. Again it is clear that people are autonomous while computers are automatic.

The definitions above seek to avoid metaphysical issues and concentrate on what is true for practical purposes. The assertions about categorisation also leave open the question of whether there can be automatic natural participants (people in a trance?) and autonomous artefactual agents (neural networks? AI programs?). It does not really matter whether there can be or not. The important issue is the correct ascription of the attributes.

We must consider the resources and the capacities of our participants in three main areas.

Interface: We must be able to say what interface resources the participant has and the capacities of each interface.

Information: We must be able to characterise the information resources available to a participant and the participant's capacity for this information.

Processing: We must be able to talk about the computational or cognitive resources of a participant and its processing abilities and capacities.

This is just the beginning of a description of kinds of participant. One might say that the entire fields of psychology and computer science exist to take the descriptions further. However, this would not be useful to us. What we need to be able to do is identify those aspects of a participant that are important in the HCI domain and to bring those and only those into our model.
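
For concreteness, the classification and the three resource areas can be sketched as types. Again, this is only an illustration under my own naming, not a commitment of the model:

    // The two orthogonal classifications of participants.
    type Origin = "natural" | "artefactual";   // built by another agent, or not
    type Agency = "autonomous" | "automatic";  // generates its own top-level purposes, or not

    interface ParticipantKind { origin: Origin; agency: Agency }

    const person: ParticipantKind   = { origin: "natural",     agency: "autonomous" };
    const computer: ParticipantKind = { origin: "artefactual", agency: "automatic" };

    // The three areas in which a participant's resources must be describable.
    interface ParticipantResources {
      interfaceResources: string[];    // interface channels and their capacities
      informationResources: string[];  // information available, and capacity for it
      processingResources: string[];   // computational or cognitive abilities
    }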

Participant Rôles

The participants in an interaction can take on a number of rôles. The rôle of a participant is a definition of a relationship it has to other agents.

Owner

Among agents, there is a directed ownership relation. One agent is the owner of another if that agent may set the other's purposes with respect to the interaction. All kinds of agent may own or be owned by all other kinds. We allow that social groupings (such as companies) may be thought of as artefactual agents. Thus a company may own staff and computers which are participants in interactions for purposes that the company sets. Multiple ownership is possible in principle. (Obviously, an artefact's owner is not necessarily the same agent as its creator.) We will return to the issue of social groupings later as they present a rather special case that deserves more discussion.

Setting the purpose of an owned autonomous agent does not guarantee that the agent will adopt that purpose. The ownership notion merely states that the owner is legitimated (by whatever social process applies) to set the purposes for certain interactions of the agents it owns. If the owned agent is capable of choosing whether to adopt these purposes, then it may choose not to—but may then have to face the consequences of breaking the rules.
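
A minimal sketch of this, assuming a simple adoption test (the names are hypothetical):

    // Ownership legitimates purpose-setting; an autonomous agent may refuse.
    interface Agent {
      autonomous: boolean;
      purposes: string[];
      owners: Agent[];   // multiple ownership is possible in principle
    }

    // Whether an autonomous agent adopts a purpose is an empirical matter;
    // this stub simply accepts everything.
    function adopts(agent: Agent, purpose: string): boolean {
      return true;
    }

    function setPurpose(owner: Agent, owned: Agent, purpose: string): boolean {
      if (!owned.owners.includes(owner)) return false;               // not legitimated
      if (owned.autonomous && !adopts(owned, purpose)) return false; // refused
      owned.purposes.push(purpose);
      return true;
    }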

User

A user is a participant in an interaction who is a person and who has an interface to an artefactual participant (one that is not an organisation) through which the interaction is prosecuted.

This definition will exclude some situations where we might be tempted to call a person a user. For instance, the person buying an airline ticket through a travel agent who does the transaction through a computer terminal is not the user of the computer since their interaction is with the travel agent. The travel agent is the user and acts as an intermediary for the interaction between the buyer and the computer. An interesting (although unusual) case is a so-called Wizard-of-Oz simulation in which the requirements for a computer system are investigated by allowing a potential user to interact with what she or he believes to be a prototype of the proposed system but which is actually being driven, behind the scenes, by an investigator. Here both people are actually interacting with each other. Depending on the sophistication of the simulation, the 'computer' is either a simple channel for their communication or it is a third agent (or even a group of agents) acting as their intermediary. In this latter case, both people are users of the intermediary.

The definition deliberately excludes a situation where we would not normally think of the person involved as a user. This is where the artefactual participant involved is an organisation. However, it is interesting to reflect on the similarities of the two cases, the nature of the interfaces between people and organisations, the need for intermediaries, and so on. We can also regard many organisations as systems for processing information with the capability of contributing to a dialogue and thus 'computers' in the sense to be defined below. Nevertheless, we do not (at this stage) wish to broaden the scope of HCI to this extent.

Computer

"Computer" is another word that has so far been used very loosely in this paper (and is used very loosely elsewhere). One of the reasons is that it is very difficult to say quite what a computer is. Clearly, when we say "human-computer interaction" we do not want "computer" simply to mean the box of electrical and mechanical devices that constitute its physical manifestation. Our interaction with such an object would be very limited indeed. We do not, either, want it to mean the programs that the computer could run. These are dull and lifeless things without the computer to animate them—although we have come much closer to what we want. The meaning of "computer" here is more like "a program running on a computer".

Yet this is still very unsatisfactory as we have all kinds of devices that might possibly be included as computers—from abacuses to calculators through to human brains and beyond!—and the structure of a computer we could all agree was a computer would contain many components which themselves might be thought of as computers. We also have all kinds of thing that we might call a program—from the instructions on a tin of baked beans through knitting patterns to the physical world itself—and, even when we can agree that a thing is a program, its structure of layers of code, separable modules and library routines means that we cannot be clear about why we have said it is a program while its constituents might not be thought to be.

As is often the case with such difficulties, we need to step back. If we define a participant (of any sort) as being an agent whose behaviour, as evidenced through one of its interfaces, can contribute to a dialogue with another participant, we can use this to define "computer". We will say that a computer is an artefactual participant. Thus we avoid problems about distinguishing hardware and software, abacuses and brains, bean cans and accountancy packages. We also avoid any mention of particular technologies. This is good—and such notions can be brought in later if we find we need them.

Representative

An important rôle for a participant in our model is that of representative. A representative is a participant which acts on behalf of one or more other participants. Acting on behalf of a participant means an agent having purposes which cause it to act to further that participant's purposes. This could be because the representative is owned by the participant and has been given such purposes, because the representative is a drone of its owner, having no purposes of its own, or because the representative has these purposes for some other reason (e.g. it generated them itself). In all these cases, the representative may be acting for one or more other participants and may be acting on its owner's behalf or on the behalf of other participants.

There is a very special case where a representative is acting on behalf of one or more other participants with the purpose of increasing the effectiveness of an interaction in which they are engaged. Such a participant we will call an intermediary. It is an important case in HCI because it covers the notion of a separable "user interface" component. Conceptually, such a component is seen as an automatic artefactual intermediary facilitating an exchange between a person and another automatic artefactual participant.
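
In these terms, the separable user interface component can be sketched as follows (reusing the ParticipantKind type from the earlier sketch; the field names are inventions of mine):

    // A representative acts to further other participants' purposes; an
    // intermediary is the special case whose purpose is the effectiveness
    // of an interaction between those it represents.
    interface Representative {
      kind: ParticipantKind;      // classification, as sketched earlier
      actsOnBehalfOf: string[];   // one or more other participants
      purposeOfRepresentation: "further-purposes" | "increase-interaction-effectiveness";
    }

    // The separable "user interface" component, conceptually: an automatic
    // artefactual intermediary between a person and another automatic
    // artefactual participant.
    const uiComponent: Representative = {
      kind: { origin: "artefactual", agency: "automatic" },
      actsOnBehalfOf: ["person", "application"],
      purposeOfRepresentation: "increase-interaction-effectiveness",
    };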

Social Groups

It was said above that social groupings such as organisations can be thought of as artefactual agents, setting purposes for and owning other agents. This simple view of an organisation will not do. It has been frequently observed that the behaviour of the members of organisations is not simply determined by the explicit objectives and procedures of the organisations they belong to. A major source of the complexity of organisations arises from the fact that they are composed of autonomous agents who are able to set their own purposes. Thus the actual behaviour of an organisation and its members is the outcome of a complex dynamic interaction between all its autonomous members. The explicit purposes of an organisation may well reflect the purposes of its more powerful members (e.g. the senior management) but the extent to which they do, and the extent to which these are reflected in the actual behaviour of the organisation, are matters for empirical study.

Interaction

The notion of interaction is the central one to the whole field. We define an interaction to be an exchange of information between participants where each has the purpose of using the exchange to change the state of itself or of one or more others. Later we will recast this definition in terms of other concepts. First, let us look at some of the characteristics of an interaction.

Dimensions of Interaction

We can characterise an interaction along a number of "dimensions". These are independent of each other.

Synchronous vs Asynchronous

Participants may be interacting synchronously or asynchronously. That is, the elements of the interaction (see below) may be temporally structured or they may not. Clearly, there may be degrees and types of structuring. The study of linguistics has shed much light on the types of structuring that occur in unmediated human–human interactions (e.g. Tennant, 1979).

Direct vs Mediated

Participants may be co-located or they may be separated. There are two aspects to this. There is physical, spatial separation and there is functional separation. Physical separation is of very little interest to us—except where it leads to functional separation. Functional separation occurs when interaction takes place through one or more intermediaries. Where participants are interacting with no intermediaries, we call it direct interaction, otherwise we call it mediated interaction. As an example, for all the complexity of the telephone network, it acts as a simple channel. A telephone call is therefore direct. If, however, there were (say) a translation service available over the telephone, a conversation which made use of it would be mediated.

Co-operative vs Individual

The participants to an interaction may share their purposes for the interaction to a lesser or greater extent. Individual interaction refers to the case where a particular participant has no common purposes with others. Co-operative interaction occurs when two or more participants share purposes for an interaction. A question arises as to whether this sharing needs to be intentional and the answer must be "yes". The question of whether the participants are capable of intentionality (arguments by Dennett 1985 notwithstanding) does not arise as the chain of ownership can always be traced back to an autonomous and therefore intentional agent.

Cheap vs Expensive

This is a crucial aspect of the model. Each participant in the interaction is expending some resource for the sake of achieving the desired state change. Different interactions may require more or less of these resources to achieve the same results. To the extent that they do, we can think of them as more or less expensive. The cost of an interaction can be measured relative to an individual participant or to the interacting group as a whole and it may be of value to weight costs according to the importance of the participant if a particular theory requires it. We may also consider resources which are external to the participants, some of which may be shared, and the costs of utilising these.

Costs are a very important concept in multi-party interaction and some authors identify the unequal distribution of the costs and benefits of participation among the participants as being an important factor in the acceptability of group support systems (Grudin, 1988). Benefits relate to the meeting of purposes, costs to the expenditure of resources. Purposes will occasionally conflict and resources may be unevenly distributed and differently valued by participants. The management of cost/benefit conflicts within multi-party interactions is thus a serious practical problem and the conceptual and theoretical mechanisms for dealing with them need to be in place.
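
The model does not fix any particular cost function, but a theory built on it might keep per-participant accounts along the following lines. This is a hypothetical sketch, not part of the conceptualisation:

    // Hypothetical cost/benefit bookkeeping for one interaction.
    interface ParticipantAccount {
      participant: string;
      resourcesSpent: number;   // in whatever units a theory chooses
      benefit: number;          // degree to which the participant's purposes were met
      weight: number;           // importance weighting, if a theory requires one
    }

    // Cost to the interacting group as a whole.
    function groupCost(accounts: ParticipantAccount[]): number {
      return accounts.reduce((sum, a) => sum + a.weight * a.resourcesSpent, 0);
    }

    // Grudin's observation: acceptability suffers when net benefit is
    // unevenly distributed across the participants.
    function maxImbalance(accounts: ParticipantAccount[]): number {
      const net = accounts.map(a => a.benefit - a.resourcesSpent);
      return Math.max(...net) - Math.min(...net);
    }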

Elements of Interaction

Interactions are complex but we can model them as being constructed from simpler components. The ones we find useful are the following.

Utterance

An utterance is the unit of the emission of information by a participant. Utterances may be addressed to one or more participants or they may be broadcast to all participants. There is much work to do in the characterisation of utterances but it must be left for a later paper.

Dialogue

A dialogue is a pattern of exchanges of utterances between participants. The nature of these patterns—how they are constrained, how they are generated, how they are tracked, and so on—is a central area of empirical study and theoretical development for HCI.

Interaction

An interaction, we can now say, is a dialogue for the purpose of modifying the state of one or more participants.
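
The three elements nest naturally, which a sketch makes plain (illustrative names only):

    // An utterance is the unit of emission; a dialogue is a pattern of
    // exchanged utterances; an interaction is a dialogue with a
    // state-changing purpose.
    interface Utterance {
      from: string;
      to: string[] | "broadcast";   // addressed to some participants, or broadcast
      content: string;
    }

    interface Dialogue {
      utterances: Utterance[];      // the pattern of exchange
    }

    interface InteractionAsDialogue {
      dialogue: Dialogue;
      purpose: string;              // to modify the state of one or more participants
    }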

Interfaces

A participant emits utterances and receives them through its interface. Each participant has only one interface which consists of a (non-empty) set of interface channels. Participants will typically have more than one channel and will, especially in the case of people, use more than one channel at the same time, perhaps for multiple concurrent interactions. A channel is not to be confused with a 'modality' (e.g. a display of a particular type or a particular human sense) as the same modality may support more than one channel simultaneously (as described in Neisser, 1976). Also, despite the connotations of the word, a channel is an arbitrarily complex system for transforming information. An output channel transforms information from some internal form into an externally available form. An input channel transforms information from some externally available form to an internal form.
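
A channel, then, is typed by the direction of its transformation. A sketch, with the internal and external forms left as placeholders:

    // A channel is an arbitrarily complex transformer of information.
    type InternalForm = unknown;   // a participant's internal representation (placeholder)
    type ExternalForm = unknown;   // an externally available form (placeholder)

    interface OutputChannel { transform(info: InternalForm): ExternalForm }
    interface InputChannel  { transform(info: ExternalForm): InternalForm }

    // Each participant has exactly one interface: a non-empty set of
    // channels, possibly used concurrently.
    interface SingleInterface {
      inputs: InputChannel[];
      outputs: OutputChannel[];
    }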

We are left with a difficult problem about what is internal and what is external to a participant and it is a problem that must be solved on a class-by-class or even an individual basis. The difficulty goes to the heart of what we call a participant.

Consider a typical 4-box microcomputer (processor, screen, keyboard and mouse). I could say that its interface consists of the screen, the keyboard and the mouse (forgetting its floppy disc drive, its printer port, etc.). These are the channels by which it inputs and outputs information. However, I could say, instead, that the keyboard socket, the mouse socket and the screen socket are the real interface channels and that the keyboard, mouse and screen are intermediary participants which exist to help the user interact with the processor. We could, perhaps, go further and unpick the innards of these various intermediaries to find that they have communicating sub-components that may also be classed as participants. We could probably continue to do this indefinitely.

However, this is not what we want. It may indeed be useful at some future date to model HCI at the quantum level but for now, we would normally be more comfortable thinking of a screen as a channel and as a part of a participant rather than as a participant in its own right. So how do we stop our definition of a participant from regressing?

There are two answers to this. One is to say that, although it is possible to draw the line at any level, there will be levels that are appropriate to HCI and heuristics for determining them (wherever the user perceives the line, for instance). As modellers, we must apply these heuristics and draw the line. The other answer is that we do not need to. We can regard any co-operative grouping of agents as an agent in its own right. Thus we can choose a level of aggregation to suit our purposes.
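
The second answer is, in effect, a composite: any co-operative grouping of agents is itself an agent, so the modeller chooses the level of aggregation. A sketch of the idea:

    // A participant is either atomic or a grouping of participants which
    // is itself treated as a participant.
    interface AgentNode {
      name: string;
      members?: AgentNode[];   // absent for an atomic agent
    }

    // The same hardware, modelled at whichever level of aggregation suits:
    const micro: AgentNode = {
      name: "microcomputer",
      members: [
        { name: "processor" },
        { name: "screen" },
        { name: "keyboard" },
        { name: "mouse" },
      ],
    };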

Physical Resources

The conceptualisation of interaction being developed here is concerned with the purposive exchange of information between participants but it is also true that participants take in information from their environment, sometimes deliberately and sometimes not, and sometimes from non-agents and sometimes from other agents. For instance, co-operation may be mediated through the use of shared artefacts which may themselves be inanimate (e.g. a schematic used in the course of collaborative problem-solving). Because the present conceptualisation does not directly treat this kind of information use by agents, it is not to be supposed that it is thought unimportant. Indeed, an important class of interactions is that by which participants orient themselves to common artefacts and share their cognitions about them. Among such interactions are special cases of particular interest such as where participants monitor the activities of other agents—as in the 'peripheral' monitoring of one controller by another in Heath and Luff's description of group working in a London Underground control room (Heath and Luff, 1991). Another case of participants using shared physical resources which is especially important is where the resource is used as a communication channel or even as an intermediary. This is discussed further below.

Purposes

The important thing to understand about the idea of purpose as we are using it is that it is in the eye of the beholder. More accurately, purposes are conferred upon participants by themselves (if they are capable) and by their owners. The participants to an interaction are engaged for the purpose of changing their own or one or more others' states. To achieve this purpose, participants must make their interaction intelligible, comprehensible and morphogenic. These top-level interaction goals were defined in Storrs (op. cit.). I briefly redescribe them here.

Intelligibility

An utterance must be intelligible to the participant to whom it is addressed. This may be thought of as dealing with the ergonomics of the interaction.

Comprehensibility

An utterance and the dialogue in which it is placed must be comprehensible to the participant to whom it is addressed.

Morphogenicity

An interaction must serve to change the state of one or more of its participants. More than anything, it is the success or failure of the interaction to achieve the desired state change that influences our judgement of its quality.
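
A theory built on the model would presumably assess an interaction against all three goals. As a hypothetical sketch of such an assessment (the weighting of the goals is not something the model specifies):

    // Hypothetical assessment of an interaction against the three goals.
    interface GoalAssessment {
      intelligible: boolean;    // utterances receivable by their addressees
      comprehensible: boolean;  // utterances, and the dialogue, understood
      morphogenic: boolean;     // the desired state change was achieved
    }

    function achievedAllGoals(a: GoalAssessment): boolean {
      return a.intelligible && a.comprehensible && a.morphogenic;
    }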

How Good Is the Conceptualisation?

The model, in its current state of development, has a number of strong points and a number of weaknesses. In its favour, we can say that:

• It exists. This is important as it is one of the very few attempts to produce such a model.

• It is internally consistent, so far as I can judge. There would be little benefit in attempting to demonstrate this formally at this stage, but it seems quite feasible that the model could be described in a formal notation once it has become more stable, at which point its internal consistency could be more confidently claimed.

• It is relatively precise. There is no way that a conceptual model can ever be completely precise as concepts themselves can never be completely precise. However, it is precise by the standards of precision in the field.

• It covers almost all of the concepts that would generally be thought to be important in the field. The main omissions are discussed below.

• It has been constructed with a view to the development of theories in HCI but also with an eye on theories that can make quantifiable predictions.

Payoffs From the Model

There are two questions we might want to ask of a new conceptualisation such as this one: 'Can existing theories be recast in these terms?' and 'Does it provide us with any new insights?' Neither is easy to answer.

Recasting existing theories is by no means a straightforward mechanical process. Since the conceptualisation presented here uses terms which other views of the domain also use but in different ways, there is a serious problem of how to compare the meanings of similar-sounding statements. In fact, it would be more straightforward to completely re-describe the phenomena of interest with this new vocabulary. There is also the problem of the scope of the various theories that already exist.

Norman's execution-evaluation cycle, for instance (Norman, 1986), is more a theory of individual human psychology than a theory of interaction. Nevertheless, it tells us something about the cognitive structure and dynamics of a participant engaging in a dialogue. One can imagine how two or more such execution-evaluation cycles might interact by 'executing' utterances and 'perceiving' their effects on the states of the other participants. I believe that regarding the 'system' in Norman's model (the part 'below' the 'physical interface') as another participant considerably enriches our understanding. Indeed, if we consider the 'physical interface' itself as a participant's interface (i.e. a set of channels) we can see how to extend the theory to include various communication 'modes' and multi-party interaction and to accommodate the other constructs of this conceptualisation. Norman's theory gives us places to locate some of the concepts developed here such as purpose ('goal') and information resource ('user's model of the system'). The path from 'perception' to 'interpretation' to 'evaluation' in the execution-evaluation cycle gives us a way of thinking about why the intelligibility, comprehensibility and morphogenicity goals for a communication need to exist and how they affect the receiving participant.

Finding new insights from the conceptualisation is perhaps easier to do. In an earlier paper (Storrs, 1989), I discussed how this conceptualisation reflects on the notion of a separable user interface component and was able to conclude that, while there may be engineering and cost reasons that make such a thing desirable, the use of what is effectively an intermediary to support an interaction will generally lead to sub-optimal goal satisfaction.

Robinson (1993) introduces the notion of a 'common artefact', arguing that the unpredictability of tool use is structural in group work and that work is therefore best supported by tools which have properties which enable flexible and unanticipated use. An example he gives of a common artefact is the peg-board for keys found in some hotels. This serves a number of functions for a number of people without imposing task sequences. If we think of a common artefact as being a communication channel, i.e. part of the interface between participants, we can see that it would have the characteristics that Robinson says it needs: it should be predictable (the channel's structure or behaviour should be sufficiently well understood by the participants), allow peripheral awareness by all users (messages on the channel should be intelligible to the participants), support its own implicit conventionalised communication while still allowing more explicit forms of communication (the nature of a channel will constrain the lexicon and the structure of the communications it permits but will not necessarily interfere with communications over other channels), and provide an overview.

Controversial Aspects of the Model

The controversial aspects of the model are manifold and apparent. However, I will focus on just a few of the main ones.

Tasks

Firstly, the model makes no explicit mention of the notion of a task (or job or other organisation of activity). This may appear strange since it is a major concern of many workers in the field and would certainly be thought by most to be a key concept (cf. Carroll's ontology; Carroll, 1989). However, the absence of task as a concept is only apparent—it is there but hidden somewhere in between the concepts of purpose and agency. My contention is that people, like other agents, act to achieve their purposes. This means that they are constantly engaged in goal-driven, sometimes planned behaviour. For some of this behaviour, the plan or the goal structure is fixed to a greater or lesser degree, and this kind of behaviour is what we normally call tasks. Some of this task-related behaviour will involve interactions and some of these interactions may involve computers. My own feeling is that this puts tasks in their proper place and gives them an appropriate level of significance. A theory that takes "task" as its central concept is, I feel, doomed to an impoverished or lop-sided model of participant behaviour.

User Models

The next main omission is that the model makes no provision for modelling a participant's model of another participant. Again, this is something which practitioners in HCI set great store by. There are, essentially, two reasons why such models are important to the participants of an interaction. The first is so that one participant can better influence another (morphogenicity) and the second is so that one participant can better understand the other (comprehension and intelligibility). Participant modelling is also important for the designer of interactions, tasks and artefactual participants. This is because designers can take advantage of the participants' existing participant models to help increase the effectiveness or efficiency of the interaction. In the conceptual model presented here, a participant's model of another participant would be modelled as one of its information resources. The abilities of different types of participant to model others, and the properties of the models they can produce or already have available, are subjects to be treated within future theories, and the conceptualisations that already exist for models in general should be adequate for describing these in particular.

The Designer

Designers and the act of design enjoy considerable interest in the HCI and CSCW literatures. The reason appears to be this. One path that has been identified to building usable interactions is to design systems which present themselves in ways which match or help the user to form a useful mental model of how to interact with those systems. This is likely to be hampered if the designer uses her or his own mental models as a guide to the design but is likely to be assisted if the designer uses the future user's mental models as a guide (e.g. Norman, 1986). Thus, for the practice of HCI design, it is important to understand how to acquire and use users' mental models and to avoid using designers' ones. For the theory of multi-party interaction, it is only important to know how participants will model each other and how their models of each other will influence their interactions. How a designer makes use of this knowledge is part of the engineering disciplines of system building. The designer as such is not an important concept for understanding interaction.

The Domain

Another major omission, and the area in which this ontology diverges most radically from that of Dowell and Long (1989; Long and Dowell, 1989), is that I do not propose the modelling of the "domain". By "domain" is meant the world that the group interaction operates on or monitors. Consider the case of an air traffic control system. Here, a variety of participants interact co-operatively to monitor and control the movements of aircraft. It is argued that to be able to talk about the effectiveness of such a group interaction, the domain of air traffic must be modelled so that the goals of the interaction can be stated (e.g. the optimisation of aircraft throughput and aircraft safety) and achievement of them assessed. I would certainly not dispute this. It is clearly true for a work system as a whole. Yet there are several issues here about the scoping of work systems, of the group being modelled and even of our attitude to the 'external' world.

Consider this last first. If the rate of ascent of a particular aircraft type is important knowledge for making decisions about how to instruct the aircraft to change levels, should the rate of ascent of aircraft types be modelled as part of the domain or as part of a participant's information resources, or both? The answer would seem to depend on which participant in a group is being considered and what that participant's rôle is with respect to the world and the group. Most participants in an air traffic control work group will have no direct sensory contact with aircraft. The information they receive will be from computer systems some of which may eventually be linked to sensors such as radar and others which link via other intermediaries to other sources of information such as flight plans. People who work in these groups have detailed and elaborate models of the air traffic domain but they are—however closely they may correspond to 'objective reality'—their own conceptual models. There will be individual differences in perspective, emphasis and accuracy. It is the models which the participants use which determine what are the objects, relations, attributes and values that are important in a domain. Thus rate of ascent is important just because air traffic controllers manipulate flight levels to keep aircraft safely separated. Having decided (by asking users or system designers) what is important about the world and how it should be analysed, it is then possible to compare the characteristics of participants' models with the characteristics of an 'objective' model of the world. Thus we can check that a controller's beliefs about rates of ascent correspond to the actual rates of ascent for aircraft types.

The point of this is that the primary modelling of the domain is that which is done by the participants in the work group. A group taken as a whole may have many different models within it of the 'external' domain and, considered as a single entity, the group will have a model corresponding to the conflation of the models of its constituent participants. The model of the domain of primary interest to us as theoreticians is therefore that which is part of the information resources of participants. Nevertheless, we must also be interested in modelling the 'real' world where this information is necessary to assess the success with which participants (at whatever level of aggregation) achieve their purposes.

The Environment

Finally, the "environment" is missing from the conceptual model. By this, I mean the heating, lighting, noise-levels, interruptions, social contacts, desk height, and so-on that constitute the world in which the participants operate. Generally, the environment is considered as a source of signal degradation and stress for the user. Its influence in this rôle is therefore on the various resources of the participants (the interface, cognitive resources and, perhaps information resources). To bring the environment into the model thus requires that the notion be introduced and then related to a slightly elaborated model of a participant. This would probably be a useful addition but one which must be made elsewhere.

The environment can also be thought of as the "situation" or circumstances which influence people's actions on a moment-by-moment basis. The recognition of the situatedness of action is a recognition that people use a cognitive strategy which constantly assesses the appropriateness of action in the current context. Such a strategy fits into the conceptualisation as part of an agent's (cognitive) resources. Again, it would be useful to bring such notions into the present model but the work must wait until a later date.

Concluding Remarks

The conceptualisation presented here has developed significantly from that presented in my earlier paper (Storrs, op. cit.). Almost all of this development takes the form of extensions, elaborations and sharpenings—filling in the details rather than re-modelling. Beyond this paper there is yet more work to extend, elaborate and sharpen the model. My own feeling is that it is moving close to the point where it can usefully be employed in talking about the subject. Once the model is used to conceptualise the field, questions posed in terms of the model will lead to hypotheses and ultimately to theories. It is only then that the adequacy of the model will truly be known.

 

References

Carroll, J.M. Infinite Detail and Emulation in an Ontologically Minimized HCI. IBM Research Report RC 15324 (#67108) 10/12/89, IBM Research Division, T.J. Watson Research Center, Yorktown Heights, NY 10598, 1989.

Carroll, J.M. and Campbell, R.L. Artefacts as Psychological Theories: The Case of Human-Computer Interaction. IBM Research Report RC 13454 (#60225) 1/26/88, IBM Research Division, T.J. Watson Research Center, Yorktown Heights, NY 10598, 1988.

Dennett, D.C. Brainstorms. Harvester Press, 1985.

Dowell, J. and Long, J. Towards a conception for an engineering discipline of human factors. Ergonomics, 1989, 32, 1513-1535.

Foley, J.D. and Van Dam, A. Fundamentals of Interactive Computer Graphics. Prentice-Hall, Englewood Cliffs, NJ, 1982.

Giddens, A. The Constitution of Society. Polity Press, Cambridge, 1984.

Grudin, J. Why CSCW Applications Fail: Problems in the Design and Evaluation of Organizational Interfaces. Proc. CSCW '88, Portland, Oregon, 1988, 85-93.

Harré, R. Laws of Nature. Duckworth, London, 1993.

Heath, C.C. and Luff, P. Collaborative Activity and Technological Design: Task Co-ordination in London Underground Control Rooms. Proc. E-CSCW '91, 1991, 65-80.

Kuhn, T.S. The Structure of Scientific Revolutions. University of Chicago Press, Chicago, 1962.

Lakatos, I. Proofs and Refutations. Cambridge University Press, Cambridge, 1976.

Long, J.B. and Dowell, J. Conceptions of the discipline of HCI: craft, applied science and engineering. In A. Sutcliffe and L. Macaulay (eds), Proceedings of the Fifth Conference of the BCS HCI SG. Cambridge University Press, Cambridge, 1989.

Neisser, U. Cognition and Reality: Principles and Implications of Cognitive Psychology. W.H. Freeman & Co., San Francisco, 1976.

Norman, D.A. Cognitive Engineering. In D.A. Norman and S.W. Draper (eds), User-Centred System Design: New Perspectives on Human-Computer Interaction. Lawrence Erlbaum Associates, 1986, 31-65.

Popper, K. The Logic of Scientific Discovery. Hutchinson, London, 1959.

Quine, W.V. Theories and Things. The Belknap Press of Harvard University Press, Cambridge, Massachusetts, 1981.

Rogers, C.R. Client-Centred Therapy: Its Current Practice, Implications and Theory. Constable, London, 1951.

Storrs, G. A Conceptual Model of Human-Computer Interaction? Behaviour and Information Technology, 1989, 8(5), 323-334.

Tennant, H. Experience with the evaluation of natural language question-answerers. Proc. IJCAI, 1979, 874-876.

 

 
