A classical example of paradigms and of paradigm shift comes from mechanics. Under the Newtonian paradigm, speed was conceived as having no limit, and the mass of a body was supposed not to change as its speed changed. Then came a paradigm shift, and that model of thought was abandoned: relativistic, or Einsteinian, mechanics became the new model of thought in mechanics. In relativistic mechanics, the speed of a body can't exceed the speed of light, and a body's mass grows as its speed increases.
In programming, the term paradigm shift is used to mean a change in the way the programming community conceives and creates programs. It is often said that structured programming (SP) was the paradigm most programmers used until the beginning of the 80's. In SP, the emphasis is on functions --- or fragments of code, subroutines etc. --- and on the top-down decomposition of a program. Then, at the end of the 80's and the beginning of the 90's, there was a paradigm shift, and programmers began to think of programs in terms of objects, aggregates of functions and data (see What's in an object below). The bottom-up approach then became important, as lots of libraries were available and people were interested in using them (see What's reuse below).
Ideally, in a paradigm shift, the new paradigm completely replaces the old one. That is what happened in the transition between Aristotelian mechanics --- all qualitative, numberless --- and Newtonian mechanics --- in certain ways a branch of analytic geometry.
However, that didn't happen in the transition from Newtonian to relativistic mechanics: there are plenty of people using Newtonian mechanics today, and indeed some people will never have to learn relativistic mechanics. Only people who deal with speeds close to the speed of light and with huge distances are forced to learn it.
Since Newtonian mechanics is far simpler and more intuitive than relativistic mechanics, people tend to use relativity only when they are really forced to. So mechanical theories are more like tools: each one is used when there's a problem that needs it, much as we use a hammer to drive a nail and a knife to cut something. In the same way, a scientist will use Newtonian mechanics for situations closer to day-to-day experience, and relativistic mechanics for speeds close to the speed of light and for astronomical distances.
The same happens with structured programming and object-oriented programming: they must also be seen as tools. Some problems are better solved with SP techniques, while others are better solved with OOP. In When to use Objects the two types of situations are outlined.
In object-oriented programming, an object is more than just data, more than just a couple of functions. Following an example of Bertrand Meyer's, let's consider a common radio. It has a state and some means to alter and to consult this state. Its state is, of course, the station it is tuned to, its sound volume and its tuning. It has buttons to alter any of these dimensions of state, and a normal radio usually has some way of displaying the station frequency, and sometimes a display of the sound volume.
A normal radio usually also has a way to turn it on and off, often the same button, sometimes the volume button.
Now it is necessary to translate the radio elements into an object-oriented nomenclature. Since OOP isn't exclusive to radios, translating the radio elements into more general ones is a way to see how OOP can be used to solve other problems. Table 1 below shows a mapping between the simple radio elements and the more general OOP terms.
Radio | OOP |
---|---|
On/Off | Constructor/Destructor |
Display elements | Accessors |
Volume and tuning dials | Transformers |
Station frequency, sound volume and tuning | State |
The on and off functions of a radio are named constructor and destructor because they are used to make the radio "come alive" and to make it inactive. Of course, if you press the off button, or in general turn off the radio, it won't disappear, but it will cease the activity it is intended to have. In the same sense, a destructor won't make the program line where the object was defined disappear, but the corresponding variable won't be useful for any radio simulation anymore: if we want the variable to be useful again, we'll have to call the constructor first.
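The mapping of Table 1 can be sketched as a small C++ class. This is only an illustration of the terms above; the class and member names are my own, not from any particular library:

```cpp
#include <cassert>

class Radio {
public:
    // Constructor: the "on" button -- the radio comes alive in a known state.
    Radio() : frequency_(89.5), volume_(5) {}

    // Accessors: the "display elements" -- they let us consult the state.
    double frequency() const { return frequency_; }
    int volume() const { return volume_; }

    // Transformers: the dials -- they alter the state.
    void tune(double mhz) { frequency_ = mhz; }
    void setVolume(int v) { volume_ = v; }

    // Destructor: the "off" button -- the radio ceases its activity.
    ~Radio() {}

private:
    // State: station frequency and sound volume.
    double frequency_;
    int volume_;
};
```

Note how the state is private: the only way to alter or consult it is through the buttons and displays, just as in a real radio.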
He meant to say that to conceive classes all we should do is mirror the objects in the application domain. That is, if we are going to create an application to deal with accounts payable, a good candidate for a class is simply the account. If we are going to create a GUI (Graphical User Interface), with windows and buttons, then windows and buttons are good candidates for classes. So, in the accounts payable application there would be a class ACCOUNT, and in the application with a GUI there would be classes WINDOW and BUTTON. And as many instances of each class --- or objects --- as needed to solve the problem.
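Meyer's mirroring idea can be sketched in a few lines of C++: the domain object "account" becomes a class, and each real account becomes an instance. The names and fields here are purely illustrative:

```cpp
#include <string>
#include <vector>

// The application-domain object "account" mirrored as a class.
class Account {
public:
    Account(std::string creditor, double amount)
        : creditor_(creditor), amount_(amount) {}
    const std::string& creditor() const { return creditor_; }
    double amount() const { return amount_; }
private:
    std::string creditor_;
    double amount_;
};

// As many instances of the class -- objects -- as the problem needs.
std::vector<Account> payable = {
    Account("Power company", 120.0),
    Account("Phone company", 80.0),
};
```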
That may sound like a good idea to some of us, but it is not what the authors Coad and Yourdon think. In their book Object-Oriented Analysis, they say of Meyer's idea: "Nuts!"
Such a strong word is very rare in the literature on object orientation, and it surprised me then as much as it may surprise you now. Coad and Yourdon state that the conception of classes in an application should be the result of a careful process of object-oriented analysis and design.
Ferocious debates like this are typical of the initial phases of all technical and scientific fields. As more experience is gathered, people tend to converge on intermediate positions and to forget the extremism of their own initial ideas.
An intermediate position could be that the conception of classes should be neither a faithful mirroring of the application domain, nor its complete neglect. Certainly, most of the appeal of object orientation lies in the possibility of mimicking some application-domain behaviors at the software architecture level. However, an application is an application is an application: its purpose is not to mimic the application domain faithfully, but to reproduce some of the most useful behaviors of the problem. Not all behaviors, not even all useful behaviors. The development of an application is usually a negotiation between user and developer: some features are not implemented due to time or computer limitations, while others are added because their cost is small, because they're closer to the computer's nature, or to the software tools used to implement the application.
The fact is that the software development process is not done in the application-domain world, but in another world: the world of the computer and the software tools (programming languages, compilers, libraries etc.) used to develop the application. An accounts payable application is software that will help solve the accounts payable problem; it is not a form of accounts payable. Even if such an application is created by an accountant, its creation is still a form of software development, not a form of accounting practice. The world of software development has its own laws, which must be followed if we want the software being developed to be good.
So, going back to the question "how can we conceive classes and objects, given an application domain?", a good idea seems to be to dive deeply into the application domain in the first steps of the software development process, and to capture as much of the application domain as we can. Then sit back and try to fit this application domain's conceptual architecture into a good software architecture.
An application programmer who creates an accounts payable software will receive only this minimal help from it. For this programmer, a piece of software more directly related to the accounts payable problem would be more useful: for instance, a software library with functions for the most common tasks of accounts payable processing.
Since that library would "know" more about the accounts payable problem, some say the library is more intelligent. The extreme limit of this would be a monolithic function, say AccountsPayable(), that would take care of each and every aspect of the problem. So, your C++ program would be only:
    int AccountsPayable(int argc, char** argv);

    int main(int argc, char** argv)
    {
        return AccountsPayable(argc, argv);
    }
Very good, isn't it? An application programmer would only have to write a few lines of code and everything would be done.
Very good, but very limited: no matter how configurable AccountsPayable() may be through command-line parameters or setup files, it will have limited flexibility. Even with configuration parameters, only a limited number of ways of doing accounts payable will be possible through AccountsPayable(): there's a strong possibility that some particular form is missing, even if it is a minor one.
However, when we develop a specific application in source form, it is usually because the existing applications in compiled form don't do what we want, and so we need maximum flexibility; that can't be achieved with a solution as monolithic as this one. And the single-function approach is as flexible as a closed application --- which is the point of a programming saying.
In this application, if we need to change something in AccountsPayable() that can't be changed through its parameters, we'll have to change AccountsPayable()'s source code. In that case, of course, we won't be reusing it. A better idea is to have pieces of code not as minimal as printf(), but not as maximal as our AccountsPayable(). That relative size is called granularity: the smaller a piece of code, the lower its granularity; and vice versa, the larger a piece of code, the higher its granularity.
The golden goal of reuse is to maximize reuse without minimizing flexibility. Unfortunately, with any reuse we lose some flexibility: maybe a menu can't be changed, maybe some file format won't contain every piece of information we want, etc. In exchange we save a great deal of programming time, as some features of the application will already be implemented. This balance is very important, as it limits the usefulness of the reused elements. If the package can't do --- or, what's worse, won't allow us to do --- something we consider very important, or if it does it too slowly, then it will be useless for us, at least for that particular application.
If that's the case, we'll have to find another package to reuse, or maybe develop our own package, using elements like printf(), which have minimal granularity but maximal flexibility.
The idea of encapsulation is that the data (state) of the object we try to reuse is isolated from other parts of our software. Since the objects to be reused are insulated from the exterior, we can use them as small building blocks of our software and be sure that unexpected side effects will be minimized.
The word module is the programming name for such building blocks. Programming done with modules is called modular programming, and the first forms of modular programming predate OOP. For instance, languages such as C can do a good job at modular programming. However, OOP and the object-oriented programming languages (OOPLs) express the module concept more clearly. When using a modular language without object-oriented features, it is sometimes not so easy to see, during the creation of a module, what to share and what to isolate from other modules.
The hard-to-understand FORTRAN IV programs and their evil COMMON blocks that I had to painfully debug are an eloquent witness to this. (Click here for a brief note on the FORTRAN IV programming language and its COMMON blocks.)
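A minimal sketch of encapsulation, for contrast: the state lives behind a public interface, unlike a COMMON block, where every routine sees and may corrupt the shared data directly. The class name is illustrative only:

```cpp
#include <cassert>

class Counter {
public:
    // The only ways to touch the state: a transformer and an accessor.
    void increment() { count_ = count_ + 1; }
    int value() const { return count_; }
private:
    int count_ = 0;   // hidden state: no outside code can alter it directly
};
```

If some other part of the program needs to change the count, it must go through increment(); an unexpected side effect on count_ is simply impossible to write.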
An encompassing term is possible and sensible because these technologies are closely related. In very gross terms, one can say they differ only in the level of detail at which they approach the problem of software development. OOA is done at the macroscopic level, without caring too much about implementation details, and OOP is done at the microscopic level --- not only close to the implementation: it is the implementation technique. OOD is the gateway between the two: we can say it is done at the mesoscopic level.
That similarity between the three levels of object-oriented software development seems obvious, but it is not: it is actually due to the fact that the notion of object is a unifying concept. Previous methods, like structured analysis (SA), structured design (SD) and structured programming (SP), lacked such a unifying concept: to know a lot about SP wouldn't help very much in learning SA or SD.
So, the idea of the object is just one more Columbus's egg: it looks very easy, and even very obvious, once we see it done. However, getting there was difficult and non-obvious --- like the discovery of the American continent by Christopher Columbus.
To present the Object Technologies (OT), we'll use an analogy with the several stages of a building's construction. The choice of a building isn't casual: if all we're going to do is repair broken tiles in the kitchen, only masons are necessary. In the same way, in a very small application, no analysis or design would be strictly necessary.
In terms of software development, this phase tries to arrive at a specification, which states what characteristics the building should have in terms of architectural style, number of floors and, of course, the available budget and the expected deadline.
When this specification phase is done, the architect goes back to the office to detail the project a little more. Then he returns to the client for the approval of the building plans. The project is eventually approved, after some minor modifications.
Then the architect details the project a little more, this time in order to ask a civil engineer to design the building's physical structure.
The dialog between architect and engineer will be much more technical than the dialog between architect and client. And a seasoned architect will actually conceive the building with an eye on its physical structure, to make the engineer's work easier.
When the engineer begins working on the project, it becomes more concrete and detailed. That's a consequence of the engineer's work: not only to conceive in abstract terms, but to determine the physical means of making the project feasible.
On the software side of our metaphor, there is also thinking in general terms, and indeed people talk about architectures --- software architectures, of course. A software architecture describes how the larger software units relate to each other. However, the person who conceives a software architecture isn't usually called an architect, but an analyst. The idea here is that a systems analyst is a person who understands a system that already works outside the software world. If there is no existing system, people usually try to create in software an ideal system, respecting of course the limits of time and budget available for the software development.
In any case, the systems analyst must try to understand how the real or ideal system works, and this is done by breaking the system into its larger units and understanding the relationships they have to each other. The analysis is then brought to a finer level, and the analyst asks the user more detailed questions. This process is repeated until the analyst has a clear enough picture of the system. Then begins a synthesis phase, in which the analyst proposes to the client his understanding of the system. The analyst will present a new system, equivalent to the first one in general terms, but conceived with a view to software implementation.
For instance, if some part of the original system has a routine that presupposes the intelligence or creativity of a human being, then in the new system the analyst should substitute for it another, simpler routine, one that even a computer is able to execute.
That proposal of a software system will be documented in texts, diagrams and even program fragments called a prototype, which will have only some screens and no heavy code behind them. It is as if the prototype were a movie set, in which houses are represented by their façades only. Usually, at this step the client has the equivalent of a croquis in architecture, and he's able to think more about the project. So the analogy of software development with a building construction becomes stronger: here, as well as there, the client is able to discuss with the analyst the details of what he expects the software to be, and the analyst's experience will help the client think about every aspect of the software. The analyst will hear what the client has to say, as a means to establish the general terms of the software. Even in these general terms, the analyst will be able to help the client think about the project, explaining why some ideas must be rejected as unfeasible, suggesting more pragmatic or cheaper alternatives, and changing his or her own point of view when the client presents better arguments.
When they have finally discussed all the details of the software project in these general terms, the analyst goes back to the office and prepares a formal proposal. This proposal will be very detailed and will contain a formal statement of the software's purposes, called a software specification, or just a specification. Besides the specification, the proposal will state the software's deadlines and, of course, its cost.
That specification is discussed by client and analyst, but mostly in the terms that affect cost and deadline: the technical side should already be stable. Usually, the analyst will present several alternatives, and the client will choose among them.
The larger units of the system will be translated into classes and objects, and the analyst will be able to enrich the client's specification with enough information to ease the software designer's work. For instance, when the specification says something like "dept. X delivers information i to dept. Y", it will be translated into something like "the object X, of class Dept, will write the information i in database d, so that it can be read by object Y, also of class Dept."
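That translated sentence can be sketched directly in C++. Everything here is illustrative: the class names come from the sentence above, and "database d" is modeled as a simple in-memory map, standing in for whatever storage the real design would use:

```cpp
#include <map>
#include <string>

// A stand-in for "database d": keyed records in memory.
class Database {
public:
    void write(const std::string& key, const std::string& info) {
        records_[key] = info;
    }
    std::string read(const std::string& key) const {
        auto it = records_.find(key);
        return it == records_.end() ? std::string() : it->second;
    }
private:
    std::map<std::string, std::string> records_;
};

class Dept {
public:
    explicit Dept(std::string name) : name_(name) {}
    // "The object X, of class Dept, will write the information i in database d..."
    void deliver(Database& d, const std::string& info) const {
        d.write(name_, info);
    }
    // "...so that it can be read by object Y, also of class Dept."
    std::string receiveFrom(const Database& d, const Dept& sender) const {
        return d.read(sender.name_);
    }
private:
    std::string name_;
};
```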
It is important to note that, while in a building construction the architect can sometimes forget about engineering conveniences when creating the building's form, in software development the systems analyst conducts all of the analysis in the light of a computer implementation. This is not to say that the analyst will conduct the implementation together with the analysis, nor that he will talk to the client in the very detailed terms of a computer program. No, the talks between client and analyst will be held in generic, non-technical terms. However, the analyst shouldn't forget, at any moment, that the program will eventually run on a computer system that is --- as we all know --- not very bright or creative.
So, the analyst's role will be conducted with an eye toward easing the software implementation, keeping in mind both that the software will run on a computer system and the techniques the software designer will use.