Final reviews explore usage patterns that indicate measures of performance.
From these reviews, testing is extended to include volume, stress, and
benchmark testing. This style of testing shakes out each software release
and forms a foundation for measuring the software from release to release.
Populate a use case diagram while asking key questions to identify
the users of the software and the roles they play as Actors. An
Actor can also be an external system.
Some systems are driven by events, so events may be the initiators of
use cases.
Check a book on UML standards for use case diagramming information.
The focus of this review is:
o Follow each role through the diagram to check it.
o Do I have the essential parts?
o Am I missing any parts?
Also:
o Keep use case words in User Language.
o Identify user needs in general terms.
o Identify "extends" and "uses" relationships.
Use the use case diagrams as a checkpoint for the use cases. As you
walk through each use case scenario by finger pointing, check off each
touched use case. Complicated or unclear scenarios should be noted as
action items for the next review.
See Use Case Workshops.
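The check-off itself can be recorded in a few lines so that untouched
use cases surface on their own. A minimal sketch, assuming hypothetical
scenario and use case names:

    # Minimal sketch (hypothetical names): check off each use case touched
    # by a scenario walkthrough and report any use case never exercised.
    all_use_cases = {"Accept deposit", "Validate account",
                     "Settle accounts", "Close account"}

    scenarios = {
        "Customer deposits a check": ["Validate account", "Accept deposit"],
        "Nightly settlement":        ["Settle accounts"],
    }

    touched = {uc for steps in scenarios.values() for uc in steps}
    untouched = all_use_cases - touched
    if untouched:
        print("Use cases never touched by a scenario:", sorted(untouched))
        # prints: Use cases never touched by a scenario: ['Close account']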
The focus of this review is:
o Check each use case against the Actor roles.
o Do I have the essential parts?
o Am I missing any parts?
o Are the user scenarios understood?
Also:
o Keep use case words in User Language.
o Identify user needs in general terms.
o Identify frequency, volatility, and errors.
Develop a network of dialog maps. Use the scenarios, if needed,
to walk through each dialog map. Check off every function on each
dialog map.
The focus of this review is:
o Is there a use case for each dialog map function?
o Is this an acceptable look and feel for dialog maps?
Also:
o Identify scenarios that need documentation.
o Identify missing use cases.
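The first question above can be answered mechanically once the dialog
map is written down. A minimal sketch, assuming hypothetical screen,
function, and use case names, of a dialog map as a graph cross-checked
against the use case list:

    # Minimal sketch (hypothetical names): a dialog map as a graph of
    # screens, each screen listing the functions it offers, cross-checked
    # against the use case list so every function traces back to a use case.
    dialog_map = {
        "Main menu":      {"functions": ["Accept deposit"],
                           "next": ["Deposit screen"]},
        "Deposit screen": {"functions": ["Validate account", "Print receipt"],
                           "next": ["Main menu"]},
    }
    use_cases = {"Accept deposit", "Validate account"}

    for screen, node in dialog_map.items():
        for function in node["functions"]:
            if function not in use_cases:
                print(screen, "has no use case for", repr(function))
    # prints: Deposit screen has no use case for 'Print receipt'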
A series of small sessions will identify the business rules and logic
that define how things work ... as opposed to the work or process flow.
A final review of these business rules with the whole group may help
when the information is scattered across the minds of many people.
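Keeping each rule as its own small statement makes it easy to review
and test apart from the flow that uses it. A minimal sketch, assuming
one hypothetical rule:

    # Minimal sketch (hypothetical rule): a business rule expressed as a
    # small, standalone predicate, independent of the screens or workflow
    # that call it.
    def deposit_requires_hold(amount, account_age_days):
        """Hypothetical rule: deposits over 5,000 into accounts younger
        than 90 days are held for review."""
        return amount > 5000 and account_age_days < 90

    assert deposit_requires_hold(6000, 30)
    assert not deposit_requires_hold(100, 30)
    assert not deposit_requires_hold(6000, 365)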
Often a prototype has been decided upon and is in the works
as a proof of concept by this point. This sets the stage for
implementation of a more robust model.
The focus of this review is to reach agreement on what constitutes
a release. The goal should be a minimal set of use cases whose
implementation constitutes a system. From that base there can
be incremental releases which iterate on this design to
encourage stepwise refinement.
The focus of this review is:
o What is the minimal set of things to implement?
o What determines an acceptable release?
Also:
o Identify sanity tests.
o Identify feature / function tests.
o Identify exception tests.
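The three test buckets above can be mirrored directly in how the tests
are grouped, so a release can run the sanity set first. A minimal
sketch using Python's unittest, with hypothetical placeholder tests:

    # Minimal sketch (hypothetical names): the three test buckets kept as
    # separate groups so the sanity set can run first for each release.
    import unittest

    class SanityTests(unittest.TestCase):       # does the system come up at all?
        def test_login_screen_loads(self):
            self.assertTrue(True)               # placeholder for a real check

    class FeatureTests(unittest.TestCase):      # one test per feature / function
        def test_accept_deposit(self):
            self.assertTrue(True)               # placeholder for a real check

    class ExceptionTests(unittest.TestCase):    # bad input, limits, failures
        def test_negative_deposit_rejected(self):
            self.assertTrue(True)               # placeholder for a real check

    if __name__ == "__main__":
        unittest.main()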
See Responsibility Driven Design Walkthrough as one example of a
design review.
The focus is:
o Does this design work? (catch errors early)
o Is there a better way?
o Does performance match the frequency of usage?
Also:
o Identify the risks of the design.
o Break out the estimate for completion.
This review sets the tone for measurability across each release.
This may happen after an actual first release when performance
is better understood.
The focus of this review is:
o What limits need to be tested?
o What volume should be applied to those limits?
o What stress should be applied to those limits?
o What fundamental benchmark measures this application?
o What benchmark can apply across releases?
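A benchmark only carries across releases if the same operation is
measured the same way each time. A minimal sketch of such a timing
harness, assuming a stand-in operation and hypothetical volume limits:

    # Minimal sketch (hypothetical limits, stand-in operation): time the
    # same operation at increasing volumes so the numbers can be compared
    # release to release.
    import time

    def operation(records):
        # Stand-in for the real work being measured (e.g. posting deposits).
        return sorted(range(records, 0, -1))

    for volume in (1000, 10000, 100000):   # the limits / volumes chosen above
        start = time.perf_counter()
        operation(volume)
        elapsed = time.perf_counter() - start
        print("volume", volume, "elapsed", round(elapsed, 4), "seconds")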
Any project which winds down from a release can benefit from looking back
with all members and asking fundamental questions. Starting from a timeline
of the project, all members will have a perspective on what went right and
what went wrong. This course correction is a healthy start to each new
incremental release.
See Post Mortem Agenda.
David Scott David_Scott@Sequana.com
October 1997