pm_ucdm_mar93
re: ucdm Post Mortem
who: Rory Bolt, Vy Nguyen, Dave Smith, David Scott, Peter Lawthers
date: 29 March 1993
1- What went right about the project?
85-95 percent of the product is functional.
The customer absorbed much of the cost of development.
Good learning experience w/ more knowledge on FSS.
The team worked under pressure and built trust, communication,
and a sharing of ideas.
Technical knowledge gained on the kernel, caching, purging, and
migration.
Scheduling knowledge built.
Closer to test suite & stone.
2- What went wrong about the project?
Lack of machine resources in development, test & integration.
Dependence on NSL UniTree before NSL UniTree was production ready.
Lack of cooperation w/ the IBM group on NSL UniTree issues.
No code freeze. Unit tests (or other tests) did not exist to help
flush out problems earlier.
Testing needed to include metrics: performance analysis, stress
tests, and more.
Up-front design needed more than one person trained in the
kernel. Access was needed to the SGI kernel.
Suffered from creeping featurism.
Customer-driven schedules, and reliance on outside entities for
resources (SGI, Cray, NSL UniTree).
3- What can we do better next time?
Do more design up front.
Get the source to the kernel for locking issues.
Unit testing.
Better resources for the development / test cycle.
Avoid creeping featurism.
Be cautious of customer-driven schedules and of reliance on
outside entities for resources.
Stabilize one platform before attempting a port.
Software schedule
Was it realistic?
No. The schedule was not well defined because the
requirements changed with experience.
The requirements were driven from outside of development, and the
dates were artificially imposed by the Grumman project. The
schedule was alternately shortened from 6 months to 3 months
and then changed from a Sun port to an SGI & Cray port.
Did it include deliverables that were met?
Most of the deliverables of computer equipment for development
and test were more than halved. Computer time on the Cray was
scarce and only available over the network.
Coding and testing were delivered late.
Were we tracking the right things?
In some cases, yes. But the effort to clean up the project came
much too late in its cycle to be effective.
Were all resources tracked? No; only people were scheduled.
Software requirements?
Software design?
Software testing?
Integration testing?
Documentation? A good effort that was scheduled. We need a
technical lead to review changes before they go into the
documentation, to help prevent errors.
Logistics? Testing was often far away or across the network, and
machine availability was often late at night.
Equipment?
Source & Configuration Management?
Files were often lost in SCCS and had to be recovered. This was
a constant headache.
Availability of resources? Poor.
Overtime? The team ended in an exhausted state and still had the sense
of much left to do to really complete the product. Testing had to be
halted to allow development to proceed, due to the lack of testing and
development environments.
Redesign? Having to re-sync the files upon recovery was a surprise. There
were lots of little gotchas found once testing was in place. Unit testing
would have caught much of that.
Other?