Object Oriented Requirements Determination (OORD)

Part 1: Motivation and Fundamental Issues

This is the first article in the series. The completed set is as follows:
Part 1: Motivation and Fundamental Issues
Part 2: Creating Requirements Documents
Part 3: Mapping Requirements to OOA/OOD
Part 4: Test Case (Home Heating System)

Summary

This article is one in a series whose aim is to develop a number of techniques for the elicitation, expression and validation of requirements for problems which are solved using object-oriented technology. In particular, we are concerned with the creation of requirements documents which can be used as a front-end to the OMT and Unified methods.
This first article discusses a number of important issues and provides the initial motivation for giving more attention to requirements determination before embarking on an object-oriented analysis of a given problem.

 

Status report on Object-Oriented Analysis and Design

First-generation object-oriented methodologies appeared in the early 1990s. Some have disappeared in the meantime, while others have undergone changes and improvements. For example, OMT has been replaced by the Unified Method, in which the efforts of James Rumbaugh and Grady Booch have been merged. Despite the many improvements in this field, the author claims that more work needs to be done on capturing requirements before we start on an object-oriented analysis of a given problem. The fact that little attention has been paid to requirements has led to a number of difficulties when developing systems using OMT. We suspect that the same problems have been encountered with other methods (for example, Fusion and, to a lesser extent, Objectory); however, this article focuses on OMT since this is the method we are most familiar with.
In this section we discuss some of the main bottlenecks associated with the current technology. In later sections we show how such problems can be partially or completely resolved by using requirements determination techniques.
The main problems are now listed. A number of these problems have already been discussed in [reference 1].

 

Lack of suitable partitioning

One of the most important and difficult decisions which must be made during project development is how to partition a problem into smaller, manageable chunks or subsystems so that each subsystem can be separately analysed and designed. The absence of such a partitioning step leads to a number of serious problems:

 

In short, the fact that a given system is not partitioned at a very early stage leads to maintainability and scalability problems. OMT does partition the system into a number of subsystems during the design phase, but this approach leads to a new set of complications (see [reference 2]).

 

OMT assumes that the problem is (fully) understood

OMT gives few guidelines for problems whose requirements are incomplete. In other words, the method assumes that the objects and classes are known before construction of the object model begins. For this reason, some authors (see [reference 1]) have concluded, rightly or wrongly, that it is best applied to information systems or to re-engineering situations where the data objects have already been identified. In such cases the construction of the object model is relatively simple.

Our claim is that OMT can be applied to many different kinds of problems (for example, real-time systems) if the correct requirements have been found, and that the method is not doomed to solving only administrative systems.

 

Lack of standardisation of the initial requirements document

OMT examines a problem from three different viewpoints (see [reference 3]), namely the object, dynamic and functional (or process) models. The object model is concerned with the main abstractions or classes in the problem domain as well as the long-term structural relationships between them; these are usually generalisation, association and aggregation relationships. Furthermore, the object model is concerned with finding object and link attributes. The dynamic model is concerned with the major external events and with control in the system; it is also important to find object states during this phase. Finally, the functional model is concerned with the transformations of values within the system; here, data flow diagrams play a fundamental role.

A major problem which results from the lack of a clear requirements standard is that the distinction between generalisation, association and aggregation relationships is often misunderstood. Worse still, inheritance is misapplied in many situations, which ultimately leads to major maintenance problems. The root cause is that incorrect relationships are deduced from the requirements.
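
To make the distinction concrete, the fragment below is a minimal C++ sketch (the class names are invented for illustration and do not come from any particular requirements document) showing how the three kinds of relationship typically end up in code: generalisation as inheritance, association as a non-owning link, and aggregation as a whole that contains its parts. Deducing the wrong relationship at the requirements stage means choosing the wrong construct here.

    #include <string>
    #include <vector>

    // Generalisation: a SavingsAccount is a kind of Account.
    class Account
    {
    public:
        Account() : balance(0.0) {}
        virtual ~Account() {}
        virtual double Balance() const { return balance; }
    protected:
        double balance;
    };

    class SavingsAccount : public Account        // inheritance (generalisation)
    {
    public:
        SavingsAccount() : accruedInterest(0.0) {}
        double Balance() const { return balance + accruedInterest; }
    private:
        double accruedInterest;
    };

    // Association: a Customer knows about Accounts but does not own them.
    class Customer
    {
    public:
        void AddAccount(Account* a) { accounts.push_back(a); }   // non-owning link
    private:
        std::string name;
        std::vector<Account*> accounts;          // association
    };

    // Aggregation: a Bank consists of Branches (whole-part).
    class Branch
    {
        // ...
    };

    class Bank
    {
    private:
        std::vector<Branch> branches;            // aggregation
    };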

Given that a problem can be analysed from three almost orthogonal viewpoints, the question is: why are requirements documents not written in such a way that the analyst can easily find the information needed for each view? It would be highly convenient if a requirements document could be created which reflects the three different analysis views!

The lack of standardisation of requirements documentation is not limited to OMT. The main consequence of this state of affairs is that the requirements document must be read and understood in its entirety when we wish to construct each of the above three models. This places a heavy burden on the readers, who have to wade through information which is not immediately relevant to the activity at hand. For example, the pump delivery system in [reference 4], which was solved using the HP Fusion method, has an initial requirements document which must be read several times before we can ascertain what the most important classes in the problem domain are. It would have been much easier to understand if the requirements had been set up in such a way that they could easily be mapped to the three analysis views.

 

The difficulty in separating the problem from the solution

One of the major problems with object-oriented analysis methods in general, and OMT in particular, is knowing when we are in the problem domain and when we are in the solution domain. In our opinion this is a common problem with many methods (the HP Fusion method being a good example). This is the so-called 'wicked problem' (see [reference 1]), by which we mean that OMT does not prevent a solution-oriented model from being constructed. This leads to a situation in which the specifications become intertwined with the design. There is a need to separate design issues from those which belong to the requirements stage of the software lifecycle.
What is needed is a broader approach to the initial stages of the software development process.

 

OOA is difficult for domain experts and users

Object-oriented technology is difficult to learn and to apply correctly. It is fair to say that many experienced DP personnel have the greatest difficulty in moving to this technology. All the more reason why we should not burden users with the delights of object diagrams, Harel statecharts and object flow diagrams! On the other hand, we must have some other means of communicating our ideas to the customer; a more general conclusion is that we need to adopt 'ethnomethodology' (see [reference 5]), in which the analyst gathers information in naturally occurring situations where the participants are engaged in ordinary, everyday activities.

 

What has happened to Information Hiding?

Most developers and programmers know about the benefits of using encapsulation and information hiding at the class level. For example, in C++ it is possible to hide the implementation (member data) of a class in its so-called private section, thus making it inaccessible to client code. The same principle holds in more general situations. David Parnas (see [reference 7]) introduced the term Information Hiding, and it remains one of the most powerful techniques available for creating flexible and robust software. However, it has been neglected in the OOA literature, and this absence leads to a number of problems when the software needs to be modified. What we would like to see is Information Hiding applied in all phases of the object lifecycle.
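
To make the class-level case concrete, the fragment below is a minimal C++ sketch (the class and member names are invented for illustration): the internal representation lives in the private section, so it can later be changed without touching any client code.

    // Information hiding at the class level: the representation (degrees
    // Celsius) is kept in the private section, so it can later be changed
    // (to Fahrenheit, to a sensor handle, ...) without affecting clients.
    class Temperature
    {
    public:
        explicit Temperature(double celsius) : valueC(celsius) {}

        double Celsius() const    { return valueC; }
        double Fahrenheit() const { return valueC * 9.0 / 5.0 + 32.0; }

    private:
        double valueC;   // hidden implementation, inaccessible to client code
    };

    // Client code depends only on the public interface.
    double setpointMargin(const Temperature& current, const Temperature& desired)
    {
        return desired.Celsius() - current.Celsius();
    }
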
This concludes our list of the problems that we have encountered with classical OMT. The next section deals with resolving a number of them.

 

Setting up a Requirements Infrastructure

Having accepted that these are major obstacles to successful system development using object-oriented technology, the challenge is now to set up an infrastructure in which these problems can be resolved. We present and elaborate some of the techniques which help us solve and avoid the problems of the previous section. These are the basic ingredients from which we create a standard requirements document. Many of the following topics will be discussed in greater detail in the next articles in this series.

 

Partitioning

A number of system development methodologies (such as Shlaer-Mellor, see [reference 8]) require that an analyst partitions a system into separate subject matters, known as domains. Each domain is analysed separately and the domains are connected through interfaces during design and implementation. OMT, on the other hand, does not partition the system as such but uses the concept of a module. This concept is rather artificial in our opinion and it does not really help when developing large systems.

Although there are no hard and fast rules for partitioning a system into domains or subsystems, we have found that applying the ANSI/SPARC model (see [reference 9]) to object-oriented development has proved useful. Based on this model we can partition all systems, programs and processes into three separate and more or less independent domains. These domains are based on functionality and are sometimes given the synonyms expertise area, logical subsystem or subject area.

In order to use and apply the ANSI/SPARC model we need to know how it is partitioned and what its components are. In general, the model consists of three layers, each with its own set of responsibilities; more precisely, it is partitioned into the internal, conceptual and external layers. The external layer is concerned with that part of the application which is directly visible to clients (for example, GUI software); for this reason it is sometimes called the presentation layer. The internal layer is concerned with the 'raw' or unprocessed data which will be transformed in one or more ways before being delivered to the external layer. Finally, the conceptual layer is the intermediary between the internal and external layers; it represents the business rules and core activities of an application. In a more general business setting, we see the ANSI/SPARC model being used in the literature (see, for example, [reference 10], page 52). In business environments, for instance, we are concerned with converting raw materials into sellable products by using the knowledge and intelligence of the people in an organisation. In such cases we can think of a workflow scheme consisting of three domains for input (internal layer), processing (conceptual layer) and output (external layer). Specifically, each layer is concerned with the following issues:

 

The different layers may communicate with each other in different ways. For example, a system may be implemented as a batch program in which the output from the internal layer functions as input to the conceptual layer, while the output from the conceptual layer is used as input to the external layer. In such cases there is a one-way flow of information, but it is also possible for pairs of layers to communicate in a peer-to-peer fashion. For example, the internal and conceptual layers may both be active and able to send information to each other; in this case we speak of collaborating layers.
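
As a minimal C++ sketch of the batch style of communication just described (the class names and the averaging rule are invented purely for illustration), each layer can be modelled as a class and the one-way flow as a simple pipeline from the internal layer to the external layer.

    #include <cstddef>
    #include <string>
    #include <vector>

    // Internal layer: delivers the 'raw', unprocessed data.
    class InternalLayer
    {
    public:
        std::vector<double> RawReadings() const
        {
            // in practice: read from a file, database or measuring device
            return std::vector<double>(10, 21.5);
        }
    };

    // Conceptual layer: applies the business rules to the raw data.
    class ConceptualLayer
    {
    public:
        double AverageReading(const std::vector<double>& raw) const
        {
            if (raw.empty()) return 0.0;
            double sum = 0.0;
            for (std::size_t i = 0; i < raw.size(); ++i) sum += raw[i];
            return sum / raw.size();
        }
    };

    // External (presentation) layer: formats the result for the client.
    class ExternalLayer
    {
    public:
        std::string Report(double average) const
        {
            return "Average reading: " + std::to_string(average);
        }
    };

    // Batch-style, one-way flow: internal -> conceptual -> external.
    std::string RunBatch(const InternalLayer& in,
                         const ConceptualLayer& rules,
                         const ExternalLayer& gui)
    {
        return gui.Report(rules.AverageReading(in.RawReadings()));
    }

In a collaborating (peer-to-peer) arrangement the layers would instead hold references to each other and exchange messages in both directions.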

 

Getting the Structure Right

The process of partitioning a problem into smaller domains using the ANSI/SPARC model is a recursive one; it ends when no domain can be decomposed any further. For these 'terminal' domains there is a set of abstractions which realises the functionality of the domain. Once we arrive at this stage we can apply concept mapping techniques, as discussed and elaborated in [references 11 and 12]. These techniques allow the analyst to find the major concepts (later classes) and the links or relationships (such as generalisation, association and aggregation) between them. The advantages of this approach are:

 

Using these maps at an early stage of software development has two advantages: first, it promotes communication between analyst and domain expert; second, it ensures that no design or implementation details creep in, because by definition concept maps deal exclusively with abstractions from the problem domain and not with abstractions from the solution domain.

 

System Behaviour and Scenarios

Concept maps deal with the static structure of the system. We also need to deal with the external events which trigger reactions in the system. For this we use the use-case approach developed by Ivar Jacobson (see [reference 13]). In our approach the use cases are distributed over the different domains produced by the ANSI/SPARC partitioning. The advantages of this approach are:

 

In short, we have combined ANSI/SPARC, concept maps and use cases to partition a given system into a number of almost independent subsystems in which both the structure and behaviour can be properly described.

It is interesting to note that the steps advocated in [reference 15] for analysing large problems are very similar to our own techniques. The major addition or simplification in our case is that we have a reproducible technique for partitioning a system and that we use concept maps to produce stable and accurate object diagrams.

The steps are:

1. Identify the high-level subsystems (by ANSI/SPARC, for example)
2. Distribute the use cases over the subsystems
3. Determine the interfaces between the subsystems (a sketch follows this list)
4. Design each subsystem internally
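
As an illustration of step 3, the fragment below is a minimal C++ sketch (the interface and class names are invented, loosely flavoured by the home heating test case of Part 4; nothing here is prescribed by [reference 15]): a subsystem publishes an abstract interface, and other subsystems are programmed against that interface only, so that each one can subsequently be designed internally (step 4) without affecting the others.

    // Step 3 in miniature: the conceptual subsystem publishes an abstract
    // interface; the external (presentation) subsystem uses it and nothing else.
    class IHeatingControl
    {
    public:
        virtual ~IHeatingControl() {}
        virtual void SetDesiredTemperature(double celsius) = 0;
        virtual double CurrentTemperature() const = 0;
    };

    // External subsystem: depends only on the interface, not on how the
    // conceptual subsystem is implemented internally (step 4).
    class OperatorPanel
    {
    public:
        explicit OperatorPanel(IHeatingControl& ctrl) : control(ctrl) {}

        void RaiseSetpoint()
        {
            control.SetDesiredTemperature(control.CurrentTemperature() + 1.0);
        }

    private:
        IHeatingControl& control;
    };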

 

What are the consequences of our Approach?

The approach taken in this article follows from our basic objective of creating requirements documents which can be used as input to OMT or the Unified Method. The results are not revolutionary in themselves, but they create an environment in which all problems are described in a standard fashion. In this way, we hope to improve the software process by producing standard documents. The ultimate objective is to reduce cycle times and to introduce a factory-based approach to creating software (see [reference 14]).

 

Customer Experience

It has taken us a number of years to realise that we must start with requirements analysis and determination for a given problem, and that it is not enough to start with object diagrams and inheritance hierarchies. Many of our customers agree on this point. We have found that they are able to understand and create a requirements document within days, but most find it difficult to partition a problem into domains. The method has been applied to problems in mechanical engineering, holography, accountancy, EDI and the construction of reusable components. Feedback has been positive, mainly due to the improved communication between those involved in the software development process.

In the next article we shall discuss the steps needed to create a requirements document which can be used as input to the object, dynamic and functional views in OMT.

 

Acknowledgements

This article contains excerpts from course materials and tutorials on OORD by Datasim BV, reproduced with permission.

 

About the Author

Daniel J. Duffy is a director and founder of Datasim BV, Amsterdam, a training and software development organisation which has been involved with OOT since 1988. He is chief designer of CADObject, a class library consisting of 600 C++ classes which is used to build applications for CAD, computer graphics, engineering and simulated worlds. He has been involved in many types of OO projects in finance, CAD, logistics and engineering. He is convinced that software development should be based on a building-block or factory foundation and that new applications and prototypes should be built using ideas, architectures and designs carried over from previous projects. Duffy has a Ph.D. in numerical mathematics from Trinity College, Dublin and can be reached at dduffy@datasim.nl.

 

References

  1. Fayad, M.E., W.-T. Tsai, R.L. Anthony and M.L. Fulghum, Object Modeling Technique (OMT): Experience Report, JOOP, November/December 1994.
  2. Gilliam, C., An Approach for Using OMT in the Development of Large Systems, JOOP, February 1994.
  3. Rumbaugh, J., M. Blaha, W. Premerlani, F. Eddy and W. Lorensen, Object-Oriented Modeling and Design, Prentice Hall, Englewood Cliffs, NJ, 1991.
  4. Coleman, D., P. Arnold, S. Bodoff, C. Dollin, H. Gilchrist, F. Hayes and P. Jeremaes, Object-Oriented Development: The Fusion Method, Prentice Hall, Englewood Cliffs, NJ, 1994.
  5. Goguen, J., Formality and Informality in Requirements Engineering, Proceedings of the IEEE International Conference on Requirements Engineering, IEEE CS Press, Los Alamitos, CA, 1996.
  6. Siddiqi, J. and M. Chandra Shekaran, Requirements Engineering: The Emerging Wisdom, IEEE Software, March 1996.
  7. Parnas, D., On the Criteria to be Used in Decomposing Systems into Modules, Communications of the ACM, December 1972.
  8. Shlaer, S., A Comparison of OOA and OMT, Project Technology Inc., Berkeley, CA, 1992.
  9. Tsichritzis, D. and A. Klug (eds.), The ANSI/X3/SPARC DBMS Framework: Report of the Study Group on Database Management Systems, Information Systems, Volume 3, pp. 173-191, Pergamon Press, Oxford, UK, 1978.
  10. Bogan, C. and M.J. English, Benchmarking for Best Practices: Winning Through Innovative Adaptation, McGraw-Hill, New York, 1994.
  11. Duffy, D., Object-Oriented Requirements Analysis: Finding Abstractions and their Relationships, ROAD, January/February 1995.
  12. Duffy, D., From Chaos to Classes: Software Development in C++, McGraw-Hill, London, 1995.
  13. Jacobson, I., M. Christerson, P. Jonsson and G. Overgaard, Object-Oriented Software Engineering: A Use Case Driven Approach, Addison-Wesley, Wokingham, UK, 1992.
  14. Cusumano, M.A., Japan's Software Factories: A Challenge to U.S. Management, Oxford University Press, 1991.
  15. Jacobson, I., S. Bylund, P. Jonsson and S. Ehneboon, Using Contracts and Use Cases to Build Pluggable Architectures, JOOP, May 1995.

 
