COMPUTER 0018-9162/96/$5.00 © 1996 IEEE
Vol. 29, No. 10: OCTOBER 1996, pp. 47-58

Contact Tsai at the Dept. of Computer Science, Univ. of Minnesota, Minneapolis, MN 55455; tsai@cs.umn.edu.

Advances in Software Engineering

C.V. Ramamoorthy University of California, Berkeley

Wei-tek Tsai University of Minnesota

Software engineering progress has been spurred by advances in applications and enabling technologies. In this overview, the authors highlight software development's crucial methods and techniques of the past 30 years.


Software is the key technology in applications as diverse as accounting, hospital management, aviation, and nuclear power. Application advances in different domains such as these, each with different requirements, have propelled software development from small batch programs to large, real-time programs with multimedia capabilities. To cope, software's enabling technologies have undergone tremendous improvement in hardware, communications, operating systems, compilers, databases, programming languages, and user interfaces, among others. In turn, those improvements have fueled even more advanced applications.

Improvements in VLSI technology and multimedia, for example, have resulted in faster, more compact computers that significantly widened the range of software applications. Database and user interface enhancements, on the other hand, have spawned more interactive and collaborative development environments. Such changes have a ripple effect on software development processes as well as on software techniques and tools. In this article, we take a brief look at how the many subdisciplines that make up software engineering have evolved.


DEVELOPMENT PROCESS

Many processes have been proposed to systematically develop software in small, manageable phases and have served as a mechanism to evaluate, compare, and improve the effectiveness of software development. In addition, these processes have suggested techniques and tools for development, personnel training, and overall project planning and scheduling. Some software development processes, like Winston Royce's waterfall model,1 separate development into well-defined phases (requirements analysis, design, coding, and testing), which have distinct goals and are performed iteratively.

When the problem or its domain cannot be well understood during specification, a prototyping process (which can be incorporated into the requirements phase) is used to develop a system model to test requirement feasibility, appropriateness, and correctness. Sample applications include AI and graphical user interfaces.

Barry Boehm's spiral model2 integrates prototyping with the waterfall model. In the spiral model, requirements, design, and coding are performed cyclically, with each cycle having a modified requirement. The spiral model exemplifies software development's iterative, evolutionary nature as the software is built with a subset of requirements, which are then refined as the software evolves. Some companies use spiral model variants but emphasize generating a working prototype at each stage of development (intermediate checkpoints) to reduce the system test burden at the end of the life cycle.

Evolution

As applications grow and their domains become more complex, the development processes naturally evolve (see the "Extended software development process" sidebar). Object-oriented development processes were developed, for instance, to incorporate modularity, abstraction, and reuse, and to promote programming-in-the-large.

Reusability is incorporated on the basis of composition (reuse existing components as-is), inheritance (reuse components with specialization or extension), design components (design patterns and architectures, as described in the "Software patterns: Design utilities" sidebar), and code (packages and modules).

Real-time and concurrent system domains have also contributed to process evolution. Because processes in these systems emphasize formal, rigorously verified requirements that integrate behavioral and functional views of software, safety and reliability processes have been integrated into the life cycle.

Further influences on software processes come from constraints imposed by regulatory and certification agencies to ensure repeatability and predictability. Key standards for software productivity and quality include the ISO 9000 criteria and the Software Engineering Institute's Capability Maturity Model.3

Various CASE tools have automated parts of the software development process through code generation, fourth-generation languages, requirements modeling, formal specification and verification, visual programming, and configuration management. Another force for evolution is knowledge-based development, whose processes convert requirements to code using correctness-preserving transformations.

Software development processes have undergone some changes on the basis of observed results. For example, experienced developers today are likely to spend much more time in the requirements and design phase than on changing the source code, because this approach has been shown to be more productive than directly changing and debugging code. The IBM Cleanroom process, initially proposed by Harlan Mills, incorporates this approach into the development process by having the developers write code without compilers, thereby making them inspect code thoroughly before compiling and debugging it. Similarly, the process also has verification-based inspections carried out at each phase by independent teams. The implementation team is different from the specification team, and this forces the specification team to come up with relatively correct, complete, and consistent specifications.

Collaboration

Large project development has almost always been a group activity, with various engineers and domain experts collaborating to successfully complete a project. Tools that support collaborative software engineering and development are called computer-supported cooperative work applications, or groupware. Groupware applications (for brainstorming, design decision making, strategic planning, problem solving, conflict resolution, and software inspection) let people at different locations cooperate in creating, organizing, and sharing information during software development.

Traditional development processes chiefly focus on the developer's view of software development and neglect the end users' views. As software becomes easier to use, end users' domain knowledge can be combined with the analyst's technical knowledge to realize an effective software system. Multimedia and interactive technologies can play a vital role in providing an effective communication channel between the users and analysts.

Future processes

Because the software market continually evolves, software applications must adapt to newer platforms, environments, and changing demands. This means that rapid development processes for evolving software will continue to emerge in response to these pressures.



REQUIREMENTS ENGINEERING

Requirements engineering is the process of specifying the user's needs as analysis components for use during system development. The resulting specifications guide software design, implementation, and testing. Any faults or inconsistencies in the requirements documents, if not detected early, can significantly increase the development cost.

One aim of requirements engineering is to determine the feasibility and cost of the system being built. Prototyping and simulations help developers better understand user requirements as well as design the initial product. In particular, when users have limited computer knowledge but extensive domain knowledge, as in the case of physicians and cardiac medical devices, prototyping has effectively determined product features as well as their user interfaces.

Other requirements engineering tools, such as simulation packages that support requirements modeling and perform simulations, can be used by both users and analysts to understand the problem and its solution.

Consistency and completeness (C&C) checks of requirements documents, a verification and validation technique, are important for identifying and removing inconsistencies and omissions between the user requirements documents and the analysis documents.

C&C checks can be either external, to ensure that the analysis documents meet the user requirements, or internal, to ensure that the analysis products do not contain conflicting information and that all the required information is present.

To support the C&C checks during the requirements analysis phase, some tool support is useful, such as cross-reference tables that maintain the traceability information. Such tools can automate some aspects of the C&C checks, but they can perform only certain mechanical checks on the requirements artifacts. Manual checks such as inspections still play an important role during the C&C activities for detecting possible semantic inconsistencies in the specifications.
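
To make the mechanical side of this concrete, here is a minimal Java sketch (hypothetical names, not a tool from the article) of the kind of check a cross-reference table enables: every requirement identifier should trace to at least one analysis or design element, and untraced requirements are flagged for manual review.

    import java.util.*;

    // Hypothetical traceability table: requirement IDs mapped to the
    // analysis/design elements that claim to satisfy them.
    public class TraceabilityCheck {
        public static void main(String[] args) {
            Map<String, List<String>> trace = new HashMap<>();
            trace.put("REQ-1", Arrays.asList("UseCase-Login", "Class-Session"));
            trace.put("REQ-2", Arrays.asList("Class-AuditLog"));
            trace.put("REQ-3", Collections.emptyList());   // not yet covered

            // Mechanical C&C check: flag requirements with no trace links.
            for (Map.Entry<String, List<String>> e : trace.entrySet()) {
                if (e.getValue().isEmpty()) {
                    System.out.println("Untraced requirement: " + e.getKey());
                }
            }
        }
    }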

Languages and tools

Requirements languages and tools have also evolved over the years to support changing application requirements. Natural-language specifications were the earliest forms of requirements and are still widely used. Although easy to write and understand, they suffer from specification ambiguities.

Data-rich business applications such as banking led to semiformal approaches such as structured analysis and graphical notations such as dataflow diagrams for specifications. Control-rich applications, such as telephony, have led to notations such as finite-state machines and Petri nets for specifications. Finite-state machines were later extended, in languages such as RSL and Statecharts, to support hierarchy and concurrency for modeling complex real-time systems.4
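
As a rough illustration of the finite-state-machine style of specification (a toy example, not RSL or Statecharts), the following Java sketch models a fragment of call handling as states, events, and a transition function; the state and event names are invented.

    // A toy finite-state machine for a call: states plus a transition function.
    public class CallFsm {
        enum State { IDLE, DIALING, RINGING, CONNECTED }
        enum Event { OFF_HOOK, DIGITS_DONE, ANSWER, HANG_UP }

        // Returns the next state for a given state/event pair; events that
        // are not legal in the current state leave the state unchanged.
        static State next(State s, Event e) {
            switch (s) {
                case IDLE:      return e == Event.OFF_HOOK    ? State.DIALING   : s;
                case DIALING:   return e == Event.DIGITS_DONE ? State.RINGING   : s;
                case RINGING:   return e == Event.ANSWER      ? State.CONNECTED : s;
                case CONNECTED: return e == Event.HANG_UP     ? State.IDLE      : s;
                default:        return s;
            }
        }

        public static void main(String[] args) {
            State s = State.IDLE;
            for (Event e : new Event[] { Event.OFF_HOOK, Event.DIGITS_DONE, Event.ANSWER }) {
                s = next(s, e);
                System.out.println(e + " -> " + s);
            }
        }
    }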

Although early analysis techniques focused on input-output transformations, information models, and control aspects, OO analysis emphasizes domain modeling and the data aspects of applications by representing classes, objects, and the relationships between objects, such as specialization and association.

Techniques

The rise in safety-critical applications has led to the development of many formal specification and verification techniques. Formal software specifications have well-defined semantics and can be analyzed with mathematical methods. In particular, C&C checks can be performed to identify potential specification problems. Their main usage has been in applications where the cost of failure can be extremely high.

Summary

As requirements engineering has matured, techniques such as scenario generation, viewpoint resolution, decomposition, multilevel specification checking, inspections, and increased user involvement have taken on new significance. Verification and validation activities to identify inconsistencies and incompleteness between the user requirements and the analysis artifacts are important to eliminate potential problems early in the life cycle. Prototyping and simulations can be used for feasibility and cost estimates as well as for user-interface designs and performance studies. A balance between specification language formality and simplicity is needed to support the various specification aspects such as data, transformation, control, and timing, as well as problem understanding and analysis.


DESIGN

Software design, which is a creative process, describes how software requirements are achieved. Among early approaches to make software design more systematic, Ed Yourdon's structured design emphasized top-down process decomposition to identify and define the functional modules and their interactions. Michael Jackson's method focused on the data transformation from input to output and on designing the functions to effect the transformation.

A more contemporary approach is OO design, which features a system designed with loosely coupled and highly cohesive components. Abstraction, encapsulation, and reuse are integral to OO design.

Design activities are categorized as high-level and low-level design.

High-level design focuses on decomposition and functional allocation, mapping requirement items to design components, whereas low-level design generally involves code.

Architecture

Depending on system needs, the software architecture used with high-level design can vary. Procedural design with the call-return model is a commonly used architecture, for example. Other architectures include

  • event-driven: independent system features are mapped to external stimuli;
  • layered: system functionality is distributed among different layers, with each layer providing a predefined functionality for the layer above;
  • dataflow or tool-composition (such as Unix pipes): components have well-defined I/O data interface formats;
  • repository or blackboard: shared information is stored in a central repository;
  • client-server: service is distributed among various servers in different locations; and
  • state machine: the system behavior is characterized as sets of states, with a set of operations associated with each state.
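
As a sketch of just one of these styles, the following Java fragment composes a dataflow (pipe-and-filter) pipeline in which each filter exposes the same input-output interface, much as Unix pipes compose tools; the filters are invented for illustration.

    import java.util.function.Function;

    // Pipe-and-filter in miniature: each filter maps text to text, and
    // filters are composed through a uniform interface.
    public class Pipeline {
        public static void main(String[] args) {
            Function<String, String> trim      = String::trim;
            Function<String, String> upperCase = String::toUpperCase;
            Function<String, String> tag       = s -> "[MSG] " + s;

            // Compose the filters, much like chaining commands in a Unix pipe.
            Function<String, String> pipeline = trim.andThen(upperCase).andThen(tag);

            System.out.println(pipeline.apply("  hello, architecture  "));
            // prints: [MSG] HELLO, ARCHITECTURE
        }
    }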

Patterns

A newer design approach documents common software designs as patterns.5 Design patterns are frameworks that software engineers can use to communicate the pros and cons of their design decisions to others. This promotes design qualities such as repeatability (using a design process in more than one context), understandability (communicating the design through proper documentation), and adaptability (the ease with which a design is modified or extended).
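
To make the pattern idea concrete, here is a minimal rendering of one widely documented pattern from the Design Patterns catalog, Strategy; the class names are illustrative only.

    // Strategy pattern in miniature.
    // Problem: a client needs interchangeable algorithms without hard-coding one.
    // Solution: put each algorithm behind a common interface and inject it.
    interface SortStrategy {
        int[] sort(int[] data);
    }

    class AscendingSort implements SortStrategy {
        public int[] sort(int[] data) {
            int[] copy = data.clone();
            java.util.Arrays.sort(copy);
            return copy;
        }
    }

    public class Report {
        private final SortStrategy strategy;

        Report(SortStrategy strategy) { this.strategy = strategy; }

        void print(int[] values) {
            System.out.println(java.util.Arrays.toString(strategy.sort(values)));
        }

        public static void main(String[] args) {
            new Report(new AscendingSort()).print(new int[] { 3, 1, 2 });
        }
    }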

Reuse

Overall architecture and program designs can be reused. As object technology matures, the scope of reuse in the design phase of the life cycle has dramatically increased. Domain-specific design patterns and frameworks will also document solutions to recurring design problems for later reuse.


PROGRAMMING LANGUAGE

The earliest programs, written in machine code, were difficult and time-consuming to develop, understand, and debug. Moreover, the code was not portable. Programs were next written in assembly languages, then in high-level languages such as Fortran, Cobol, Pascal, Ada, Algol, and C, and more recently in OO programming languages such as Smalltalk.6

The overriding characteristic of this evolution has been the increasing support for abstractions and modularity that facilitate program development, reduce cycle time, and improve program understanding and maintenance.

Concepts

Structured programming concepts have profoundly affected programming languages. Edsger Dijkstra suggested that use of the goto statement should be limited.7 Thus modern languages support three control structures (sequence, selection, and iteration) for writing structured programs. Modularity has significantly reduced program complexity by dividing a program's development tasks into smaller, more manageable modules.
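
A trivial Java fragment illustrates the point: sequence, selection, and iteration are enough to express the logic with no goto.

    // Structured control flow only: sequence, selection (if), iteration (for).
    public class SumPositives {
        static int sumPositives(int[] values) {
            int total = 0;                 // sequence
            for (int v : values) {         // iteration
                if (v > 0) {               // selection
                    total += v;
                }
            }
            return total;
        }

        public static void main(String[] args) {
            System.out.println(sumPositives(new int[] { 4, -2, 7 })); // prints 11
        }
    }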

Other concepts that influenced language development include the encapsulation of data and behavior, introduced in the Simula 67 language. Encapsulation has since been recognized as a key concept in software development and is supported in many modern OO programming languages such as Smalltalk.

Information hiding8 limits access to only relevant data. Further, block structures limit the scope in which an identifier can be used in a program. Encapsulation, information hiding, and block structure are indispensable features of modern programming languages.

To combat rising software development costs, attention has turned to reusing and extending existing components and systems. Reusability enables software firms to leverage their existing applications in building newer applications, which reduces both cost and effort, particularly when product families are involved. OO languages can also support reusable domain abstractions with inheritance (both interface and implementation can be inherited) and parameterization.
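
The hypothetical Java sketch below pulls these language ideas together: private fields hide the representation (encapsulation and information hiding), a subclass reuses the base class through inheritance, and a type parameter stands in for parameterization. (Generics arrived in Java well after the era this article describes; they are used here only to illustrate parameterized reuse.)

    // Encapsulation and information hiding: the balance is reachable only
    // through the public interface.
    class Account {
        private long balanceCents;                     // hidden representation

        void deposit(long cents) { balanceCents += cents; }
        long balance()           { return balanceCents; }
    }

    // Reuse by inheritance: specialize the base class without rewriting it.
    class SavingsAccount extends Account {
        void addInterest(double rate) {
            deposit(Math.round(balance() * rate));
        }
    }

    // Reuse by parameterization: one container works for any element types.
    class Pair<A, B> {
        final A first;
        final B second;
        Pair(A first, B second) { this.first = first; this.second = second; }
    }

    public class ReuseDemo {
        public static void main(String[] args) {
            SavingsAccount acct = new SavingsAccount();
            acct.deposit(10_000);
            acct.addInterest(0.05);
            Pair<String, Long> labeled = new Pair<>("balance", acct.balance());
            System.out.println(labeled.first + " = " + labeled.second); // balance = 10500
        }
    }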

Other influences on programming languages include GUIs and the Internet. Previously, GUI code was difficult to develop because programmers had to write textual code to generate the interfaces and their behavior. Now, visual programming languages such as Visual Basic and Visual C++ have greatly reduced GUI development effort.

The Internet's influence is as a platform for an integrated environment to support GUI design, network communications, and other program development features. Java, for example, is an integrated programming language environment recently developed for Internet software development.

Summary

These concepts emerged as a result of problems experienced with software developed with low-level machine code and assembly languages. The 1970s saw the development of Pascal and Algol, languages that emphasized simplicity and ease of use, and of C and Ada, which focused on incorporating language constructs that provided support for complex tasks such as systems programming and real-time concurrent embedded system development. The development of visual programming languages in the late 1980s and early 1990s for GUI design exemplifies the need to balance simplicity and flexibility in programming languages. The development of Java, which eliminates features such as pointers and overloading, also demonstrates the need to keep programming languages simple, while still providing powerful language constructs for performing different tasks.


TESTING

Software reliability can be achieved through software testing, software inspection, reliability modeling, fault tolerance, formal specification and verification, and simulation. Like other aspects of software engineering, testing techniques, tools, and management have matured over the years. In part, this has been necessitated by systems' increasing reliance on software, language evolution, and legacy code maintenance. Mission-critical systems, of course, have always required extensive testing to ensure correct behavior.

Traditional techniques to test procedural programs include dataflow testing, structural testing, symbolic execution, random testing, mutation testing, reliability model-based testing, and functional testing.9 Techniques for testing OO programs, such as method and message sequencing, are becoming more prevalent.

Regression

Legacy code forced a change in software engineering's focus to consider both development and maintenance. One outcome is regression testing, on which many companies spend thousands of hours and dollars. Some of the relevant issues that need to be resolved are test case selection, test case dependency, and test case revalidation.

To meet competitive pressures, software development is usually under fire to release "good enough" software as soon as possible. This situation has raised the questions of when to stop testing and how much to test, which have been addressed by software reliability models.10
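
One widely used family of such models treats failures observed during test as a nonhomogeneous Poisson process. The Goel-Okumoto form below is a standard textbook instance (not necessarily the specific model of reference 10); a and b are parameters fitted from observed failure data.

    \mu(t) = a\left(1 - e^{-bt}\right), \qquad
    \lambda(t) = \frac{d\mu}{dt} = ab\,e^{-bt}, \qquad
    R(x \mid T) = \exp\!\left[-\bigl(\mu(T+x) - \mu(T)\bigr)\right]

Here \mu(t) is the expected number of failures found by test time t, \lambda(t) is the failure intensity, and R(x \mid T) is the probability of failure-free operation for an additional time x after testing for time T; under this kind of model, testing can stop once \lambda(T) falls below a target intensity.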

As GUIs become more popular, new and innovative testing techniques, such as capture-and-replay, are needed to test them.

Testing individual components is not as difficult as testing the whole system. Most testing techniques do not address integration, instead focusing on unit and module testing.

Inspection

Software inspection techniques, first proposed by Michael Fagan in 1976, vary and are widely practiced. Some inspections look only at specific system aspects; others apply information generated from the previous development phase to the next phase (for example, using the checklists generated during the requirement phase in the design inspection). Verification-based inspection requires a rigorous checking process and uses predicate calculus and program verification techniques to guide the inspection. Inspection, no longer limited to the code, has been extended to requirements, design, and test. To ensure reliability in those systems that require it, inspections can be repeated. N-fold inspection has been shown useful in detecting faults at various stages of the software development process.

Other techniques

Group testing is a practical but not widely known technique that identifies faulty components. This technique uses combinations of working and new components to find out which new components are faulty. Even though this technique is not safe (that is, not 100 percent reliable), it is still very useful and can be automated to test large programs.
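
A hedged sketch of the idea in Java: given the new components and a test harness that reports whether a configuration of known-good components plus a candidate subset passes, bisection over the subset narrows down the faulty components. The sketch assumes a failing group contains at least one faulty component and a passing group contains none, which is exactly why the technique is not 100 percent safe; the component names and harness are invented.

    import java.util.*;
    import java.util.function.Predicate;

    public class GroupTesting {
        // Recursively bisect the candidate set; 'passes' runs the system
        // built from the known-good components plus the given candidates.
        static List<String> findFaulty(List<String> candidates, Predicate<List<String>> passes) {
            if (candidates.isEmpty() || passes.test(candidates)) {
                return Collections.emptyList();           // group is (apparently) clean
            }
            if (candidates.size() == 1) {
                return candidates;                        // isolated a faulty component
            }
            int mid = candidates.size() / 2;
            List<String> faulty = new ArrayList<>();
            faulty.addAll(findFaulty(candidates.subList(0, mid), passes));
            faulty.addAll(findFaulty(candidates.subList(mid, candidates.size()), passes));
            return faulty;
        }

        public static void main(String[] args) {
            // Hypothetical harness: the system fails whenever componentC is included.
            Predicate<List<String>> harness = group -> !group.contains("componentC");
            List<String> newComponents =
                Arrays.asList("componentA", "componentB", "componentC", "componentD");
            System.out.println(findFaulty(newComponents, harness)); // [componentC]
        }
    }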

Techniques that allow selective reuse of test cases can successfully reduce both the cost and effort of testing. By extensively reusing test code from previous applications and by using high-level abstractions and scenarios during development, testing can be reduced from months to weeks. Object-oriented technology can be used for writing test cases by applying high-level abstractions; test cases can be generated via a mapping between the abstractions and corresponding code.

Formal verification techniques aim at proving that a program works correctly for all possible sets of inputs. Since Robert Floyd proposed formal verification techniques for software, techniques such as assertions, symbolic execution, and pre- and postconditions have been used for mission- and safety-critical systems. C.A.R. Hoare, Edsger Dijkstra, and others have done considerable work on these techniques. These and similar techniques will continue to be useful in highly reliable system development.
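
A small Java illustration of the assertional style (a sketch of the idea, not a proof): the precondition is checked explicitly, and the postcondition is stated as an assert, which the JVM evaluates when run with -ea.

    public class IntSqrt {
        // Precondition: n >= 0.  Postcondition: r*r <= n < (r+1)*(r+1).
        static int isqrt(int n) {
            if (n < 0) {
                throw new IllegalArgumentException("precondition violated: n >= 0");
            }
            int r = 0;
            while ((long) (r + 1) * (r + 1) <= n) {
                r++;
            }
            assert (long) r * r <= n && n < (long) (r + 1) * (r + 1) : "postcondition violated";
            return r;
        }

        public static void main(String[] args) {
            System.out.println(isqrt(10)); // prints 3; run with "java -ea" to check the assertion
        }
    }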

Summary

In spite of available tools, testing is still costly and some steps, such as test case generation, are still labor-intensive. Testing should be embedded into the entire software life cycle: requirements review, design review, code inspection, unit testing, multiple stages of integration testing, functional testing, acceptance testing, and field testing. The objectives of each phase (and the tools needed in each phase) should be clearly defined, and the responsibilities and procedures associated with each phase should be established. These different phases of testing can proceed concurrently with each other. Bug and modification tracking is also important during each phase of the life cycle.


RELIABILITY AND SAFETY

If it does not accomplish its tasks during operation, software is said to have failed. Failures are caused by faults. System faults or defects are the result of human errors made during the development process.

Making the system fault-free is not always possible, so mechanisms to detect faults and recover from them may have to be built into the system. Redundancy, in both space and time, is a major technique to implement fault tolerance in software. Traditional software fault-tolerant techniques include retries (where the instructions are executed again and transient failures may be masked out); recovery blocks, developed by Brian Randell, in which redundant software routines are made available for execution once the system detects the failure of primary software routines; N-version programming, developed by Algirdas Avizienis; and triple-modular redundancy. These techniques use failure detection, graceful degradation, and sometimes system reconfiguration.
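
As a hedged sketch of the recovery block structure (invented routine names, not production code): the primary routine runs first, an acceptance test checks its result, and an alternate routine is tried when the test fails or the primary raises an error.

    import java.util.function.Supplier;
    import java.util.function.Predicate;

    public class RecoveryBlock {
        // Run the primary, check its result with the acceptance test, and
        // fall back to the alternate if the test fails or the primary throws.
        static <T> T execute(Supplier<T> primary, Supplier<T> alternate, Predicate<T> acceptanceTest) {
            try {
                T result = primary.get();
                if (acceptanceTest.test(result)) {
                    return result;
                }
            } catch (RuntimeException ignored) {
                // treat an exception from the primary as a detected failure
            }
            T fallback = alternate.get();
            if (!acceptanceTest.test(fallback)) {
                throw new IllegalStateException("both versions failed the acceptance test");
            }
            return fallback;
        }

        public static void main(String[] args) {
            // Hypothetical routines: the fast primary is faulty for this input.
            double answer = execute(
                () -> Double.NaN,                  // faulty primary result
                () -> Math.sqrt(2.0),              // simpler, slower alternate
                r  -> !Double.isNaN(r) && r > 0);  // acceptance test
            System.out.println(answer);
        }
    }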

The overhead associated with these reliability mechanisms can be considerable. For example, the recovery block scheme increases the system size, requires more development effort, and requires an acceptance test designed into the software to check whether the versions are operating as specified. N-version programming requires a voting mechanism. These added portions of the software often become crucial: while the backup routines are executing, the primary software has already failed, so the system cannot afford another failure.

Overhead may also render these techniques inappropriate in some applications. Commercial, medical, and military systems have different priorities, and they treat software hazards, hazard severity, component criticality, and mission duration differently. The priority of these attributes can lead to distinctly different approaches to reliability and safety. For example, a military shipboard application may assign low importance to size, weight, and power consumption, medium importance to cost, and high importance to safety and reliability, while an implantable medical device may assign highest importance to size, weight, and power consumption, as well as safety, and low importance to cost. The emphasis on size limits the application of redundancy techniques.

Reliability models have often concentrated on the developer's point of view (defect-oriented) rather than on the user's, which is based on how much service is received in the presence of failure. A service-oriented modeling of failure analysis rather than a defect-oriented model will be able to address the consumer's view of software and its failure.

Although reliability can be predicted in hardware once the system structure and component reliability are known, this is not possible in software. For example, software will not age while hardware does, and software is constantly being changed while hardware is relatively stable. The notion of software building blocks, which are fully tested components that can be combined to produce a reliable system, is needed to accurately predict software reliability.


MAINTENANCE

The cost to maintain legacy programs is enormous because of the programs' complexity and size. Les Belady and Manny Lehman studied industrial software maintenance and posited the laws of ever-changing software and increasing complexity.11

Research

Various software maintenance research projects, such as ESPRIT II REDO, have attempted to address the software maintenance and reengineering problem. Research has identified such techniques as fault-prone code segment identification, preventive maintenance techniques, configuration control, traceability, program slicing and its extensions, variable classification, dependency analysis, call-graph generation, entity-relationship diagram generation, ripple-effect analysis,12 and path analysis. However, software maintenance cannot be completely automatic, and most practitioners and researchers take a semiautomatic approach, applying human expertise as needed.
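
As a toy illustration of dependency and ripple-effect analysis (not any of the cited tools), the following Java fragment stores a call graph as a map and computes which modules may be affected, transitively, when one module changes.

    import java.util.*;

    public class RippleEffect {
        // Returns every module that can reach 'changed' through calls,
        // i.e., the modules potentially affected by the change.
        static Set<String> affectedBy(String changed, Map<String, List<String>> calls) {
            Set<String> affected = new HashSet<>();
            Deque<String> work = new ArrayDeque<>();
            work.push(changed);
            while (!work.isEmpty()) {
                String current = work.pop();
                for (Map.Entry<String, List<String>> e : calls.entrySet()) {
                    if (e.getValue().contains(current) && affected.add(e.getKey())) {
                        work.push(e.getKey());        // callers of callers, transitively
                    }
                }
            }
            return affected;
        }

        public static void main(String[] args) {
            Map<String, List<String>> calls = new HashMap<>();   // caller -> callees
            calls.put("ui",      Arrays.asList("billing"));
            calls.put("billing", Arrays.asList("tax", "db"));
            calls.put("tax",     Arrays.asList("db"));
            System.out.println(affectedBy("db", calls));          // billing, tax, ui (in some order)
        }
    }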

Practice

Current software maintenance practices are program understanding, round-trip engineering, increasing end-user involvement, reengineering, object recognition, integration of configuration management and version control and testing tools, and regression testing. For example, program understanding can be broadly divided into two categories: control-based techniques focus on the control aspects first, whereas data-centered techniques focus on the data aspects first.

Some software companies use a maintenance-based software development paradigm, in which software is developed mainly by changing the existing source code. This is particularly the case with compilers and operating systems development. Changes made to the code are first mapped to the user documents, then validated through inspections and/or regression testing. This process is supported by forward-engineering tools such as compilers, debuggers, and configuration management systems.

However, reverse-engineering tools can enhance this process because they can help programmers understand the existing code. This has made round-trip (that is, both forward and reverse) engineering tools important in supporting configuration management, version control, testing, and regression testing.

Trends

Reengineering is integral to software maintenance, and the current trend leans toward reengineering data sets or legacy database systems. Old software often uses old database technology such as VSAM (virtual storage access method) or hierarchical database management systems. The migration from these technologies to relational or OO database management systems is the subject of ongoing research.

In another trend, end users can provide valuable input to software maintenance. They can identify critical parts of the system and the way the system should evolve in terms of performance and features.

Yet another significant trend is reengineering procedural code into its equivalent OO code. Objects can be identified by examining the common blocks in Fortran programs or the file section in Cobol programs. An OO class consists of both data and method definitions, but it is much harder to identify the methods associated with classes than to identify the data. Furthermore, method interactions must be examined to eliminate complex message passing in OO code.
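
A hypothetical sketch of that mapping: data that would live in a Fortran common block or a Cobol file section becomes the private fields of a class, and the subroutines that manipulate it become methods. All names are invented for illustration.

    // Before: a Fortran COMMON block (or Cobol FILE SECTION) holding customer
    // data, manipulated by free-standing subroutines, for example:
    //
    //   COMMON /CUST/ CUSTID, BALANCE
    //   SUBROUTINE CREDIT(AMT) ...
    //
    // After: the shared data becomes private state and the subroutines
    // become methods of one class.
    public class CustomerAccount {
        private final int customerId;
        private double balance;

        CustomerAccount(int customerId, double openingBalance) {
            this.customerId = customerId;
            this.balance = openingBalance;
        }

        void credit(double amount) { balance += amount; }
        void debit(double amount)  { balance -= amount; }
        double balance()           { return balance; }

        public static void main(String[] args) {
            CustomerAccount acct = new CustomerAccount(42, 100.0);
            acct.credit(25.0);
            System.out.println(acct.balance()); // 125.0
        }
    }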

The Internet and related technology will undoubtedly change the way software is maintained. Maintenance may someday be carried out in a distributed, collaborative manner. Software interoperability will become simpler because of platform-independent languages such as Java and protocols such as the Common Object Request Broker Architecture.

Software maintenance will not disappear as software technology improves. Even if code can be automatically generated from design specifications, the specifications must be maintained, and, despite enhanced reusability and reliability from OO development, the software will still change.13


CONCLUSION

Software engineering has changed dramatically in the past 30 years. Although both software's application and its enabling technologies have evolved, the main issues of software engineering remain relatively stable.

Addressing these concerns will not be a simple matter. Hardware technology has matured such that engineers can use exactly the same process to produce reliable designs almost every time, but software technology has not reached that stage of maturity. As an engineering discipline, software engineering techniques must be consistently easy to use and consistently effective in practice to be useful. Software engineering as a whole will eventually mature when any software development group that follows a prescribed process and associated techniques can consistently, and quickly, produce reliable, reusable, and robust software.


References


1. W.W. Royce, "Managing the Development of Large Software Systems: Concepts and Techniques," in Proc. WESCON, 1970, pp. 1-9.
2. B.W. Boehm, "A Spiral Model of Software Development and Enhancement," Computer, May 1988, pp. 61-72.
3. W.S. Humphrey, Managing the Software Process, Addison-Wesley, Reading, Mass., 1989.
4. C.G. Davis and C.R. Vick, "The Software Development System," IEEE Trans. Software Eng., Jan. 1977, pp. 69-84.
5. E. Gamma et al., Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, Reading, Mass., 1994.
6. A. Goldberg and D. Robson, Smalltalk-80: The Language and its Implementation, Addison-Wesley, Reading, Mass., 1983.
7. E.W. Dijkstra, "Go To Statement Considered Harmful," Comm. ACM, Mar. 1968, pp. 147-148.
8. D.L. Parnas, "On the Criteria to be Used in Decomposing Systems into Modules," Comm. ACM, Dec. 1972, pp. 1053-1058.
9. C.V. Ramamoorthy and S.F. Ho, "Testing Large Software with Automated Evaluation Systems," IEEE Trans. Software Eng., 1975, pp. 46-58.
10. J.D. Musa, A. Iannino, and K. Okumoto, Software Reliability: Measurement, Prediction, Application, McGraw-Hill, New York, 1987.
11. L.A. Belady and M.M. Lehman, "A Model of Large Program Development," IBM Systems J., No. 3, 1976, pp. 224-252.
12. S.S. Yau, J.S. Collofello, and T.M. MacGregor, "Ripple Effect Analysis of Software Maintenance," in Proc. IEEE COMPSAC, CS Press, Los Alamitos, Calif., 1978, pp. 60-65.
13. F.P. Brooks Jr., The Mythical Man-Month, Anniversary Edition, Addison-Wesley, Reading, Mass., 1995.

C.V. Ramamoorthy is a professor of electrical engineering and computer science at the University of California at Berkeley. He has been a senior staff scientist at Honeywell and professor of electrical engineering and computer science and the chair of the Computer Science Department at the University of Texas at Austin. Ramamoorthy received a PhD from Harvard. He is the founding editor-in-chief of the IEEE Transactions on Knowledge and Data Engineering; the founding co-editor-in-chief of the Journal of Systems Integration; the cofounder of the IEEE International Conference on Data Engineering; and a founding member of the Board of Directors of the International Institute of Systems Integration. He is a fellow of the IEEE and a senior fellow of IC2 (Innovation, Creativity, and Capital Institute) of the University of Texas. He has received the Taylor Booth Award, the Richard Merwin Award, the Group Award in Education, and the IEEE Centennial Medal.

Wei-tek Tsai is a professor of computer science at the University of Minnesota in Minneapolis. His current interest is in software engineering.

Tsai earned an SB in computer science and engineering from MIT, and an MS and a PhD in computer science from the University of California at Berkeley. He serves on the editorial boards of the IEEE Computer Society Press, the Journal of Software Maintenance, the Journal of Software Engineering and Knowledge Engineering, and the Journal of AI Tools.

James O. Coplien is a principal investigator at Bell Labs, where he specializes in patterns of real-time and fault-tolerant systems in multiparadigm design research, and in studying the social patterns of highly productive software organizations. Contact him at cope@cope.ih.lucent.com, http://www.bell.labs.com/people/cope.

Software patterns: Design utilities

James O. Coplien Bell Laboratories

A pattern is a solution to a problem in a context.

Patterns are a recent software engineering problem-solving discipline that emerged from the object-oriented community. Patterns have roots in many disciplines, including literate programming, and most notably in Christopher Alexander's work on urban planning and architecture.1

The goal of the pattern community is to build a body of literature to support software design and development in general. There is less focus on technology than on a culture to document and support sound design. Software patterns first became popular with the publication of Design Patterns.2 But patterns have been used for domains as diverse as development organization and process, exposition and teaching, and software architecture. At this writing, the software community is using patterns largely for software architecture and design.

Here is a pattern used in Lucent telecommunication products such as the 5ESS Switching System (extracted informally from a 1996 book3):

Name: Try All Hardware Combos

Problem: The control complex of a fault-tolerant system can arrange its subsystems in many different configurations. There are many possible paths through the subsystems. How do you select a workable configuration when there is a faulty subsystem?

Context: The processing complex has several duplicated subsystems including a CPU, static and dynamic memory, and several buses. Standby units increase system reliability. Sixteen possible configurations (64 in the 4ESS) of these subsystems give fully duplicated sparing in the 5ESS. Each such configuration is called a configuration state.

Forces: You want to catch and remedy single, isolated errors. You also want to catch errors that aren't easily detected in isolation but result from interaction between modules. You sometimes must catch multiple concurrent errors. The CPU can't sequence subsystems through configurations since it may itself be faulty. The machine should recover by itself without human intervention; many high-availability system failures come from operator errors, not primary system errors. We want to reserve human expertise for problems requiring only the deepest insights.

Solution: Maintain a 16-state counter in hardware called the configuration counter. There is a table that maps that counter onto a configuration state. The 5ESS switch tries all side 0 units (a complete failure group), then all side 1 units (the other failure group), seeking an isolated failure. When a reboot fails, the state increments and the system tries the reboot again. The subsequent counting states look for multiple concurrent failures caused by interactions between system modules.
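
A heavily simplified sketch of the mechanism, for illustration only (the real 5ESS logic is implemented in hardware and is far more elaborate): a counter indexes a table of configuration states, and each failed reboot advances the counter to the next configuration to try.

    public class ConfigurationCounter {
        // A toy table of configuration states indexed by the counter.
        static final String[] CONFIGURATIONS = {
            "cpu0+mem0+bus0", "cpu0+mem0+bus1", "cpu0+mem1+bus0", "cpu0+mem1+bus1",
            "cpu1+mem0+bus0", "cpu1+mem0+bus1", "cpu1+mem1+bus0", "cpu1+mem1+bus1",
            // ... remaining states would try further cross-coupled combinations ...
        };

        public static void main(String[] args) {
            int counter = 0;
            // Each failed reboot advances the counter and tries the next state.
            while (counter < CONFIGURATIONS.length && !reboot(CONFIGURATIONS[counter])) {
                counter++;
            }
            System.out.println(counter < CONFIGURATIONS.length
                ? "Recovered with " + CONFIGURATIONS[counter]
                : "All configurations exhausted; escalate");
        }

        // Hypothetical stand-in for a reboot attempt: here, anything using mem0 fails.
        static boolean reboot(String configuration) {
            return !configuration.contains("mem0");
        }
    }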

Resulting Context: Sometimes the fault isn't detected during the reboot because latent diagnostic tasks elicit the errors. The pattern Fool Me Once solves this problem. And sometimes going through all the counter states isn't enough; see the patterns Don't Trust Anyone and Analog Timer.

Rationale: The design is based on the failure rates (in FITs) of the hardware modules. The pattern recalls the extreme caution of the first-generation developers of stored program control switching systems.

This is a good pattern because:

  • It solves a problem: Patterns capture solutions, not just abstract principles or strategies.
  • It is a proven concept: Patterns capture solutions with a track record, not theories or speculation.
  • It describes a solution that isn't obvious: Many problem-solving techniques (such as software design paradigms or methods) try to derive solutions from first principles. The best patterns generate a solution to a problem indirectly, an appropriate approach to deal with the most difficult problems of design.
  • It describes a relationship: Patterns don't just describe modules, but describe deeper system structures and mechanisms.
  • It has a significant human component (minimize human intervention). All software serves human comfort or quality of life; the best patterns explicitly appeal to aesthetics and utility.

A pattern language defines a collection of patterns and the rules to combine them into an architectural style. Pattern languages describe software frameworks or families of related systems. Today, the pattern discipline is supported by several small conferences, by a broad spectrum of activities at established software engineering conferences, and by a rapidly growing body of literature.

References


1. C. Alexander et al., A Pattern Language, Oxford University Press, Oxford, UK, 1977.
2. E. Gamma et al., Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, Reading, Mass., 1995.
3. M. Adams et al., "Fault-Tolerant Telecommunication System Patterns," in Pattern Languages of Program Design - 2, J. Vlissides, J. Coplien, and N. Kerth, eds., Addison-Wesley, Reading, Mass., 1996, pp. 549-562.

Extended software development process

Robert C.T. Lai International Software Process Constellation

Competitive pressures are causing companies to add investment justification to the development life cycle prior to or as part of requirements engineering. Investment justification is based on market information, market characterization, and domain characterization. New modes of marketing and distribution, such as World Wide Web-based distribution, will also extend the life cycle to include services such as just-in-time compilation and production. In addition to distribution, training and customer services will also be delivered via the Internet. This will result both from the pull of the market and from the application of new software engineering technologies.

Market forces include

  • the availability of standardized, reusable software building blocks, such as the Java application programming interface and the Microsoft Foundation Classes;
  • the increasing popularity of software-based and software-reliant consumer products and services, such as games, appliances, and automobiles; and
  • the declining cost of hardware and communications.

The expanded software market will support numerous applications implemented for the same domain and will require many variations of a software application within that domain. This will require standardized processes to support the production of software families. David Parnas and Paul Clements hinted at this need in 1986 in their proposal for a rational design process.1

Only recently has the technology for defining and modeling processes been developed sufficiently to precisely describe such rational processes, providing support both for standardizing such processes and for rapid process improvement. An example of such a process is Lucent Technologies' FAST software development process,2 which is being used in Lucent's electronic switching software, 5ESS.

The technologies that support the extended life cycle are:

  • Requirements: Requirements determination techniques, such as commonality analysis,2 support the development of software families in terms of their commonalities and differences. This is a good starting point for the design of a rational software development process because the reasoning in subsequent steps can use the results of commonality analysis as assumptions. This also makes it possible to transform market information to requirements in terms of subrequirements for implementing certain variations among family members.
  • Design: By encoding a methodology such as Booch's object-oriented design method in a rational process, reusable domain abstractions that result from domain engineering can be modeled with object-oriented languages.
  • Implementation: Language constructs such as the package interface, which can have multiple implementations in Java, can support a rational process that includes precisely specified modules (a minimal sketch follows this list).
  • Distribution: With the popularity of Internet software distribution, cross-platform-executable, just-in-time software distribution becomes possible. Java interpreters and CORBA are mechanisms that support such distribution. Distribution then becomes part of the development process, which becomes part of the delivered product.
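
For the implementation point above, a minimal Java sketch (with invented names) of an interface that admits multiple implementations, one per family member:

    // One precisely specified module interface ...
    interface BillingPolicy {
        double charge(int minutes);
    }

    // ... and two interchangeable implementations, selected per product variant.
    class FlatRatePolicy implements BillingPolicy {
        public double charge(int minutes) { return 9.99; }
    }

    class PerMinutePolicy implements BillingPolicy {
        public double charge(int minutes) { return 0.25 * minutes; }
    }

    public class BillingDemo {
        public static void main(String[] args) {
            BillingPolicy policy = new PerMinutePolicy();   // chosen for this family member
            System.out.println(policy.charge(120));         // 30.0
        }
    }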

Under this kind of extended software development, data and metrics are collected from the product and the process, and both help improve the process and products.

Software development processes are mechanisms to evaluate, compare, improve, and automate software development and distribution. Domains, such as real-time and concurrent systems, have contributed to the evolution of processes. Their effect is twofold:

1. Rational processes can be encoded as products and delivered to organizations for enactment, so that the organization has a clearly defined mechanism for following a process that conforms to the SEI's CMM or to ISO 9000.
2. Encoding of a rational process can include data collection while the process is being enacted; such data can be added to the process to guide its improvement in time to generate a new version of the process that can be directly inserted into the enactment environment.

In this way, software maintenance practices are incorporated into software distribution and installation. The development process is then a part of the software product, making just-in-time production more realistic.

References


1. D.L. Parnas and P.C. Clements, "A Rational Design Process: How and Why to Fake It," IEEE Trans. Software Eng., Feb. 1986, pp. 251-257.
2. N. Gupta et al., "Auditdraw: Generating Audits the FAST Way," Proc. Third IEEE Int'l Symp. Requirements Eng., IEEE, Piscataway, N.J., Jan. 1997, to appear.