A Method for Extracting and Stating Software Requirements that a User Interface Prototype Contains

By Alon Ravid

 

A Method for Extracting and Stating Software Requirements that a User Interface Prototype Contains

Research Thesis

Submitted in partial fulfillment of the requirements
for the degree of Master of Science
in Computer Science

Alon Ravid
 
 

Submitted to the Senate of the Technion – Israel Institute of Technology

Nisan 5759  Haifa   March 1999

The work described herein was supervised by Prof. Daniel M. Berry and Prof. Eliezer Kantorowitz under the auspices of the Computer Science Committee.

I wish to thank my family for their great support during the production of this thesis.

 

Table of Contents

 

List of Figures  
List of Tables

 

  Abstract

User interface (UI) rapid prototyping (RP) (UIRP), as a requirements elicitation technique, has come into common use in software development projects in recent years. Prototype-oriented requirements specification involves some difficulties, some of which are almost entirely ignored by the voluminous literature on the subject.

  • Given an informal system description, it is not obvious how to systematically and efficiently construct a user interface prototype (UIP).
  • Once the UIP is developed and agreed upon, how is the information that it contains transmitted to the programmers? Part of the information appears explicitly in the requirements documentation, part is expressed indirectly by other statements in the requirements documentation, and part does not appear anywhere even though it is known, understood, and agreed upon by all the people involved in the project, by virtue of their having worked together to produce the prototype.
  • After completing the UIP, a method is needed to integrate it with other requirements models of the system.
  • A method is needed to capture the UIP’s semantics and state them formally, in order to provide a suitable basis for further development and to avoid losing the indispensable knowledge embodied in the UIP.

This research examines these difficulties and proposes a method to deal with them, based

  • on a thorough study of UI RP that led to an overall view of the subject,
  • on the experience gained and the similar difficulties encountered during the development of a UIP for a highly complicated simulator for generating infrared (IR) scenes, called the Target Scene Generator (TSG), and
  • on existing methods and notations, avoiding inventing new ones.

It is argued that it is impossible to define a method which is applicable to all kinds of systems with a UI. However, it is possible to define guidelines to tailor the method used to develop a given system. It is concluded that it is necessary to answer the following key questions:

  1. What does a UIP say and what does it not say?
  2. What is the right way to formalize and present requirements that are specified and embodied in a UIP?

These questions are answered by taking a generalized approach that is based on an overall view of UI RP and on everyday RP reality.

  • First, prototype-oriented requirements analysis is characterized, and a distinction is made between what is common and what is different between this process and other requirements analysis techniques.
  • Then, a systematic approach to UIP construction for a given system under development (SUD) is formed.
  • Then, the kind of requirements information, both explicit and implicit, that a UIP contains is identified. That is, what the UIP says and does not say are identified.
  • Finally, principles are defined for choosing modeling techniques that properly represent the kind of requirements information a UIP contains.

The systematic approach, the kinds of information, and the modeling techniques mentioned above together form a practical solution, based on an overall scheme that integrates the UIP with other requirements models and assures that indispensable knowledge is not lost.

As a case study, the effectiveness of the proposed method is demonstrated by applying it to some typical examples from the development of the TSG system’s UIP. In addition, a description of the prototyping-oriented construction of the TSG system’s UIP is followed by a list of valuable lessons learned from the prototype construction. These lessons appear to be applicable to the construction of other systems.

 

List of Abbreviations


ADT Abstract Data Type
AT Acceptance Tests
CASE Computer Aided Software Engineering
ERD Entity Relationship Diagram
GUI Graphical User Interface
HCI Human Computer Interaction
ICD Interface Control Document
IR InfraRed
LEL Language Extended Lexicon
MVC Model/View/Controller
OO Object Oriented
OAD Occupation Analysis Document
OMT Object Modeling Technique
RAD Rapid Application Development
RFP Request For Proposal
RP Rapid Prototyping
SRS Software Requirements Specification
TSG Target Scene Generator
UI User Interface
UIP User Interface Prototype
UIPing User Interface Prototyping
UML Unified Modeling Language
UoD Universe of Discourse
UUT Unit Under Test
VCR Video Cassette Recorder

  1. Introduction

User interface (UI) rapid prototyping (RP) (UIRP) is a requirements elicitation technique used to determine the functionality, UI, data structure, and other characteristics of a system. User requirements are explored through experimental development, demonstration, refinement, and iteration. UIRP has come into common use in software development projects in recent years. The UI prototype (UIP) is built during the requirements analysis and specification phases of a software project. The products of this process are various documents, such as a software requirements specification (SRS) and an occupation analysis document (OAD), and the prototype itself.

The process of creating a UIP involves some difficulties. Once the prototype is developed and agreed upon, how is the information that it contains captured and represented in the other analysis documents? What is the right way to formalize and present requirements that were elicited and presented using the prototype, so that they can be transmitted to the programmers and testers? It is not clear how to integrate the prototype with other models of the system. Typically, the information implicit in the prototype is left in the prototype and is not known until the prototype is used to answer questions.

All these difficulties and some others were encountered in the course of developing a UIP for a highly complicated simulator for generating infrared (IR) scenes, called the Target Scene Generator (TSG) [40] and [41]. This research examines these difficulties and proposes an approach to deal with them, based on the experience gained during the development of the TSG system. An attempt is made to define a method to identify, extract, and state requirements that are embodied in a UIP, so that the prototype and the analysis documents provide a suitable and traceable basis for further development and testing. An answer is given to the questions:

  • What does a UIP say and what does it not say?
  • How can a UIP be integrated with other models of the system?

This approach is demonstrated by applying it to examples from the TSG system requirement specification process.

    1.1 Background

In recent years, I served as the group leader of the simulation and mission planning software team at RAFAEL Missile Division. Currently, I am the software system engineer for a highly complicated simulator for generating IR scenes, called the TSG.

In the development of such systems, we tend to use standard development environments, such as Windows NT and UNIX. The growing popularity of these environments and the fact that most of our customers are very familiar with them has caused a rapid increase in the relative software portion of the overall system. Requirements for UIs, operational modes and other capabilities are presented to us during the early development stages of projects. Sometimes the requirements are stated in terms common to those environments or by giving examples from other applications the customer uses. Moreover, more and more system requirements, some of them very complex, which in the past were directed to other components of systems, are now directed to software.

The ongoing penetration of software into these systems, the growing portion of software in over-all development activities, and the increasing complexity of requirements make requirements elicitation and requirements specification long, difficult, and very error prone.

On the other hand, a large variety of rapid application development (RAD) tools is available now. These tools facilitate rapid, low-cost development of requirements prototypes and support iterative prototype refinement with the customer.

This reality has caused us to use UIPs as an essential aid for requirements elicitation and specification.

The complexity of the TSG system is caused by a variety of factors:

  1. We had never developed a similar system before.
  2. To the best of our knowledge, a similar system of such complexity has not been developed anywhere in the world.
  3. The system combines problems from various disciplines, requiring multidisciplinary solutions.
  4. The customers had difficulties defining their needs and requirements because the system was supposed to completely change their work methods and supply them with a new set of tools and aids that they had never used before.
  5. The traditional methods used by our group for requirements elicitation were not suitable for this system and did not address all needed aspects of the system.

Since we faced many difficulties during the early stages of the system specification, I decided to rapidly develop a throwaway UIP. This UIP was to improve communication with the customer and to help complete the requirements specification within a reasonable amount of time. Software engineers, human engineering people, and user group representatives were involved in the development of the prototype. This approach turned out to be successful. We discovered some major misunderstandings with the customer and many contradicting requirements. As is recommended by many, including Fred Brooks [6], the first version of the prototype was thrown away, and we started the development of the production version from scratch.

At the end of this process, we wrote a system specification that was accepted by all the parties involved. It was clear to us what was required from the system and which of these requirements would be addressed by the software. The products of this process were an SRS, a UIP, and an OAD written by the human engineering group. The OAD resembles a draft version of a tutorial and a user’s manual.

    1.2 Problem Description

The process of creating the TSG prototype involved some difficulties, which I believe are common to other prototyping-oriented projects as well. Given an informal problem description, it is not obvious how to systematically and efficiently construct a prototype. It is hard to distinguish between requirements and design details. After completing the prototype, a method is needed to integrate it with other models of the system. A portion of the information implicit in the prototype is left implicit and is not known until the prototype is used to answer questions. A method is needed to capture the prototype’s semantics and state them formally, so that they provide a suitable and traceable basis for further development and testing.

This problem was discovered unexpectedly while we were preparing the project to be reviewed for ISO 9000.3 compliance by the Israeli Standards Institute. During a preceding internal review, I was asked to present the project and the software development documents we had produced up to the day the review was conducted. I presented the system specification documents, the SRS, the OAD, and the prototype. The prototype aroused additional questions about the system. Naturally, I used the prototype to answer these questions. After I finished answering them, one of the reviewers asked, “Where is all this information written?” Part of the information appeared in the SRS, part was expressed indirectly by other statements in the SRS, part was written in the OAD, and part was not written anywhere, even though it was known, understood, and agreed upon by all the people involved in the project by virtue of their having worked together to produce the prototype. This undocumented information included indispensable knowledge about the system, knowledge which seemed essential for new programmers joining the group and for maintenance personnel who will have to support the system in the near and far future.

    1.3 Thesis Objectives

The primary objective of this thesis is to answer the following three key questions:

  1. What does a UIP say?
  2. What does a UIP not say?
  3. What is the right way to formalize and present requirements which are specified and embodied in a UIP?

In order to answer these questions, the following approach was chosen. First, characterize the prototype-oriented requirements analysis process and determine what is common and what is different between this process and other requirements analysis techniques. Then, identify the information, both explicit and implicit, that a UIP contains; in other words, define what the UIP says about the system. Finally, suggest a way to capture and represent this information in the analysis products and to integrate the UIP with other models of the system. These goals can be achieved by modeling the process; by examining various types of prototypes, particularly UIPs, trying to estimate how they influence the development process; by examining different approaches to prototype development, including approaches used by engineers from other disciplines, such as human engineering; and by learning from the experience gained during the development of a medium-scale UI-intensive project, the TSG simulator.

More specifically, the objectives of the thesis are

  • to find techniques to deal with these problems based on existing methods and notations (avoiding inventing new ones) and on prototyping experience,
  • to suggest a possible solution,
  • to demonstrate their use by applying them to a case study, and finally,
  • to explain why they make a good solution.

The chosen case study, the TSG system, was delivered a few months ago; it is currently a 200,000+ lines-of-code project. This size makes it difficult to cover the system completely. On the other hand, it is a real project with real problems, and it is, thus, a good source of many typical examples.

    1.4 Survey of Existing Work

A thorough search of the literature has shown very little work addressing this problem. This was quite surprising, given that UIRP is a well-known, widely adopted approach that is discussed in numerous publications. The problem of capturing the information a prototype contains is almost completely disregarded. Comprehensive publications about prototyping, such as the works of Lantz [7], Bischofberger and Pomberger [8], Pomberger [18], and Connell and Shafer [1] and [2], deal in depth with prototyping in their books yet completely ignore this problem. Only two approaches that address it, at least in part, were found.

M. Hill [11] puts forward a method by which requirement statements can be attributed within models in a way that allows for post-compilation extraction and analysis. This approach is based on adding a parasitic language to the modeling language. The parasitic language strongly couples requirement items to lines of code. This method offers some advantages over the well-known method of incorporating comments into the prototype implementation. It focuses mainly on the functional aspects of the prototype and therefore provides only a partial answer to the problem addressed by this research. As will be explained later, a UIP contains kinds of requirements information other than functionality that also have to be extracted and stated.
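To make the underlying idea concrete, the well-known comment-based variant that Hill improves upon can be sketched as follows. This is an illustrative sketch only, not Hill’s actual notation: the `@req:` tag format, the requirement identifiers, and the helper names are all invented for the example.

```python
import re

# Hypothetical convention: each line of prototype code that realizes a
# requirement carries a trailing "@req:<id> <statement>" tag in a comment.
PROTOTYPE_SOURCE = """\
def set_zoom(level):          # @req:UI-12 user can zoom the IR scene
    validate_range(level)     # @req:UI-13 zoom limited to 1x-16x
    redraw_scene(level)
"""

REQ_TAG = re.compile(r"@req:(\S+)\s+(.*)")

def extract_requirements(source):
    """Collect (requirement id, statement, line number) triples."""
    found = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        match = REQ_TAG.search(line)
        if match:
            found.append((match.group(1), match.group(2).strip(), lineno))
    return found

for req_id, statement, lineno in extract_requirements(PROTOTYPE_SOURCE):
    print(f"{req_id} (line {lineno}): {statement}")
```

Such tags couple each requirement item to the code that realizes it, but, as noted above, they capture only functional requirements; the non-functional information a UIP embodies is left untouched.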

Kösters, Six, and Voss [15] propose a requirements analysis method called FLUID, which explicitly captures the requirements of direct-manipulation user interfaces. In this method, user interface requirements are described at a level of abstraction similar to that of conventional requirements. The combination of the models created by this method serves as a basis for further development. The method also provides for semi-automatic generation of user interface prototypes during the intermediate stages of analysis. This method focuses on OO modeling. It resembles in many ways the method proposed by Connell and Shafer [1]. It offers some additional models that specifically model the user-system interaction of direct-manipulation user interfaces. The authors focus on modeling user-related functionality and OO analysis models. The UIP is regarded only as an executable model that helps visualize the current stage of analysis. They disregard other kinds of requirements information that a UIP contains and other kinds of knowledge that the UIP represents.

Both approaches address only part of the issues involved in UI prototyping and thus can provide, at best, only a partial solution to the problem addressed by this thesis.

  2. Characteristics of Requirements Prototyping
    2.1 An Overview of Rapid Prototyping

RP is a well-known software engineering technique that has been widely used by software engineers for more than a decade. It has been mentioned in publications since the early 1980s [10].

The concept of RP was first introduced as an attempt to deal with the main problems of the popular sequential approach to software development. The growing size and complexity of software systems made it almost impossible to obtain an exact and complete requirements definition from a client [8]. When a sequential approach was used, errors and problems in the requirements definition frequently did not emerge until after the final product was used by the client. Experience has shown that validation, especially of the requirements definition, was usually not possible without experiments close to reality. Yet, this is exactly what was prevented by the sequential lifecycle. The implications of late discovery of erroneous or incomplete specifications and the evolutionary nature of software development were well understood. Researchers therefore concentrated on improving the specification and validation of the initial software requirements and the successive evolution of software systems and introduced several prototyping paradigms.

Lantz [7], Bischofberger and Pomberger [8], Pomberger [18], and Connell and Shafer [1] and [2], among others, have dealt in depth with prototyping in their books.

Connell and Shafer [1] define the term prototype as follows:

A software prototype is a dynamic, interactive, visual model of user requirements as an implemented design for a proposed software system, providing a communication tool for customers and developers.

In these books, RP is described as an evolutionary approach to software development.

These books introduce the evolutionary RP process depicted in Figure 1. 

Figure 1: The Evolutionary Rapid Prototyping Process

Bischofberger [8] provides the following definitions:

A prototype is an easily modifiable and extensible working model of a proposed system not necessarily representative of the complete system, which provides later users of the application with a physical representation of key parts of the system before implementation.

A software prototype is a dynamic visual model providing a communication tool for customer and developer that is far more effective than either narrative prose or static visual models for prototyping functionality.

Prototyping covers all activities necessary to make a prototype available.

He distinguishes between different types of prototyping paradigms as well as different types of prototypes. The key factors he uses are the goals of prototyping and the properties of the prototype.

He classifies prototyping into the following paradigms:

  • Exploratory prototyping.
  • Experimental prototyping.
  • Evolutionary prototyping.

Exploratory prototyping is a requirements validation technique. The goal of exploratory prototyping is to explore and produce a requirements definition that is as complete as possible and that can be verified by the user on the basis of realistic examples.

Experimental prototyping is mainly a system design validation technique. The goal of experimental prototyping is to validate the system specification, to examine the system decomposition to subsystems, and to validate the subsystems specification.

Evolutionary prototyping is an incremental software development approach. The goal of evolutionary prototyping is incremental software development in which the initial prototype evolves into the deliverable system. Evolutionary prototyping is closely related to the spiral software lifecycle model introduced by Boehm [44], and as such it combines the paradigms of exploratory prototyping and experimental prototyping.

Bischofberger distinguishes between prototypes according to the following properties:

  • complete vs. incomplete
  • throw-away vs. reusable prototypes

A complete prototype implements all the significant capabilities of a system, while an incomplete prototype implements only a portion of them, the portion whose specification and feasibility have to be examined.

A throw-away prototype is not reused in the production of the deliverable system, while a reusable one is. In some cases, the deliverable system simply evolves from the prototype.

When a prototype-oriented software development process is coupled with a lifecycle model, prototyping is considered complementary to the model, not an alternative to it.

Bischofberger introduces the prototyping-oriented lifecycle model depicted in Figure 2. The overlapping boxes in this figure indicate overlapping development phases.

Figure 2: The Prototyping-Oriented Software Lifecycle

Figure 1 and Figure 2 reflect the iterative nature of the software prototyping lifecycle. All these prototyping models share a common substructure. They all consist of a number of recurring phases, which form one or more lifecycles within a containing lifecycle as depicted in Figure 3:

  1. Requirements definition
  2. Construction, correction, refinement, and enhancement
  3. Evaluation and validation, with user participation
 

Figure 3: Software Development Using the Prototyping Paradigm

Bischofberger also defines the products of the prototyping activity. The prototype and the requirements definition are the products of exploratory prototyping. The contents of the requirements definition are discussed only vaguely, and the question of the right way to formalize and represent requirements that were elicited and gathered using the prototype is ignored.

      2.1.1 User Interface Prototyping

UI prototyping, a form of requirements prototyping, is an exploratory prototyping technique carried out during the requirements analysis phase of a project. It is considered a requirements elicitation and validation technique that supports requirements specification. It has a significant impact on the functional requirements of a system. It is agreed by many that, even if a prototype is incomplete, at least the system’s entire UI and some of the underlying functionality have to be implemented in the prototype.

The goal of UI prototyping is to discover user requirements through early implementation of the UI and the functionality behind it, so that users can relate to something tangible and to obtain a requirements definition that is as complete as possible, so that the system requirements can be validated by the user on the basis of realistic examples.

The main reasons why UI prototyping is used by software developers are:

  • It is considered an effective means to communicate with customers, far superior to text-based descriptions and paper models. It is believed that good customer-developer communication is absolutely indispensable in practice [3].
  • Users are familiar with UI systems. This familiarity helps users state requirements for UIs, operational modes and other capabilities. Sometimes the requirements can be stated by using terms common to such systems or by giving examples from other applications the customer uses. The UI can actually form a basis for the language users and developers use to communicate.
  • The availability of good RAD tools facilitates the development of fast, low-cost prototypes and supports an iterative prototype refinement process with the customer.
  • UI development environments promote the reuse of software components and application frameworks, which makes it easier to create a prototype from slightly modified existing software. Developers can inherit from the environment a common look and feel that helps users relate to the prototype.
  • UIPing can enhance product quality. User friendliness, specifically adequacy and learnability, which are considered important qualities of a system, can be achieved with the aid of a UIP. Correct user requirements are widely believed to be important in the development of successful interactive computer systems [4].
  • UIPing is also a risk reduction technique. It reduces the risk of incomplete or incorrect software specifications, both of which cause expensive correction activities if discovered late in the course of development. The later such problems are discovered the more expensive they are to fix. What prototypes do best is invalidate pre-specified requirements by uncovering all the user-developer misunderstandings [1].
 

The basic concept of UI prototyping is to

  • start with what you know about the system, which can be very little or very wrong,
  • create the first prototype version covering mainly the man-machine interface (MMI) and some underlying functionality,
  • show it to the customers and users,
  • conduct some interviews,
  • get their remarks and try to learn more about the system to be developed,
  • create the next version based on the first one and on the knowledge gained,
  • show it to the customers and users, get their remarks,
  • make sure the changes made reflect properly what was discovered in the previous step,
  • conduct more interviews, and so on.
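The steps above amount to a fixed-point iteration over the requirements set: each review session with customers and users adds, corrects, or removes requirements, and iteration stops when a session yields no further remarks. A minimal sketch of that loop follows; all names, the session functions, and the example requirements are invented for illustration and are not part of the TSG project.

```python
# Model each review session as a function that inspects the current
# requirements set and returns (requirements to add, requirements to remove).
def run_sessions(initial_requirements, sessions):
    """Iterate prototype versions until a session yields no changes."""
    requirements = set(initial_requirements)
    version = 1
    for feedback in sessions:
        added, removed = feedback(requirements)
        if not added and not removed:
            break  # users accepted this prototype version as-is
        requirements |= added
        requirements -= removed
        version += 1  # build the next prototype version
    return version, requirements

# Invented example: two rounds of user feedback, then acceptance.
session1 = lambda reqs: ({"zoom IR scene", "print scene report"}, set())
session2 = lambda reqs: (set(), {"print scene report"})  # requirement dropped
session3 = lambda reqs: (set(), set())                   # no remarks: done

version, reqs = run_sessions({"display IR scene"}, [session1, session2, session3])
print(version, sorted(reqs))
```

The sketch makes the convergence assumption explicit: the process terminates only when a review produces no new remarks, which is exactly the validation criterion of the mini-lifecycle described above.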

This approach forms a mini-lifecycle, within the over-all software development lifecycle, that consists of the phases of analysis, definition, construction, testing, and validation, as shown in Figure 4.

Figure 4: User Interface Prototyping

While UIPing facilitates the process of requirements specification, it can also lead to the difficulties described in Section 1.2.

    2.2 Characteristics of Prototype-Oriented Requirements Analysis

The first step in finding a solution to the difficulties of UIPing is to characterize the prototype-oriented requirements analysis process. This step is important because only by doing so can we learn more about the desired solution, and because any practical solution has to suit the process.

      2.2.1 The Independence of a Development Process or a Method

To start with, a distinction is needed between the process with its underlying lifecycle model (e.g., waterfall model, evolutionary model) and the method with its notational forms (e.g., structured or object-oriented (OO) analysis and design). There are many possible combinations of models and methods. For instance, a software development process based on an evolutionary approach can be applied both to structured and to OO projects.

Software development processes deal mainly with defining phases, milestones, activities that have to be performed, and process by-products, while methods are mainly concerned with notational forms (data flow diagrams, static models, state machines, etc.) and with how to produce and use them.

As shown in Figure 1 and Figure 2, prototyping can be combined with other approaches. The prototype construction activity forms a subprocess within the larger process of a software project. Some regard prototyping as a method in its own right, while others regard it as an extension of existing methods. As an extension, it can be applied to many paradigms, and UIPing can be applied in all cases. Furthermore, the modeling method is also irrelevant. All processes include at least one requirements specification phase. In our view, the specification phase can be independent of the design and implementation techniques that might be applied in subsequent phases, because its purpose is to figure out what is required and why, not how the requirements will be attained. A UIP is a requirements model. All systems have requirements, whether they are OO or structured, and each can have a UI. Therefore, from the UI’s point of view as well as from the user’s point of view, how the system is built can be considered a hidden implementation detail; indeed, most users I have met consider it exactly that.

Connell and Shafer distinguish between structured RP [2] and OO RP [1]. They define a slightly different process for each paradigm. The differences result from the way the prototype is constructed and from the modeling paradigm, not from the essence of requirements prototyping. The problems mentioned in Section 1.2 are common to both, because whatever paradigm the developers choose, eventually they must define the software requirements and make sure that information is not lost.

The primary goals of the analysis and definition phase are to identify and document which activities will be computer aided, to define the software requirements, to delimit the system, to produce a detailed schedule, and to form a basis for design and implementation. These goals define the expected products of the phase. Requirements prototyping can directly or indirectly assist in achieving these goals. The problem described in Section 1.2 means that, in fact, some of these goals were not achieved.

      2.2.2 The Importance of Tools

UIRP is tool dependent. The success of the process relies on the availability of a proper development tool. The nature of the process requires numerous rapid revisions of the prototype, which are difficult to achieve using traditional programming methods, even with a third-generation programming language.

Many publications emphasize the importance of tools [8]. Suitable tools are defined by specifying a list of requirements for the tool, by giving a list of criteria for evaluating the tool, or by giving an example of a good tool. Bischofberger [8], for instance, describes at least one tool for each of the prototyping paradigms. He dedicates the second half of his book to a prototyping tool set called TOPOS. Tools can provide an application framework that can be used as a base for prototype construction, MMI builders, database builders, a variety of reusable software components, and a high-level programming language to tailor or enhance these components and to implement the application-specific functionality.

In some cases, the prototype is considered a preliminary model of the system, and eventually the prototype becomes the system, as suggested by Kenneth Lantz [7]. This kind of prototyping is known as evolutionary prototyping in later work [8].

Experience has shown that the choice of tool imposes a design technique on the construction of the prototype. The more you get from the tool, the more committed you become. It is highly recommended to follow the guidelines of this technique; otherwise, developers will most likely find themselves fighting the tool instead of enjoying its capabilities. This fact makes choosing the right tool a very important step in the development process, but it also reveals some important information that might assist in finding a solution. A good tool basically fulfills developers’ expectations by helping them model the important aspects of their system, aspects which also represent the kind of information the prototype contains.

Part of the solution described by this research was formed by identifying the kind of knowledge prototyping tools help model. For instance, if a prototyping tool is known to provide good support for modeling application functionality, specifically application behavior in response to user operations, and for implementing this behavior and easily modifying it afterwards, then this is the kind of information a UIP contains. Otherwise, modeling aids are not needed. This issue will be discussed in greater detail in subsequent sections.

      2.2.3. Distinguishing Between Requirements and Design Details

Traditional approaches to software development urge us to distinguish between what is required and how to do it, and recommend concentrating only on the what during the requirements specification phase. This is not the case for RP. Since we deal with early implementation, it is natural to mix requirements details and design details. Therefore, we do not have to separate out the what, until we have to state the requirements in a document. When the prototype is thrown away, its design is thrown away as well, and the developers start from scratch. When the prototype is not thrown away, it is necessary to deal with all the how details from the start. This problem is not specific to RP. The same dilemmas exist in other methods. RP even helps simplify this problem because it is a requirements elicitation technique which leads to more correct requirements definitions, and correct requirements can form a basis for a good design.

The implications of building a requirements prototype are that developers deal with design and implementation issues very early in the project. In OO prototyping, in which the line separating requirements and top-level design is very fuzzy, developers normally deal with objects that model the problem domain from the start. Early implementation helps discover information pertinent to future design, particularly if the prototype is thrown away. Among this information is that about design constraints and possible operational modes of the system. This kind of information is sometimes unjustly disregarded or remains undocumented because developers tend to specify information about what is required and ignore information about how to do it. The existence of such information is natural because building the prototype gives developers the opportunity to experience the problem domain by implementing a partial solution, and because UI definition down to the level of UI components is partially a specification activity and partially a design activity. Design information has to be extracted from the code generated with the aid of the prototyping tool.

      2.2.4. Throw-Away Prototypes versus Reusable Prototypes

Whether the prototype will be thrown away or reused is a decision that prototype developers have to face. The decision greatly affects the development process in several ways.

Not throwing away means taking the evolutionary approach. When not throwing away,

  • the development process will probably resemble the model illustrated in Figure 1,
  • choosing the right tool is crucial for performance, adequacy, suitability for the target system, etc.,
  • developers deal with deliverable system design issues from the start,
  • other models of the system can be developed concurrently with the prototype; the requirements are stated in small steps instead of all at once to complete the milestone of requirements specification,
  • specification of the requirements elicited using the prototype can be carried out incrementally by updating other requirements models in pieces,
  • design documents are produced, as well as specification documents,
  • validation is simpler, because it is done continually as a part of the iterative process with the customer and users; in fact when the prototype becomes the system, acceptance tests (AT) are not necessary because the approval of the last version is the AT.

Some comments are necessary:

Which models to create depends on the chosen tool, and the method adopted by the developers. Applications from different domains require different methods. The final products of the evolutionary approach resemble the products of the sequential lifecycle approach, i.e. a working system, a requirements document, a design document, etc.

When throwing away,

  • choosing a tool is important, but the requirements for the tool vary. It should support rapid development, and enable the developers to produce the required functionality. We used Visual Basic running under Windows 3.11 on a 486 machine for the TSG prototype, and C++ and class libraries from various vendors running under Windows NT on a multi-processor machine for the final TSG system.
  • developers deal only with prototype design issues and not with the final product design, because a poor prototype design complicates the iterative process. The quality of the implementation is less important if the prototype is thrown away. This approach serves very well to separate the what from the how, if you believe this is a worthy concept.
  • other models of the system can be developed concurrently with the prototype, or before completing the milestone of requirements specification.
  • different methods can be chosen for the requirements modeling, for the design, and for the implementation of the final product. Structured analysis was used for the TSG system requirements specification because doing so was the department’s standard, and OO was used for the design and implementation because it seemed more natural for C++ and Windows-NT. This capability has the disadvantage of losing the ability to move from requirements to design in a systematic fashion and the advantage that the tools and development environment, both hardware and software, can be chosen only after one has a very good understanding of the system requirements.
  • the requirements elicited using the prototype can be stated by creating other requirements models or by the prototype itself. There still remains the question of how to do it, and this question is at the heart of this research.
  • documenting the prototype is very important, because the resulting documents are used as a basis for the final system’s development and testing.
  • mostly requirements documents are produced.
  • the final product must be tested, and the prototype may be used to provide the required outputs for comparison with those of the final product.

In the beginning, it seemed like these differences may change the issues addressed by this research, maybe even requiring different solutions. However, after looking into the subject, I came to the conclusion that they do not. Some time during the first development process steps, the developers have to deal with the most important question, “to throw away or not to throw away the prototype”. The choice may influence the evolution of the process and its products, but the problem exists for both choices. Basically the same models can be used for both approaches. The main difference is that for throw-away prototypes, there is a point in time at which the developers have to stop refining the prototype, evaluate the specification they created and determine whether it forms a sufficient basis for further development, and if the answer to this question is “yes”, start to design and implement the system from scratch as in a standard sequential development process. On the other hand, for evolutionary prototypes, both the specification and design are updated and refined continually. The products of the process are used mainly as documentation of the system and as an additional means to confirm that the requirements and design meet the expectations of the customers. Because what eventually will become the final system is always available, the models are completed, if ever, only when the system is done. Each method forms an iterative process which eventually leads to better, more correct, and more complete system specifications, and sometimes to the product itself.

I choose not to address this question any further, even though it seems very significant, because it is a very broad subject requiring research in its own right, and because it seems unrelated to the main issue of this thesis.

      2.2.5. The Use of Formal Models

Two important goals of requirements analysis are to resolve misunderstandings with the customers and to evaluate the quality of the requirements elicited. Sometimes a formal review is conducted at the end of the requirements specification phase. Prior to the review, requirements documents are submitted to the customer. During the review, the requirements are presented to the customer with the aid of formal models. At the end of this review, the requirements are baselined and the customers are expected to approve, by signing the documents, that this is what they asked for. From my past experience, many misunderstandings are not discovered in this review because the requirements are presented to the customers in a non-intuitive way that is difficult for them to understand. The question is, “to what extent should we use formality?”.

First, a clarification is necessary for the term “formal”. The following definition is taken from the introduction of the special issue of IEEE Transactions on Software Engineering about formal methods in software practice [42].

The term formal methods normally denotes software development and analysis activities that entail a degree of mathematical rigor. At one extreme, you may argue that anyone who builds a software system applies formal methods during the process. At the other extreme, you may deem a method formal only if it employs mathematical symbols and produces written proofs. Most people prefer to define formal methods in a manner closer to the latter extreme.

In this thesis, I mean “formal” in a manner closer to the first extreme. Experience has shown that one of the sure ways to lose any dialogue with the TSG users was to show them a mathematical definition of their requirements. They even had difficulties with data flow diagrams, and static data models. Moreover, in order to write a formal analysis model for a system, it is necessary to know the system requirements. Otherwise, proofs will be generated for the wrong specifications. Prototyping is about finding the correct requirements and representing them in a manner that everyone in the project fully understands. For formal methods as defined by the latter extreme, this is seldom the case, especially when people with no computer science or mathematical background are involved. I believe that formality in the latter sense can be used later in the project if necessary after giving it proper consideration. UIPing, on the other hand, is usually carried out in early phases when requirements are neither known nor clear. In these phases, UIPing is more adequate [13].

      2.2.6. The Customer-Developer Communication Gap

One of the most difficult problems of requirement elicitation is to bridge the communication gap with the customers. Communication among all parties involved sounds good on paper and is absolutely indispensable in practice [3]. The prototype gives the customer an opportunity to visualize how the system will actually function and react to various stimuli. It serves as the medium for an ongoing dialog with the customer [16]. Bowers and Pycock [4] state that requirements are produced as requirements in and through user-developer interaction.

Improved customer-developer communication, and early requirements validation are probably the most important benefits of UIPing, particularly for custom projects [12]. In order to achieve these benefits, a means is needed that supports and facilitates communication. Prototyping as a joint design activity is characterized as a meeting between two languages, that of the developer and that of the user’s work world (also called the user’s universe of discourse [36]). Requirements have to be presented in such a way that the customer will be able to say, “that is what I want” or at least “that is what I think I want, for now”. Software developers, on the other hand, need to use some level of formality in order to create models. All methods use formal or semi-formal models to represent requirements.

      2.2.7. The Expected Products of Requirements Prototyping and the Various Customers of these Products

Bischofberger [8] describes the prototype and the requirements definition as the products of exploratory prototyping. The contents of the requirements definition are discussed vaguely, and the question about the right way to formalize and present the requirements that were elicited and presented using the prototype is ignored. Others give a list of topics, similar to a table of contents for the requirements specification document, from which it is possible to deduce which aspects of the system have to be specified. This list details the kind of information a UIP might say about a system. This way we can look at the problem from a different perspective, namely what all has to be said about a system in order to specify it properly and which of the things that have to be said are said by the prototype.

When looking for a way to represent information elicited with the aid of the prototype, we have to identify the primary consumers of the analysis and specification activity products, i.e. the people who will read the documentation we produce and experiment with the prototype. These are

  • the client representatives and the intended users of the software, who participate in the prototype construction and who will eventually approve the requirements documents.
  • members of the software development team who will base the design, implementation, and testing of the product on the requirements. We have to keep in mind that members might join the team long after the specification phase ends and they need to learn about the system from available sources of information. Documentation contributes to communication between developers during the development phases.
  • personnel who will maintain and possibly enhance the product in the near and far future, maybe even after the original team members no longer work in the company. This is a typical case at Rafael because a system can last 15 years or even more, and that is an eternity in software, hardware, and development personnel terms.

The products of the requirements specification have to address these consumers. The information has to be represented in a manner readable and usable for the present and future consumers. This fact means that some of the information might be presented in more than one way because what seems to be readable to an experienced software engineer might not be for the customer. For instance, analysts can handle use-cases and scenarios to model user-system interactions and state-machines to model system behavior in response to user operations. These two modeling techniques represent the same information in two different ways that express two possible perspectives with which to look at the same knowledge, perhaps on different levels of abstraction. Customers tend to think about the ways they will use the system to perform specific tasks and about the ways the system will help them accomplish them, while developers think about the functional model that will make performing these tasks possible. Furthermore, it might be that information that is represented in one model cannot be represented in the other. For instance, if a scenario includes some operation which is not performed with the aid of the system, such as manual setup and preparation, the functional model will not reflect this operation even though its description as part of a complete sequence of operations can explain the reasoning behind the entire functional model. This kind of information is destined to be lost or remain undocumented in a state machine specification.
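To make the contrast concrete, here is a minimal Python sketch (all state, event, and step names are invented for illustration) of one interaction captured as a scenario and as a state machine; the manual step survives only in the scenario:

```python
# Hypothetical example: one user-system interaction modeled two ways.

# Scenario view: an ordered sequence of steps, including a manual
# setup step that the system itself never observes.
scenario = [
    ("user",   "mount the test rig manually"),  # manual preparation
    ("user",   "press Start"),
    ("system", "begin measurement"),
    ("user",   "press Stop"),
    ("system", "display results"),
]

# State-machine view: (state, event) -> next state. The manual
# preparation step has no place here and is silently lost.
transitions = {
    ("idle",      "Start"): "measuring",
    ("measuring", "Stop"):  "showing_results",
}

def run(events, state="idle"):
    """Replay user events against the state machine."""
    for event in events:
        state = transitions.get((state, event), state)
    return state

# The manual step is recoverable only from the scenario model.
manual_steps = [step for actor, step in scenario if "manually" in step]
print(run(["Start", "Stop"]))  # -> showing_results
print(manual_steps)            # -> ['mount the test rig manually']
```

Both views describe the same interaction, but each one carries information the other cannot express.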

Thus, modeling and prototyping should not be documentation driven, but rather responsive to a set of evolving needs and expectations that are communicated visually and repeatedly [3].

      2.2.8. How to Produce a Prototype

A UI prototype should be produced efficiently, quickly, and at low cost. It is believed that a systematic approach to prototype construction, which defines what to do and how to do it, can help accomplish these goals and facilitates producing specification documents.

Building prototypes for some application domains seems to be simpler than for others. For instance, for information systems, the component types of an information system are well defined, i.e. a database, data entry screens, data displays, data processing functions, reports, etc. Realizing this fact makes it simpler to define a systematic way to build a prototype and write a requirements specification document:

  • Define the data and their source and destination.
  • Define the required data processing.
  • Define the dialogs.
  • Define the reports structure, etc.

Then, moving to the final system is straightforward. The components can be moved one by one.
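Because the component types are well defined, such a specification can even be sketched as a simple data structure. The following Python fragment is purely illustrative; all component names are invented:

```python
# Hypothetical sketch of an information-system prototype description,
# organized by the well-defined component types named above.
prototype_spec = {
    "data": {
        "order": {"fields": ["id", "customer", "total"],
                  "source": "order entry screen",
                  "destination": "orders database"},
    },
    "processing": {
        "compute_total": "sum of line-item prices per order",
    },
    "dialogs": ["order entry", "order search"],
    "reports": ["daily orders summary"],
}

# Moving to the final system can proceed component by component.
for component_type, components in prototype_spec.items():
    print(component_type, "->", len(components), "item(s)")
```

Each top-level entry corresponds to one of the definition steps above, which is what makes the construction systematic.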

Lantz’s method, described in [7], mostly handles information system prototyping. It is domain specific, but some of the concepts mentioned there make sense for other application domains as well, for example, when defining application data and data structures for many types of applications. Bischofberger [8] divides his overview of prototyping into several paradigms and concepts, each suitable for a different domain. He pays special attention to information systems prototyping. He identifies two especially promising approaches: prototyping with fourth generation systems, and prototyping with hypertext systems, and explains how to prototype with such systems. Lantz was able to define a prototyping method for information systems more than 12 years ago, and Bischofberger can explain how and what to prototype with fourth generation systems, as depicted in Figure 5, because this generalized understanding of the information system’s domain already exists. It also explains why good information system RAD tools have been commercially available for several years. The main components of an information system define the features of the RAD tool, namely a data modeler, a UI builder, a report generator, a high-level algorithmic language, etc.

Figure 5: Prototyping Information Systems with Fourth Generation Languages

OO prototyping is much fuzzier, abstract, and not application-domain specific. Connell and Shafer [1] tell us to look for objects in the problem statement. They give us a clue where to start, in the system interfaces, but the rest of the work is left for us. Books about OO always include a section about where objects come from, which can help. Moreover, many lectures about abstract data types (ADTs), which are the ancestors or at least close relatives of objects, tell us that after gaining experience, the ADTs simply pop out of the text. This is true. The more you are used to OO, the more you recognize a good object when you see one, but it is not as systematic and straightforward as defining reports and dialogs. Even the name implies that it is abstract, but not in the same sense of the word.

How to prototype, as explained earlier, is more or less determined by the tool we choose. Some comment is still necessary. It seems that some construction techniques are more suitable for prototyping. Component-oriented development tools, and component-oriented software development is the nearest we can get to what engineers from other disciplines use and do when they build a prototype, especially when we deal with UI. I refer to development environments such as Microsoft® Visual BASIC™, which come with an extensive and extendible set of components, which can be used to rapidly develop an application. Developers can enhance this set by adding their own components or by buying them from a third party. The developers of a prototype can select components from the set, connect and assemble them, set their properties, and build a preliminary working model of the system under development, like hardware engineers do when they build a board prototype. OO prototyping shares the same properties. A developer can inherit the basic functionality from a reusable set of classes, which may include even an application framework, and enhance it through inheritance and polymorphism. The choice of reused software components, a RAD tool, or an application framework generates plenty of implicit information about the system, that is already specified by the choices we make. For instance, the choice of a windowing system forces some conventions and constraints on UI design. The functionality of a pull-down menu or of the file open dialog box does not have to be respecified or documented; only its contents and use have to be described. What has to be documented is the rationale, in this case the rationale for choosing files to store application data from among all the possible ways to do it.
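The assembly style described above can be sketched in a few lines of Python; the classes here are invented stand-ins for a reusable component set such as the one Visual Basic provides, not a real GUI toolkit:

```python
# Hypothetical component set: select components, set their
# properties, connect them, and obtain a working model of a dialog.

class Component:
    def __init__(self, **properties):
        self.properties = dict(properties)  # the component's property sheet

class Button(Component):
    pass

class TextBox(Component):
    pass

class Form(Component):
    def __init__(self, **properties):
        super().__init__(**properties)
        self.children = []   # assembled child components
        self.handlers = {}   # (child name, event) -> handler

    def add(self, name, component):
        self.children.append((name, component))
        return component

    def on(self, name, event, handler):
        self.handlers[(name, event)] = handler

    def fire(self, name, event):
        return self.handlers[(name, event)]()

# Assembling a preliminary working model of a search dialog:
form = Form(caption="Search")
form.add("query", TextBox(width=200))
form.add("go", Button(caption="Search"))
form.on("go", "click", lambda: "searching...")
print(form.fire("go", "click"))  # -> searching...
```

The prototype developer's work reduces to selecting, configuring, and wiring components, much as a hardware engineer assembles a board prototype.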

Linthicum [9] states that RAD is a powerful tool when used in conjunction with a rigorous application design process. First, RAD works only when you understand the business problem the application must solve. It is effective only when it is a part of a sound application development process. He advises us to use a prototype to supplement the application design, not to replace it.

The question is whether there is a general approach, a cookbook approach that can be applied to all kinds of UIPs. I think that the answer is probably not. Nevertheless, it is possible to define guidelines that might assist in forming a systematic approach to prototype construction for a given system. The key problems are to identify to which application domains the system belongs, what the principal properties and main features of an application belonging to these domains are, and which of these properties are pertinent to the system. An application may belong to more than one domain, e.g. it might be UI-intensive, data driven, and real time. Therefore, sometimes it is better to use more than one modeling paradigm and more than one construction technique when building a prototype, one for modeling data and data processing, one for UI modeling, one for modeling real-time behavior, etc. A systematic approach to prototype construction can form a base to a fully traced requirements process in which it is decided and documented ahead of time what aspects are being modeled in the prototype. Furthermore, it provides a partial answer to the question “what does a prototype say?”

      2.2.9. Kinds of Information a Requirements Prototype Contains

RAD tools, which are used to produce a prototype, usually represent a practical implementation of an abstract concept in software construction. Most tools I have used adhere completely to neither a specific method nor a specific notation or modeling technique. Methods, on the other hand, do not support specific tools. Occasionally, a computer-aided software engineering (CASE) tool is available that supports a method, and if we are lucky, even automatically generates code which is compatible with some development environment. Part of the difficulty involved in moving from a tool to a method and back results from this rarity. The prototype, as a requirements model, contains information that is pertinent to other requirements models that are used by the method and vice versa. The problem is somewhat ameliorated if the models are created and updated first and then the prototype is constructed and refined, as in a traditional development. Doing it inversely resembles a reverse engineering process.

This problem is encountered when looking for a method to capture the information that the prototype contains and to represent it in the analysis documents. In this case, the key problem is to identify the kind of information a prototype contains, e.g., functional behavior, and then to look for a method that models this information properly. The prototype contains this information regardless of the tool or the modeling technique. Its sources are the customer’s needs and the problem domain.

      2.2.10. Conclusions

Requirements prototyping is a valuable approach for dealing with specification difficulties due to uncertainties and communication problems with customers or users or for dealing with an unfamiliar domain or an application of a specific kind for the first time. A prototype gives developers an opportunity to get a better understanding before correctly specifying the problem, that is, before writing the set of requirements; it improves customer-developer communication and helps to resolve misunderstandings. Developers sometimes need to play with a problem before they can actually understand it. We have to consider the fact that in the beginning, developers do not know exactly what they are doing, and customers do not know what they want or cannot say what they need, and that is why an efficient, quick, low-cost approach to prototype construction is needed.

The prototype is constructed, most probably with the aid of a tool, in an evolutionary manner through continual refinement and enhancement. The tools, method, and software development process we choose, and the way we choose to combine the requirements prototyping activity with all other software development activities, do not alter the kind of information the prototype contains. Once the prototype is developed and agreed upon, we have to capture this information and represent it in the system specification documents in a manner that is readable and useful to all parties involved.

A prototype is more than a requirements validation tool or an aid in the requirements elicitation process that improves customer-developer communication. It is also a requirements model complementing the other requirements models that contain valuable information about the system. A method is needed to capture this information and represent it in the system specification documents. Sometimes, we can avoid the need to model requirements twice, once in the prototype and once in the requirements specification document. On the other hand, we have to avoid losing information about the system. Redundancy is acceptable when it represents multiple viewpoints of the same information. However, there is the danger of inconsistency between the viewpoints.

The key problems to finding a method to address these needs are:

  • to define a systematic approach to prototype construction,
  • to identify the kind of information the requirements prototype contains, and
  • to choose models that model this information properly.

It is important to note that the modeling and prototyping process should not be documentation driven but rather responsive to a set of evolving needs and expectations that are communicated visually and repeatedly [3].

  3. The Proposed Solution

The UIP-oriented requirements specification process consists of the following recurring steps.

  1. Gather preliminary information consisting of customers’ requirements, users’ needs, etc.
  2. Plan the construction of the prototype.
  3. Build or enhance the prototype. Evaluate the prototype with the customer, and use his or her reaction to the prototype to elicit new requirements.
  4. Validate the requirements.
  5. Extract requirements from the prototype, if necessary. If the prototype is used also as a requirements model, document it.
  6. Specify the requirements.
  7. Produce the necessary requirements specification documents.

The approach proposed by this thesis addresses two aspects of this process, the prototype construction activity, Steps 2 and 3, and the way to state the requirements that a UIP contains, Step 6. The characteristics of a prototype-oriented requirements specification process that are relevant to the problem addressed by this thesis were identified in Section 2. The next steps are to identify the kind of information, both explicit and implicit, that a UIP contains and to suggest a way to capture and represent this information in the analysis products.

The general notion of kind of information is somewhat abstract. Binding the kind of information identified to a systematic approach of prototype construction, or in other words, discovering what the prototype must do, forms a more practical solution. Therefore, it is necessary first to define guidelines that can be applied in order to make prototype construction more systematic.

Some comments are necessary.

A requirements prototype serves two purposes. It is, first, an effective tool for requirements elicitation and validation, and, second, a requirements model itself [11]. Therefore, some requirements are elicited with the aid of the prototype while other requirements are stated by the prototype. Requirements validation is an issue all by itself. Prototyping is also an approach for validating requirements, one that has some advantages and some disadvantages. If further validation is required beyond what prototyping has to offer, it is possible to state the requirements in a way that supports it.

In addition to the classification of prototyping paradigms introduced by Bischofberger, there exists yet another that is pertinent to the process of prototype construction. There are three iterative approaches.

  1. Identify new requirements or refine existing ones. Create, update, and enhance the requirements models. Implement them in a prototype. Evaluate them, and so on.
  2. Identify new requirements or refine existing ones. Implement them in the prototype. Evaluate the requirements. If necessary, create, update, and enhance the requirements models, and so on.
  3. Use a prototyping CASE tool.

The first and second approaches look somewhat similar. The main difference between them is the order in which things are done. They represent two extremes, the ordered approach, which uses prototype development as an ordinary development process, and the quick and sometimes dirty approach.

The CASE tool approach is slightly different. It is based on an existing prototyping tool, which supports a specific prototyping method, and therefore, enforces an order to the various activities. I refer to tools like TOPOS [8] and PSDL introduced by Luqi in [20] and [21]. There are tools that can semi-automatically produce a prototype from a system’s description in a high-level prototyping language [15].

There is also the question of which requirements documentation to produce. In this case, the extremes are producing a full set according to the developing organization's standards and documenting only complementary and non-functional requirements that are not captured by the prototype.

The day-to-day prototyping reality is somewhere in the middle between these extremes. We usually choose some form of an evolutionary software development process, and even if we do not choose it, the development is evolutionary anyway. When necessary, we develop a prototype, which in most cases is reused and sometimes is thrown away. We do not have a specific prototyping tool because we can never find an appropriate one, i.e., a tool that runs under the operating systems and platforms we use, that supports the intended programming language, that integrates well with other products we use, that has proper technical support and reasonable documentation, that produces specification and design documents in a format compatible with the organization’s standards, that has a reasonable learning curve, and, last but not least, that is suitable for the kind of application we are about to develop. Therefore, we use the second best choice, which is the commercially available RAD tools or the application frameworks that we know. Moreover, we usually have to produce documentation, because we are committed to some kind of development standard such as ISO 9000.3 or some tailored DOD 2167A, which defines a minimal set of documentation to be produced.

The approach proposed in this thesis is intended to deal with this reality, which I believe is common to many development organizations. Nevertheless, it would be nice to find a tool that fulfills the expectations mentioned above and supports the approach that is described in the next sections.

    3.1 Building a Prototype Systematically

The first goal is a systematic approach to prototype construction. As argued in Section 2.2.8, I believe there is no general approach that can be applied to all kinds of UIPs, but it is possible to define guidelines that might constitute a systematic approach to constructing prototypes for a given system. The aim is to define a process that resembles the prototyping process for information systems illustrated in Figure 5.

For a given system, the prototyping process consists of six recurring steps,

  1. Define the system’s operational environment and its interfaces to other systems.
  2. Identify to which application domains the system belongs.
  3. Characterize the principal properties and the main features of an application belonging to these domains.
  4. Identify which of these properties are applicable to the system under development.
  5. Decide which of the identified properties’ requirements will be prototyped.
  6. Prototype the system’s interfaces and chosen properties.
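For illustration only, one pass of the recurring loop might be recorded as follows; the domains, properties, and choices shown are hypothetical stand-ins, not outputs of the method:

```python
# Illustrative sketch of one pass through the six recurring steps.
# In practice each step is a joint activity of developers and users,
# not a function call; the data below is entirely hypothetical.

def prototyping_iteration(informal_description):
    state = {"description": informal_description}
    # 1. Operational environment and external interfaces.
    state["interfaces"] = ["operator console", "data-acquisition hardware"]
    # 2. Application domains the system belongs to.
    state["domains"] = ["user-interface intensive", "data acquisition"]
    # 3. Principal properties of applications in those domains.
    state["domain_properties"] = {
        "user-interface intensive": ["forms", "menus", "dialogs"],
        "data acquisition": ["sampling", "data storage", "export"],
    }
    # 4. Properties applicable to this particular system.
    state["applicable"] = ["forms", "menus", "sampling", "export"]
    # 5. Which of those properties' requirements will be prototyped.
    state["to_prototype"] = ["forms", "menus", "export"]
    # 6. Prototype the interfaces and the chosen properties.
    state["prototyped"] = list(state["to_prototype"])
    return state

s = prototyping_iteration("informal system description")
```

Note that the choices made in step 5 are always a subset of what step 4 found applicable; the loop recurs as new requirements are discovered.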

The following sections explain each of these steps in detail.

      3.1.1 Defining the System’s Operational Environment

In many software requirements modeling methods, the topmost view of a system’s requirements model is a diagram that presents the system’s operational environment, the various entities, also known as agents or actors, that interact with the system, and the system’s interfaces to these entities. If the software system is a component of a larger system, this diagram will most probably be part of the design of the larger system. Some of these entities are the system’s users. Users are divided into groups according to their roles, for instance, ordinary users, privileged users, system administrators, maintenance personnel, etc. Each type of user may have a set of goals they want to fulfill with the aid of the system, and a variety of ways the system can serve them. The first step in building a UIP is to identify the various user roles, categorize them, and define for each the possible services that the system will supply to them and the intended use-cases that concern them. Groups of users can naturally share services and use-cases. The list of services and use-cases is very dynamic at first. Newly discovered requirements may add items to, delete items from, or alter items in this list. These items represent functional properties of the system, among which some or all will be prototyped.
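The role/service/use-case structure described above might be recorded as follows; this is an illustrative sketch only, and all names are hypothetical:

```python
# Hypothetical sketch of the first step: identifying user roles and the
# services and use-cases that concern each of them. The lists stay
# dynamic while requirements are still being discovered.
from dataclasses import dataclass, field

@dataclass
class UserRole:
    name: str
    services: list = field(default_factory=list)
    use_cases: list = field(default_factory=list)

roles = [
    UserRole("ordinary user",
             services=["define IR scenario", "view results"],
             use_cases=["build and execute a scenario"]),
    UserRole("system administrator",
             services=["manage accounts", "back up data"],
             use_cases=["restore the database after a failure"]),
]

# Groups of users may naturally share services and use-cases.
shared = set(roles[0].services) & set(roles[1].services)
```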

A use-case is a broad-stroke description of how the system will be used. It provides a high-level view of the intended functionality [23].

Scenarios are defined as instances of use-cases in [23]. This definition is somewhat counterintuitive. The following explanation complements this definition. Use-cases represent services the system can provide or capabilities the system possesses. Scenarios represent ways to utilize these services, or in other words, to exploit these capabilities.
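This complementary definition can be made concrete in a small sketch, where a use-case names a capability and each scenario is one recorded way of exercising it (names and steps are hypothetical):

```python
# Illustrative sketch: a use-case names a service or capability; a
# scenario is one particular way of utilizing that capability.

class UseCase:
    """A broad-stroke capability the system provides."""
    def __init__(self, name):
        self.name = name
        self.scenarios = []

    def add_scenario(self, steps):
        """Record one concrete way of exercising this capability."""
        self.scenarios.append(steps)

define_scenario = UseCase("define an IR scenario")
define_scenario.add_scenario(["open builder", "place targets", "save"])
define_scenario.add_scenario(["import file", "adjust parameters", "save"])
```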

      3.1.2 Identifying to Which Application Domains the System Belongs

The second step is to identify to which application domains the system belongs. An application domain usually defines the principal properties, the main features, and the primary components of an application belonging to the domain. For instance, for an information system, these may include a database, a human-machine interface, forms for feeding information, forms for entering queries and viewing the results, printed reports, a set of consistency rules, and data processing algorithms.

An application may belong to multiple domains, e.g., UI-intensive, database, real-time, communication-intensive, etc. The basic idea is to try to classify the application and to group the application requirements according to domains.

For each identified domain, there may exist domain-specific methods, which are usually systematic when dealing with requirements. These methods can be applied to the related portion of the prototype and the resulting requirements specification. In fact, for most identified domains, it is possible to define a prototype construction process that is identical to the construction process for information systems that is illustrated in Figure 5.

      3.1.3 Specifying the Application Domain’s Principal Properties and Main Features

The next step is to specify the principal properties, the main features, and the primary components that applications belonging to these domains possess. This step calls for performing some form of domain analysis. Here, the basic idea is to look for kinds of requirements, that is, what kinds of things have to be said about the system in order to specify it properly. A system’s belonging to a domain introduces an entire set of requirements, common to most applications belonging to the domain, of which some might be applicable to the system.

Typically, the various domains are well known and well understood by the development team. Thus, the principal properties, the main features, and the primary components are known to the team members. This knowledge is part of the development organization’s knowledge base. Domains such as the GUI domain, the data acquisition and data analysis domain, and the information system domain are well enough understood that it is no problem for specifiers to identify the domains and their important properties and components. Indeed, for these domains, there are complete libraries of functions to be used to build applications in these domains. When such knowledge does not exist, it can be obtained, at least partially, by performing a domain analysis, by reading related publications, and by learning from similar systems and from domain-specific applications and domain-specific tools. The fact that a team has never developed a similar system in the past and thus lacks some necessary domain-related knowledge does not mean that such knowledge does not exist elsewhere and that it cannot be obtained. It is better to learn about it from existing sources of information than to restate it in a different manner or, even worse, reinvent it.

For each domain, a list is generated of all the important properties and components identified. These lists form the basis for the next step.

      3.1.4 Identifying Properties Applicable to the System Under Development

After obtaining the lists, it is necessary to decide which of these properties are applicable to the system under development. Not all the identified properties are necessarily applicable to the system. For instance, a simulation system, such as the TSG system, is a real-time system. Yet, it manages some kind of a database, and therefore, it possesses properties of an information system. It is user-interface intensive, but it is a data acquisition system as well. A data acquisition system might possess properties such as very high sampling rates per channel, gap-free data acquisition over long sampling periods with real-time data storage to disk, and more, but these might not be applicable to the developed simulation system. Figure 6 illustrates this concept. The list of identified properties should be combined with the lists from Step 1; the combination forms an overall concise summary of the system’s intended functionality.

Figure 6: Classification to Domains

The process of deciding is somewhat fuzzy. It is based on the knowledge we have about the domain, the application problem domain, gut feelings, past experience, cost considerations, schedule constraints, feedback from customers, and many other criteria. Moreover, since the requirements are not clear yet, we might make wrong choices that we will have to undo later, or we might have to make additional ones in the course of prototyping. Therefore, we will probably review this step more than once. Experience has shown that it is recommended to maintain the list of these properties, classify each item in the list according to priority for implementation and necessity, and document the considerations applied. Classification should grade properties as a must, nice to have, irrelevant, for future enhancement, rejected due to a design decision, or any other classification that seems to be relevant. When in doubt, it is recommended to rank requirements with the help of an existing priority ranking technique, for instance pair-wise comparison or numeral assignment, both described in [24]. For instance, the properties of a simulation system such as TSG include the ability to define, execute, and analyze IR scenarios. It was decided that implementing an IR scenario builder is a must and that it would be nice to make it possible to define IR scenarios using an auxiliary IR scenario builder as well. It was decided that the system will support a limited set of predefined mathematical functions, which can be used to define IR scenario components, but that in the future, a link to an external software tool for calculating advanced mathematical functions will be implemented. It was decided that the system will include only basic capabilities for data analysis, consisting mostly of X versus Y plots and alphanumeric tables, and that it is unnecessary, and even irrelevant, to display this data using pie charts or bar graphs. Advanced data analysis will be performed with the aid of other special-purpose software tools. Data will be exported from the system in a standard format that is readable by most commercially available tools. Therefore, the system will include a data export utility.
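As an illustration of the pair-wise comparison technique mentioned above (described in [24]), the following sketch counts how often each property wins a comparison; the preference function stands in for a stakeholder’s judgment, and the property list is a hypothetical rendering of the TSG examples:

```python
# Sketch of pair-wise comparison ranking. The preference function here
# mechanically compares priority grades; in practice a stakeholder
# makes each pair-wise judgment. All data is illustrative.
from itertools import combinations

def pairwise_rank(items, prefer):
    """Rank items by counting how often each wins a pair-wise comparison."""
    wins = {item: 0 for item in items}
    for a, b in combinations(items, 2):
        wins[prefer(a, b)] += 1
    return sorted(items, key=lambda item: wins[item], reverse=True)

properties = {          # hypothetical classification of TSG properties
    "IR scenario builder": "must",
    "pie-chart display": "irrelevant",
    "data export utility": "must",
    "advanced math functions": "future enhancement",
}
grade = {"must": 3, "nice to have": 2, "future enhancement": 1, "irrelevant": 0}
ranked = pairwise_rank(
    list(properties),
    lambda a, b: a if grade[properties[a]] >= grade[properties[b]] else b)
```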

      3.1.5 Deciding Which Properties’ Requirements Will Be Prototyped

The previous steps are not specific to prototyping. They present a way for developers to start to get familiar with a new problem. After completing the previous step, developers are supposed to have enough information about the system in order to start prototyping. What is left to decide is which requirements identified in the previous step will be prototyped.

Not all identified properties are prototyped, particularly if a throwaway prototype is produced. This thesis offers no advice on what to prototype and what not to prototype. The process of deciding what to prototype among a list of possible requirements is left to the developers involved. In this decision, they use criteria such as past experience, level of familiarity with the problem domain, level of uncertainty, risks involved, difficulties in requirements elicitation, implications for future design, availability of proper tools, estimation of the extent to which prototyping will produce valuable information, costs, team management consent, and more. As mentioned earlier, the choices made might be revised in the course of prototyping, after discovering more information and gaining a better understanding of the problem domain.

      3.1.6 Developing the System Prototype

Regardless of the prototyping approach and the chosen prototype construction method, the process of requirements elicitation with the aid of the prototype now starts. Up to this step, several kinds of requirements that might be applicable to the system were identified. What we have, in fact, is a structured list of properties that resembles a preliminary partial table of contents, or in other words, a list of sections of the SRS document of the system. It consists of the properties identified in Step 1 and Step 4. Now the contents of these sections have to be written. One comment is necessary: all the identified properties that are applicable to the system should be documented in the requirements specification document, even if they will not be prototyped.

Building the prototype is done in small iterative steps. Each iteration consists of at least a phase of implementation and a phase of refinement and evaluation.

Figure 7 illustrates the systematization of prototype construction. Each domain defines a subset of activities that have to be performed. From Figure 6 and Figure 7, it is possible to distinguish which properties may be shared by multiple domains, the obvious examples of which are user-interface properties.

Figure 7: An Extended Model for Prototype Construction

This construction method is the kind of systematic approach that was introduced for information systems, which we tried to imitate for other kinds of systems. It is important to note that this approach leads us to identify kinds of requirements, or in other words, types of information to look for. It is not application specific. Therefore, we still have to elicit the system’s requirements. Moreover, the requirements specification we produce might be incomplete if we rely only on the prototype, because there are other kinds of requirements that are not represented either explicitly or implicitly by the UIP. These we will have to elicit with the aid of other methods.

    3.2 The Kind of Information a User-Interface Prototype Contains

A UI is common to many types of systems; it is one of the main reasons why graphical operating systems such as MS-WINDOWS® and MAC-OS® are used for such a broad variety of applications. Each application that has a user interface shares properties that are attributed to the UI application domain. A UIP is primarily a requirements model; it reflects the portion of the system’s requirements that are observable by the system’s users. These requirements can be categorized into the following types,

  1. the functionality and behavior of the application, that is, its reactive nature, including constraints placed on this behavior and operational logic,
  2. the application’s data model, data dictionary, and data processing capabilities,
  3. a taxonomy of the application’s language and a dictionary of application-related terms,
  4. partial specification of the interfaces to other systems, and
  5. general knowledge about the system.

With the help of UIRP, the characteristics, i.e., the views, models, aspects, of each of the properties that were identified after performing the steps defined in Section 3.1 can be discovered, prototyped, and eventually specified. The prototype contains this information regardless of the prototyping paradigm, the tool used, and the modeling technique the developers choose. Its primary sources are the customer’s needs and the problem domain.

A UIP is a means for requirements elicitation as well; as such, it also helps discover information that is not expressed directly by the system’s UI, such as the overall usage of the system and the system’s context within its environment.

The following sections explain each of these aspects in detail.

      3.2.1 Application Functionality and Behavior

UIRP has a significant impact on the functional requirements of a system. Many agree that even if a prototype is incomplete, at least a system’s UI and some of the underlying functionality have to be specified and implemented in a prototype. UIRP is extremely suitable for reactive systems, i.e., systems that react to events from various sources, including users. Through the user interface, developers can specify the user-system interaction and the behavior of the system by describing the way the system reacts to user events. The prototype can even simulate events that normally will not be generated by the system’s user interface. The prototype is a realization of the abstract, and sometimes vague, ideas users have about what they need from the system, about how the system will serve them in performing their work, and about what they expect the system to do or not do. The prototype, as a tangible model, is known to cause enhancement of the functional requirements, sometimes beyond what is really needed. This phenomenon is so significant that it is considered one of the hazards of requirements prototyping that developers have to beware of [17].

Three perspectives are expressed in the course of prototyping,

1. that of the users, who regard the system as a tool to use,

2. that of the developers, who regard the system as a functional entity which performs services or tasks and reacts to external and internal events, and

3. that of the HCI people, if involved, who are concerned with attributes such as user friendliness, suitability to users’ needs, and integration with the users’ work environment and the users’ tasks.

The HCI perspective, although not always explicitly identified in the customer’s requirements document, is a source for many functional requirements, which would otherwise be discovered only later in the development process, after the first version of the system is delivered and the users gain some utilization experience. Each of these perspectives has to be reflected in the prototype and the accompanying requirements models.

One of the most common mistakes is to adopt a positive manner of thinking and to look for information about what the system should do, about what should happen, and about what is allowed to happen, but not about what should not happen, about what the system is not supposed to do, and about constraints. This mistake results from the users’ tendency to state mostly what they want and not what they do not want. The developers must look for this kind of information as well and strive to specify not only the “ifs” but also the “if nots”. Normally, a very large portion of the implementing code will deal with preventing undesired operations from happening, with error detection, exception handling, and error recovery, and with ensuring that specified constraints will be met. A lot of development effort will be invested in implementing and testing these requirements, because there are usually only a few ways by which things perform as expected, and so many more ways by which things go wrong. Moreover, users no longer accept messages such as “Severe error! Please contact your system manager” or “Execution aborted due to unexpected errors”. For some applications, the system must react to error conditions long before a system manager, or even a user, notices the problem. Discovering this kind of information, which is strongly related to the system’s functional specification, early is crucial, because developers can plan how to meet these requirements and design mechanisms to handle future similar requirements. Late discovery will probably result in high implementation costs and messy code.
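The point about specifying the “if nots” can be illustrated with a small sketch; the operation, the limit, and the error messages below are hypothetical:

```python
# Illustrative sketch: the handler states what must not happen (guards,
# error reporting) alongside the desired behavior. The operation name,
# the limit, and the messages are hypothetical.

MAX_TARGETS = 32

def add_target(scenario, target):
    """Add a target to an IR scenario, rejecting undesired operations."""
    if target in scenario:                      # what must not happen
        return "error: target already in scenario"
    if len(scenario) >= MAX_TARGETS:            # a stated constraint
        return "error: scenario is full"
    scenario.append(target)                     # the desired behavior
    return "ok"

scenario = []
results = [add_target(scenario, "t1"), add_target(scenario, "t1")]
```

Most of the lines above deal with what should not happen, which mirrors the observation that a large portion of implementing code is devoted to guards and error handling.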

It is also recommended to explore more than one alternative for implementing the required functionality, and even to alter the initial requirements if necessary. Sometimes what seems very natural to the users and very user friendly to human-engineering personnel turns out to be a very expensive solution for the developers, and selecting a different approach that is almost as natural and friendly may result in much lower costs.

      3.2.2 A Data Model and a Data Dictionary

The Model/View/Controller (MVC) design pattern [25], or Document/View, as Microsoft® people call it, for Graphical User Interface (GUI) systems can help explain the data model and data dictionary. This pattern advises application developers to separate the application data from its views. Assuming that the data model is known, the role of the views is to display the application data and to provide means for editing it in different and diverse ways.

Figure 8: Document/View Design Pattern
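The Document/View separation can also be sketched in code. The following fragment is only an illustration; the class names and the observer wiring are hypothetical, not a particular framework’s API:

```python
# Illustrative sketch of the Document/View separation: one document
# (the data model), several views that render the same data in
# different ways. All names are hypothetical.

class Document:
    def __init__(self):
        self._data = {}
        self._views = []

    def attach(self, view):
        self._views.append(view)

    def set(self, key, value):
        self._data[key] = value
        for view in self._views:        # every view reflects the change
            view.update(self._data)

class TableView:
    """Displays the data as sorted rows."""
    def __init__(self):
        self.rows = []
    def update(self, data):
        self.rows = sorted(data.items())

class SummaryView:
    """Displays only an item count."""
    def __init__(self):
        self.count = 0
    def update(self, data):
        self.count = len(data)

doc = Document()
table, summary = TableView(), SummaryView()
doc.attach(table)
doc.attach(summary)
doc.set("sampling rate", "10 kHz")
```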

In UIRP, the views are created first based on partial and preliminary knowledge about the system. The RAD tool facilitates the creation of displays without the need to implement the underlying data model2. From these views, the application data can be discovered and specified. The developers can elicit information about the source, representation, and processing for each data item viewed and/or modified in the display. Developers can track the flow of information from/to the user interface, to/from the application internal data structures, and from/to the internal data structures to/from other system interfaces as illustrated in Figure 9. 

Figure 9: The Flow of Information Through the System

Figure 9 unsurprisingly resembles a context diagram that represents the context of a system within its environment. This context is also called the universe of discourse (UoD) [36]. It also represents a top-level view of the flow of information from/to the system. Data processing requirements are discovered through tracking the information. Although data processing may be viewed as a kind of functionality, it is more reasonable to describe this aspect of the system’s functional requirements in conjunction with the data modeling information.

      3.2.3 The Taxonomy of the Application Language

The process of requirements elicitation involves communicating with the customer, discovering their needs, and getting familiar with the problem domain. In the course of this communication, a language is formed that includes a set of terms that is used to communicate among the stakeholders and to communicate between the system and its users. These terms form a lexicon. The terms from the lexicon are used to discuss and state requirements, and eventually find their way to the user interface, to the requirements and design documentation, and therefore, to the models created, to the user’s manual, and even to the variable names in the source code. It is important to note that, when a GUI environment is used to develop the prototype, the taxonomy includes not only textual entities but also visual entities such as icons. Figure 10, which is a refinement of Figure 8, illustrates this link. 

Figure 10: The Link Between the Lexicon, the User Interface, and the Data Model

It is important to identify and define, as soon as possible during analysis, a set of terms which is accepted by all the people involved. It includes the nouns and verbs that are names for data elements, entities, operations, events, and tools, and an explanation of the meaning of each.

Terms come from various sources,

  • the application domain, e.g., the terms “IR scenario”, “field of view”, “radiance”, and “intensity” from the electro-optical simulation systems domain,
  • other application domains that the system belongs to, e.g., the terms “zoom”, “scroll”, “sampling rate”, and “linear scaling” from the data acquisition systems domain,
  • the users’ work environment3, e.g., the terms “open-loop simulation”, “pitch”, and “yaw” from the optical alignment domain,
  • the application framework and the execution environment, e.g., the terms “open”, “save as”, “tile”, “cascade”, “cut”, “copy”, and “paste” from the window-based applications domain, and
  • terms invented by the developers while forming the concepts on which the developed system will be based.

It is very hard to define a good taxonomy, just as it is hard to find good names for variables and functions in code. When the chosen terms are intuitive and natural to the users, the system becomes more user-friendly and easier to learn. The existence of such a set is one of the major contributors to the property of common look and feel, which makes the Apple Macintosh and its operating system so popular.

After defining the set, the developers must thoroughly look for duplicates and misuse of terms in the customer’s requirements specification and in the analysis products.
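A simple mechanical check can support this review. The sketch below stores a hypothetical fragment of a lexicon and flags entries that differ only in case or spacing; it is an illustration, not a complete misuse detector:

```python
# Illustrative lexicon fragment: term -> (source domain, meaning).
# The entries echo examples from this section; they are not a real
# project lexicon.

lexicon = {
    "IR scenario": ("electro-optical simulation domain",
                    "a time-ordered description of IR sources"),
    "sampling rate": ("data acquisition domain",
                      "samples acquired per second per channel"),
    "zoom": ("data acquisition domain",
             "enlarge a region of a displayed plot"),
}

def near_duplicates(terms):
    """Flag terms that differ only in case or surrounding spaces."""
    seen, clashes = {}, []
    for term in terms:
        key = term.strip().lower()
        if key in seen:
            clashes.append((seen[key], term))
        seen[key] = term
    return clashes

# Simulate a stray variant of an existing term entering a document.
clashes = near_duplicates(list(lexicon) + ["Zoom "])
```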

A UIP helps discover this kind of information in two ways. As a requirements model, it contains a subset of the terms in the dictionary in its menus, data entry forms, dialogs, and other views, and its functionality is described using these terms. As a requirements elicitation aid, the UIP is used to conduct interviews with the customer, to describe and demonstrate possible use-cases, and to resolve misunderstandings or disagreements. In both cases, all the people involved can relate to the prototype and experiment with the prototype using terms from the lexicon.

      3.2.4 Partial Specification of Interfaces with Other Systems

As illustrated in Figure 9, other interfaces can be discovered by tracking information through the system using information flow. For each data element, it is necessary to inquire from where it comes and to where it goes. For some elements, the tracks come from or lead to outside the system. For these elements, one asks questions similar to those asked for elements of the data model. These questions help to know and, afterwards, to define the system’s boundaries. Specification is only partial, because some of the information does not find its way to the UI. It may be hidden or implicit, and the UI is not aware, for example, of the actual hardware used to convey the data.
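The element-by-element inquiry described above can be sketched as a small bookkeeping exercise. The flows, the endpoint names, and the set of internal endpoints below are hypothetical illustrations:

```python
# Illustrative sketch: track each data element's source and destination;
# elements whose track crosses the system boundary reveal an external
# interface. All flows and names are hypothetical.

flows = [
    # (data element, comes from, goes to)
    ("scenario definition", "user interface", "internal database"),
    ("sampled signal", "acquisition hardware", "internal database"),
    ("exported results", "internal database", "external analysis tool"),
]

INTERNAL = {"user interface", "internal database"}

def external_interfaces(flows):
    """Collect the endpoints that lie outside the system."""
    return sorted({end for _, src, dst in flows
                       for end in (src, dst) if end not in INTERNAL})

boundary = external_interfaces(flows)
```

The endpoints collected this way help define where the system ends and other systems begin.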

Defining the software system’s boundaries is related to overall system design, and therefore usually performed prior to software requirements specification. The top-level view of a structured requirements model, for example, is a context diagram. Connel and Shafer recommend starting modeling requirements with a context diagram for structured prototyping [2], and with a source-sink diagram for object-oriented prototyping [1]. Leite [36] calls the operational environment of a software system the system’s UoD. He assumes that in order to define an application specific language, first the application’s UoD should be defined.

When an application belongs to more than one application domain, there is always the dilemma of whether to implement the full set of capabilities that a domain-specific tool possesses, and thereby, in a way, reinvent the wheel, or to implement only a basic set of capabilities and provide some form of connectivity to other software tools that provide the remaining capabilities. This decision is made during the activities described in Section 3.1.4. For instance, when a system is required to have data analysis capabilities, developers might have to decide whether to implement a full set of capabilities, which may include advanced display types, mathematical and statistical data processing functions, and support for multiple data storage file formats, or to implement only a partial set of simple display graphs, one file format, and a capability to export data from the application to other software tools. Choosing the second option will often result in having to define a new system interface that initially did not exist in the problem domain and a new set of operational requirements concerning the data export capability, all of which will be discovered through the introduction of additional use-cases, scenarios, and UI elements.

Prototyping helps define and validate the UoD. It is important to set the system limits, and sometimes even invent them if they do not initially exist in the application domain. It is important to define where the system ends and other systems begin. Here, a UIP serves mostly as a requirements elicitation and requirements validation aid.

      3.2.5 General Knowledge About the System

During the prototype creation, there is a lot of valuable information that does not fall into any category and that is not expressed directly by any of the models mentioned in the previous sections. This information includes

  • the current state, i.e., before the system existed,
  • the user’s work environment,
  • the user’s work procedures,
  • the application domain in general,
  • the design concepts,
  • the design goals,
  • the specification and design constraints,
  • the assumptions made on which the specification is based,
  • limitations,
  • possible tradeoffs,
  • design options,
  • decisions regarding human engineering,
  • the operational concept,
  • why things were done one way and not the other,
  • how to use the system, i.e., a description that connects all the use-cases to the operational scheme, and
  • development artifacts.

This information is either expressed implicitly by the prototype or written in transcripts produced during work sessions and discussions with the users, along with other unrelated information. Macaulay [14] suggests that six areas of knowledge and understanding are needed before system development begins. They are summarized in Table 1. Prototyping is considered a technique for developing knowledge about categories 2, 5, and 6. All of these categories fit the definition of general knowledge listed above. It is clear that this information is destined to be forgotten as time goes by unless it is documented somewhere. Most of the information that was not written in any formal document in the TSG project was of this kind.

Table 1: The Six Areas of Knowledge that are Needed Before System Development Begins


Areas of Knowledge    | Abstract Knowledge                            | Concrete Knowledge
Users’ Present Work   | 1. Relevant structures on users’ present work | 4. Concrete experience with users’ present work
New System            | 2. Visions and design proposals               | 5. Concrete experience with new system
Technological Options | 3. Overview of technological options          | 6. Concrete experience with technological options

Leveson, in [19], proposes an approach, called intent specification, for writing software specifications. Intent specification provides a way of coping with the complexity of the cognitive demands on the builders and maintainers of automated systems by basing specification on means-ends as well as part-whole abstractions. She defines a hierarchy of five levels of intent, which are related to each other in a means-ends relation, where each level provides intent, i.e., “why”, information about the level below. Publications about software engineering tend to focus on “what” and “how” and unjustly disregard “why”. Most of the information listed above is about “why”. It provides intent information to the next level of specification, the software requirements. Not documenting this information will result in an incomplete, non-traceable, hard-to-understand requirements specification.

      3.2.6 Information Discovered with the Aid of the Prototype

The previous sections described the kind of information a UIP contains as a requirements model. A UIP also helps discover additional information about the system that is implied by the prototype. This information consists mostly of facts that are not expressed directly by the UI. The information is discovered through use-cases and scenarios. Developers use the prototype to define user tasks that will be performed with the aid of the system. Part of the definitions is expressed by the prototype functionality, and part, which is implied by the prototype, is defined using other forms of description. The prototype serves as an aid to extract functional requirements, demonstrate scenarios, and interview the customer about the portions of the scenarios that are not implemented in the prototype. These definitions complete the picture of the overall usage of the system and the system’s context within its environment.

    3.3 How to Represent the Information a User-Interface Prototype Contains

In Sections 3.1 and 3.2, a method to build a prototype was defined, and the kind of information a prototype contains was identified. Figure 11 sums up this approach graphically. It illustrates the structure of the solution proposed by this thesis as a three-dimensional matrix. The first dimension represents the properties that were attributed to the system after analyzing to which domains the system belongs. The second dimension lists the kinds of information a UIP contains. The third dimension represents the discovered requirements. The cells of the matrix are the system’s requirements. Some of the cells might be empty, since the prototype is not necessarily complete, and since some of the prototyped properties are not expressed by all the kinds of information.

Figure 11: The Structure of the Proposed Solution
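The structure of Figure 11 can be sketched as a sparse matrix indexed by (prototyped property, kind of information); empty cells reflect that the prototype is not necessarily complete. All entries below are hypothetical illustrations:

```python
# Illustrative sketch of the three-dimensional structure: requirements
# indexed by (property, kind of information). Entries are hypothetical.

properties = ["IR scenario builder", "data export utility"]
info_kinds = ["functionality and behavior", "data model", "taxonomy",
              "external interfaces", "general knowledge"]

matrix = {}
matrix[("IR scenario builder", "functionality and behavior")] = [
    "the user shall be able to place targets on a background"]
matrix[("data export utility", "external interfaces")] = [
    "exported files shall be readable by commercial analysis tools"]

def requirements_for(prop):
    """The third dimension: all requirements stated for one property."""
    return [req for (p, _), reqs in matrix.items() if p == prop
                for req in reqs]

# Cells not yet filled, i.e., aspects still to elicit or left unstated.
empty_cells = [(p, k) for p in properties for k in info_kinds
               if (p, k) not in matrix]

stated = requirements_for("IR scenario builder")
```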

As explained in Section 3.1, developers and users can go over the system’s properties jointly in an iterative and systematic fashion. They can discuss them by means of modeling, implementation, demonstration, refinement, and validation. They can address all related issues and look for the kinds of information identified in Section 3.2 and, when they come to an agreement, state the requirements formally. The resulting process model enables developers to carry out prototyping explicitly as part of a fully traced requirements elicitation process, in which it is decided and documented ahead of time which aspects, usually in the user interface, are being modeled in the UIP. It helps developers, maintainers, and customers know which parts of the prototype are intended and where to look, in the prototype or in other documents, for requirements information. The tracing links allow easy access to the documentation of any decision, so that when one is following the trace links to track down an answer to a requirements issue, one sees the explicit decision and knows whether to consult the prototype or another document in the requirements specification suite.

This section deals with the ways to represent the requirements that are implemented in the prototype, that is, the third dimension. It is obvious that requirements that are not implemented in the prototype should be stated in a requirements document as well.

As noted above, two approaches exist.

  1. First model, then prototype.
  2. First prototype, then model.

In both cases, a method is needed to capture the prototype’s requirements and to specify them in the requirements documentation.

Defining a method, namely one general method that is applicable to all kinds of systems with a UI, is difficult, in fact impossible, because the choice depends on so many significant factors. For instance,

  1. whether the prototype will be reused or thrown away.
  2. design technique and programming language. For instance, it might be more practical to use OO requirements models if the deliverable system will be designed using an OO method and implemented in C++ or Smalltalk, especially when reusing the prototype.
  3. application domain. Applications from different domains may require different modeling techniques.
  4. application framework and design pattern. For instance, if developers choose to use an application framework, which is based on the MVC design pattern, it is pointless to model application data in a form that does not adhere to this architecture.
  5. required degree of formality. Different applications require different degrees of formality. An application that controls an explosive chemical reaction will most likely require a higher degree of formality than a text editor application.
  6. available modeling tools. At Rafael, for instance, there are three standard CASE tools. The organization’s standards obligate software developers to use one of these tools for modeling requirements and top-level design. Each of these tools supports a limited set of modeling techniques, from which we can choose the most appropriate for the system under development. Therefore, we cannot choose any model we want; we can choose only the ones that are supported by the CASE tool.
  7. organizational standards. Rafael uses two standard software engineering methods, DeMarco and Yourdon’s structured analysis and design method [26] and the recently adopted Unified Modeling Language (UML) method for OO systems [22] and [23]. Rafael also has document templates with predefined contents, approved by the Israeli Institute of Standards, that it has to use. It is important to represent requirements in a form that is readable and maintainable by most people in the implementing group and presentable in formal reviews. The constraints imposed by the modeling tools repeat themselves here.

However, it is possible to define the following principles for choosing modeling techniques.

  1. The principle of means-ends abstraction should be kept [19]. Combining requirements prototyping with a means-ends abstraction produces the following scheme: the level of prototyping specifies “why” and “what”, the level of requirements specification specifies “what”, and the level of design specifies “how”.
  2. All prototyped aspects have to be modeled. The chosen method should have the capability of representing the aspects listed in Section 3.2. This may constitute a problem if the modeling technique is imposed by an organization’s standards. However, it is more important to avoid losing valuable information than to adhere completely to a modeling technique or to a standard. If a technique lacks essential modeling capabilities, it is recommended to broaden it.
  3. The resulting requirements specification should address all the potential customers which were identified in Section 2.2.7. It must also support a combination of formal and informal specification techniques.
  4. The prototype should be documented as a requirements model, particularly if it is not completely reused.
  5. It is acceptable to use multiple views and models to present requirements. The use of multiple views, even though redundant, expresses different perspectives of the problem domain and helps to discover inconsistencies. In fact, in OO, some distinguish between the model, i.e., the one object model, and a variety of views of that model. The existence of the model, which is basically a design entity, is critical, because in OO, design and requirements issues are mixed from the start. This is not necessarily the case for other software construction approaches. In any case, developers have to keep in mind that eventually there will exist a model and that all the other models they create along the way are abstract views of that main model.

I believe these principles have to be fulfilled when choosing a method to state requirements, in order to produce a good requirements model and avoid losing valuable information about software requirements.

It is not the objective of this thesis to recommend one specific method. Still, it is necessary to choose a modeling technique, in order to completely explain and demonstrate the proposed approach. Since I do not intend to invent new models, there being more than enough, I decided to choose one representative model for each of the aspects listed in Section 3.2 and to show how to use this model to capture what was discovered. I believe that the important issue is the concept. Appropriate models should be chosen based on the application domain, on the nature of the application to be developed, and on the factors listed above. The number and types of these models and the level of detail can be chosen after deciding if the prototype will be thrown away. Nevertheless, to demonstrate the concept, one model will be sufficient. Choosing the right model based on a comprehensive comparison of models is a subject for research in its own right.

I had doubts whether

  • to choose one method that includes models for most of these aspects and broaden it with new models if needed, or
  • to choose models from several methods.

In the course of looking for one method, I examined UML. It is a method for OO analysis and design that combines several preexisting methods. Therefore, it has plenty of models. It has models for data modeling, for application behavior, for use-case analysis, and others. There is a drawback in choosing UML, because it is a method for OO analysis and design.

The problem at the heart of this research exists regardless of the chosen modeling technique, meaning, regardless of whether the deliverable system will be constructed using OO techniques, structured techniques, or any other technique developers will find suitable. However, it seems important to mention UML because of one of its main concepts, which I find valuable in the context of the approach proposed by this thesis, namely its invitation to choose the suitable models for your system from the variety of models it offers. Moreover, I could not disregard the evident link between UI and OO.

Although it might create the impression of a mishmash, eventually, I chose to follow the second approach and to mention UML, when applicable, as a representative OO method.

The next sections list the chosen models for each of the aspects identified in Section 3.2. I chose these models because they properly represent the sort of information the prototype was helping me to discover. Other models, for instance, execution schemes, interfaces to peripherals, and timing requirements, were discovered with the aid of the prototype through interviews and demonstrations. It is beyond the scope of this research to explain how to create these models. This kind of information can be found in the methodologists’ own publications and in other related work. However, in each section, a list of guidelines is given. Since the prototype is a requirements model in itself, some of the guidelines are also applicable to prototype construction. These lists do not pretend to constitute a complete set of guidelines. Although some of the guidelines that are presented appear to be platitudes, I found it worthwhile to present them because they have been tested and because they do work. They originate from the experience gained and from the lessons learned during the development of the TSG host prototype. The case study and the concrete examples that are presented in Section 4 further clarify these guidelines and make them more meaningful.

      1. Modeling Functionality and Behavior

Application functionality is probably the most important aspect of requirements specification. The term application functionality has several synonyms and closely related terms, which are all trying to express what the system does or can do. The reason for the existence of different terms is that there are different approaches to requirements modeling and design such as data-oriented, user-centered, functional-oriented, and object-oriented. All these approaches have to address UI issues somehow. The different terms express different points of view.

Several terms were used throughout this thesis in different connotations in order to discuss aspects of what the system does. “Application functionality”, “application behavior”, and “application capabilities” were used in order to talk about functional requirements. “Data processing” was used in order to talk about the flow of information in the system. “User-system interaction” was used in order to talk about functionality from the user’s perspective. “Reactive nature” was used in order to describe the system as a functional entity that reacts to events generated by a user. All these aspects were identified and discussed in Section 3.2. Now, it is necessary to model them.

We have to consider two perspectives when looking for a way to present functional requirements, the user’s perspective and the developer’s perspective. Therefore, it is necessary to use more than one modeling technique.

There are numerous ways to model functionality and particularly user-system interaction. I choose to mention one method that is based on use-cases and scenarios, to model the user’s perspective, and one method to model the system as a reactive, event-driven application.

Scenarios are considered an effective non-formal means of describing functional requirements in a natural language, one that is understandable and intuitive to users. Users tend to think in terms of scenarios. There is a very strong link between scenarios and UI prototyping, because the prototype can be used to demonstrate scenarios in a very tangible way, let the users experiment with them, validate them, and discover new ones through demonstration and interactive interviews.

Describing the system as a reactive entity is very natural for systems with a UI. UI-intensive applications are usually designed as event-driven state machines that perform operations and change states in response to users’ mouse and keyboard events; this is true for most application frameworks I have used as well. Developing a UIP facilitates the definition of the reactive behavior, and the transition from the UIP to the model and back is also very natural, especially when the proper prototyping tool is chosen.

The transition from a set of scenarios to a reactive state machine is natural as well. Scenarios can be divided into basic user actions and events generated by the users on one hand, and observed system reactions on the other. These events and observed reactions are represented by the state machine as well. Scenarios are the general non-formal description of the user-system interaction and of the operation of the system within its UoD, while the state machine is a formal model of only the software’s functional aspects; hence, there might be portions of the scenarios that are not expressed by this functional model. Scenarios also serve as a means of testing the state machine’s operation.
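The link between scenarios and a reactive state machine can be sketched in a few lines of code. The following is a minimal illustration, not part of the thesis’s method or of the TSG system; the states, events, and reactions are invented for the example. A scenario is replayed as a sequence of user events against the machine, and the observed reactions are checked against the reactions the scenario expects.

```python
# Illustrative sketch: scenarios as event sequences tested against a
# reactive state machine. States, events, and reactions are hypothetical.

class ReactiveMachine:
    """Event-driven state machine: (state, event) -> (next state, reaction)."""
    def __init__(self):
        self.state = "Idle"
        self.transitions = {
            ("Idle", "OpenFile"): ("Editing", "file view displayed"),
            ("Editing", "Save"):  ("Editing", "file saved"),
            ("Editing", "Close"): ("Idle", "file view closed"),
        }

    def handle(self, event):
        next_state, reaction = self.transitions[(self.state, event)]
        self.state = next_state
        return reaction  # the observable system reaction

def replay_scenario(machine, scenario):
    """Replay (event, expected reaction) pairs; True if every reaction matches."""
    return all(machine.handle(event) == expected for event, expected in scenario)

# A scenario, expressed as user events and expected observable reactions.
scenario = [("OpenFile", "file view displayed"),
            ("Save", "file saved"),
            ("Close", "file view closed")]

print(replay_scenario(ReactiveMachine(), scenario))  # True
```

The same replay function serves both roles mentioned above: it demonstrates a scenario tangibly and tests the state machine’s operation against it.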

There is a drawback in focusing on modeling functional properties of the system. Perceived functionality and consequently functional requirements tend to change frequently. This change may result in frequent changes of requirements that will make prototyping difficult and the consequent design less robust. This drawback is ameliorated in two ways.

  1. Requirements changes are at the heart of prototyping. Requirements are discovered through gradual refinement of the prototype’s properties, including functionality. Requirements prototyping leads to more correct functional requirements; therefore, the design will be founded on a correct basis. In any case, it is better to experience these frequent changes when they are significantly cheaper to fix.
  2. The prototype construction method described in Section 3.1 is based on reusing requirements from the domains with which the system shares properties. This method helps attain a more general comprehensive functional requirements definition from the start instead of later in the project and in small chunks.

The use of scenarios, even though very widespread, also has some drawbacks. Scenarios are often described too informally or too ambiguously. Leite’s work [37] and [38] shows that it can be otherwise. In addition, the use of another, more formal model in conjunction with scenarios overcomes this problem by providing more than one viewpoint of system functionality.

From all the existing scenario-based methods, I choose to mention the following:

  • the work of Leite, due to his extensive research on the subject of lexicons and scenarios, research that has identified and explored the importance of lexicons and their implications for software requirements, and
  • the work of Jacobson [28], who chose to base his OO method on use-case analysis; this work was also incorporated into UML.

To model the system as a reactive entity, I choose a concurrent state machine and a timing model. Both can be created with the guidance of Harel’s method, StateMate [29] and [30], and its extension for OO modeling, which is also incorporated into UML. I have found this method to be a very powerful means to describe large complex UI-intensive applications.

The following table summarizes the models offered by these methods that are used to model functionality.

Table 2: A Summary of Chosen Functional Models


  Aspect                    Chosen models                             UML chosen models
  User-system interaction   Scenarios described in a structured       Use-case diagrams, scenarios, and
                            lexicon-based language                    sequence diagrams (interaction diagrams)
  Reactive behavior         Statecharts                               State diagrams

If a representation method that offers a higher degree of formality and verifiability is required, it is possible to use more formal methods, after discovering the correct requirements with the aid of the prototype. Such methods include the formal relation-based method described in [31], the method for model checking described in [32], and the method for mapping user requirements to implementations described in [5].

        1. Guidelines for Creating Functional Models

If possible, a modeling technique that supports automatic or semi-automatic prototype generation should be chosen; the automatic construction forms the link between the models and the prototype. Otherwise, it is recommended to follow these guidelines:

  1. It is advisable to reuse requirements from the domains to which the system belongs, instead of restating them in a different manner or, even worse, reinventing them.
  2. Do not disregard good human engineering considerations. Instead, incorporate them into the design from the very start. Human engineering, on one hand, is a source for many functional requirements that are not expressed explicitly in users’ requirements documents and, on the other hand, can significantly simplify functional requirements and prevent the implementation of complex functional attributes that will be eventually thrown away or will degrade the system’s usability.
  3. Start the requirements specification from the UoD.
  4. Identify the main actors, a.k.a. agents [33] and stakeholders, that is, people as well as other systems that will interact with the SUD.
  5. Define the interaction between the SUD and other systems, and try to emulate this interaction even if the interfacing systems do not exist yet.
  6. Define each actor’s roles.
  7. Identify the main capabilities of the system, which can also be defined as the main use-cases of the system.
  8. Look for all the possible ways the system can serve an actor. Make sure that each of these ways has a corresponding use-case. For each way identified, generate the scenarios that describe normal operation and erroneous operations, if they exist.
  9. A good place to start the UI design is the application’s main menu. Define the main menu and the sub-menu items. For each item, define the operation it initiates. From there, follow possible consequent occurrences, and define consequent operations.
  10. For each operation, specify who performs it, its purpose, what should happen, main effects and side effects, constraints, possible errors and exceptions, and error handling.
  11. Identify the main sources for events, namely, event generators such as the keyboard, the mouse, operating system sources, external sources, UI sources such as menus, toolbars, pushbuttons, and all kinds of edited controls, amounting to hundreds of UI events.
  12. Categorize the events by internal or external and by source, type, data arguments, and constraints. For each event, define either by text or by implementation the required handling, functional constraints, pre-conditions, operation main effects, and operation side effects. If the event handling is not trivial or implied by the execution environment, e.g., the response to the “Cancel” push-button in a dialog, then give its specific response.
  13. Define the required background processing and continuous processing, namely what the system does while it waits for user events. A reactive model does not model continuous processing very well, because it is driven by events that occur at specific points in time.
  14. Identify the main operational modes of the system. Define the set of allowable user operations for each mode. Distinguish between program-internal modes and visible modes. Give special attention to the issue of view modality. Modeless views, although sometimes needed, complicate the design of state machines.
  15. The UIP should implement the modes, the defined user operations, and the corresponding system reaction. The prototype’s UI should enable developers to demonstrate the most important, if not all, of the scenarios that were discovered and their defined functionality.
  16. The design of the prototype, although based on the framework supplied by the chosen prototyping tool, should match the state machine and vice versa. All the modes, internal and external states, events, event handling, constraints, background processing, operation main effects, and operation side effects have to be represented in the prototype and in the model. From the state machine perspective, the UI is a source of events and a destination for observable reactions and visible mode changes.
  17. Use the defined scenarios to check the other functional models created.
  18. Use words from the lexicon as much as possible in the scenario description, the UI, the requirements models, and the requirements statements.
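Guidelines 11 through 14 can be illustrated with a small sketch: events are categorized by source and by whether they are internal or external, and each operational mode defines the set of user operations it allows. The event names, sources, and modes below are hypothetical, not taken from the case study.

```python
# Illustrative sketch of event categorization and per-mode allowed
# operations. All names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class UIEvent:
    name: str
    source: str      # e.g., "menu", "toolbar", "keyboard", "external"
    external: bool   # event from outside the system vs. internal UI event

# Allowed user operations per visible operational mode.
ALLOWED = {
    "Viewing": {"Open", "Print", "Zoom"},
    "Editing": {"Save", "Undo", "Cut", "Paste"},
}

def is_allowed(mode: str, event: UIEvent) -> bool:
    """A mode defines the set of user operations the UI should enable."""
    return event.name in ALLOWED.get(mode, set())

save = UIEvent("Save", source="menu", external=False)
print(is_allowed("Editing", save))  # True
print(is_allowed("Viewing", save))  # False
```

In the prototype, the same table would drive which menu items and toolbar buttons are enabled in each visible mode.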
      1. Data and Data Processing Modeling

There are several data-oriented requirements specification methods that offer ways to model data. Data-oriented methods focus on data processing and the data to be processed. The UI is regarded as a source and a destination of data that are sometimes assimilated into other interfaces. The specification of data processing is considered the functional specification of the system. This approach is suitable primarily for information systems.

Many types of systems share properties of an information system without being classified as one. Most systems have data and data-related issues that have to be specified even if they are not the primary issues of specification. The difference from information systems is that the main focus is not on data. As explained in Section 3.2.2, the UIP helps discover system data through its views. The user’s perspective is represented by the UIP in a visual way that is far superior to any paper model, but this is usually only a partial view, which contains only details that are of interest to the users. Developers need a more comprehensive model to represent the internal system data model, a model that contains many details that are hidden from the users.

Data processing specifications are also discussed in this section, although they are considered a kind of functional specification. It just seems more natural to discuss them here.

To model data and data processing, I choose to mention DeMarco’s method [26], a well-known data-oriented approach, and Rumbaugh’s Object Modeling Technique (OMT) [27], which has been incorporated into UML.

The following table summarizes the models offered by these methods that are used to model data and data processing.

Table 3: A Summary of Chosen Data Models


  Aspect           Chosen models                                   UML chosen models
  Data model       Entity-relationship diagram (ERD) and a         Class diagram
                   data dictionary
  Data processing  Context diagram and data flow diagrams          Collaboration diagrams

In OO, the data model and data processing are expressed by the same models due to the principle of encapsulation, which binds the data to the functional specification.
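A one-class sketch may make the encapsulation point concrete. The Measurement entity and its conversion operation are invented for illustration; in a structured specification, the attributes would appear in an ERD entity and the conversion in a DFD process, whereas in OO both live in one class.

```python
# Illustrative sketch of encapsulation binding data to its processing.
# The entity and its operation are hypothetical.

class Measurement:
    def __init__(self, value: float, unit: str):
        self.value = value   # data-model part (would appear in an ERD entity)
        self.unit = unit

    def to_millimeters(self) -> float:
        # data-processing part (would appear as a DFD process bubble)
        factors = {"mm": 1.0, "cm": 10.0, "m": 1000.0}
        return self.value * factors[self.unit]

print(Measurement(2.5, "cm").to_millimeters())  # 25.0
```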

        1. Guidelines for Creating Data Models

Many popular RAD tools that are suitable for UIRP are basically information system RAD tools or client-server RAD tools. Such tools offer aids for defining and implementing data models, which are based on a known paradigm such as relational databases, flat files, or table-based databases. The prototype design is based on the prototyping tool. As for functional models, the problem lies in the need to coordinate between a specific tool and a general modeling technique. The choice of a proper tool will minimize this problem. In addition, even if a suitable tool is not found, there are sometimes ways to emulate one modeling technique by another. For instance, HyperCard  can be used to model objects [1].

If the prototyping tool offers a visual means to design the data model, it is recommended to use the visual models it creates. Otherwise, the following guidelines are recommended.

  1. Try to separate the model from its views.
  2. Distinguish between internal data representation and visual representation.
  3. Identify the main sources and destinations of data.
  4. Start with the UI. The UI is a source and a destination of data.
  5. Track the flow of information through the system.
  6. For each element which is displayed to or edited by the user, specify its type, structure, source, destination, and flow within the system from source to destination. If the internal representation is different from the visible one, specify the required transformation and the processing the data undergo while they are flowing through the system.
  7. Beware of duplicates, i.e. data that are displayed in more than one view or form.
  8. Verify with the users the exact form in which they are accustomed to viewing and entering data, including data formats, engineering units, and allowed values.
  9. Define data integrity checks, consistency rules, and I/O masks.
  10. Look for all the possible ways users are accustomed to viewing data, including various screen displays and printed reports. Make sure that the data required to generate these views are represented in the model and that all the required data processing algorithms are defined, for instance sorting or translating from one representation to another.
  11. Do not forget persistent data. Define the way these data will be stored and retrieved, the storage format, whether more than one format will be supported, and the way changes in these data will be handled in the future. For instance, whenever a data component is added to or removed from a data structure that is stored in a file, there is the question of whether the system will detect the change automatically when reading the data and handle it without user intervention, or whether the system will only generate an exception, so that a user who needs data that were stored prior to the change will have to perform some sort of conversion. This problem is common to many applications that offer backward compatibility with previous versions. Both options require defining a mechanism to handle revisions.
  12. Take into account human engineering considerations. The system is required to provide flexible means to enter data in order to achieve user friendliness. This need may result in additional functional requirements, specifically additional data processing and additional data verification checks.
  13. Combine the definition of the required data related functionality with the functional model. Identify the events that will trigger data processing operations. Define where exactly this processing will be performed, that is, in which state and as a response to which events. Bind the data to these events.
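Guideline 11, on handling format revisions of persistent data, can be sketched as a version tag stored with each record; on reading, older versions are converted one revision at a time without user intervention. The field names and version numbers below are hypothetical.

```python
# Illustrative sketch of version-tagged persistent data with automatic
# upgrade on read. Fields and versions are hypothetical.

import json

CURRENT_VERSION = 2

def upgrade(record: dict) -> dict:
    """Convert older stored formats to the current one, one revision at a time."""
    if record.get("version", 1) == 1:
        # version 2 added a 'unit' field; assume legacy values were in mm
        record["unit"] = "mm"
        record["version"] = 2
    return record

def load(serialized: str) -> dict:
    record = upgrade(json.loads(serialized))
    assert record["version"] == CURRENT_VERSION
    return record

old = json.dumps({"version": 1, "value": 42})
print(load(old))  # {'version': 2, 'value': 42, 'unit': 'mm'}
```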
      1. Modeling Application Taxonomy

The application-related taxonomy is disregarded by all the methods used by our group. The importance of the taxonomy was revealed to us very late in the project, actually too late. Among the reasons for disregarding the taxonomy were

  • not giving the taxonomy the proper amount of attention and
  • the lack of support by the method and the CASE tools we were using.

Sometimes, part of the application’s taxonomy is found in a section of the requirements document called “Glossary of Terms and Abbreviations”, and part is found in the data dictionary, if one exists. The terms of the taxonomy are spread all over the UIP. The primary uses of the taxonomy are to communicate with the users using a well understood set of terms and to describe system functionality in terms of user-system interaction. Since user-system interaction is performed through the system’s UI, it is natural to find these terms all over the UI. UIRP helps to discover new terms and to validate terms. For modeling, there is only one perspective, which is common to users and developers. Developers and users have to agree on the set of terms, because these terms form the base of the language they use to discuss requirements.

For the application’s taxonomy, I mention the work of Leite [35], [36], [37], [38], and [39]. Although it seems that his work is related to information systems, his findings are applicable to other systems as well. Leite chose a Language Extended Lexicon (LEL) as the model. An LEL is a representation of the symbols in the language used to describe a problem.

Two comments are necessary. First, Leite deals mostly with a textual lexicon. In reality, a taxonomy includes additional kinds of items, namely, audio-visual items. These can be described textually, but their original form of representation is more effective. The use of an audio-visual taxonomy is itself a source of requirements, for instance, requirements to support sounds and to represent the application state graphically by means of icons, tool bars, progress indicators, little check-marks near menu items, grayed menu items, hazard indicators, etc. Second, Leite chose to use hypertext to represent a lexicon. This approach integrates well with the model for representing general knowledge about the system that is described in the next section.

UML does not include a corresponding model. In my opinion, it requires an extension.
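To make the lexicon model concrete, here is a minimal sketch loosely following the structure of Leite’s LEL, in which each symbol records its notion, its behavioral responses, and hypertext-style links to related symbols. The entries are invented for illustration and are not taken from Leite’s work or from the TSG case study.

```python
# Illustrative sketch of a lexicon entry with hypertext-style links,
# loosely after Leite's LEL. All entries are hypothetical.

from dataclasses import dataclass, field

@dataclass
class LexiconEntry:
    symbol: str
    notion: list                # what the symbol is or means
    behavioral_response: list   # effects of the symbol in the application
    links: list = field(default_factory=list)  # related symbols

lexicon = {
    "test script": LexiconEntry(
        symbol="test script",
        notion=["a sequence of stimuli sent to the unit under test"],
        behavioral_response=["running a test script produces a test log"],
        links=["test log"]),
    "test log": LexiconEntry(
        symbol="test log",
        notion=["the recorded results of a test script run"],
        behavioral_response=["a test log can be printed or archived"],
        links=["test script"]),
}

# Following a hypertext link from one entry to a related one:
print(lexicon[lexicon["test script"].links[0]].symbol)  # test log
```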

        1. Guidelines for Creating a Lexicon

The sources of the terms in the application taxonomy were identified in Section 3.2.3. Additional heuristics for building a lexicon can be found in [36]. In addition to the heuristics defined there, the following guidelines are recommended.

  1. Even when not exploiting the benefits of a lexicon to the full extent, it is recommended to create a lexicon or a dictionary that includes the definitions of the terms, known synonyms, related terms, and a list of visual and audio conventions. Establishing such a lexicon contributes to good communication with the customer, prevents unnecessary misunderstandings, and improves requirements readability.
  2. Remain as consistent as possible, particularly when designing the UI. Try to avoid duplicates and ambiguities.
  3. Avoid inventing new terms when possible. Instead, maximize the use of terms from the users’ work environment or from the system’s operational environment, for instance, terms that are common to windows-based applications. Consistency and compatibility with the users’ work environment lead to improved system learnability.
  4. Maximize the use of terms from the lexicon in the system’s UI, and in the requirements description.
  5. Combine the lexicon with the functional models and the data models. Use terms from the lexicon to name states, commands, modes, events, data entities, and operations.

The use of application-specific terms sometimes undermines abstraction, because the terms are too specific. Reusing terms reduces this effect. In addition, developers have to beware of this effect and look for abstractions in subsequent design phases.

      1. Modeling Other System Interfaces

The use of a context diagram (see Section 3.3.1) or a use-case diagram (see Section 3.3.2) covers the modeling of interfaces with other systems. The functional model covers the functional aspects of the interface, and the data model covers the data flow aspects of the interface. In any case, it is considered good practice to start the requirements definition with a description of the system’s UoD. Therefore, it is recommended to offer a general description when the diagram is presented and to discuss the finer, more formal details in conjunction with the functional model and the data model, or to describe them in a separate appendix of the requirements specification document. Sometimes, it is necessary to describe system interfaces in a specific document, such as an Interface Control Document (ICD). This usually happens when using a standard interface, when the interface is dictated by the overall system design, or when the interface is defined by preexisting interfacing systems. When this is the case, the interface document serves as a source of requirements that the system has to satisfy in order to maintain the interface. The UIP can help to complete the interface definitions if the interfaces are only partially defined and to emulate the interfaces if the interfacing systems do not exist during prototyping.

      1. Modeling General Knowledge About the System

The importance of preserving the knowledge gained in the course of prototyping is explained in Section 3.2.5. This kind of information provides the dimension of intent to the requirements definition. Like the taxonomy, this aspect is disregarded by all the methods used by our group, even though the information can be found everywhere during the requirements analysis and specification phases. Part of the information is found in the general description portion of the requirements documents, part is found in transcripts which are produced during work sessions, part is found in the documents produced by human engineering personnel, part is found in people’s heads, and part is expected to be found in overall system documents. I used the words “expected to be found” instead of “is found” because system-level specifications are not that different from software specifications. In some cases, they are produced by the same people.

The tendency to specify “what” and “how” and ignore “why” is not unique to software engineers. This kind of information will be probably lost over time unless it is concentrated somewhere. The main consumers of this information are engineers joining the group during the system’s development and software maintenance personnel. It enables them to better understand the system with a depth that is hard to achieve without such knowledge. This kind of information does not contain requirements. It documents and explains other requirements models, the prototype, and eventually the deliverable system. The prototype, as a requirements model, needs a complementary description. Otherwise, the requirements will be understood only through reverse engineering of the prototype, which is quite a difficult task.

What seems to be needed here is a knowledge base that will contain this knowledge. The structure of a context-sensitive help system can be used as a model for organizing this knowledge. This model supports a hierarchical structure with cross-references and links between items. This solution is suitable for systems for which a user exists. An appropriate model for this aspect is hypertext. It is a reasonable choice because this is a common model for context-sensitive help systems, and in fact for the biggest knowledge base there is, the World-Wide Web.
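To make the hypertext idea concrete, the following is a minimal sketch of how such a knowledge base might be organized as linked entries. The entry names and texts are hypothetical illustrations, not taken from the TSG project.

```python
# A minimal sketch of a hypertext-style knowledge base: each entry holds
# explanatory "why" text plus cross-reference links to related entries,
# including lexicon entries. All names here are invented examples.

knowledge_base = {
    "ir-scenario": {
        "text": "Why the system is organized around IR scenarios ...",
        "links": ["scenario-definition", "lexicon:scenario"],
    },
    "scenario-definition": {
        "text": "Rationale for the chosen definition workflow ...",
        "links": ["ir-scenario"],
    },
    "lexicon:scenario": {
        "text": "Lexicon entry: the term 'scenario' as the users employ it.",
        "links": [],
    },
}

def follow(entry_id, depth=1):
    """Print an entry and, up to the given depth, the entries it links to."""
    entry = knowledge_base[entry_id]
    print(f"{entry_id}: {entry['text']}")
    if depth > 0:
        for target in entry["links"]:
            follow(target, depth - 1)

follow("ir-scenario")
```

Each entry carries “why” text and cross-references, so a reader can start from any concept and follow links, much as in a context-sensitive help system; note how the same structure accommodates lexicon entries.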

Writing a draft version of the user’s manual or creating context-sensitive help for the prototype is a practical approach as well. It saves work, it can be reused when creating the delivered system’s documentation, and it can be done any time during the development process with the right support from the tools used.

Moreover, text and charts within general system models are not the only way to describe requirements. Other means such as video, sound, and pictures can be used to better explain and even demonstrate requirements. Users are accustomed to help systems, tutorials, and manuals. These are the primary means, other than courses, to introduce a system to users, to help them understand it and the way it can serve them. I chose this approach based on the experience gained in the TSG project. Whenever I had to present the system, including the design reviews, I always used the prototype to clarify or demonstrate what I was talking about. It was as if I were quoting from the user's manual of a non-existent system, or conducting a user's course. After the debate with the QA people at the ISO 9000.3 review, I came to the conclusion that what was missing was a draft of the user's manual or context-sensitive help for the prototype. The OAD and another document that describes the UI, which were written by human engineering personnel, did not contain all the information. These documents focused on the user's perspective and not on the developer's and the software system's perspectives.

        1. Guidelines for Modeling General Knowledge

The kind of information that should be included in the general system knowledge base is listed in Section 3.2.5. Writing the text for a help system or a user’s manual is a job for a good technical writer or an experienced HCI engineer. The main guideline for modeling general knowledge is to document “why” knowledge that is essential for understanding the system. The use of hypertext can serve the goal of creating a lexicon as well.

The general system knowledge base should also identify the information that the UIP does not contain, namely:

  • concepts that have been left out of the UIP either by lack of knowledge or by decision,
  • artifacts of RP, i.e., code in the UIP that is there only to make it a prototype and that is not to be reused, and
  • information that was intentionally hidden from the user.
    1. Conclusions

Sections [3.1], [3.2], and [3.3] introduced the approach proposed by this thesis. The method is based on

  • the characteristics of UIRP, which were identified in Section 2,
  • guidelines for a systematic approach to prototype construction,
  • identifying the kind of information a UIP contains, and
  • suggesting a way to represent this information, which is based on the prototype construction method, the kind of information identified, and existing representation methods.

The proposed approach gives advice on what to do once the decision to prototype is made, so that developers, maintainers, and customers know which parts of the prototype are intended and where to look, in the prototype or in other documents, for answers to their questions regarding requirements. This approach attempts to deal with the everyday prototyping reality, the approach to prototype construction, the distinction between the information the prototype contains and the modeling method, and the use of the prototype as a requirements elicitation aid and as a requirements model itself. It is also suggested that the amount of knowledge needed to start the prototyping process is small. Unlike in other methods [7], in the beginning there is no requirement for formal models; the developers can start with what they have, which is sometimes very little or very wrong, and continue from there. Some suggest that one can automatically or semi-automatically create a prototype from a complete model. The proposed approach takes a different point of view: a complete model can be created from a good prototype. If the modeling technique is adequate for automatic prototype generation, then the model description language is just another high-level programming language. Formal or semi-formal models can be produced in the course of the process after obtaining sufficient correct information. The number and types of these models and the level of detail can be decided after deciding whether the prototype will be thrown away. What is left to do is to explain the origins of this approach and to demonstrate the reasoning behind it. The next section describes the prototyping experience gained from developing the TSG host UIP.

  1. The Target Scene Generator Case Study

In the previous sections, an approach was suggested for dealing with the problem introduced in Section 1.2. The approach involves forming a generalized concept that is based on an overall view of UIRP.

This section presents the chosen case study, the TSG system [40] and [41]. My work on this system led to my addressing the requirements engineering problem being considered in this thesis. The work on this system is the origin of some of the concepts that were introduced in the previous sections. Consequently, the description of the case study focuses on the more general issues of the TSG system requirements analysis process rather than on system-specific issues.

Although systems like the TSG are not that common, the TSG possesses many properties of more common systems. Therefore, it is a source for many good typical examples. Much can be learned from the experience gained during its development. The project, which has already generated more than 200,000 lines of code, is too big to be covered completely by this thesis. Therefore only portions of it will be introduced.

It is important to note that the concepts introduced by this thesis were formed long after the requirements analysis and requirements specification phase of the project were finished. Therefore, some of the methods suggested here were not fully utilized. In fact, this thesis is trying to improve on techniques that were implemented from gut feelings or from partial past experience. It is trying to learn from the experience gained and from errors that were made, and to suggest solutions to problems encountered in the course of the project development.

In the following sections,

  • a short description of the TSG system is given,
  • some cases from the TSG system requirements analysis process are presented that demonstrate the origin of the concepts on which this thesis is based, and
  • the usage of the approach introduced in Section 3 is illustrated by applying it to an example from the project.
    1. The Target Scene Generator System

The TSG system is a physical effect simulator that generates high fidelity, multi-object IR images. An IR image generated by the system is composed of several objects that represent targets, decoys, and a background. The system allows independent control of appearance and location of these objects at any position within a large field of view. IR images are generated by continuously changing the radiometric and dynamic characteristics of the simulated objects in a controlled fashion. The primary purpose of the system is to serve as a research aid for testing and evaluating the performance of electro-optical missile seekers.

The meaning of the term “physical effect simulator” is that the system actually generates IR radiation and projects it on the focal plane of the seeker under test, in order to simulate IR images. Consequently, the system consists of hardware, firmware, and software components. All collaborate in order to generate radiation in a controlled manner and to project it.

The main types of components of the system are

  • optical elements, both dynamic and static,
  • various types of IR radiation sources,
  • several types of electro-mechanical servo-controlled units,
  • servo-control systems,
  • a computer system consisting of several computers,
    • servo controllers,
    • radiation source controllers,
    • special-purpose embedded systems, and
    • standard PC-based workstations, and
  • the software on the various computers.

The development of the TSG system was a multi-disciplinary effort conducted by physicists, mechanical engineers, servo-control engineers, hardware engineers, and software engineers at RAFAEL.

A system-level analysis and design process preceded the software development process. Some of the products of this process were

  • the conceptual computer system structures depicted in Figure 12,
  • a set of system-level requirements, some of which deal with software, and
  • some system-level design principles and fundamental decisions that significantly influenced the software requirements and formed the basis for the software requirements analysis and specification.

These principles include the following:

  • The entire system operation is controlled by computers.
  • The system supports manual as well as automated operation of all the controllable elements.
  • No hardware control devices such as knobs, buttons, potentiometers, rotary switches, etc. are used to manually control any of the components.
  • UI software controls replace hardware controls and emulate their operation.
  • All the controllable elements support local control as well as remote control.
  • Local control is used for preliminary integration and maintenance purposes only, mostly trouble shooting. Everyday operation is done using remote control.
  • Automated operation occurs in real time at a rate of 200 Hz.
  • The system is operated from a single computer, called the host computer, with the aid of a keyboard and a mouse.
  • The MMI for computers other than the host computer supports only local control. Local control supports the basic set of operations required to carry out maintenance.
  • Standard off-the-shelf technologies, including computers, operating systems, development and execution environment, hardware, and GUI systems, are used as much as possible.
  • A modern graphics-based MMI system, such as Windows™ and Motif™, is used.

In order to explain some of the key terms mentioned above, without getting into technical details, the analogy of a Video Cassette Recorder (VCR) can be used. Local control can be compared to the VCR’s front panel. The front panel of a VCR enables the user to perform basic VCR operations. Remote control can be compared to the VCR’s remote control. Remote control enables the user to perform most of the operations supported by local control and much more. The meaning of “manual operation” is that each operation is done manually by activating the remote control keys. The meaning of “automated operation” is that the user can define a sequence of operations that the VCR will perform automatically, for instance, a time-triggered sequence of recordings. The users program a sequence of dates, times, channels, and duration, and the VCR is turned on automatically at the specified times, performs the recording from the selected source, and then turns itself off when done. “No hardware controls” means that a computer replaces the remote control. Everything is done from a computer by means of a keyboard and a mouse with the aid of a GUI system.

The TSG system can be compared to a recording studio, with several VCRs and lots of other related equipment, all of which are operated by a technician from a single computer-based post, as opposed to having a pile of remote controls as do most of us. 

Figure 12: The Principal Structure of the TSG Computer System

Figure 12 and the principles listed above explain why most of the system’s MMI is concentrated in the host computer. The host computer is supposed to serve the system operator and support the operation of the entire system from a single post by means of a keyboard and a mouse. Consequently, the host computer is a UI-intensive system and is, therefore, a potential subject for UIRP.

    1. The Host Computer User Interface Prototype

We experienced many difficulties during the early stages of the software requirements specification for the host computer. In the beginning, the requirements elicitation did not seem to converge. The main reasons for this were the following:

  1. We had never developed a similar system in the past. Therefore, from the beginning, it seemed that we did not have sufficient knowledge about the application domain; what we were about to develop was entirely new to us.
  2. To the best of our knowledge, a similar system of such complexity had not been developed anywhere in the world. Therefore, there were limited sources of information we could learn from. What we did learn was that similar systems of lesser complexity experienced many difficulties due mostly to frequent requirements changes during their development, and suffered from childhood maladies during the first years of their operation. Therefore, we could expect a similar fate unless we could find an approach that addressed these issues.
  3. The system combines problems from various disciplines, requiring multidisciplinary solutions. Consequently, the context of the software requirements was complex. It was necessary to fully understand this context in order to complete the requirements specification and in order to understand the resulting requirements statements.
  4. The customers had difficulties defining their needs and requirements because the system was supposed to completely change their work methods and supply them with a new set of tools and aids that they had never used before. The primary customers' needs were future needs. As opposed to other cases, in which we had to supply solutions to existing needs or improve existing solutions, in this case we had to invent new needs and to foresee future needs. This task became even more complex due to the fact that the system is supposed to last 15 years. What we could learn from the current work environment and work procedures was limited and partial. Some of it was inapplicable because we intended to change it. In Leite’s [36] terminology, we had to define from scratch major portions of the system's UoD.
  5. The traditional methods used by our group for requirements elicitation were not suitable for this system and did not address all needed aspects of the system. The software engineering methods used by our group focused primarily on requirements modeling. Techniques for requirements elicitation were almost disregarded.
  6. The customers were not familiar with any requirements modeling techniques, including the ones we used: data-flow diagrams, state machines, object models, ERDs, etc. This fact created a serious communication problem with the customers. Consequently, we could hardly validate requirements. Even when the models we created seemed to be complete after they passed all the consistency checks performed by the CASE tools we used, we were not convinced that they represented what was truly required. Needless to say, not being convinced that the requirements specification truly represents the customer needs puts a lot of pressure on the development team, especially in a fixed-budget, tight-schedule project.
  7. Since we were about to develop a simulation system that can generate IR scenes, it was clear that the primary purpose of the system was to enable an operator to define, execute, and analyze IR scenarios. The difficulties were to define an IR scenario and to define the language used to describe the IR scenarios. The first issue is data related and the second issue is lexicon related. Lexicon-related issues were not addressed by the methods used by our group. We felt that the key to solving these problems was the IR scenario definition language. The analogy of a programmer trying to work with text files can explain this matter. In order to open a text file, read a few lines, print them, and afterwards close the file, a programmer does not have to be aware of the operating system, the actual way data are stored on disk, the disk device driver, the fact that it is a 1.44 Mbyte, 16 cylinder, 9 sectors-per-track disk that rotates at a velocity of 100 revolutions per minute, and many other details. Equally so, in order to define the motion of a simulated target, the operator does not have to be aware of the operating system, the way the motion will be generated, the servo-control system, and many other details. Keeping track of these details is the job of the system. Needed was an intuitive, easy-to-use, and preferably visual language that enables users to define IR scenarios and hides all the technical details, and a system that understands the language and can make these definitions happen. The language had to consist of natural terms that are well understood by the system user, much the same way as “open”, “read”, “write”, “print”, and “close” are understood by programmers.
Furthermore, it had to be the users’ language, the language they utilize in order to think about and plan IR images, i.e., the language they use when they wake up in the morning and immediately start thinking about all the wonderful things they will do with the aid of the system when they finally get to work. This language had to have expressive power sufficient to exploit all the system’s hardware capabilities.
  8. It was clear to us very early that immediately after the first version of the system was delivered and the users started to use it, additional user requirements would be presented to us. The TSG, thus, has all the elements of a classic E-type system [43]. Therefore, it was essential to define a solution that provides a good basis for future enhancements.
  9. The sequential approach to software development we were trying to follow did not seem suitable. After a sequence of frustrating attempts to complete the requirements specification and to reach a state in which requirements can be baselined, we realized that the development of an innovative, complex system requires an evolutionary approach. It was impossible for the users as well as for the developers to grasp all at once the required complexity down to the level of the smallest details.

Since we experienced these difficulties during the early stages of the software requirements specification, I decided to rapidly develop a throwaway UIP. This UIP was to improve the communication with the customer and to help complete the requirements specification within a reasonable amount of time. Software engineers, human engineering people, and user group representatives were involved in the development of the prototype.

In parallel with requirements analysis and specification, we were looking for an execution environment for the host computer, namely a combination of a computer, an operating system, and a set of software development tools that satisfy the requirements for real-time operation as well as the system design goals. As a result, the following limitations were imposed on the process of prototype construction.

  • It was inadvisable to develop an evolutionary prototype, because the target system had not yet been determined.
  • We could not commit to a specific UI design in terms of appearance. We had to concentrate on conceptual issues of the UI and the underlying model, and leave the final details to later phases.
  • At the start, we could not choose a formal modeling technique because we realized that the choice of a development environment affects this choice. Thus, we had to start with abstract non-formal techniques.

Eventually, after testing a few tools, we chose Microsoft Access™ to build the host computer’s UIP. We chose it because one of the team members had used it in the past. In the end, it turned out to be a good decision, because finally, the analysis began to advance well. On one hand, we finally had good communication with our intended users, and on the other hand, it was relatively easy and quick to continually change and enhance the prototype. Microsoft Access turned out to be a good prototyping tool, in spite of the fact that it is a tool for creating database applications and we were prototyping the UI for a real-time simulation system. It is a powerful aid for prototyping MMI and functional behavior and for data modeling. It has a relational database, an effective GUI builder, and a powerful but simple programming language to enhance the basic functionality and to create event-driven applications. It is very convenient to use these capabilities. For example, the database is defined by graphical means. The displays and data-entry forms are built interactively and use a query language to read from and write to the database. Because it is a tool for database building, there is a clear separation between the data and their views. This separation is very important, because it helps make the picture clear. These capabilities are not unique to Microsoft Access. Other good RAD tools possess them as well. The point is that having a tool with such capabilities assisted in getting useful information from the users.
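The separation between the data and their views mentioned above can be illustrated with a small sketch. This is not Access code; it is a hypothetical Python analogue in which a query function mediates between a table of data and display forms, so that neither form owns the data.

```python
# A hypothetical sketch (Python, not Access Basic) of the data/view
# separation described above: the "table" holds the data, and each
# "form" is only a view that reads from it through a query.

scenarios = [  # the data: a tiny stand-in for a relational table
    {"name": "head-on approach", "targets": 2, "duration_s": 12.0},
    {"name": "crossing target", "targets": 1, "duration_s": 8.5},
]

def query(table, predicate):
    """A stand-in for the query language between the data and the views."""
    return [row for row in table if predicate(row)]

def summary_form(rows):
    """A view: formats query results for display, holds no data itself."""
    return [f"{r['name']}: {r['targets']} target(s), {r['duration_s']}s"
            for r in rows]

# Two different views over the same data, neither owning it.
long_runs = summary_form(query(scenarios, lambda r: r["duration_s"] > 10))
all_rows = summary_form(query(scenarios, lambda r: True))
```

Because every form goes through the query layer, changing a display during prototyping never touches the data definitions, which is what made continual change of the prototype cheap.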

There was nothing unique in the process of prototyping itself. It was conducted in the manner recommended in many publications about prototype-oriented software development, and eventually the prototype was thrown away. The unique features of this prototype were

  • the context in which it was constructed,
  • the conditions that emphasized the need for a UIP,
  • the benefits that were gained from UIRP,
  • the experience gained in the course of prototyping,
  • the problems we encountered, and
  • some of the lessons we learned.
    1. An Example of the Usage of the Method

The example given in this section demonstrates the use of the method introduced in Section 3. It is a typical example that represents a pattern that repeated itself for nearly all of the system properties that were prototyped. The next section will discuss the general lessons learned from the development of the host computer UIP.

The following sections demonstrate

  • how the construction of the IR scenario related portions of the prototype can be made systematic,
  • how requirements are presented by the prototype,
  • how requirements are extracted from the prototype, and
  • the way the prototype relates to other requirements models.
      1. Forming a Systematic Approach to the Host Computer Prototype Construction

The first step is to define the TSG host computer’s operational environment. The operational environment of the TSG host computer is illustrated in Figure 12. The context-diagram illustrated in Figure 13 shows the IR scenario related TSG host interfaces. The interfaces from the users to the control system and to other software tools are primary interfaces that make it possible for the user to define, execute, and analyze IR scenarios. Thus, the UIP helps define these interfaces. In fact, the interface to software tools for generating IR scenarios, and the interface to special-purpose data-analysis tools initially did not exist in the original customer documentation. They were discovered, specified, and agreed upon with the aid of the UIP. The data elements that are at either end of an arrow representing a data-flow in Figure 13 were supposed to be specified solely with the aid of the prototype. 

Figure 13: The TSG Host Context-Diagram

From the IR scenario related requirements, we identified one kind of user, the researcher who uses the system to test and evaluate the performance of the Unit Under Test (UUT). As a result of what we learned from developing the UIP, we decided that the TSG would enable this kind of user to perform the following tasks:

  • control the system hardware setup,
  • define IR scenarios,
  • import IR scenario definitions prepared with the aid of other simulation software,
  • maintain a library of reusable IR scenario definitions and IR scenario components definitions,
  • check IR scenario definition validity,
  • execute IR scenarios,
  • acquire UUT signals and control system feedback,
  • analyze IR scenario definition and acquired data,
  • manually control all the controllable elements,
  • export data acquired by the system to other data analysis tools, and
  • perform basic trouble-shooting operations in case of a hardware malfunction.

Each of these tasks is represented by one or more scenarios. Their combination constitutes a preliminary list of the capabilities related to the system’s users. This list makes the abstract top-most view of the TSG’s requirements.

The second step is to identify the application domains with which the TSG host shares attributes. The following domains were identified:

  • simulation systems,
  • information systems,
  • data acquisition systems,
  • motion and instrument control systems,
  • real-time simulation systems, and
  • UI-intensive systems.

By characterizing properties of these domains, the following attributes were found to be applicable to the TSG system. 


Simulation system
  • IR scenario generation
  • IR scenario analysis
  • IR scenario execution
  • IR scenario execution monitoring and recording
  • Closed-loop simulation.
Information system
  • Maintaining a library of reusable IR scenario definitions
  • Maintaining a library of reusable IR scenario components
  • Managing a database of complete simulations
  • Performing basic database operations such as storing and retrieving records and generating reports
  • I/O masking
  • Extensive algorithms for data validation and consistency checking
  • Form-driven data definition.
Data acquisition
  • Sampling and storing data from various sources
  • Supporting a set of tools for data analysis which includes mostly on-line and off-line plotting, and table generation
  • Supporting the capability to export sampled data to other applications for further processing and analysis
  • Data storage to disk
  • Synchronization of data recorded from several sources.
Motion and Instrument control
  • Manual and automated control of all elements remotely through communication interfaces
  • Continuous monitoring of all elements
  • Error reporting and error recovery capabilities
UI-intensive
  • User-system interaction conducted by means of keyboard and mouse
  • A windows-based application which adheres to the principles of GUI under MS-Windows using menus, dialog boxes, forms, toolbars, etc.
  • Reactive application driven by user keyboard and mouse events.
Real-time simulation
  • Closed-loop simulation at a rate of 200 Hz
  • Use of a time base common to all sub-systems
  • Synchronization to a central real-time clock.
System-specific properties
  • Hardware setup control
  • System interfaces
 

This list of attributes found to be applicable to the TSG system leads to the TSG host UIP development process model depicted in Figure 14. The symbol “” marks the attributes that were prototyped. The requirements related to the remaining attributes were elicited with the aid of the UIP. 
 

Figure 14: The TSG Host User Interface Prototype Development Process

The process model illustrated in Figure 14 does not add to the information already revealed by the list of properties attributed to the system. Rather, it further clarifies the TSG host’s UIP-oriented requirements elicitation process. As explained in Section 3.1, such a process can be tailored for many systems, just as it was done for the TSG host.

Each of the properties listed above is related to a complete set of questions that has to be asked. They constitute a structured list of requirements that have to be elicited and stated. They represent general issues that should be discussed and that will eventually lead to the specific system requirements, for instance, “What is an IR scenario?”, “What are the primary IR scenario components?”, “How should the behavior of an IR scenario component be defined?”, “Which data should be monitored and recorded in order to monitor scenario execution?”, and many others.

The point is that these questions can be grouped and organized according to the kinds of information that the developers want to discover, which is also the kind of information the UIP will contain. This information includes:

Intent - Why is this needed? What are the main concepts on which the requirements will be based?

Functionality - What functionality does the user expect the system to provide? How does the user expect the system to react? What does the system have to do in order to make the user requests happen?

Application data - What data have to be provided by the user in order to fully define an IR scenario? How does the system validate the data and what does the system do with these definitions?

Taxonomy - Which terms does the user employ in order to interact with the system? Which language is used to describe user-system interaction? Which terms are used to describe IR scenarios?

System interfaces - How does the user expect to interact with the system? What is required from the system interface in order to make happen what the user requested? What kinds of displays does the user expect the system to provide?

Developers and users can go over this list of properties jointly in a systematic fashion. They can discuss them by means of modeling, implementation, demonstration, refinement, and validation. They can address all related issues and, when they come to an agreement, state the requirements formally, just as illustrated in Figure 11 and Figure 14.

The list of user tasks and the corresponding scenarios form a unique view of the required functionality, the view of the system in use. They bind the properties attributed to the system to an operational scheme that explains what the system is good for, why these capabilities are needed, and how these tasks will be performed with the aid of the system. The structured list of requirements resembles a partial table of contents of the system’s requirements document and serves as the basis and the origin to which requirements will be traced. Requirements information about issues that are not included in this list has to be sought elsewhere. The list organizes requirements according to topics. It offers the same view of the system requirements as the view offered by a user’s manual. This view is intuitive to users who are trying to understand the system. On the other hand, formal requirements documents that are produced according to some kind of standard usually group requirements according to models. This form may be more readable for professionals than for non-professionals. The resulting process model enables developers to carry out prototyping explicitly as part of a fully traced requirements elicitation process in which it is decided and documented ahead of time what aspects, usually in the user interface, are being modeled in the UIP. The tracing links allow easy access to the documentation of any decision so that in the future, when one is following the trace links to track down an answer to a requirements issue, one sees the explicit decision and knows whether to consult the prototype or another document in the requirements specification suite.
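As an illustration of the tracing idea, the following hypothetical sketch records, for each requirements topic, whether the answer lives in the prototype or in a named document. All topic and document names are invented for illustration.

```python
# A hypothetical sketch of trace links: each requirements topic records,
# ahead of time, whether its authoritative answer is in the prototype or
# in a named document, so a maintainer knows where to look later.

trace_links = {
    "scenario-editing dialog layout": ("prototype", "IR scenario editor forms"),
    "200 Hz execution rate": ("document", "system-level requirements"),
    "export file format": ("document", "ICD appendix"),
}

def where_to_look(topic):
    """Answer a future requirements question by following the trace link."""
    kind, location = trace_links[topic]
    return f"{topic}: see the {kind} ({location})"
```

Even a table this simple makes the decision of what is modeled in the UIP explicit and documented, which is the point of the process model above.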

      1. The Infrared Scenario Definition Requirements Models

The first step of prototyping was to decide where to start. The main idea was to pick a central issue and evolve from it. It was expected that the discovery of less central issues would follow. The main goal of the host computer’s UI was to enable the system’s user to define, execute, and analyze IR scenarios. Thus, IR scenario definition seemed like a good starting point for prototyping. The project’s RFP and, therefore, each response to the RFP included the following requirement: “The system will enable an operator to define IR scenarios, execute them and analyze the results of the executed IR scenarios”. This sentence is typical of sentences that can be found in many customers’ requirements documents. It asks for a lot but says very little. The goal of prototyping was to specify these vague requirements down to the smallest details. Thus, we had to specify a way for the user to perform these tasks, which involves

  • defining a step-by-step process,
  • saying which options will be available and which operations will be allowed in each step,
  • defining the dialog boxes, menus, toolbars, and feedback displays,
  • deciding whether
    • to implement a modal dialog, which simplifies the design because the application will have a simple state machine or
    • to implement a modeless dialog, which considerably complicates the design and implementation because the application will have a parallel state machine in which the system can be in more than one state simultaneously,
  • choosing which of these options is more appropriate for the users, etc.

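The modal-versus-modeless trade-off mentioned in the list above can be illustrated with a small sketch. A modal design needs only a single current state; a modeless design must track a set of simultaneously active states, i.e., a parallel state machine. The class and state names here are invented for illustration and are not taken from the TSG design:

```python
# Illustrative sketch (hypothetical state names): a modal UI keeps a
# single current state, while a modeless UI must track a set of
# simultaneously active states -- a parallel state machine.

class ModalUI:
    """Only one dialog/state can be active at a time."""
    def __init__(self):
        self.state = "idle"

    def open_dialog(self, name):
        if self.state != "idle":
            raise RuntimeError("a modal dialog is already open")
        self.state = name

    def close_dialog(self):
        self.state = "idle"


class ModelessUI:
    """Several dialogs/states may be active simultaneously."""
    def __init__(self):
        self.active = set()           # the parallel state machine

    def open_dialog(self, name):
        self.active.add(name)         # no restriction: many at once

    def close_dialog(self, name):
        self.active.discard(name)


modal = ModalUI()
modal.open_dialog("ir-scenario-definition")

modeless = ModelessUI()
modeless.open_dialog("ir-scenario-definition")
modeless.open_dialog("analysis")      # legal only in the modeless design
```

The extra complexity of the modeless design shows up in everything that consults the state: every operation must be checked against a set of active states rather than a single one.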
It all started with defining the use-cases. Moreover, we had to say

  • what is an IR scenario definition,
  • what does the user define,
  • what elements an IR scenario is composed of,
  • whether the definition for each element is time dependent,
  • what is the element data type,
  • which values are legal,
  • which values are not legal,
  • how to check the validity of the definition,
  • what processing has to be done in order to translate this definition to commands understood by the servo-control system.

The following scenarios and sub-scenarios explain the IR scenario definition and IR scenario validation process. The description is given in the form presented by Leite in [39]. The bold-faced words represent terms from the lexicon, and italicized words represent sub-scenario names.

        1. The Infrared Scenario Definition Use-Cases Description
TITLE: Defining an IR scenario
OBJECTIVE: To define an IR scenario for the system to execute
CONTEXT: The system must be running in off-line mode.
ACTORS: User
RESOURCES: The TSG host software, a workstation
EPISODES: The user chooses to define an IR scenario.

The system enters the IR-scenario-definition state.

The user can either

  • Load a preexisting definition from the library and edit it, or
  • Create a new IR scenario definition and edit it, or
  • Edit the IR scenario already loaded (if one exists).

The IR scenario definition is displayed.

The user may edit the IR scenario’s general properties, i.e., name, description, and duration. He may also add simulated objects, such as a main target and decoys, to the IR scenario or remove them, and edit each of the defined simulated objects’ properties.

After completing the definition, the user can ask the system to accept it and end the IR scenario definition activity.

In response, the system checks the validity of the defined IR scenario.

If the definition is valid, the system accepts it and exits the IR-scenario-definition state. Otherwise, the system refuses to accept the IR scenario definition. It displays a list of all the IR scenario validity checks that failed in an error message window and returns to the IR-scenario-definition state. The user can fix the definition and ask the system once more to accept the IR scenario definition.

At any time during this operation, the user can ask to save the definition in the library for future use.

At any time during this operation, the user can ask to cancel the operation and exit the IR-scenario-definition state. In response, the system revokes all the changes made by the user, restores the definition that existed prior to the time the operation was initiated, and exits the IR-scenario-definition state.

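The cancel semantics of this use-case, revoking all changes and restoring the definition that existed before the operation started, amount to editing a working copy and then either committing or discarding it. A minimal sketch of this episode, with invented names (the real TSG software is not shown here):

```python
import copy

# Sketch of the edit/accept/cancel episode: the editor works on a deep
# copy, so cancelling simply restores the snapshot taken at entry.
# All names are illustrative, not taken from the TSG software.

class ScenarioEditor:
    def __init__(self, definition):
        self.committed = definition

    def begin_edit(self):
        # snapshot-based editing: work on a copy of the definition
        self.working = copy.deepcopy(self.committed)

    def accept(self, is_valid):
        if not is_valid(self.working):
            return False              # system refuses; stay in edit state
        self.committed = self.working
        return True

    def cancel(self):
        # revoke all changes: restore the definition that existed
        # prior to the time the operation was initiated
        self.working = copy.deepcopy(self.committed)


editor = ScenarioEditor({"name": "scn-1", "duration": 30})
editor.begin_edit()
editor.working["duration"] = 500      # an illegal edit
editor.cancel()                       # the prior definition is restored
```

The same snapshot pattern covers the "system refuses to accept" episode: a failed validity check leaves the working copy in place for the user to fix.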
 
TITLE: Load a preexisting definition from the library
OBJECTIVE: To load a definition of an IR scenario or a simulated object from the library for reuse.
CONTEXT: The system must be running in the edit-IR-scenario state.
ACTORS: User
RESOURCES: The TSG host software, a workstation
EPISODES: The user chooses to load a definition from the library.

The system issues a warning to the user if the current definition (if one exists) was not saved, and asks the user if he wants to save it. The user can choose to ignore the warning, save the definition, or cancel the operation.

If the user did not cancel the operation, the system enters the load-definition state.

The system displays a list of all the definitions that exist in the library and asks the user to choose one. The definitions are ordered by name. The definition’s identifying name is given by the user when he decides to save the definition in the library.

The user can browse the list and get a preview of the definition.

After the user has made his choice, the system loads the chosen definition, overwriting the existing definition (if one exists), and displays it.

At any time during this operation, the user can ask to cancel the operation, exit the load-definition state, and return to the edit state.

 
TITLE: Creating a new definition
OBJECTIVE: To create a new empty definition of an IR scenario or a simulated object.
CONTEXT: The system must be running in the edit-IR-scenario state.
ACTORS: User
RESOURCES: The TSG host software, a workstation
EPISODES: The user chooses to create a new definition.

The system issues a warning to the user if the current definition (if one exists) was not saved, and asks the user if he wants to save it. The user can choose to ignore the warning, save the definition, or cancel the operation.

If the user does not cancel the operation, the system creates an empty definition overwriting the existing definition (if one exists), initializes it with default values, and displays it.

 
TITLE: Editing simulated objects properties
OBJECTIVE: To define radiometric and dynamic characteristics of a simulated object.
CONTEXT: The system must be running in the edit-IR-scenario state.
ACTORS: User
RESOURCES: The TSG host software, a workstation
EPISODES: The user selects the simulated object he wants to edit, and asks to edit its definition.

The system enters the simulated-object-definition state.

As for a complete IR scenario, the user can either

  • load a preexisting definition from the library and edit it, or
  • create a new simulated object definition and edit it, or
  • edit the definition already loaded (if one exists).

The definition is displayed.

The user may edit the simulated object’s general properties, i.e., name and description, as well as its dynamic properties and radiometric properties. In order to do that, the user selects the property he wants to edit and asks to edit the property’s definition, that is, the way the property varies over time.

For each property, the user can choose one of four possible ways to enter a definition. He can either

  1. define the property to be constant, or
  2. enter a mathematical function expression, or
  3. enter a list of pairs of values (time + value), or
  4. load a definition prepared by another simulation tool.

After completing the definition, the user can ask the system to accept it and end the simulated object definition activity.

In response, the system checks the validity of the definition.

If the definition is valid, the system accepts it and returns to the IR-scenario-definition state. Otherwise, the system refuses to accept the simulated object definition. It displays a list of all the simulated object validity checks that failed in an error message window and returns to the simulated-object-definition state. The user can fix the definition and ask the system once more to accept the simulated object definition.

At any time during this operation, the user can ask to save the definition in the library for future use.

At any time during this operation, the user can ask to cancel the operation and exit the simulated-object-definition state. In response, the system revokes all the changes made by the user, restores the definition that existed prior to the time the operation was initiated, and exits the simulated-object-definition state.

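The four property-definition modes enumerated in the episodes above (a constant, a mathematical function expression, a list of time/value pairs, and a definition imported from another tool) can all be modeled as one interface: a value as a function of time. A sketch with hypothetical names, assuming linear interpolation for the tabular mode (the actual interpolation scheme is not specified in the text):

```python
import bisect

# Sketch: each definition mode yields "value at time t".
# Names and the interpolation choice are illustrative assumptions.

def constant(c):
    return lambda t: c

def function(f):                      # a mathematical function expression
    return f

def table(pairs):                     # list of (time, value) pairs
    times = [p[0] for p in pairs]
    values = [p[1] for p in pairs]

    def at(t):
        i = bisect.bisect_right(times, t)
        if i == 0:
            return values[0]          # before the first sample
        if i == len(times):
            return values[-1]         # after the last sample
        t0, t1 = times[i - 1], times[i]
        v0, v1 = values[i - 1], values[i]
        return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

    return at

# "Load a definition prepared by another simulation tool" would parse a
# file into one of the forms above, typically a table of sampled values.

elevation = table([(0.0, 0.0), (10.0, 5.0)])
temperature = constant(300.0)
azimuth = function(lambda t: 0.1 * t)
```

Unifying the four modes behind one "value at time t" interface keeps the rest of the system, e.g., the validity checks and the command translation, independent of how each property was defined.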
 
TITLE: Saving a definition in the library
OBJECTIVE: To save a definition of an IR scenario or a simulated object in the library for future use.
CONTEXT: The system must be running in the edit-IR-scenario state.
ACTORS: User
RESOURCES: The TSG host software, a workstation
EPISODES: The user wants to save a definition in the library.

In response, the system prompts the user for approval. The user can change the definition’s identifying name.

The user has to confirm the save operation. After receiving the user’s confirmation, the system issues a warning if a definition with the same name already exists in the library and asks the user for approval to overwrite the definition. The user can either confirm the operation or cancel it. If the user confirms the operation, the system saves the definition in the definitions library under the name specified by the user and returns to the edit state.

At any time during this operation, the user can ask to cancel the operation, exit the save-definition state, and return to the edit state.

        1. The Infrared Scenario Definition Man-Machine Interface

Figure 15, Figure 16, and Figure 17 illustrate a simplified subset of the controls and forms that are used to define an IR scenario. The arrows connecting the forms represent the user’s actions and the transitions between the forms. 

Figure 15: The Infrared Scenario Related Tasks Toolbar 
 

Figure 16: Infrared Scenario Definition Main Form 

Figure 17: Simulated object Definition

A picture is worth a thousand words. The existence of the UIP displays simplifies and shortens the description of the scenarios. A portion of the functionality is implied by the UI. The use of UI conventions and standard UI designs obviates the need to textually describe some of the intended behavior. The UIP demonstrates the scenarios, explains them, complements them, and eventually becomes an indispensable part of them. This explains how the UIP relates to the scenario-based requirements model. The lexicon, additional functional models, the data model, and all the other models can be produced based on the information extracted from the UI.

        1. The Infrared Scenario Definition Lexicon

Leite [39] states that the objective of the LEL is to understand the problem language without requiring full understanding of the problem. In UI prototyping, the goal is to understand the problem. The application lexicon is discovered simultaneously with other requirements information, and at the same time it is reflected in the other requirements models and the UIP. To demonstrate this duality, a subset of the IR scenario related lexicon entries is given in the form presented by Leite in [39].

IR Scenario

  • Notion:
    • a sequence of IR images projected on the focal plane of the UUT.
    • represents a real-life IR scene viewed by a seeker.
  • Behavioral Response:
    • An IR scenario is composed of one or more simulated objects and a background.
    • The system maintains a library of IR scenario definitions.
    • The actual IR scenario duration is determined by the user.
    • The system executes only valid IR scenarios, meaning IR scenarios which have passed the IR scenario validity checks.

IR scenario duration

  • Notion:
    • the IR scenario length measured in seconds.
    • the time elapsed since the IR scenario starts till the IR scenario ends.
    • the total time that IR images will be generated and projected.
  • Behavioral Response:
    • An IR scenario duration should be greater than zero and less than 120 seconds.
    • The IR scenario definition must be valid throughout the IR scenario duration.
    • UUT signal and control system feedback are acquired at a rate of 200 Hz throughout the IR scenario duration.

Simulated object

  • Notion:
    • simulated entities representing real-life IR images such as targets and decoys.
    • also called IR scenario components.
  • Behavioral Response:
    • An IR scenario is composed of one or more simulated objects and a background.
    • A simulated object is defined by stating the way its dynamic and radiometric properties vary over time.
    • The system maintains libraries of all kinds of simulated object definitions.

Main target

  • Notion:
    • a simulated object representing an aircraft.
  • Behavioral Response:
    • Other simulated objects’ motion may be slaved to the main target’s motion.
    • The system maintains a library of target definitions.

Decoy

  • Notion:
    • a simulated object representing a flare.
  • Behavioral Response:
    • A decoy’s motion may be slaved to the main target’s motion.
    • The system maintains a library of decoy definitions.

Dynamic properties

  • Notion:
    • a set of properties that defines the location of a simulated object within the field of view throughout an IR scenario.
    • supports two motion modes, independent and slaved.
  • Behavioral Response:
    • A dynamic property set consists of a definition of the azimuth and elevation angles of a simulated object.
    • Each property may be defined independently.
    • A dynamic property set is attributed to a simulated object.

Radiometric properties

  • Notion:
    • a set of properties that defines the appearance of the simulated object within the field of view throughout the IR scenario.
  • Behavioral Response:
    • A radiometric property set consists of a definition of the temperature, size, and intensity of a simulated object.
    • Each property may be defined independently.
    • A radiometric property set is attributed to a simulated object.

Motion mode

  • Notion:
    • determines the way a simulated object moves within the field of view.
    • determines the way simulated object dynamic properties are defined.
    • may be either independent or slaved.
  • Behavioral Response:
    • A simulated object’s location may be defined independently of other simulated objects or relative to the main target.
    • When independent, a simulated object’s location is defined relative to the field of view center.
    • When slaved, a simulated object’s location is defined relative to the main target location.

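The two motion modes in the lexicon entry above can be illustrated directly: an independent object's position is given relative to the field-of-view center, while a slaved object's position is an offset from the main target's current location. The sketch below uses invented names and data shapes:

```python
# Sketch (illustrative names): resolving a simulated object's angular
# position at time t, depending on its motion mode. Properties are
# modeled as functions of time.

def resolve_position(obj, main_target, t):
    az, el = obj["azimuth"](t), obj["elevation"](t)
    if obj["mode"] == "independent":
        return (az, el)               # relative to the field-of-view center
    # slaved: defined relative to the main target's current location
    maz, mel = main_target["azimuth"](t), main_target["elevation"](t)
    return (maz + az, mel + el)


main_target = {"mode": "independent",
               "azimuth": lambda t: 0.1 * t,
               "elevation": lambda t: 0.0}

decoy = {"mode": "slaved",            # a decoy slaved to the main target
         "azimuth": lambda t: -0.5,
         "elevation": lambda t: -0.2}
```

The slaved mode is what makes a decoy "follow" the main target automatically: redefining the main target's trajectory moves every slaved object with it.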
IR Scenario definition validity check

  • Notion:
    • an overall test that checks that the complete IR scenario definition is legal, meaning, that it can be executed by the system.
    • checks that the combination of simulated objects is legal and that each of the defined simulated objects’ validity checks has passed.
  • Behavioral Response:
    • An IR scenario definition validity check is performed automatically when the user asks to execute an IR scenario or upon a user’s request.
    • If an IR scenario definition validity check fails, it produces a list of all the checks that failed.
    • The system executes only IR scenarios that successfully pass this check.

Simulated object definition validity check

  • Notion:
    • a test that checks that a defined object can be simulated by the system.
    • translates the simulated object’s properties to the actual sequence of commands that will be sent to the control system during the IR scenario execution, and checks that none of the commands exceeds the controlled element’s limitations.
  • Behavioral Response:
    • A simulated object definition validity check is performed automatically when a user asks to execute an IR scenario, when a user asks the system to accept the definition, or upon a user’s request.
    • If a simulated object definition validity check fails, it produces a list of all the checks that failed.

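The behavioral responses above imply a common pattern: run every check and report the complete list of failures, rather than stopping at the first one. A minimal sketch of such a validity-check runner (the two specific checks shown are invented examples, not the TSG's actual check set):

```python
# Sketch of a validity-check runner that collects *all* failures, as the
# lexicon requires ("a list of all the checks that failed").
# The individual checks are hypothetical examples.

def check_duration(scn):
    if not 0 < scn["duration"] < 120:
        return "IR scenario duration must be greater than 0 and less than 120 s"

def check_has_objects(scn):
    if not scn["objects"]:
        return "an IR scenario must contain at least one simulated object"

VALIDITY_CHECKS = [check_duration, check_has_objects]

def validate(scenario):
    """Run every check; return the list of failure messages.
    An empty list means the definition is valid."""
    failures = [check(scenario) for check in VALIDITY_CHECKS]
    return [f for f in failures if f is not None]


ok = validate({"duration": 30, "objects": ["main-target"]})
bad = validate({"duration": 500, "objects": []})
```

Collecting all failures before reporting is what makes the error-message window described in the use-cases possible: the user can fix every problem in one editing pass instead of discovering them one at a time.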
As explained earlier, it is recommended to maximize the use of words from the lexicon when describing scenarios. Although, in the beginning, the combination of user scenarios and the UI seems very clear and understandable, only after reading the lexicon entries’ descriptions does it become apparent how much information was not included and cannot be deduced from either the UI or the text of the scenarios. The lexicon enhances the clarity and readability of use-cases. It minimizes the ambiguities that might be caused by the use of natural language to describe functionality by basing the scenario descriptions on a minimal set of well-defined, unambiguous terms. It ensures that the same term is used each time a particular concept is mentioned.

Please note that, in addition to the application’s textual taxonomy that is represented by the lexicon, there is a visual taxonomy that consists primarily of defined visual conventions, for instance, the use of icons, the use of “...” to indicate that a UI control such as a push-button or a menu option leads to other displays, and the use of group boxes to indicate the grouping of properties of simulated objects.

The combination of the scenarios, the lexicon, and the UI forms the primary, top-most abstract view of the system’s requirements, the intended user’s view. It provides some of the information required for understanding why the system is needed and for what the system is good. Having the UIP, the lexicon, and the user scenarios facilitates the understanding of other requirements models. The next level of detailing requires producing more formal functional models and a data model.

        1. The Infrared Scenario Definition Functional Model

The functional models are almost trivial for the IR scenario definition example. Most of the formal functional model is implied by the use of modal dialogs and standard event generators such as push buttons. Thus, there is no need to create such a model for this portion of the system. The data flow diagram given in Figure 19 in the next section presents the data-processing-related functionality.

        1. The Infrared Scenario Data Model and Data Processing Model

The model depicted in Figure 18 presents the IR scenario data model. The IR scenario definition task is information-system related. Therefore, the link between the UI and the data model is straightforward. The model is presented in a way that demonstrates the ease of transition from this model to an object’s static model. 

Figure 18: The Infrared Scenario Data Model

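To illustrate the ease of transition from the data model to an object's static model claimed above, the entities of Figure 18 might be transcribed roughly as follows. The class and attribute names are inferred from the lexicon entries, not copied from the actual TSG design:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Illustrative static object model derived from the lexicon entries;
# attribute names are inferred assumptions, not the real TSG data model.

Property = Callable[[float], float]   # a value as a function of time

@dataclass
class SimulatedObject:
    name: str
    description: str = ""
    motion_mode: str = "independent"  # or "slaved"
    # dynamic properties
    azimuth: Property = lambda t: 0.0
    elevation: Property = lambda t: 0.0
    # radiometric properties
    temperature: Property = lambda t: 0.0
    size: Property = lambda t: 0.0
    intensity: Property = lambda t: 0.0

@dataclass
class IRScenario:
    name: str
    description: str = ""
    duration: float = 0.0             # seconds; must satisfy 0 < d < 120
    background: str = "default"
    objects: List[SimulatedObject] = field(default_factory=list)


scn = IRScenario(name="demo", duration=30.0)
scn.objects.append(SimulatedObject(name="main-target"))
```

The one-to-many link between an IR scenario and its simulated objects, and the attribution of dynamic and radiometric property sets to each object, carry over from the entity-relation view essentially unchanged.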
The data-flow diagrams depicted in Figure 19 and Figure 20 present the flow of information and the required data processing. Each bubble in a diagram represents a functional entity that processes inputs and produces outputs. 

Figure 19: The Infrared Scenario Definition Data-Flow Diagram – Part I 

Figure 20: The Infrared Scenario Definition Data-Flow Diagram – Part II

In order to complete the specification of the data-related requirements, the following has to be specified:

  • the processing performed by each of the bubbles that appear in the data-flow diagrams, defined either by means of structured English or by means of pseudo-code,
  • the complete definition of each of the data entities that appear in the entity relation diagram, and the data dictionary,
  • the I/O masks that will be used to filter illegal user inputs,
  • the algorithms for translating the IR scenario definition to commands to the control system,
  • the algorithms for translating the feedback acquired from the control system to the actual IR scenario, and
  • the algorithms needed in order to perform the extensive IR scenario validity checks that are required by the users.

        1. Information about Other System Interfaces that is Implied by the UI

The system interfaces are listed in Figure 14 as system-specific properties. The UI partially hides these interfaces from the system users. The requirements information about them is discovered through the interface-related user displays, through tracking the flow of information from or to the system, and through the description of the system functionality. The system interfaces are part of the functional entity being specified.

The interface to software tools for generating IR scenarios is an example of an interface that is expressed by the UI. The interface to the control system is an example of an interface that is not expressed directly by the UI but can be specified with the aid of the UIP. The TSG host translates the IR scenario definition to commands for a servo system. Since the servo control is done by another computer, there is a real-time interface between the two computers. The servo system is not aware that it is executing an IR scenario; to it, an IR scenario is just a set of coordinated motion commands. During and after IR scenario execution, the user can ask the system to display feedback information about how well the IR scenario was actually executed. The IR-scenario-related user interface and operational logic helped define this interface. It helped choose the right hardware, because we could estimate the throughput and timing requirements. It revealed the need for algorithms to translate from the IR scenario definition to servo commands and from servo readouts to feedback displays.

        1. The Intent Behind the Infrared Scenario Definition Models

A description of the intent is needed in order to complement the requirements models. For the IR scenario definition requirements model, this would include the following:

  • It was decided that the system would support a working manner that resembles the working manner of a compiler-based development environment. That is, a user’s work cycle will look like this: edit → check (compile) → run and monitor execution → edit. The users demanded that the system not allow executing an IR scenario unless it passes a complete validity check, in order to avoid uncertainties that might be caused by the unexpected results of an illegal IR scenario execution. The only uncertainties that are allowed are the ones related to the unit being tested. This demand required us to define an extensive set of validity checks that can be run on an IR scenario before executing it and a mechanism for producing meaningful error messages in case the checks fail. Further, the UI design and the system functionality are based on this work cycle model.
  • The users asked for a visual IR scenario definition language rather than a textual language. They did not want to be obliged to learn the syntax of a textual language and have to remember it for the 10 to 15 years that the system was supposed to serve them. This request seemed very reasonable due to the fact that we knew from past experience with proprietary-tool-specific programming languages that within a few years, any system-specific textual IR scenario definition language is destined to be forgotten.
  • It was decided that the system would allow the users to work in off-line mode and to be able to define and analyze IR scenario definitions without the necessity of using the control system, even to work on a computer other than the host. The intent is to prevent a situation in which the host computer becomes a critical item, because there may be only a single host computer and many users. This imposed two requirements on the host computer software that resulted in several non-functional requirements and a few additional user scenarios:
      1. the off-line portions of the system have to support a stand-alone working mode as well as be portable to other computers, and
      2. the system has to support the loading of an IR scenario and IR scenario components that were prepared on another computer by the TSG host software.
  • The primary role of the library of IR scenarios and IR scenario components was to enable the user to assemble IR scenarios from a reusable set of definitions. It was planned that the users would initially invest a lot of work in creating a set of reusable components, and in the future, after creating an extensive set of definitions, the task of defining an IR scenario would become much simpler and quicker, because all the users will have to do is create a new IR scenario or load an existing one, add or remove predefined components, check the new combination, and execute it. Consequently, the system is required to possess the functionality needed to support this way of working. It has to support the operations of storing and retrieving IR scenario components, as well as complete IR scenarios, to and from the library, and to possess library management capabilities. That is, the system is required to possess some attributes of an information system.
  • We made a distinction between two data sets, the expected IR scenario and the actual IR scenario. The expected IR scenario is the scenario the user defines. The actual IR scenario is the IR scenario performed by the system. It is slightly different from the expected IR scenario because the control system is not optimal and, therefore, has tracking errors. The expected data set is required in order to analyze the expected IR scenario. The actual data set is required in order to know how well the system performed the IR scenario. The performance of the UUT is evaluated relative to the actual IR scenario. Users have to know for sure that when they observe a certain response from the UUT, it is due to the IR scenario defined and not due to an artifact caused by the control system. This explains the requirement to continuously monitor and record IR scenario execution in real time, the requirement to provide data analysis tools that support the comparison and analysis of acquired signals from several sources, and the requirement to provide on-line data monitoring capabilities. That is, the system is required to possess some attributes of a data acquisition system.

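The expected-versus-actual distinction described in the last bullet can be made concrete with a small sketch: given the defined (expected) trajectory and the trajectory recorded from the control system's feedback at the 200 Hz acquisition rate, the tracking error is their per-sample difference. The names and numbers below are illustrative:

```python
# Sketch (illustrative): comparing the expected IR scenario with the
# actual one recorded from the control system's feedback.

RATE_HZ = 200  # feedback acquisition rate stated in the lexicon

def sample(prop, duration):
    """Sample a time-dependent property at the acquisition rate."""
    n = int(duration * RATE_HZ)
    return [prop(i / RATE_HZ) for i in range(n)]

def tracking_error(expected, actual):
    """Per-sample difference between defined and executed trajectories."""
    return [a - e for e, a in zip(expected, actual)]


expected = sample(lambda t: 0.1 * t, duration=1.0)   # the defined azimuth
actual = [e + 0.001 for e in expected]               # a hypothetical servo offset
err = tracking_error(expected, actual)
worst = max(abs(e) for e in err)
```

Evaluating the UUT against the actual trajectory, rather than the expected one, is exactly what lets the user attribute a UUT response to the scenario rather than to a control-system artifact.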
In the section describing the intent, it is recommended to identify artifacts of the UIP that are not to be reused.

  • The entire prototype, including the UI design, was supposed to be thrown away. None of the code that was produced for the UIP was to be reused. We intended to reuse only the conceptual issues of the UI and the underlying models, because we intended to use a different GUI builder for the deliverable system, which would run under a different execution environment.
  • The library management mechanisms had to be redesigned because eventually we decided to use the operating system’s file system instead of a relational database such as the one offered by the prototyping tool we used. We did intend to reuse the underlying data model.

It is clear from these few examples that the dimension of intent contributes to the clarity and understandability of the software requirements. It offers the “why” information that is very important for someone who is trying to understand the system’s requirements. Without this information, the requirements specification is incomplete. It is impossible to extract this kind of information from the UIP or other requirements models in the absence of knowledge of how the UIP was developed.

        1. Summary

The IR scenario definition example is a typical one. It represents a pattern that repeated itself for nearly all of the system properties that were prototyped. It demonstrates the usability of the method proposed by this thesis and the quality of the requirements specification it produces. Having such a specification satisfies the needs of all the customers of the requirements specification process that were identified in Section 2.2.7, especially the system users and software maintenance personnel.

Some comments are necessary.

  1. The models that are presented in the previous section require further details in order to be a complete requirements model. However, there is no need to present the additional details in order to demonstrate the concept proposed by this thesis.
  2. Some of the information appears more than once or is expressed in more than one way. Although this appears redundant, I believe it is necessary because it reflects multiple perspectives, and because none of the individual models sufficiently represents the system requirements. The top-level abstract views are very understandable. They help to clarify the overall problem being specified. On the other hand, they lack the level of detail and precision needed to complete the requirements specification and form a sufficient basis for the design and implementation of the software. That level of detail is left for the developers to fill out.
  3. For some parts of a system being specified, one or more of the models are minor while others are major. Which is which depends on the modeled aspect. In this example, the functional models are almost trivial; most of the formal functional model is implied by the use of modal dialogs and standard event generators such as push-buttons.
    1. Lessons Learned from the Host Prototype Construction

Before discussing the lessons learned from building the TSG host computer prototype, it should be said that UI prototyping worked for us. The primary benefits of the UIP were the improved communication with the customers, the completion of the requirements elicitation process within a reasonable amount of time, and the validation of the basic conceptual requirements and the concepts on which the design was based. In the course of prototyping, we discovered some major misunderstandings with the customer and many contradictory requirements. In fact, most of the first prototype was thrown away because it was totally wrong. We started the development of the production version from scratch, based on a second prototype.

Having said this, we now consider the primary lessons learned from the UIP-oriented development of the host computer.

  • The independence of the requirements information from any particular development process or method was emphasized by the way the software development evolved. We started with structured analysis because this was the standard of our team, and ended with OO analysis because it was more appropriate for a Windows-based system that was about to be developed in C++. We prototyped with the aid of a tool for creating database applications, and developed the deliverable system in a C++ application framework with class libraries from various vendors. The requirements issues remained identical throughout. The information about requirements was always there regardless of the methods, modeling techniques, and tools we used.
  • The importance of choosing the right prototyping tool became apparent after experimenting with several tools. The last tool we evaluated before choosing Access was a C++-based, visual OO application framework. Even though it seemed to produce the desired visual results, it was very hard and time-consuming to reiterate and refine the prototype with it, because the tool lacked the necessary flexibility and ease of use.
  • Access did not adhere completely to either the specific method or the specific notation or modeling technique we used. In spite of this, we managed to conduct the construction of the prototype in an orderly fashion. Specifically, we first analyzed and specified, and afterwards, we designed and implemented. The most efficient way to do it was to follow the design and construction guidelines imposed by the tool. Attempts to deviate from these guidelines ended up in our fighting the tool instead of profiting from its capabilities. Ultimately, the tool imposed hardly any limitations or constraints on us, because it assisted us in prototyping the very aspects that were important to us.
  • The use of a standard GUI system provides a fruitful source of functional requirements and sets standards for the application’s MMI. Standard functionality such as drag and drop, copy, cut, paste, undo, context-sensitive help, context-sensitive pop-up menus, print and print preview, and many other features, which are never explicitly described in the customers’ documents but are considered a de-facto standard, should not be taken lightly. Such features require additional development effort and, thus, should be addressed by the UIP from the start. This observation is applicable to human engineering considerations as well.
  • The fact that the prototype was thrown away drastically reduced some of the hazards attributed to rapid prototyping in references [1], [2], [9], and [17], especially the risk of delivering the prototype or portions of it instead of the system, and the risk of delivering a quick-and-dirty implementation of the system. Although it was very tempting to evolve the system from the prototype, we resisted the temptation. It was even harder to explain to the users, as well as to team management, why something that looks and feels like the real thing is not and will never be the real thing, and what the long-term economic benefits of starting from scratch are. The prototype actually served as an experimental prototype: we could experiment with the problem before implementing a full-scale solution. Starting from scratch gave us the opportunity to begin the design based on well-validated requirements.
  • The lack of communication with the users at the beginning of requirements elicitation exposed two serious disadvantages of our manual work methods.
    1. Unless customers sufficiently understand the requirements modeling technique, and the developers can see that they do, it is almost impossible for the developers to be convinced that the customers really mean what they say when they say “This is what we want.” More often than not, they really mean, “It seems that you did a professional job. However, we do not understand most of what you wrote. On the other hand, it does mention most of the things we asked for.” Even when there was someone who could understand and validate the models we presented, he generally expressed only his own opinion. We were interested in hearing the opinions of other user representatives as well.
    2. It is pointless to create complete and consistent models for the wrong requirements. A model, even if complete, consistent, and very formal, can easily represent wrong requirements. During the first stages of requirements elicitation, it turned out to be more efficient to use abstract methods in order to understand the overall picture and to find the primary details of this picture. The use of more precise methods can be postponed until later stages, after the overall picture is clear. These methods can be used in order to find and define the finer details. Thus, a top-down hierarchy of requirements modeling techniques is recommended.
  • The classification of the application according to domains with which it shares properties proved to be useful. It helped simplify the complexity of the overall software requirements by classifying and grouping requirements. It allowed us to focus on the system’s unique properties. The more standard properties were much simpler to elicit, since we could literally copy requirements, and the users were already familiar with similar properties. It led to reusing requirements, which in turn led to software reuse, because we could purchase standard software components that were intentionally created to address standard domain requirements. It also assisted in making prototype construction systematic. We could conduct a systematic process according to topics, e.g., IR scenario definition, data acquisition, IR scenario execution control, and on-line and off-line analysis, and we knew what to look for.
  • It was stated earlier that a prototype, as a tangible model, is known to cause inflation of functional requirements. This effect is so commonplace that it is considered one of the drawbacks of requirements prototyping of which developers have to beware [17]. It would seem that the reuse of standard development environments and requirements from application-related domains would worsen this effect. It is thought that using this approach would lead developers to voluntarily do unnecessary things that they are not required to do and for which they will never be paid, especially if the project budget is already determined. We learned that this is partially true. Users do seem to always ask for more, and prototyping does tend to create this effect. However, when using a conventional software development approach, most of these new requirements are requirements that would normally be discovered much later, after the users start to experiment with the system.

    The problem of which requirements to accept and which to reject, or which requirements are within the scope of the development effort and which are not, is a managerial and contractual issue that has to be resolved in any case. Moreover, late discovery of requirements can undermine the system's user friendliness and its adequacy to the users' needs. Creating a non-user-friendly, non-useful system can kill a project just as easily as being overdue and over budget.

    A requirement discovered late costs more to implement than the same requirement discovered early. It is necessary for developers to learn about requirements, even future requirements, as soon as possible, to classify and rank them, and to come to an agreement with the customer as to which portion of the requirements will be implemented now and which portion will be left for future enhancements. Knowing about a requirement in advance assists in planning for it. One of the most effective ways to evaluate a design is to test its robustness against anticipated future changes. It would be nice to recalculate project costs after prototyping is done, but since this is usually not possible, developers have to determine, after applying proper reasoning, which of the elicited requirements constitute the system's requirements baseline. Therefore, ranking requirements by priority is important.

  • The chosen prototyping tool determines the prototype construction technique. If the deliverable system’s development environment is already known, it is recommended to choose tools that work with this environment. If possible, the requirements modeling technique should be chosen in view of this development environment as well. A development environment imposes a construction technique on the deliverable system just as the prototyping tool does on the prototype. When a synergistic combination of environment, tools, and modeling techniques is chosen, the transition from models to application and back is more natural; the linkage between the system and its requirements models is clear and understandable, and thus the maintainability of the system is increased. The use of an OO, MVC-based, event-driven application framework requires compatible modeling techniques in order to create a clear and understandable link between the prototype, the requirements models, the design, and the deliverable system.
  • The UIP is a very useful requirements model. It helps to clarify and demonstrate requirements that are presented by other models. Actually, we even used it to market the system long before it was done. As a preliminary model of the system under construction, the UIP relates to other requirements models similar to the way the final system will. It is a realization of these requirements.
  • A UIP enhances the power of use-cases and scenarios. Scenarios are sometimes hard to envision, particularly when they describe future use-cases. Reading several pages of scenarios can cause a user to lose the train of thought, and excessive wordiness sometimes obscures the overall picture. The UIP mitigates this drawback. It helps demonstrate the scenarios that were implemented by the prototype and allows visualizing scenarios when it is used as a storyboard. Its structure helps organize scenarios according to a whole-part abstraction. Documenting the UIP with the help of a context-sensitive help system improves visualization even further. Since the help system is based on scenarios, it creates a link between the scenarios and their visual implementation.
  • The application’s taxonomy is very important. It helps to reveal inconsistencies and contradicting requirements, and it contributes to the application’s user friendliness. Poor as well as good choices of terms find their way farther than one might initially expect: into the UI, the requirements models, the requirements statements, the help system, the user’s manual, and the code, in the form of class names, variable names, function names, etc. Poor choices are very hard to change, because a change creates a huge ripple effect and requires so many other changes. What seems vague or unintuitive now will always remain that way and will undermine the readability of the requirements and design and the system’s ease of use. We had several examples of poor choices of similar terms that expressed totally different concepts. For example, “flare sequencer” and “sequential flare” sound alike, and “flipping mirror”, “switching mirror”, and “transition mirror” sound alike. We always mixed them up, never remembered which was which, never remembered the exact meaning, and tended to talk about one when we meant another. These inconsistencies and duplicates originated in the requirements statements and the prototype. They existed because we did not pay proper attention to lexicon issues. The UIP contributed to reducing this problem. It enabled the users to use the terms, view their usage, and comment about their inadequacy when they exercised the prototype and when they discussed the requirements. The design of the UI, which was based on the principle of uniformity, helped reduce duplicates, but it did not prevent us from uniformly using wrong or inadequate terms.
  • The prototype cannot serve as the only requirements model. Other models are needed. Otherwise, extracting requirements from the prototype becomes a reverse engineering activity, even if the prototype were complete.
  • As expected, the UIP did not cover all needed aspects of the system. It covered only the ones that were expressed either explicitly or implicitly by the UI. It was intentionally designed to abstract and hide the inner workings of the system. Nevertheless, the inner workings had to be specified as well. The UIP did not cover issues such as real-time operation, timing constraints, hardware vendors’ proprietary interfaces, and spare computation power.
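The MVC-based, event-driven framework mentioned above keeps the link between the UI (view) and the underlying data (model) explicit through an observer mechanism. The following is a minimal sketch of that mechanism only; it is not code from the TSG project or its framework, and all class, method, and attribute names are hypothetical.

```python
class Model:
    """Holds application state and notifies registered views of changes."""

    def __init__(self):
        self._observers = []
        self._state = {}

    def attach(self, observer):
        # A view registers itself to be told about model changes.
        self._observers.append(observer)

    def set(self, key, value):
        # Changing the model triggers an event to every attached view.
        self._state[key] = value
        for obs in self._observers:
            obs.update(key, value)


class View:
    """Renders model changes; a real framework would drive UI widgets here."""

    def __init__(self, name):
        self.name = name
        self.displayed = {}

    def update(self, key, value):
        # React to the model-change event by refreshing the display.
        self.displayed[key] = value


model = Model()
view = View("scenario-editor")
model.attach(view)
model.set("scenario_name", "IR scenario 1")
print(view.displayed["scenario_name"])  # -> IR scenario 1
```

Because the view never stores authoritative state of its own, a UIP view built this way can later be replaced by the deliverable system’s view while the model, and hence the requirements it embodies, stays recognizable.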
  5. Conclusions

There is a problem in evaluating the benefits of research work in the field of software engineering. We rarely, if ever, get the opportunity to develop a project larger than a toy in more than one way and to compare the implementations. Certainly it will not happen in a $6,000,000 project. Even if we do get such an opportunity, the various implementations are hard to compare for several reasons. The opportunity typically arises only when the first implementation fails, and then, if we do better in the second trial, it might be thought that the second way is more successful. In this case, the first version of the system becomes a very expensive throw-away prototype of the deliverable system. However, conclusions cannot be drawn by comparing the methods, since we cannot be certain that the second-time success is not due to learning from the first-time failure. What can be done is to exercise a proposed approach over a relatively long period of time on several projects, gather information that can assist in evaluating the benefits of the approach, and compare this information to information gathered from past projects and projects that followed a different approach. This thesis presents the problems encountered in the course of development of a recently delivered project, and the research that was conducted afterwards in order to find an approach that would address some of these problems. Therefore, the information required for a full evaluation is not available, and we are left with a lessons-learned, introspective evaluation of a one-time case study.

It was quite surprising to discover that although UIRP is a well-known, widely adopted approach, which is discussed in numerous publications, the problem of capturing the information a prototype contains is almost completely disregarded. This thesis is a first attempt to address this issue, by taking a generalized approach that is based on an overall view of UIRP.

Section 2 identified the characteristics of UIRP. Section 3 offered a method to make prototype construction systematic, identified the primary kinds of information a UIP contains, and offered a method to represent these requirements. Section 4 gave examples from a real medium-scale software development project with real everyday problems, which, I believe, are common to other projects. I know I had the feeling of déjà vu more than once.

The TSG host computer UIP example was presented in order to illustrate the reasoning and the systematic process. One example is sufficient to demonstrate the use of the approach proposed by this thesis, since the pattern illustrated by the example repeated itself for the rest of the UI-oriented software requirements.

This thesis does not intend to create the impression that once developers follow the proposed approach, all their elicitation and specification problems are solved. Like most of the work in software engineering, this work is about techniques that assist in dealing with software development problems. Proofs cannot be provided that demonstrate the correctness of this approach. What can be offered is a set of techniques and methods that assist in approaching everyday problems and refraining from making unnecessary mistakes. The real problem of finding what has to be done is still left to be solved anew by developers for each project. I hope that the lessons learned, the examples, and the demonstrated usage given in Section 4 show

  • that this approach is useful,
  • that it yields valuable results from which other prototyping-oriented projects can benefit,
  • that it addresses the problems described in Section 1.2, and
  • that it helps to decrease these problems.

The illustrated modeling method captures all the important properties of the system and provides sufficient information about requirements that adhere to the concept of intent specification [19]. Presenting requirements in additional models would be unnecessary, because as Leveson [19] states, attempts to include everything in a specification are not only impractical, but are wasted effort, and are unlikely to fit the budgets and schedules of industrial projects.

In the course of conducting this research, I encountered several requirements prototyping related issues that were either knowingly disregarded or vaguely discussed. Some of them were mentioned in this thesis. More work has to be done in the following research areas:

  1. how to perform, when needed, a domain analysis in order to identify the principal properties and main features of an application domain,
  2. how to decide which properties of the system will be prototyped,
  3. how to decide if the prototype will be reused,
  4. how to select an appropriate prototyping tool,
  5. how to choose an appropriate requirements modeling technique for an application, and
  6. how to apply the method suggested by this thesis to a prototyping-oriented project from the start and evaluate its influence on the development effort.
  6. Acknowledgements

I wish to thank my advisor Prof. Daniel Berry who assisted me in presenting this work. I would also like to thank Dr. Ephi Pinski from the missile division electro optical department at Rafael, who made it possible to publish the work done for the TSG project.

  7. References
  1. J.L. Connell and L.B. Shafer. Object-Oriented Rapid Prototyping, Yourdon Press/Prentice Hall, 1995.
  2. J.L. Connell and L.B. Shafer. Structured Rapid Prototyping, Yourdon Press/Prentice Hall, 1989.
  3. S.J. Andriole. “Fast, Cheap Requirements Prototype, or Else!”, IEEE Software, March 1994, pp. 85-87.
  4. J. Bowers and J. Pycock. “Talking Through Design: Requirements and Resistance in Cooperative Prototyping”, Department of Psychology, University of Manchester, UK.
  5. D.J. Duke and M.D. Harrison. “Mapping User Requirements to Implementation”, Software Engineering Journal, January 1995, pp. 13-20.
  6. F. Brooks. The Mythical Man-Month, Second Edition, Addison-Wesley, Reading, 1995.
  7. K.E. Lantz. The Prototyping Methodology, Prentice-Hall, Englewood Cliffs, NJ, 1986.
  8. W. Bischofberger and G. Pomberger. Prototyping-Oriented Software Development - Concepts and Tools, Springer-Verlag, 1992.
  9. D.S. Linthicum. “The Good, the RAD, and the Ugly”, DBMS, February 1997, pp. 22-24.
  10. H. Gomaa. “The Impact of Rapid Prototyping on Specifying User Requirements”, ACM SIGSOFT Software Engineering Notes, Vol. 8, No. 2, April 1983, pp. 17-28.
  11. M. Hill. “Parasitic Languages for Requirements”, Proceedings of ICRE ’96, the International Conference on Requirements Engineering, pp. 69-75.
  12. M. Keil and E. Carmel. “Customer-Developer Links in Software Development”, Communications of the ACM, Vol. 38, No. 5, May 1995, pp. 33-44.
  13. S. Jones and C. Britton. “Early Elicitation and Definition of Requirements for an Interactive Multimedia Information System”, Proceedings of ICRE ’96, the International Conference on Requirements Engineering, pp. 12-19.
  14. L. Macaulay. “Requirements for Requirements Engineering Techniques”, Proceedings of ICRE ’96, the International Conference on Requirements Engineering, pp. 157-164.
  15. G. Kösters, H.W. Six and J. Voss. “Combined Analysis of User Interface and Domain Requirements”, Proceedings of ICRE ’96, the International Conference on Requirements Engineering, pp. 199-207.
  16. E.A. White, H.T. Stump, L.A. Ness and D.W. Schultz. “Project Aurora: Dawn of a New Way”, Proceedings of ICRE ’96, the International Conference on Requirements Engineering, pp. 165-172.
  17. V.S. Gordon and J.M. Bieman. “Rapid Prototyping: Lessons Learned”, IEEE Software, January 1995, pp. 85-95.
  18. G. Pomberger and G. Blaschek. Object-Orientation and Prototyping in Software Engineering, Prentice Hall Europe, 1996.
  19. N.G. Leveson. “Intent Specifications: An Approach to Building Human-Centered Specifications”, Dept. of Computer Science and Engineering, University of Washington.
  20. Luqi and M. Ketabchi. “A Computer-Aided Prototyping System”, IEEE Software, March 1988, pp. 66-72.
  21. Luqi. “Software Evolution Through Rapid Prototyping”, IEEE Computer, May 1989, pp. 13-25.
  22. The UML Document Set, Version 1.0, 13 January 1997.
  23. Introduction to the Unified Modeling Language for Real-Time Systems Design, Version 2.0.
  24. J. Karlsson. “Software Requirements Prioritizing”, Proceedings of the International Conference on Requirements Engineering, April 1996, pp. 110-116.
  25. E. Gamma, R. Helm, R. Johnson and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley Longman, January 1997.
  26. T. DeMarco. Structured Analysis and System Specification, Yourdon Press, 1978.
  27. J. Rumbaugh et al. Object-Oriented Modeling and Design, Prentice Hall, 1991.
  28. I. Jacobson et al. Object-Oriented Software Engineering, Addison-Wesley, 1992.
  29. D. Harel. STATEMATE: A Working Environment for the Development of Complex Reactive Systems, I-Logix Inc., Burlington, MA, 1988.
  30. The Language of STATEMATE, I-Logix Inc., 1987.
  31. J. Desharnais, M. Frappier, R. Khedri and A. Mili. “Integration of Sequential Scenarios”, Proceedings of the International Conference on Requirements Engineering, 1997, pp. 310-326.
  32. M.B. Dwyer, V. Carr and L. Hines. “Model Checking Graphical User Interfaces Using Abstraction”, Proceedings of the International Conference on Requirements Engineering, 1997, pp. 244-261.
  33. M.D. Harrison, R. Fields and P.C. Wright. “A Conceptual Framework for Scenario Based Enquiry”, Department of Computer Science, University of York, Heslington, York, September 1997.
  34. K. Breitman and J. Leite. “Using Scenarios to Customize Requirements in the Context of the Draco Paradigm”, Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro, Brasil.
  35. J. Leite. “Elicitation of Application Language”, Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro, Brasil, July 1989.
  36. J. Leite. “Application Language: A Meta Level Requirements Strategy”, Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro, Brasil, August 1990.
  37. J. Leite. “Enhancing the Semantics of Requirements Statements”, Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro, Brasil, May 1992.
  38. J. Leite. “Eliciting Requirements Using a Natural Language Based Approach: The Case of the Meeting Scheduler Problem”, Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro, Brasil, March 1993.
  39. J. Leite, M.C. Leonardi and G. Rossi. “Deriving Object-Oriented Specifications from External Scenarios”, Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro, Brasil.
  40. E. Pinski and D. Sturlesi. “Generation of Dynamic IR Scene for Seekers Testing”, Proceedings of SPIE (The International Society for Optical Engineering), Infrared Technology and Applications XXIII, Vol. 3061, Part 2, April 1997, Orlando, Florida, pp. 20-25.
  41. D. Sturlesi and E. Pinski. “Target Scene Generator (TSG) for Infrared Seeker Evaluation”, Proceedings of SPIE (The International Society for Optical Engineering), Technologies for Synthetic Environments: Hardware-in-the-Loop Testing II, Vol. 3084, April 1997, Orlando, Florida, pp. 111-119.
  42. L.K. Dillon and S. Sankar. “Guest Editorial: Introduction to the Special Issue”, IEEE Transactions on Software Engineering, Vol. 23, No. 5, May 1997.
  43. M.M. Lehman. “Programs, Life Cycles, and Laws of Software Evolution”, Proceedings of the IEEE, Vol. 68, No. 9, September 1980, pp. 1060-1076.
  44. B.W. Boehm. “A Spiral Model of Software Development and Enhancement”, IEEE Computer, May 1988, pp. 61-72.

1 The plain word “scenario” here and in the rest of the thesis means an instance of a use-case. When an infrared scenario is meant, “IR scenario” is used.

2 There is a UIRP approach, called Pretty Screens, that concludes UIRP after implementing the UI screens. There is also a requirements elicitation approach that is based on presenting requirements with the aid of a multi-media representation [34]. Both approaches are based on presenting the visual static aspects of UI and providing a complementary textual or verbal description of the underlying requirements.

3 Do not confuse the users’ work environment with the application domain. The users’ work environment exists even if the users have never used a system like the one which is about to be developed.

4 This is the most likely approach that will be taken when an appropriate prototyping CASE tool is available.

5 Closed-loop simulation is basically the ability to change the simulation in real-time in response to the observed behavior of the unit under test.
