Survivability: Protecting Your Critical Systems
Authors:
Robert J. Ellison
David A. Fisher
Richard C. Linger
Howard F. Lipson
Thomas A. Longstaff
Nancy R. Mead
CERT® Coordination Center
Software Engineering Institute
Carnegie Mellon University
Pittsburgh, PA 15213-3890
© Copyright 1999 by IEEE
Survivability in Network Systems
Contemporary large-scale networked systems that are highly distributed improve the efficiency and effectiveness of organizations by permitting new levels of organizational integration. However, such integration is accompanied by elevated risks of intrusion and compromise. These risks can be mitigated by incorporating survivability capabilities into an organization's systems. As an emerging discipline, survivability builds on related fields of study (e.g., security, fault tolerance, safety, reliability, reuse, performance, verification, and testing) and introduces new concepts and principles. Survivability focuses on preserving essential services in unbounded environments, even when systems in such environments are penetrated and compromised.
The New Network Paradigm: Organizational Integration
From their modest beginnings some 20 years ago, computer networks have become a critical element of modern society. These networks not only have global reach, they also have impact on virtually every aspect of human endeavor. Networked systems are principal enabling agents in business, industry, government, and defense. Major economic sectors, including defense, energy, transportation, telecommunications, manufacturing, financial services, health care, and education, all depend on a vast array of networks operating on local, national, and global scales. This pervasive societal dependence on networks magnifies the consequences of intrusions, accidents, and failures, and amplifies the critical importance of ensuring network survivability.
A new network paradigm is emerging. Networks are being used to achieve radical new levels of organizational integration. This integration obliterates traditional organizational boundaries and integrates local operations into components of comprehensive, network-based business processes. For example, commercial organizations are integrating operations with business units, suppliers, and customers through large-scale networks that enhance communication and services. These networks combine previously fragmented operations into coherent processes open to many organizational participants. This new paradigm represents a shift from bounded networks with central control to unbounded networks.
Unbounded networks are characterized by distributed administrative control without central authority, limited visibility beyond the boundaries of local administration, and a lack of complete information about the entire network. At the same time, organizations' dependence on networks is increasing, and the risks and consequences of intrusions and compromises are amplified.
The Definition of Survivability
We define survivability as the capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents. The term system is used in the broadest possible sense, to include networks and large-scale systems of systems. In particular, the focus of survivability is on unbounded networked systems where traditional security precautions are inadequate.
The term mission refers to a set of very high-level (i.e., abstract) requirements or goals. Missions are not limited to military settings; any successful organization or project must have a vision of its objectives, whether they are expressed implicitly or as a formal mission statement. Judgments as to whether or not a mission has been fulfilled are typically made in the context of external conditions that may affect the achievement of that mission's goals. For example, assume that a financial system shuts down for 12 hours during a period of widespread power outages caused by a hurricane. If the system preserves the integrity and confidentiality of its data and resumes its essential services after the period of environmental stress is over, the system can reasonably be judged to have fulfilled its mission. However, if the same system shuts down unexpectedly for 12 hours under normal conditions (or under relatively minor environmental stress) and deprives its users of essential financial services, the system can reasonably be judged to have failed its mission, even if data integrity and confidentiality are preserved.
Timeliness is a critical factor that is typically included in (or implied by) the very high-level requirements that define a mission. However, timeliness is such an important factor that it is included explicitly in the definition of survivability.
The terms attack, failure, and accident are meant to include all potentially damaging events; but these terms do not partition these events into mutually exclusive or even distinguishable sets. It is often difficult to determine if a particular detrimental event is the result of a malicious attack, a component failure, or an accident. Even if the cause is eventually determined, the critical immediate response cannot depend on speculations about the cause.
Attacks are potentially damaging events orchestrated by an intelligent adversary. Attacks include intrusions, probes, and denials of service. Moreover, the mere threat of an attack can have as severe an impact on a system as an actual occurrence. A system that assumes an overly defensive position because of the threat of an attack may significantly reduce its functionality and divert excessive resources to monitoring the environment and protecting system assets.
Failures and accidents are included as part of survivability as well. Failures are potentially damaging events caused by deficiencies in the system or in an external element on which the system depends. Failures may be due to software design errors, hardware degradation, human errors, or corrupted data. Accidents describe a broad range of randomly occurring and potentially damaging events such as natural disasters. Accidents are often externally generated events (e.g., outside the system) and failures are typically internally generated events.
With respect to system survivability, a distinction between an attack and a failure or accident is less important than the impact of the event. It is often not possible to distinguish between intelligently orchestrated attacks and unintentional or randomly occurring detrimental events. Our survivability approach concentrates on the effect of a potentially damaging event. Typically, for a system to survive, it must react to (and recover from) a damaging effect (e.g., the integrity of a database is compromised) long before the underlying cause is identified. In fact, the reaction and recovery must be successful whether or not the cause is ever determined.
Finally, it is important to recognize that it is the mission fulfillment that must survive, not any particular subsystem or system component. Central to the notion of survivability is the capability of a system to fulfill its mission, even if significant portions of the system are damaged or destroyed. System is often used as a shorthand term for a system with the capability to fulfill a specified mission in the face of attacks, failures, or accidents. Again, it is the mission, not a particular portion of a system, that must survive.
Survivability in Unbounded Networks
The success of a survivable system depends on the computing environment in which the survivable system operates. The trend in networked computing environments is toward largely unbounded network infrastructures. A bounded system is one in which all of the system's parts are controlled by a unified administration and can be completely characterized and controlled. At least in theory, the behavior of a bounded system can be understood and all of its various parts identified. In an unbounded system there is no unified administrative control over the system's parts. The term administrative control is used here in the strictest sense; it includes the power to impose and enforce sanctions and not simply to recommend an appropriate security policy. In an unbounded system, each participant has an incomplete view of the whole, must depend on and trust information supplied by its neighbors, and cannot exercise control outside its local domain.
An unbounded system can be composed of bounded and unbounded systems connected together in a network. Although the security policy of an individual bounded system cannot be fully enforced outside of the boundaries of its administrative control, the policy can be used as a yardstick to evaluate the security state of that bounded system. Of course, the security policy can be advertised outside of the bounded system; but administrators are severely limited in their ability to compel or persuade outside individuals or entities to follow it. This limitation is particularly true when an unbounded domain spans jurisdictional boundaries, making legal sanctions difficult or impossible to impose.
An unbounded environment thus exhibits distributed administrative control without central authority, limited visibility beyond local administrative boundaries, and incomplete information about the network as a whole.
The Internet is an example of an unbounded environment with many client-server network applications. A public Web server and its clients may exist within many different administrative domains on the Internet. Many business-to-business Web-based e-commerce applications depend on conventions within a specific industry segment for inter-operability. Within the Internet, there is little distinction between insiders and outsiders. Everyone who chooses to connect to the Internet is an insider, whether or not they are known to a particular subsystem. This characteristic is the result of the desire, and modern necessity, for connectivity. A company cannot survive in a highly competitive industry without easy and rapid access to its customers, suppliers, and partners. More and more, a company's partners on one project are its competitors on the next, so that trust becomes an extremely complex concept. Trust relationships are continually changing, and in traditional terms may be highly ambiguous. Trust is especially difficult to establish in the presence of unknown users from unknown sources outside one's own administrative control. Legitimate users and attackers are peers in the environment and there is no method to isolate legitimate users from the attackers. In other words, there is no way to bound the environment to legitimate users using only a common administrative policy.
Most security technology depends on certain underlying assumptions about the nature and structure of systems. Generally, these include assumptions that systems are closed with central administrative control, and that the capability exists to observe any desired activity within the system. These assumptions may have been appropriate when systems were isolated islands with highly controlled interfaces to the rest of the world. Today, however, systems are open, with no one person or organization having administrative control, and with any observer, whether inside or outside the system, having only limited visibility into the structure, extent, and topology of the system. Lack of central administrative control and an absence of global visibility are properties of the Internet and of distributed applications residing on the Internet.
Much of today's research and practice in computer-systems survivability takes a security-based view of defense against computer attacks. The traditional firewall concept has been expanded into what are called boundary controllers. For example, a secure DoD domain might use commercial and non-secure products for general-purpose computing, with boundary controllers such as the NRL pump moving data among domains with differing security policies. The Java security model, in particular the sandbox, applies a similar kind of isolationism to imported Java components so that their functionality can be limited to maintain a secure environment. For survivability, this kind of approach is incomplete because it focuses almost exclusively on prevention (i.e., hardening a system to prevent a break-in or other malicious attack). It does little to help an organization detect an attack or recover after a successful attack has occurred. This security-focused view is also limited by evaluation techniques that concentrate on the relative hardness of a system, as opposed to a system's robustness under attack, its ability to recover compromised capabilities, or its ability to function correctly in the presence of compromised components.
Affordability is always a significant factor in the design, implementation, and maintenance of systems, and encourages sharing of components. That sharing of the technical infrastructure extends to the national infrastructure (e.g., the power grid, the public switched communications networks, and the financial networks) and our national defense. In fact, the trend toward increased sharing of common infrastructure components in the interest of economy virtually ensures that the civilian networked information infrastructure and its vulnerabilities will always be an inseparable part of our national defense.
Practical, affordable systems are almost never 100% custom-built; rather, they are constructed from commonly available off-the-shelf components whose internal structures are well known. The trend toward developing systems through integration and reuse rather than customized design and coding efforts is a cornerstone of modern software engineering. Unfortunately, the intellectual complexity associated with software design, coding, and testing virtually ensures that exploitable bugs can and will be discovered in commercial and public-domain products whose internal structures are widely available for analysis. When these products are incorporated as components of larger systems, those systems become vulnerable to attack strategies based on the exploitable bugs. Popular commercial and public-domain components offer attackers a ubiquitous set of targets with well-known and typically unvarying internal structures. This lack of variability among components translates into a lack of variability among systems, creating vulnerabilities common to all of them and potentially allowing a single attack strategy to have a wide-ranging and devastating impact.
Characteristics of Survivable Systems
As noted, essential services are defined as the functions of the system that must be maintained when the environment is hostile, or when failures or accidents occur that threaten the system.
Central to the delivery of essential services is the capability of a system to maintain essential properties (i.e., specified levels of integrity, confidentiality, performance, and other quality attributes). Thus, it is important to define minimum levels of quality attributes that must be associated with essential services. For example, a missile launched by a defensive system cannot be effective if the system's performance is degraded to the point that the target is out of range before the launch can occur.
The capability to deliver essential services (and maintain the associated essential properties) must be sustained even if a significant portion of the system is incapacitated. Furthermore, this capability should not be dependent upon the survival of a specific information resource, computation, or communication link. In a military setting, essential services might be those required to maintain an overwhelming technical superiority, and essential properties may include integrity, confidentiality, and a level of performance sufficient to deliver results in less than one decision cycle of the enemy. In the public sector, a survivable financial system is one that maintains the integrity, confidentiality, and availability of essential information and financial services, even if particular nodes or communication links are incapacitated because of an intrusion or accident, and that recovers compromised information and services in a timely manner. The financial system's survivability might be judged by using a composite measure of the disruption of stock trades or bank transactions (i.e., a measure of the disruption of essential services).
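The composite measure mentioned above can be sketched as a weighted disruption score over essential services. The service names, weights, and transaction counts in this sketch are illustrative assumptions, not taken from any actual financial system.

```python
# Sketch of a composite disruption measure: the weighted fraction of
# failed transactions across essential services. A score of 0 means no
# disruption of essential services; 1 means total disruption.

def disruption_score(outcomes, weights):
    """outcomes: {service: (failed, attempted)}
    weights:  {service: relative importance, summing to 1.0}
    Returns a value in [0, 1]."""
    score = 0.0
    for service, (failed, attempted) in outcomes.items():
        if attempted:
            score += weights[service] * (failed / attempted)
    return score

# Hypothetical figures for one reporting period.
outcomes = {"stock_trades": (50, 1000), "bank_transfers": (5, 2000)}
weights = {"stock_trades": 0.6, "bank_transfers": 0.4}
print(round(disruption_score(outcomes, weights), 4))  # → 0.031
```

Such a measure makes "survivability" operational: judgments about mission fulfillment can be made against an agreed threshold rather than against the fate of any individual node or link.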
Key to the concept of survivability, then, is the identification of essential services, and the essential properties that support them, within an operational system. There are typically many services that can be temporarily suspended while a system deals with an attack or other extraordinary environmental condition. Such a suspension can help isolate areas that have been affected by an intrusion and can free up system resources to deal with the intrusion's effects. The overall function of a system should adapt to preserve essential services.
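The adaptive suspension of non-essential services described above can be sketched as follows; the service names and the all-or-nothing suspension policy are hypothetical simplifications, not part of any prescribed design.

```python
# Sketch of adapting overall system function to preserve essential
# services: under attack or other extraordinary stress, non-essential
# services are suspended to free resources and help isolate affected
# areas. Service names are hypothetical.

class ServiceManager:
    def __init__(self, services):
        # services maps each service name to True if it is essential.
        self.services = services
        self.active = set(services)

    def enter_stress_mode(self):
        """Suspend every non-essential service; return what was suspended."""
        suspended = {name for name, essential in self.services.items()
                     if not essential}
        self.active -= suspended
        return suspended

mgr = ServiceManager({
    "funds_transfer": True,     # essential: must be preserved
    "account_query": True,      # essential: must be preserved
    "report_export": False,     # deferrable under stress
    "batch_analytics": False,   # deferrable under stress
})
print(sorted(mgr.enter_stress_mode()))  # → ['batch_analytics', 'report_export']
```

A real system would restore suspended services once the stress subsides; the point of the sketch is only that the set of active services, not the system's availability as a whole, is what adapts.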
The capability of a survivable system to fulfill its mission in a timely manner is thus linked to its ability to deliver essential services in the presence of an attack, accident, or failure. Ultimately, mission fulfillment must survive, not any portion or component of the system. In some cases if an essential service is lost, it can be replaced by another service that supports mission fulfillment in a different but equivalent way. However, we still believe that the identification and protection of essential services is an important part of a practical approach to building and analyzing survivable systems. As a result, we define essential services to include alternate sets of essential services (perhaps mutually exclusive) that need not be simultaneously available. For example, a set of essential services to support power delivery may include both the distribution of electricity and the operation of a natural gas pipeline.
To maintain their capabilities to deliver essential services, survivable systems must exhibit the four key properties illustrated in Table 1.
Developing Survivability Solutions
Survivability solutions are best understood as risk-management strategies that first depend on an intimate knowledge of the mission being protected. The mission focus expands survivability solutions beyond purely independent ("one size fits all") technical solutions, even if those technical solutions are broad-based and extend beyond traditional computer security to include fault tolerance, reliability, usability, and so forth. Risk-mitigation strategies first and foremost must be created in the context of a mission's requirements (prioritized sets of normal and stress requirements), and must be based on "what-if" analyses of survival scenarios. Only then can we look toward generic software engineering solutions based on computer security, other software quality attribute analyses, or other strictly technical approaches to support the risk-mitigation strategies.
Hence, survivability depends not only upon the selective use of traditional computer-security solutions, but also upon the development of effective risk-mitigation strategies that are based on scenario-driven "what-if" analyses and contingency planning. "Survival scenarios" positing a wide range of cyber-attacks, accidents, and failures aid in the analyses and contingency planning. However, to reduce the combinatorics inherent in creating representative sets of survival scenarios, these scenarios focus on adverse effects rather than causes. Effects are also of more immediate situational importance than causes, because an organization will likely have to deal with (and survive!) an adverse effect long before a determination is made as to whether the cause was an attack, an accident, or a failure. Awaiting the outcome of a detailed post-mortem to determine the cause, before acting to mitigate the effect, is out of the question when an organization is dealing with the survival of most modern, mission-critical applications.
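The effect-focused approach to survival scenarios can be sketched as a response table keyed to observed adverse effects rather than to their causes. The effect names and mitigation steps below are illustrative assumptions.

```python
# Sketch of effect-focused contingency planning: the immediate response
# is selected by the observed adverse effect, and the (possibly unknown)
# cause is deliberately ignored, since recovery must succeed whether or
# not the cause is ever determined.

RESPONSES = {
    "database_integrity_compromised": ["isolate replica", "restore from backup"],
    "service_unresponsive": ["fail over to standby", "shed non-essential load"],
    "data_disclosure_suspected": ["revoke credentials", "rotate keys"],
}

def respond(effect, cause=None):
    """Return mitigation steps for an observed effect. The cause
    (attack, failure, or accident) plays no role in the selection."""
    return RESPONSES.get(effect, ["escalate to contingency planning"])

# The response is identical whether or not the cause is known yet.
print(respond("service_unresponsive"))
print(respond("service_unresponsive", cause="hardware failure"))
```

Keying the table on effects rather than causes also keeps the scenario set small: many distinct causes collapse onto a handful of adverse effects.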
Contingency (including disaster) planning requires that risk-management decisions and economic tradeoffs be made by executive management, with guidance from technical experts in the application domain, computer security, and other software engineering and related disciplines. Survivability depends at least as much upon the risk-management skills of an organization as it does upon the technical expertise of a cadre of computer-security experts. This is certainly appropriate from an organizational perspective, because business risk management is a primary responsibility of executive management, and not the role of computer-security experts or other technical personnel. Expertise in risk management and the organization's mission reside with that organization's management. The role of the experts in security, the application domain, and other technically relevant areas is to provide executive management with the information necessary to make informed risk-management decisions. Thus, the preparatory steps necessary for survivability must be taken by an organization as a whole, rather than by security experts alone.
Much of the ongoing effort to find survivability solutions relates to the protection of critical national infrastructures. These critical information-based infrastructures include the electric power grid (and other energy infrastructures), transportation, telecommunications, health care, banking and finance, and national defense. Particularly in the United States and Europe, these national critical infrastructures are moving toward an increasing reliance on large-scale, highly distributed software systems operating over open, unbounded networks. This greatly increases the efficiency and sophistication of the services these infrastructures provide, but also greatly increases their vulnerability to cyber-attack. In response to a U.S. Presidential Commission report on critical infrastructure protection, the President issued Presidential Decision Directive 63 (PDD-63) in May 1998. PDD-63 established several new government structures, including the National Infrastructure Protection Center and the Critical Infrastructure Assurance Office. The National Infrastructure Protection Center (NIPC) operates as part of the FBI, and its mission is to serve as the U.S. government's focal point for threat assessment, warning, investigation, and response for threats or attacks against critical infrastructures. The Critical Infrastructure Assurance Office (CIAO) is part of the Department of Commerce and has the responsibility of integrating the various sector plans into a National Infrastructure Assurance Plan and coordinating analyses of the U.S. government's own dependencies on critical infrastructures.
The Defense Advanced Research Projects Agency (DARPA) at the U.S. Department of Defense (DoD) funds ongoing national research in information survivability. Research areas include intrusion detection, intrusion-tolerant systems, barriers, strategic intrusion assessment, and security architectures that integrate the previous research areas. The purpose of this research is to support the continued operation of DoD systems in the presence of attack, even if the attack partially succeeds. Two IEEE-sponsored workshops on information survivability brought together leading researchers in the field of survivability along with distinguished experts from the critical infrastructure application domains [11, 12].
The European Dependability Initiative represents a major research effort in the European Union to address many of the same issues and concerns as the critical infrastructure protection and survivability efforts in the United States, and includes plans for joint EU-US collaboration. The IEEE Computer Society's Technical Committee on Fault-Tolerant Computing and IFIP Working Group 10.4 on Dependable Computing and Fault Tolerance have formed a central Web resource for information on the technology of dependable systems, which includes the investigation of dependability despite malicious faults.
New research methods and tools are under development to support survivability solutions. A number of these efforts focus on architectural issues. One proposal, motivated by information-warfare attacks on the United States infrastructure, is to designate a portion of the infrastructure as the essential minimum and harden that portion against attacks. A recent Rand study documents initial work on that approach. Neumann documents the first phase of a multi-year effort on survivability. The overall objectives of that project include making the requirements for survivability explicit; identifying functionality whose absence currently prevents adequate satisfaction of those requirements; exploring techniques for designing and developing highly survivable systems and networks, despite the presence of untrustworthy subsystems and untrustworthy participants; and recommending specific architectural structures that can lead to survivable systems and networks capable of either preventing or tolerating a wide range of threats. Sullivan takes a control-systems perspective on survivability. A control system is a mechanism that manages the behavior of a monitored system within its environment in order to maintain the acceptable operation of that system. An adaptive control system is one that can continue to provide control in the face of disruption to elements of the system and the control system itself. Thuraisingham examines survivability requirements for real-time command and control systems. The objectives of that initiative include determining software infrastructure requirements and identifying a migration path for legacy systems.
The CERT Coordination Center is developing a Survivable Network Analysis (SNA) method to evaluate the survivability of systems in the context of attack scenarios. Also under development is a Survivable Systems Simulator that will provide for the analysis, testing, and evaluation of survivability solutions in unbounded networks.
The SNA method permits assessment of survivability strategies at the architecture level. Steps in the SNA method include system mission and architecture definition, identification of essential services, generation of attack scenarios, and survivability analysis of architectural soft spots that are both essential and compromisable. Intrusion scenarios play a key role in the method. SNA results are summarized in a Survivability Map that links recommended survivability strategies for resistance, recognition, and recovery to the system architecture and requirements. Results of applying the SNA method to a subsystem of a large-scale, distributed health care system have been summarized in a published case study. Future studies will involve the application of the SNA method to proposed and existing distributed systems for government, defense, and commercial organizations.
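A Survivability Map of the kind produced by the SNA method can be sketched as a simple data structure linking each intrusion scenario and architectural soft spot to its resistance, recognition, and recovery strategies. The components, scenarios, and strategies below are hypothetical, loosely inspired by the health-care example mentioned in the text.

```python
# Sketch of a Survivability Map: one entry per (component, intrusion
# scenario) pair, recording the recommended "three R" strategies.
from dataclasses import dataclass, field

@dataclass
class MapEntry:
    component: str
    intrusion_scenario: str
    resistance: list = field(default_factory=list)
    recognition: list = field(default_factory=list)
    recovery: list = field(default_factory=list)

survivability_map = [
    MapEntry("patient-record server",
             "compromised clinician workstation",
             resistance=["authenticate all record updates"],
             recognition=["audit-trail anomaly checks"],
             recovery=["restore records from a replicated store"]),
    MapEntry("admissions gateway",
             "forged admission messages",
             resistance=["validate message provenance"]),
]

# A soft spot remains open while any of the three strategies is missing.
open_soft_spots = [e.component for e in survivability_map
                   if not (e.resistance and e.recognition and e.recovery)]
print(open_soft_spots)  # → ['admissions gateway']
```

Representing the map this way makes the analysis auditable: components that are essential and compromisable but still lack a complete set of strategies are mechanically identifiable.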
The Survivable Systems Simulator being developed by the CERT Coordination Center is based upon a new methodology called "emergent algorithms" . Emergent algorithms produce global effects through cooperative local actions distributed throughout a system. These global effects (which "emerge" from local actions) can support system survivability by allowing a system to fulfill its mission, even though the individual nodes of the system are not survivable. Emergent algorithms can provide solutions to survivability problems that cannot be achieved by conventional means. The Survivable Systems Simulator will allow stakeholders to visualize the effects of specific cyber-attacks, accidents, and failures on a given system or infrastructure. The goal is to enable "what-if" analyses and contingency planning based on simulated walkthroughs of survivability scenarios.
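As a toy illustration of the idea (not the CERT methodology itself), the following sketch shows a purely local rule producing a useful global effect: each node repeatedly adopts the largest value among itself and its neighbors, and every node converges to the network-wide maximum with no central coordinator. The topology and values are illustrative.

```python
# Toy emergent algorithm: a global property (every node knows the
# network-wide maximum) emerges from repeated, purely local exchanges.
# No node has a global view, and no node is a single point of control.

def gossip_max(neighbors, values, rounds=10):
    """neighbors: {node: [adjacent nodes]}; values: {node: int}.
    Each round, every node adopts the max over itself and its neighbors."""
    values = dict(values)
    for _ in range(rounds):
        values = {
            n: max([values[n]] + [values[m] for m in neighbors[n]])
            for n in neighbors
        }
    return values

# A ring of five nodes; node 'c' holds the extreme value.
ring = {"a": ["b", "e"], "b": ["a", "c"], "c": ["b", "d"],
        "d": ["c", "e"], "e": ["d", "a"]}
start = {"a": 3, "b": 1, "c": 9, "d": 2, "e": 5}
print(gossip_max(ring, start))  # every node converges to 9
```

The survivability connection is that the global effect does not depend on any particular node: if a node and its value are lost, the remaining nodes still converge on the maximum among the survivors.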
For new systems, survivability imposes constraints on all phases of the software development process. At the requirement and specification level, essential services and assets should be identified. Requirements for resistance, recognition, recovery, and adaptation should also be specified. Architectures should incorporate survivability strategies such as those mentioned in Table 1. Architecture evaluation should treat survivability on par with other properties such as performance, reliability, and maintainability. Reused and commercial off-the-shelf (COTS) products should be selected with survivability in mind. Design and implementation should include techniques for isolation, replication, restoration, and migration of essential services. Correctness verification should ensure faithful implementation of survivability specifications. Testing should assess the reliability of survivability functions operating in cooperation with other system functions. Finally, procedures for system operation should have a substantial impact on survivability. They should include processes for managing survivability policies, responding to attacks, and taking recovery actions.
For existing systems, survivability provides a new perspective on evolution and upgrade. It is often the case that the survivability of existing systems can be improved with additional layers of boundary control (for example, firewalls and their more sophisticated successors) and through evolution to redundant (and diverse) hardware and software environments. In addition, administrative procedures for backup, restoration, and migration can be tested and any inadequacies addressed. And survivability features can play a prominent role in the evaluation and selection of vendors and products.
The natural escalation of offensive threats versus defensive countermeasures has demonstrated time and again that no practical systems can be built that are invulnerable to attack. Despite the industry's best efforts, there can be no assurance that systems will not be breached. Thus, the traditional view of information systems security must be expanded to encompass the specification and design of survivability behavior that helps systems survive in spite of attacks. Only then can systems be created that are robust in the presence of attack and are able to survive attacks that cannot be completely repelled.
In short, the nature of contemporary system development dictates that even hardened systems can and will be broken. Survivability solutions should be incorporated into both new and existing systems to help them avoid the potentially devastating effects of compromise and failure due to attack.
Bibliography B. Blakley, "The Emperor's Old Armor," Proceedings of the New Security Paradigms Workshop, IEEE Computer Society Press, 1996.
 H. F. Lipson and D. A. Fisher, "Survivability: A New Technical and Business Perspective on Security," Procedings of the New Security Paradigms Workshop, IEEE Computer Society Press, 1999.
 S. Bellovin and W. Cheswich, Firewalls and Internet Security, Addison-Wesley Publishing Co., Reading, MA, 1994.
 M. H. Kang, A. P. Moore, and I. S. Moskowitz, "Design and Assurance Strategy for the NRL Pump," IEEE Computer, April 1998, p. 50-64.
 G. McGraw and E. Felton, Java Security, John Wiley & Sons, New York, NY, 1997.
 Presidential Commission on Critical Infrastructure Protection, Critical Foundations: Protecting America's Infrastructures, The Report of the Presidential Commission on Critical Infrastructure Protection, October 1997, p. 173.
 Presidential Decision Directive 63 (PDD-63), Protecting America's Critical Infrastructures, http://www.info-sec.com/ciao/63factsheet.html
 National Infrastructure Protection Center (NIPC), http://www.fbi.gov/nipc/welcome.htm
 Critical Infrastructure Assurance Office (CIAO), http://www.info-sec.com/ciao/
 Defense Advanced Research Projects Agency (DARPA), Survivability Research, http://www.darpa.mil/ito/research/is.
 Proceedings of the 1997 Information Survivability Workshop, San Diego, CA, Feb. 12–13, 1997, Software Engineering Institute and IEEE Computer Society, April 1997. Available at: http://www.cert.org/research/
 Proceedings of the 1998 Information Survivability Workshop, Orlando, FL, Oct. 28–30, 1998, Software Engineering Institute and IEEE Computer Society, 1998, http://www.cert.org/research/
 European Dependability Initiative, http://www.cordis.lu/esprit/src/stdepend.htm#vision
 Dependability.org (IEEE CS and IFIP WG 10.4), http://www.dependability.org/
 R. H. Anderson, P. M. Feldman, S. Gerwehr, B. K. Houghton, R. Mesic, J. Pinder, J. Rothenberg, J. R. Chiesa, "Security of the U.S. Defense Information Infrastructure: A Proposed Approach," RAND Report MR-993-OSD/NSA/DARPA, 1999.
P. G. Neumann, "Practical Architectures for Survivable Systems and Networks: Phase-One Final Report," 1998, http://www.csl.sri.com/neumann/arl-one.html
K. Sullivan, J. C. Knight, X. Du, and S. Geist, "Information Survivability Control Systems," Proceedings of the 1999 International Conference on Software Engineering, Los Angeles, CA, 1999.
B. M. Thuraisingham and J. A. Maurer, "Information Survivability for Evolvable and Adaptable Real-Time Command and Control Systems," IEEE Transactions on Knowledge and Data Engineering, January/February 1999, pp. 228-238.
R. J. Ellison, R. C. Linger, T. Longstaff, and N. R. Mead, "Survivable Network Systems Analysis: A Case Study," IEEE Software, July/August 1999, pp. 70-77.
 D. A. Fisher and H.F. Lipson, "Emergent Algorithms—A New Method for Enhancing Survivability in Unbounded Systems," Proceedings of the 32nd Annual Hawaii International Conference on System Sciences, Maui, HI, Jan. 5-8, 1999 (HICSS-32), IEEE Computer Society, 1999, http://www.cert.org/research/
Glossary of Survivability Terms
Accidents: A broad range of randomly occurring and potentially damaging events such as natural disasters. Accidents are often externally generated events.
Attack: A series of steps taken by an intelligent adversary to achieve an unauthorized result. Attacks include intrusions, probes, and denials of service.
Adaptation Services: Survivable system functions provided to continually improve a system's capability to deliver essential services, typically by improving resistance, recognition, and recovery capabilities.
Essential Services: Services to users of a system that must be provided even in the presence of attacks, failures, or accidents.
Failure: A potentially damaging event caused by deficiencies in the system or in an external element on which the system depends. Failures may be due to software design errors, hardware degradation, human errors, or corrupted data.
Recognition Services: Survivable system functions that detect attempted and successful attacks.
Recovery Services: System functions to support the restoration of services after an attack has occurred. Recovery also contributes to a system's ability to maintain essential services during an attack.
Resistance Services: Survivable system functions that make attacks difficult and costly.
Survivability: The capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents.
Unbounded Network: Computer system or systems characterized by distributed administrative control without central authority, limited visibility beyond the boundaries of local administration, and lack of complete information about the network as a whole.
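The relationship among the four categories of survivability services defined above (resistance, recognition, recovery, and adaptation) can be illustrated with a minimal sketch. The `Phase` enumeration and `handle_event` helper below are purely hypothetical names chosen for illustration, not part of any CERT tool or method; the sketch assumes a simple model in which resistance and recognition are always active, recovery is invoked when essential services are degraded, and adaptation follows every handled incident.

```python
from enum import Enum

class Phase(Enum):
    """The glossary's four survivable-system service categories."""
    RESISTANCE = "make attacks difficult and costly"
    RECOGNITION = "detect attempted and successful attacks"
    RECOVERY = "restore essential services after an attack"
    ADAPTATION = "improve resistance, recognition, and recovery"

def handle_event(attack_detected: bool, services_degraded: bool) -> list:
    """Return the survivability services a system would invoke for one event.

    Resistance and recognition run continuously; recovery triggers only
    when essential services are degraded by a detected attack; adaptation
    follows every detected attack so future defenses improve.
    """
    phases = [Phase.RESISTANCE, Phase.RECOGNITION]
    if attack_detected and services_degraded:
        phases.append(Phase.RECOVERY)
    if attack_detected:
        phases.append(Phase.ADAPTATION)
    return phases
```

Under this model, an attack that degrades essential services exercises all four categories, while normal operation exercises only resistance and recognition.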
For More Information
For information on how the CERT/CC can provide a Survivable Systems Analysis (SSA) for your organization, contact
CERT Coordination Center
Software Engineering Institute
Carnegie Mellon University
Office: 412 / 268-7783
FAX: 412 / 268-6989
CERT Hotline: 412 / 268-7090
World Wide Web: http://www.cert.org
Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to use any copyrighted component of this work in other works must be obtained from IEEE. Contact: Manager, Copyrights and Permissions, IEEE Service Center, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331, USA. Telephone: 908 / 562-3966.
Copyright 2001 by Carnegie Mellon University
Last updated October 11, 2005