November 3-6, 2014
Certification of software may play multiple roles, both intended and unintended, and both beneficial and damaging. Some of these roles are unrelated to what the name "certification" suggests, i.e., creating certainties; for those that are related to it, we should usually talk about creating confidence rather than certainty. With an eye on this socio-technical landscape, this talk attempts to map the logical links between the evidence collected through assessment practices and the confidence in reliability, safety or security that users wish to derive from that evidence. Central issues are the links between deterministic and probabilistic claims, their scopes of validity, and the evidence behind them. Probing these links raises useful questions about unstated assumptions, possible means of giving confidence a more solid basis, and how these could affect the practice of certification.
In recent years, assurance cases have been gaining popularity across various domains, such as the railway, aeronautics, automotive and medical domains, as an important tool in the establishment of system safety. An assurance case is essentially an argument for the existence of a certain system property. The confidence that we may place in the validity of any such argument plays an important role in the decision-making process, both for the developer and the regulator. However, even though there is increasing interest in this research topic, there seems to be no consensus on a precise definition of assurance case confidence, and therefore the approaches to its modeling and measurement vary. The concept of an assurance case argument is based on the ideas presented by Toulmin in his groundbreaking work. He outlined a scheme for the layout of arguments, but did not provide guidelines for formal argument evaluation. Here we look into some works extending his ideas to incorporate a theory of argument evaluation, and offer our insights on the implications for the definition of confidence, as well as an approach suitable for its modeling. In essence, when we reason about the confidence one might place in an argument, we are trying to establish how well the argument corresponds to the notion of a "good argument", while also taking into account all sources of uncertainty that are inherent when we are faced with imperfect information. Even so, what we ultimately measure is not how true the conclusions of the argument are, but how justifiable they are given our current knowledge.
During the last decade, common automotive vehicles have received an increasing number of electronic components running embedded software, enabling them to become more efficient, comfortable, safe and usable; unfortunately, this is also causing an increase in vehicle recalls to fix software defects. Aware of this problem, the automotive industry has been adopting software development and safety standards from other industries, as well as developing its own. However, none has been used so far as a basis for certification processes, meaning that there is no obligation to have independent parties attest that the best development, safety or architectural practices are being followed. With the publication of ISO 26262, manufacturers, suppliers and automotive organizations are finally able to agree on a common schema for the certification of automotive software. This paper describes the most relevant standards that have been used in the automotive industry, namely for software development, stressing the development and safety lifecycles. It then addresses software certification and some of the main challenges it poses to this industry.
Modern vehicles are definitely "software-intensive" systems (some call them "computers with wheels"). Software now implements and/or controls a growing number of traditional functions as well as new, innovative functions made possible only by software. Software is also taking charge of functions traditionally controlled by the driver. It is not surprising that a growing number of these functions are "safety related" at various levels of risk, depending on the possible hazards they are associated with. To face this situation, the automotive community is adopting two standards addressing the way software-intensive systems are developed: Automotive SPICE and the functional safety standard ISO 26262. In this paper, starting from the experience of the author in leading Automotive SPICE assessments in safety-critical contexts, the mutual influences between Automotive SPICE and ISO 26262, as well as the opportunities and challenges related to the need to comply with both of these standards, are discussed, and possible effective ways to integrate them are proposed.
To align with the growing demands of certification, companies in the embedded systems domain have gradually defined generic organizational processes to support maturity models, e.g., CMMI or SPICE. However, such an organizational process is not appropriate for every project-specific context and its characteristics. It is usual for companies to customize the organizational process with respect to the particularities of each project. This customization, generally done manually, is time consuming and error prone given the large amount of data to manipulate. This paper presents an automated solution for generating process models tailored to specific projects for certification purposes. We develop a modeling framework, which enables tailoring of an organizational process based on the current context and characteristics of each project, as well as the ISO 26262 and SPICE constraints. The framework uses an existing process measurement artifact to automatically assess project achievement during product development. We evaluate our approach on an automotive body controller project. The results match the expectations of the standards and the current company practices, demonstrating the approach's feasibility and effectiveness.
Safety-critical systems (such as those in the automotive, avionics and railway domains), where a failure can result in accidents with fatal consequences, need to certify their products as per domain-specific safety standards. Safety certification is not only time consuming but also consumes a significant part of the project budget. Adopting a reuse-oriented development and certification paradigm can be highly beneficial in such systems. Though there have been several research efforts on cost models in the context of software reuse as well as software product lines, none of them has addressed certification-related costs. In this paper, we present a cost model for software product lines which incorporates certification costs as well. We first propose a mathematical model to represent a software product line, and then present an approach to compute, using optimisation theory, the set of artifacts that compose a new product assuring an expected level of confidence (that is, a certain Safety Integrity Level) at an optimised cost level. The proposed approach can help developers and software project managers make efficient design decisions in relation to the choice of components for a new product variant developed as part of a product line.
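The kind of selection problem the abstract describes can be illustrated with a minimal sketch. This is not the paper's actual model: the features, artifact names, costs and SILs below are invented, and a brute-force search stands in for a proper optimisation solver; the point is only to show how "cheapest set of artifacts subject to a target Safety Integrity Level" can be formulated.

```python
from itertools import product

# Hypothetical data: each feature of a new product variant can be realized by
# one of several candidate artifacts, each a (name, cost, SIL) tuple.
artifacts = {
    "braking":  [("reuse_A", 10, 3), ("new_A", 40, 4)],
    "steering": [("reuse_B", 15, 2), ("new_B", 35, 4)],
}

def cheapest_configuration(artifacts, target_sil):
    """Pick one artifact per feature so that every chosen artifact meets
    target_sil, minimizing total cost. Brute force over all combinations;
    a realistic model would use an integer-programming solver instead."""
    best = None
    features = list(artifacts)
    for combo in product(*(artifacts[f] for f in features)):
        if all(sil >= target_sil for _, _, sil in combo):
            cost = sum(c for _, c, _ in combo)
            if best is None or cost < best[0]:
                best = (cost, {f: a[0] for f, a in zip(features, combo)})
    return best

print(cheapest_configuration(artifacts, target_sil=3))
# → (45, {'braking': 'reuse_A', 'steering': 'new_B'})
```

Here the cheap reusable steering artifact is rejected because it only reaches SIL 2, so the optimiser trades it for the more expensive newly developed one while keeping the reusable braking artifact.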
Engineering activities in the operation and maintenance phase of safety-critical systems are becoming increasingly important. The ever-rising software complexity, in terms of the number of implemented functions, has led to a proportional increase in change demands. Most of these demands are initiated to repair defects in the system, for example due to design faults not identified during development. Maintaining changes in the operation phase can be very cost-intensive, because safety standards recommend re-verifying and re-validating the system in most cases, in order to ensure that the system's integrity is not compromised by the incorporated changes. In this paper, we describe an approach to performing changes on software in the operation and maintenance phase of the system lifecycle. To prevent the impact of changes on system integrity, certain design limitations are set, so that only controlled types of changes are permitted. Furthermore, since system integrity can be compromised even under strong design limitations, support for system modelling and analysis is provided. The modelling captures certain functional and non-functional aspects of the system, which are then analyzed to decide whether changes can be performed or not. The main outcome is that specific types of changes can be maintained without impacting system integrity, and therefore without requiring extensive re-verification and re-validation. We report on possible improvements in the cost of changes, by considering several industrial use cases and their typical change scenarios in the maintenance phase.
Safety-critical systems are those systems whose failure may lead to catastrophic consequences for users and the environment. Several methods, hazard analyses, and standards in different disciplines have been defined in order to assure that systems are designed in compliance with safety requirements. The increasing presence of automated control operations, the massive use of networks to transfer data and information, and human operations introduce new security concerns in safety-critical systems. Security issues (threats) not only have a direct impact on system availability, integrity and confidentiality, but can also influence the safety aspects of safety-critical systems. Today, taking into account malicious actions through intrusion into communications and computer control systems has become a critical and non-negligible step in the design and assessment of safety-critical systems. The paper describes a general methodology to support the assessment of safety-critical systems with respect to security aspects. The methodology is based on a library of security threats. These threats, identified during the work, have been mapped to the NIST security controls. A preliminary representation of the library in the aerospace domain is then shown through some simple examples, together with some considerations on the relation between security issues and safety impact, as a valuable addition to the certification process for safety-critical systems.
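The core of such a library is a mapping from identified threats to the controls that mitigate them. The fragment below is a hypothetical sketch, not the paper's actual library: the threat names are invented, and while the control identifiers are real NIST SP 800-53 controls, the particular threat-to-control associations are illustrative assumptions.

```python
# Illustrative threat-to-control library (mapping is assumed, not the paper's):
threat_to_controls = {
    "message_injection":   ["SI-10", "SC-8"],  # input validation; transmission integrity
    "denial_of_service":   ["SC-5"],           # denial-of-service protection
    "unauthorized_access": ["AC-3", "IA-2"],   # access enforcement; authentication
}

def controls_for(threats):
    """Collect the set of NIST controls suggested for the identified threats."""
    return sorted({c for t in threats for c in threat_to_controls.get(t, [])})

print(controls_for(["message_injection", "denial_of_service"]))
# → ['SC-5', 'SC-8', 'SI-10']
```

During an assessment, the threats identified for a system would be fed through such a lookup to produce the candidate control set to be evaluated against the safety case.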
SafetyADD is a tool for working with safety contracts for software components. Safety contracts tie safety-related properties, in the form of guarantees and assumptions, to a component. A guarantee is a property the component promises will hold, on the premise that the environment provides its associated assumptions. When multiple software components are integrated in a system, SafetyADD is used to verify that the guarantees and assumptions match wherever there are safety-related dependencies between the components. The initial goal of SafetyADD is to investigate how safety contracts can be managed and used efficiently within the software design process. It is implemented as an Eclipse plugin. The tool has two main functions. It gives designers of software components a way to specify safety contracts, which are stored in an XML format and shall be distributed together with the component. It also gives developers who integrate multiple software components into their systems a tool to verify that the safety contracts are fulfilled. A graphical editor is used to connect guarantees and assumptions of dependent components, and an algorithm traverses all such connections to make sure they match.
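The matching idea can be sketched in a few lines. This is an illustrative model only, assuming a simplified world where guarantees and assumptions are plain strings that match by equality; the data model and names below are not SafetyADD's actual API, and the real tool works over graphical connections and XML contracts rather than a flat set comparison.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A component with a simplified safety contract (hypothetical model)."""
    name: str
    guarantees: set = field(default_factory=set)
    assumptions: set = field(default_factory=set)

def unmatched_assumptions(components):
    """Return (component, assumption) pairs not backed by any guarantee
    offered elsewhere in the integrated system."""
    all_guarantees = set().union(*(c.guarantees for c in components))
    return {(c.name, a) for c in components
            for a in c.assumptions if a not in all_guarantees}

sensor = Component("sensor", guarantees={"data_rate<=100Hz"})
ctrl = Component("controller",
                 assumptions={"data_rate<=100Hz", "watchdog_present"})
print(unmatched_assumptions([sensor, ctrl]))
# → {('controller', 'watchdog_present')}
```

An empty result would mean every assumption is discharged by some guarantee; here the controller's watchdog assumption is left unmet, which is exactly the kind of mismatch the tool's traversal is meant to surface.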
The lack of tools that can fit into existing development practices and processes hampers the adoption of Software Fault Injection (SFI) in real-world projects. This paper presents ongoing work towards an SFI tool integrated into the Eclipse IDE and designed for usability.