Publications

2016

  1. Stefano Russo.
    Finding a Way in the Model Driven Jungle: Invited Keynote Talk.
    In Proceedings of the 9th India Software Engineering Conference. 2016, 13–15.
    Abstract Model-driven concepts have been introduced in software engineering methodologies since many years. They were not really new, as engineers have always been using models, but they have been tailored to engineering software. Research in the area has progressed in many directions: languages, processes, standards, technologies, tools. While they have proved to be effective in some application sectors, such as for embedded systems, it is hard to find documented success stories for real-world systems in many other fields. Indeed, there is some skepticism on their applicability for large-scale and for critical industrial systems, which have high complexity and/or high costs of verification and validation: many companies still consider them risky. Full comprehension of risks, costs and benefits is not easy to achieve. A crucial factor is that their adoption requires changes in consolidated processes, and advanced engineering skills – focus is on modeling, rather than on implementation. These implications are often underestimated. URL, DOI BibTeX

    @inproceedings{cecris-2016-01,
    	author = "Russo, Stefano",
    	title = "Finding a Way in the Model Driven Jungle: Invited Keynote Talk",
    	booktitle = "Proceedings of the 9th India Software Engineering Conference",
    	series = "ISEC '16",
    	year = 2016,
    	isbn = "978-1-4503-4018-2",
    	location = "Goa, India",
    	pages = "13--15",
    	numpages = 3,
    	url = "http://doi.acm.org/10.1145/2856636.2876472",
    	doi = "10.1145/2856636.2876472",
    	acmid = 2876472,
    	publisher = "ACM",
    	address = "New York, NY, USA",
    	abstract = "Model-driven concepts have been introduced in software engineering methodologies since many years. They were not really new, as engineers have always been using models, but they have been tailored to engineering software. Research in the area has progressed in many directions: languages, processes, standards, technologies, tools. While they have proved to be effective in some application sectors, such as for embedded systems, it is hard to find documented success stories for real-world systems in many other fields. Indeed, there is some skepticism on their applicability for large-scale and for critical industrial systems, which have high complexity and/or high costs of verification and validation: many companies still consider them risky. Full comprehension of risks, costs and benefits is not easy to achieve. A crucial factor is that their adoption requires changes in consolidated processes, and advanced engineering skills -- focus is on modeling, rather than on implementation. These implications are often underestimated."
    }
    
2015

  1. Antonio Bovenzi, Francesco Brancati, Stefano Russo and Andrea Bondavalli.
    An OS-level Framework for Anomaly Detection in Complex Software Systems.
    IEEE Transactions on Dependable and Secure Computing 12(3):366–372, May 2015.
    Abstract Revealing anomalies at the operating system (OS) level to support online diagnosis activities of complex software systems is a promising approach when traditional detection mechanisms (e.g., based on event logs, probes and heartbeats) are inadequate or cannot be applied. In this paper we propose a configurable detection framework to reveal anomalies in the OS behavior, related to system misbehaviors. The detector is based on online statistical analyses techniques, and it is designed for systems that operate under variable and non-stationary conditions. The framework is evaluated to detect the activation of software faults in a complex distributed system for Air Traffic Management (ATM). Results of experiments with two different OSs, namely Linux Red Hat EL5 and Windows Server 2008, show that the detector is effective for mission-critical systems. The framework can be configured to select the monitored indicators so as to tune the level of intrusivity. A sensitivity analysis of the detector parameters is carried out to show their impact on the performance and to give to practitioners guidelines for its field tuning. DOI BibTeX

    @article{6847216,
    	author = "Bovenzi, Antonio and Brancati, Francesco and Russo, Stefano and Bondavalli, Andrea",
    	journal = "IEEE Transactions on Dependable and Secure Computing",
    	title = "An OS-level Framework for Anomaly Detection in Complex Software Systems",
    	year = 2015,
    	volume = 12,
    	number = 3,
    	pages = "366-372",
    	keywords = "operating systems (computers);software fault tolerance;statistical analysis;Linux Red Hat EL5;OS-level framework;Windows Server 2008;air traffic management;anomaly detection;complex distributed system;complex software systems;configurable detection framework;mission-critical systems;online statistical analysis techniques;operating system level;Detectors;Linux;Monitoring;Operating systems;Probes;Software systems;Anomaly-detection;mission-critical systems;operating system;system monitoring",
    	doi = "10.1109/TDSC.2014.2334305",
    	issn = "1545-5971",
    	month = "May",
    	abstract = "Revealing anomalies at the operating system (OS) level to support online diagnosis activities of complex software systems is a promising approach when traditional detection mechanisms (e.g., based on event logs, probes and heartbeats) are inadequate or cannot be applied. In this paper we propose a configurable detection framework to reveal anomalies in the OS behavior, related to system misbehaviors. The detector is based on online statistical analyses techniques, and it is designed for systems that operate under variable and non-stationary conditions. The framework is evaluated to detect the activation of software faults in a complex distributed system for Air Traffic Management (ATM). Results of experiments with two different OSs, namely Linux Red Hat EL5 and Windows Server 2008, show that the detector is effective for mission-critical systems. The framework can be configured to select the monitored indicators so as to tune the level of intrusivity. A sensitivity analysis of the detector parameters is carried out to show their impact on the performance and to give to practitioners guidelines for its field tuning."
    }
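As a rough illustration of the idea behind this paper (flagging OS indicator samples that deviate from a recent baseline, with a sliding window so the baseline can track non-stationary load), here is a minimal sketch; the class name, window size and 3-sigma threshold are illustrative choices, not the paper's actual framework:

```python
from collections import deque
from math import sqrt

class SlidingZScoreDetector:
    """Flag a sample as anomalous when it deviates from the mean of a
    recent window by more than k standard deviations; the sliding
    window lets the baseline track non-stationary conditions."""

    def __init__(self, window=50, k=3.0, warmup=10):
        self.window = deque(maxlen=window)
        self.k = k
        self.warmup = warmup

    def observe(self, x):
        anomalous = False
        if len(self.window) >= self.warmup:
            n = len(self.window)
            mean = sum(self.window) / n
            std = sqrt(sum((v - mean) ** 2 for v in self.window) / n)
            if std > 0 and abs(x - mean) > self.k * std:
                anomalous = True
        self.window.append(x)
        return anomalous

# A stable OS indicator (e.g. a syscall latency) with one injected spike
detector = SlidingZScoreDetector()
stream = [10.0] * 40 + [10.4, 10.1, 60.0, 10.2]
flags = [detector.observe(x) for x in stream]
print(flags.index(True))  # -> 42, the position of the spike
```

The paper's framework additionally selects which OS indicators to monitor (to tune intrusivity) and calibrates the detector parameters, which this sketch leaves fixed.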
    
  2. Fabio Scippacercola, Roberto Pietrantuono, Stefano Russo, András Zentai and Nuno Pedro Silva.
    ISSRE 2015 Supplementary Proceedings.
    In WOSOCER 2015, 5th IEEE International Workshop on Software Certification. 2015, 174–181.
    Abstract Failure Mode and Effects Analysis (FMEA) is a well-known technique for evaluating the effects of potential failure modes of components of a system. It is a crucial reliability and safety engineering activity for critical systems requiring systematic inductive reasoning from postulated component failures. We present an approach based on SysML and Prolog to support the tasks of an FMEA analyst. SysML block diagrams of the system under analysis are annotated with valid and error states of components and of their input flows, as well as with the logical conditions that may determine erroneous outputs. From the annotated model, a Prolog knowledge base is automatically built, transparently to the analyst. This can then be queried, e.g., to obtain the flows’ and blocks’ states that lead to system failures, or to trace the propagation of faults. The approach is suited for integration in modern model-driven system design processes. We describe a proof-of-concept implementation based on the Papyrus modeling tool under Eclipse, and show a demo example. BibTeX

    @inproceedings{cecris-2015-04,
    	author = "Scippacercola, Fabio and Pietrantuono, Roberto and Russo, Stefano and Zentai, Andr{\'a}s and Silva, Nuno Pedro",
    	booktitle = "WOSOCER 2015, 5th IEEE International Workshop on Software Certification",
    	pages = "174--181",
    	publisher = "IEEE",
    	title = "{ISSRE 2015 Supplementary Proceedings}",
    	year = 2015,
    	abstract = "Failure Mode and Effects Analysis (FMEA) is a well-known technique for evaluating the effects of potential failure modes of components of a system. It is a crucial reliability and safety engineering activity for critical systems requiring systematic inductive reasoning from postulated component failures. We present an approach based on SysML and Prolog to support the tasks of an FMEA analyst. SysML block diagrams of the system under analysis are annotated with valid and error states of components and of their input flows, as well as with the logical conditions that may determine erroneous outputs. From the annotated model, a Prolog knowledge base is automatically built, transparently to the analyst. This can then be queried, e.g., to obtain the flows’ and blocks’ states that lead to system failures, or to trace the propagation of faults. The approach is suited for integration in modern model-driven system design processes. We describe a proof-of-concept implementation based on the Papyrus modeling tool under Eclipse, and show a demo example."
    }
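The generated Prolog knowledge base is queried for the flow and block states that lead to system failure; the same kind of query can be sketched in a few lines of Python (the blocks and their error-propagation rules below are a made-up toy system, not taken from the paper):

```python
import itertools

# Toy component blocks: each maps the states of its input flows to the
# state of its output flow ("ok" or "err"), like the annotated SysML
# blocks from which the paper builds its Prolog knowledge base.
def voter(a, b):
    # output is erroneous only if both inputs are (single faults masked)
    return "err" if a == "err" and b == "err" else "ok"

def actuator(a):
    # simply propagates any input error
    return "err" if a == "err" else "ok"

def system(sensor1, sensor2):
    return actuator(voter(sensor1, sensor2))

# "Query" the model: which sensor states lead to a system failure?
# This mimics asking the generated knowledge base for failing states.
failing = [combo for combo in itertools.product(["ok", "err"], repeat=2)
           if system(*combo) == "err"]
print(failing)  # -> [('err', 'err')]: only a double sensor fault propagates
```

In the paper the enumeration and propagation tracing are done by Prolog resolution over a knowledge base generated from the annotated SysML model, transparently to the analyst.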
    
  3. Fabio Scippacercola, Roberto Pietrantuono, Stefano Russo and András Zentai.
    Model-in-the-Loop Testing of a Railway Interlocking System.
    In Philippe Desfray, Joaquim Filipe, Slimane Hammoudi and Luís Ferreira Pires (eds.). Model-Driven Engineering and Software Development 580. 2015, 375-389.
    Abstract Model-driven techniques offer new solutions to support development and verification and validation (V&V) activities of software-intensive systems. As they can reduce costs, and ease the certification process as well, they are attractive also in safety-critical domains. We present an approach for Model-in-the-loop testing within an OMG-based model-driven process, aimed at supporting system V&V activities. The approach is based on the definition of a model of the system environment, named Computation Independent Test (CIT) model. The CIT enables various forms of system test, allowing early detection of design faults. We show the benefits of the approach with reference to a pilot project that is part of a railway interlocking system. The system, required to be CENELEC SIL-4 compliant, has been provided by the Hungarian company Prolan Co. in the context of an industrial-academic partnership. URL, DOI BibTeX

    @inproceedings{cecris-2015-03,
    	year = 2015,
    	isbn = "978-3-319-27868-1",
    	booktitle = "Model-Driven Engineering and Software Development",
    	volume = 580,
    	series = "Communications in Computer and Information Science",
    	editor = "Desfray, Philippe and Filipe, Joaquim and Hammoudi, Slimane and Pires, Luís Ferreira",
    	doi = "10.1007/978-3-319-27869-8_22",
    	title = "Model-in-the-Loop Testing of a Railway Interlocking System",
    	url = "http://dx.doi.org/10.1007/978-3-319-27869-8_22",
    	publisher = "Springer International Publishing",
    	keywords = "Model-Driven Engineering; Safety-critical systems; Model-Driven Testing",
    	author = "Scippacercola, Fabio and Pietrantuono, Roberto and Russo, Stefano and Zentai, András",
    	pages = "375-389",
    	language = "English",
    	abstract = "Model-driven techniques offer new solutions to support development and verification and validation (V&V) activities of software-intensive systems. As they can reduce costs, and ease the certification process as well, they are attractive also in safety-critical domains. We present an approach for Model-in-the-loop testing within an OMG-based model-driven process, aimed at supporting system V&V activities. The approach is based on the definition of a model of the system environment, named Computation Independent Test (CIT) model. The CIT enables various forms of system test, allowing early detection of design faults. We show the benefits of the approach with reference to a pilot project that is part of a railway interlocking system. The system, required to be CENELEC SIL-4 compliant, has been provided by the Hungarian company Prolan Co. in the context of an industrial-academic partnership."
    }
    
  4. Fabio Scippacercola, Roberto Pietrantuono, Stefano Russo and András Zentai.
    Model-Driven Engineering of a Railway Interlocking System.
    In Proc. of MODELSWARD 2015, 3rd International Conference on Model-Driven Engineering and Software Development (MODELSWARD). February 2015, 509-519.
    Abstract Model-Driven Engineering (MDE) promises to enhance system development by reducing development time, and increasing productivity and quality. MDE is gaining popularity in several industry sectors, and is attractive also for critical systems where they can reduce efforts and costs for verification and validation (V&V), and can ease certification. Incorporating model-driven techniques into a legacy well-proven development cycle is not simply a matter of placing models and transformations in the design and implementation phases. We present the experience in the model-driven design and V&V of a safety-critical system in the railway domain, namely the Prolan Block, a railway interlocking system manufactured by the Hungarian company Prolan Co., required to be CENELEC SIL-4 compliant. The experience has been carried out in an industrial-academic partnership within the EU project CECRIS. We discuss the challenges and the lessons learnt in this pilot project of introducing MD design and testing techniques into the company's traditional V-model process. BibTeX

    @inproceedings{cecris-2015-02,
    	author = "Scippacercola, F. and Pietrantuono, R. and Russo, S. and Zentai, A.",
    	booktitle = "Proc. of MODELSWARD 2015, 3rd International Conference on Model-Driven Engineering and Software Development (MODELSWARD)",
    	title = "Model-Driven Engineering of a Railway Interlocking System",
    	publisher = "SCITEPRESS",
    	isbn = "978-989-758-083-3",
    	year = 2015,
    	month = "Feb",
    	pages = "509-519",
    	abstract = "Model-Driven Engineering (MDE) promises to enhance system development by reducing development time, and increasing productivity and quality. MDE is gaining popularity in several industry sectors, and is attractive also for critical systems where they can reduce efforts and costs for verification and validation (V\&V), and can ease certification. Incorporating model-driven techniques into a legacy well-proven development cycle is not simply a matter of placing models and transformations in the design and implementation phases. We present the experience in the model-driven design and V\&V of a safety-critical system in the railway domain, namely the Prolan Block, a railway interlocking system manufactured by the Hungarian company Prolan Co., required to be CENELEC SIL-4 compliant. The experience has been carried out in an industrial-academic partnership within the EU project CECRIS. We discuss the challenges and the lessons learnt in this pilot project of introducing MD design and testing techniques into the company's traditional V-model process."
    }
    
  5. V. Bonfiglio, L. Montecchi, F. Rossi, P. Lollini, A. Pataricza and A. Bondavalli.
    Executable Models to Support Automated Software FMEA.
    In High Assurance Systems Engineering (HASE), 2015 IEEE 16th International Symposium on. January 2015, 189-196.
    Abstract Safety analysis is increasingly important for a wide class of systems. In the automotive field, the recent ISO26262 standard foresees safety analysis to be performed at system, hardware, and software levels. Failure Modes and Effects Analysis (FMEA) is an important step in any safety analysis process, and its application at hardware and system levels has been extensively addressed in the literature. Conversely, its application to software architectures is still to a large extent an open problem, especially concerning its integration into a general certification process. The approach we propose in this paper aims at performing semi-automated FMEA on component-based software architectures described in UML. The foundations of our approach are model-execution and fault-injection at model-level, which allows us to compare the nominal and faulty system behaviors and thus assess the effectiveness of safety countermeasures. Besides introducing the detailed workflow for SW FMEA, the work in this paper focuses on the process for obtaining an executable model from a component-based software architecture specified in UML. DOI BibTeX

    @inproceedings{cecris-2015-01,
    	author = "Bonfiglio, V. and Montecchi, L. and Rossi, F. and Lollini, P. and Pataricza, A. and Bondavalli, A.",
    	booktitle = "High Assurance Systems Engineering (HASE), 2015 IEEE 16th International Symposium on",
    	title = "Executable Models to Support Automated Software FMEA",
    	year = 2015,
    	month = "Jan",
    	pages = "189-196",
    	abstract = "Safety analysis is increasingly important for a wide class of systems. In the automotive field, the recent ISO26262 standard foresees safety analysis to be performed at system, hardware, and software levels. Failure Modes and Effects Analysis (FMEA) is an important step in any safety analysis process, and its application at hardware and system levels has been extensively addressed in the literature. Conversely, its application to software architectures is still to a large extent an open problem, especially concerning its integration into a general certification process. The approach we propose in this paper aims at performing semi-automated FMEA on component-based software architectures described in UML. The foundations of our approach are model-execution and fault-injection at model-level, which allows us to compare the nominal and faulty system behaviors and thus assess the effectiveness of safety countermeasures. Besides introducing the detailed workflow for SW FMEA, the work in this paper focuses on the process for obtaining an executable model from a component-based software architecture specified in UML.",
    	keywords = "Unified Modeling Language;object-oriented methods;safety-critical software;software architecture;software fault tolerance;ISO26262 standard;UML;Unified Modeling Language;automated software FMEA;component-based software architecture;failure mode and effect analysis;fault-injection approach;model-execution approach;safety analysis;semiautomated FMEA;Analytical models;Computer architecture;Iron;Safety;Software;Standards;Unified modeling language;ALF;component-based;executable model;fUML;model-implemented fault-injection;software safety analysis",
    	doi = "10.1109/HASE.2015.36"
    }
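The core of model-level fault injection, executing the model nominally and with an injected fault and comparing the two behaviours, can be illustrated with a toy executable model (the controller, fault type and threshold below are invented for illustration, not the paper's fUML/ALF machinery):

```python
# Toy executable model: a controller that opens a valve when sensed
# pressure exceeds a threshold. Fault injection happens at model level,
# by corrupting the sensor readings the model executes against.
def controller(readings, threshold=100):
    return ["OPEN" if p > threshold else "CLOSED" for p in readings]

def inject_stuck_at(readings, value):
    # stuck-at sensor fault: every reading is replaced by one value
    return [value for _ in readings]

nominal_in = [90, 110, 120, 80]
nominal = controller(nominal_in)                      # nominal run
faulty = controller(inject_stuck_at(nominal_in, 90))  # faulty run
# Compare the two behaviours: steps where the fault effect is observable
deviations = [i for i, (n, f) in enumerate(zip(nominal, faulty)) if n != f]
print(deviations)  # -> [1, 2]
```

In the paper the executable model is derived from a UML component-based architecture and run under fUML/ALF semantics; the nominal-versus-faulty comparison is then used to assess the effectiveness of safety countermeasures.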
    
2014

  1. Nicola Nostro, Andrea Bondavalli and Nuno Silva.
    Adding Security Concerns to Safety Critical Certification.
    In Software Reliability Engineering Workshops (ISSREW), 2014 IEEE International Symposium on. November 2014, 521-526.
    Abstract Safety-critical systems represent those systems whose failure may lead to catastrophic consequences on users and environment. Several methods and hazard analysis, and standards in different disciplines, have been defined in order to assure the systems have been designed in compliance with safety requirements. The increasing presence of automatic controlling operation, the massive use of networks to transfer data and information, and the human operations introduce a new security concern in safety-critical systems. Security issues (threats) do not only have direct impact on systems availability, integrity and confidentiality, but they also can influence the safety aspects of the safety critical systems. Today taking into account malicious actions through intrusion into communications and computer control systems become a critical and not negligible step during the design and the assessment of safety-critical systems. The paper describes a general methodology to support the assessment of safety-critical system with respect to security aspects. The methodology is based on a library of security threats. Such threats, identified during the work, have been mapped to the NIST security controls. Then, a preliminary representation of the library in the aerospace domain is shown through some simple example, together with some considerations on the relation between security issues and safety impact as a valuable addition to the safety critical systems certification process. DOI BibTeX

    @inproceedings{cecris-2014-10,
    	author = "Nostro, Nicola and Bondavalli, Andrea and Silva, Nuno",
    	booktitle = "Software Reliability Engineering Workshops (ISSREW), 2014 IEEE International Symposium on",
    	title = "Adding Security Concerns to Safety Critical Certification",
    	year = 2014,
    	month = "Nov",
    	pages = "521-526",
    	abstract = "Safety-critical systems represent those systems whose failure may lead to catastrophic consequences on users and environment. Several methods and hazard analysis, and standards in different disciplines, have been defined in order to assure the systems have been designed in compliance with safety requirements. The increasing presence of automatic controlling operation, the massive use of networks to transfer data and information, and the human operations introduce a new security concern in safety-critical systems. Security issues (threats) do not only have direct impact on systems availability, integrity and confidentiality, but they also can influence the safety aspects of the safety critical systems. Today taking into account malicious actions through intrusion into communications and computer control systems become a critical and not negligible step during the design and the assessment of safety-critical systems. The paper describes a general methodology to support the assessment of safety-critical system with respect to security aspects. The methodology is based on a library of security threats. Such threats, identified during the work, have been mapped to the NIST security controls. Then, a preliminary representation of the library in the aerospace domain is shown through some simple example, together with some considerations on the relation between security issues and safety impact as a valuable addition to the safety critical systems certification process.",
    	keywords = "Aircraft;Control systems;Libraries;NIST;Safety;Security;Cyber Threats;Safety;Safety-critical system;Security;Threats Library",
    	doi = "10.1109/ISSREW.2014.56"
    }
    
  2. Nuno Silva and Marco Vieira.
    Experience Report: Orthogonal Classification of Safety Critical Issues.
    In Software Reliability Engineering (ISSRE), 2014 IEEE 25th International Symposium on. November 2014, 156-166.
    Abstract Techniques to classify defects have been used for decades, providing relevant information on how to improve systems. Such techniques heavily rely on human experience and have been generalized to cover different types of systems at different maturity levels. However, their application to safety-critical systems development and operation phases neither is very common, or at least not spread publicly, nor disseminated in the industrial and academic worlds. This practical experience report presents the results and conclusions from applying a mature Orthogonal Defect Classification (ODC) to a large set of safety-critical issues. The work is based on the analysis of more than 240 real issues (defects) identified during all the lifecycle phases of 4 safety-critical systems in the aerospace and space domains. The outcomes reveal the challenges in properly classifying this specific type of issues with the broader ODC approach. The difficulties are identified and systematized and specific proposals for improvement are proposed. DOI BibTeX

    @inproceedings{cecris-2014-08,
    	author = "Silva, Nuno and Vieira, Marco",
    	booktitle = "Software Reliability Engineering (ISSRE), 2014 IEEE 25th International Symposium on",
    	title = "Experience Report: Orthogonal Classification of Safety Critical Issues",
    	year = 2014,
    	month = "Nov",
    	pages = "156-166",
    	abstract = "Techniques to classify defects have been used for decades, providing relevant information on how to improve systems. Such techniques heavily rely on human experience and have been generalized to cover different types of systems at different maturity levels. However, their application to safety-critical systems development and operation phases neither is very common, or at least not spread publicly, nor disseminated in the industrial and academic worlds. This practical experience report presents the results and conclusions from applying a mature Orthogonal Defect Classification (ODC) to a large set of safety-critical issues. The work is based on the analysis of more than 240 real issues (defects) identified during all the lifecycle phases of 4 safety-critical systems in the aerospace and space domains. The outcomes reveal the challenges in properly classifying this specific type of issues with the broader ODC approach. The difficulties are identified and systematized and specific proposals for improvement are proposed.",
    	keywords = "Industries;Random access memory;Safety;Satellites;Software;Standards;Testing;ODC;classification;defect;issue;orthogonality;safety-critical",
    	doi = "10.1109/ISSRE.2014.25",
    	issn = "1071-9458"
    }
    
  3. Nuno Silva and Marco Vieira.
    Towards Making Safety-Critical Systems Safer: Learning from Mistakes.
    In Software Reliability Engineering Workshops (ISSREW), 2014 IEEE International Symposium on. November 2014, 162-167.
    Abstract Safety-critical systems usually need to be qualified and certified, they follow specific and strict development standards that recommend the use of techniques and processes, specific personnel training and domain expertise. These systems are very sensitive to failures and thus there is a need to guarantee the higher quality and dependability levels. The goal of this paper is to present the PhD work plan that shall lead to a disruptive approach to identify the quality gaps, root-causes and improve safety-critical systems engineering. The main idea is to start from the classification of real issues, map them to engineering properties and root causes, and identify how to avoid and reduce the impact of those causes. The foreseen improvements shall be reflected in development and V&V techniques, resources training or preparation, and international standards adaptations in order to reflect measurable improvement in the safety and quality of the systems. DOI BibTeX

    @inproceedings{cecris-2014-07,
    	author = "Silva, Nuno and Vieira, Marco",
    	booktitle = "Software Reliability Engineering Workshops (ISSREW), 2014 IEEE International Symposium on",
    	title = "Towards Making Safety-Critical Systems Safer: Learning from Mistakes",
    	year = 2014,
    	month = "Nov",
    	pages = "162-167",
    	abstract = "Safety-critical systems usually need to be qualified and certified, they follow specific and strict development standards that recommend the use of techniques and processes, specific personnel training and domain expertise. These systems are very sensitive to failures and thus there is a need to guarantee the higher quality and dependability levels. The goal of this paper is to present the PhD work plan that shall lead to a disruptive approach to identify the quality gaps, root-causes and improve safety-critical systems engineering. The main idea is to start from the classification of real issues, map them to engineering properties and root causes, and identify how to avoid and reduce the impact of those causes. The foreseen improvements shall be reflected in development and V&V techniques, resources training or preparation, and international standards adaptations in order to reflect measurable improvement in the safety and quality of the systems.",
    	keywords = "Guidelines;Industries;Safety;Software;Standards;Systems engineering and theory;Taxonomy;ODC;airborne;classification;defect;issue;orthogonality;root-cause analysis;safety-critical;space",
    	doi = "10.1109/ISSREW.2014.97"
    }
    
  4. G. Bergmann, A. Hegedus, G. Gerencser and D. Varro.
    Graph Query by Example.
    In 1st International Workshop on Combining Modelling with Search- and Example-Based Approaches (CMSEBA 2014), Valencia, Spain, September 28, 2014. BibTeX

    @inproceedings{cecris-2014-06,
    	title = "Graph Query by Example",
    	author = "Bergmann, G. and Hegedus, A. and Gerencser, G. and Varro, D.",
    	booktitle = "1st International Workshop on Combining Modelling with Search- and Example-Based Approaches (CMSEBA 2014), Valencia, Spain, September 28, 2014",
    	year = 2014,
    	abstract = "Model-driven tools use model queries for many purposes, including validation of well-formedness rules, specification of derived features, and directing rule-based model transformation. Query languages such as graph patterns may facilitate capturing complex structural relationships between model elements. Specifying such queries, however, may prove difficult for engineers familiar with the concrete syntax only, not with the underlying abstract representation of the modeling language. The current paper presents an extension to the EMF-IncQuery model query tool that lets users point out, using familiar concrete syntax, an example of what the query results should look like, and automatically derive a graph query that finds other similar results.",
    	keywords = "by example, model query, graph pattern, EMF-IncQuery"
    }
    
  5. Tamás Tóth and András Vörös.
    Verification of a Real-Time Safety-Critical Protocol Using a Modelling Language with Formal Data and Behaviour Semantics.
    In Andrea Bondavalli, Andrea Ceccarelli and Frank Ortmeier (eds.). Computer Safety, Reliability, and Security. Lecture Notes in Computer Science series, volume 8696, Springer International Publishing, 2014, pages 207-218. URL, DOI BibTeX

    @incollection{cecris-2014-05,
    	year = 2014,
    	isbn = "978-3-319-10556-7",
    	booktitle = "Computer Safety, Reliability, and Security",
    	volume = 8696,
    	series = "Lecture Notes in Computer Science",
    	editor = "Bondavalli, Andrea and Ceccarelli, Andrea and Ortmeier, Frank",
    	doi = "10.1007/978-3-319-10557-4_24",
    	title = "Verification of a Real-Time Safety-Critical Protocol Using a Modelling Language with Formal Data and Behaviour Semantics",
    	url = "http://dx.doi.org/10.1007/978-3-319-10557-4_24",
    	publisher = "Springer International Publishing",
    	author = "Tóth, Tamás and Vörös, András",
    	pages = "207-218",
    	abstract = "Formal methods have an important role in ensuring the correctness of safety critical systems. However, their application in industry is always cumbersome: the lack of experts and the complexity of formal languages prevents the efficient application of formal verification techniques. In this paper we take a step in the direction of making formal modelling simpler by introducing a framework which helps designers to construct formal models efficiently. Our formal modelling framework supports the development of traditional transition systems enriched with complex data types with type checking and type inference services, time dependent behaviour and timing parameters with relations. In addition, we introduce a toolchain to provide formal verification. Finally, we demonstrate the usefulness of our approach in an industrial case study.",
    	language = "English"
    }
    
  6. Nicola Nostro, Andrea Ceccarelli, Francesco Brancati and Andrea Bondavalli.
    Insider Threat Assessment: a Model-Based Methodology.
    SIGOPS Operating Systems Review (OSR) journal 48(2):3–12, July 2014. URL, DOI BibTeX

    @article{cecris-2014-04,
    	author = "Nostro, Nicola and Ceccarelli, Andrea and Brancati, Francesco and Bondavalli, Andrea",
    	doi = "10.1145/2694737.2694740",
    	issn = "0163-5980",
    	journal = "SIGOPS Operating Systems Review (OSR) journal",
    	keywords = "security,insider threats,risk assessment,attack path",
    	month = "July",
    	number = 2,
    	pages = "3--12",
    	title = "{I}nsider {T}hreat {A}ssessment: a {M}odel-{B}ased {M}ethodology",
    	url = "http://dl.acm.org/citation.cfm?id=2694740",
    	volume = 48,
    	abstract = "Security is a major challenge for today's companies, especially ICT ones which manage large scale cyber-critical systems. Amongst the multitude of attacks and threats to which a system is potentially exposed, there are insider attackers, i.e., users with legitimate access who abuse or misuse their power, thus leading to unexpected security violations (e.g., acquiring and disseminating sensitive information). These attacks are very difficult to detect and mitigate due to the nature of the attackers, who often are the company's employees motivated by socio-economical reasons, and to the fact that attackers operate within their granted restrictions. As a consequence, insider attackers constitute an actual threat for ICT organizations. In this paper we present our methodology, together with the application of existing supporting libraries and tools from the state-of-the-art, for insider threats assessment and mitigation. The ultimate objective is to define the motivations and the target of an insider, investigate the likeliness and severity of potential violations, and finally identify appropriate countermeasures. The methodology also includes a maintenance phase during which the assessment can be updated to reflect system changes. As a case study, we apply our methodology to the crisis management system Secure!, which includes different kinds of users and consequently is potentially exposed to a large set of insider threats.",
    	year = 2014
    }
    
  7. I A Elia, N Laranjeiro and M Vieira.
    ITWS: An Extensible Tool for Interoperability Testing of Web Services.
    In Web Services (ICWS), 2014 IEEE International Conference on. June 2014, 409–416.
    Abstract Web services are supported by a set of protocols that have been designed with the main goal of providing interoperable communication to applications. In typical business-critical services environments the occurrence of interoperability issues can have disastrous consequences, including direct financial costs, reputation, and client fidelity losses. Despite this, experience suggests that interoperability is still quite difficult to achieve, since the heterogeneity of frameworks for providing web services is quite large. In addition, current tools have limited testing capabilities and, in many cases, do not specialize in this problem. In this paper we present ITWS, an extensible Interoperability Testing tool for Web Services that is able to assess the interoperability of a web service, supported by any given framework. We have used ITWS to test the interoperability of a set of home-implemented TPC-App web services and a set of thousands of web services created in .NET C# against 11 client-side web service frameworks, including frameworks for mainstream programming languages. Numerous issues have been disclosed, showing the benefits of using ITWS and the importance of testing services for interoperability. DOI BibTeX

    @inproceedings{cecris-2014-02,
    	author = "Elia, I.A. and Laranjeiro, N. and Vieira, M.",
    	booktitle = "Web Services (ICWS), 2014 IEEE International Conference on",
    	title = "ITWS: An Extensible Tool for Interoperability Testing of Web Services",
    	year = 2014,
    	month = "June",
    	pages = "409--416",
    	abstract = "Web services are supported by a set of protocols that have been designed with the main goal of providing interoperable communication to applications. In typical business-critical services environments the occurrence of interoperability issues can have disastrous consequences, including direct financial costs, reputation, and client fidelity losses. Despite this, experience suggests that interoperability is still quite difficult to achieve, since the heterogeneity of frameworks for providing web services is quite large. In addition, current tools have limited testing capabilities and, in many cases, do not specialize in this problem. In this paper we present ITWS, an extensible Interoperability Testing tool for Web Services that is able to assess the interoperability of a web service, supported by any given framework. We have used ITWS to test the interoperability of a set of home-implemented TPC-App web services and a set of thousands of web services created in .NET C# against 11 client-side web service frameworks, including frameworks for mainstream programming languages. Numerous issues have been disclosed, showing the benefits of using ITWS and the importance of testing services for interoperability.",
    	keywords = "Web services;open systems;program testing;programming languages;.NET;ITWS;home-implemented TPC-App;interoperability testing of-Web-services;mainstream programming languages;protocols;Interoperability;Java;Runtime;Servers;Testing;Web services;interoperability;testing;web service;web service framework",
    	doi = "10.1109/ICWS.2014.65"
    }
    
  8. I A Elia, N Laranjeiro and M Vieira.
    A Field Perspective on the Interoperability of Web Services.
    In Services Computing (SCC), 2014 IEEE International Conference on. June 2014, 75–82.
    Abstract In a typical web services environment, a web service framework supports the client and server interaction by, among other tasks, announcing the services interfaces and translating application-level service calls to SOAP messages. Although designed to support interoperation, research and practice suggest that existing client-side and server-side frameworks often cannot fully interoperate. The problem is that, as web services are increasingly being deployed to support business-critical environments, interoperability issues may prevent or impact business transactions, potentially resulting in huge financial and reputation losses. In this paper we present an experimental evaluation of the interoperability of 1024 publicly available web services, against a set of diverse and well-known client-side web service frameworks. We have detected at least one severe interoperability issue in over 53% of the services tested and quite different interoperation capabilities regarding the client-side frameworks. Results clearly show that, although providers frequently claim interoperability capabilities, urgent improvements are required. DOI BibTeX

    @inproceedings{cecris-2014-01,
    	author = "Elia, I.A. and Laranjeiro, N. and Vieira, M.",
    	booktitle = "Services Computing (SCC), 2014 IEEE International Conference on",
    	title = "A Field Perspective on the Interoperability of Web Services",
    	year = 2014,
    	month = "June",
    	pages = "75--82",
    	abstract = "In a typical web services environment, a web service framework supports the client and server interaction by, among other tasks, announcing the services interfaces and translating application-level service calls to SOAP messages. Although designed to support interoperation, research and practice suggest that existing client-side and server-side frameworks often cannot fully interoperate. The problem is that, as web services are increasingly being deployed to support business-critical environments, interoperability issues may prevent or impact business transactions, potentially resulting in huge financial and reputation losses. In this paper we present an experimental evaluation of the interoperability of 1024 publicly available web services, against a set of diverse and well-known client-side web service frameworks. We have detected at least one severe interoperability issue in over 53% of the services tested and quite different interoperation capabilities regarding the client-side frameworks. Results clearly show that, although providers frequently claim interoperability capabilities, urgent improvements are required.",
    	keywords = "Web services;business data processing;client-server systems;open systems;SOAP messages;Web service framework;Web service interoperability;business transactions;business-critical environments;client-server interaction;client-side frameworks;field perspective;financial losses;interoperability evaluation;reputation losses;server-side Web service frameworks;services interfaces;Business;Interoperability;Java;Servers;Simple object access protocol;Testing;interoperability;testing;web service;web service framework",
    	doi = "10.1109/SCC.2014.19"
    }
    
  9. Andrea Ceccarelli, Tommaso Zoppi, Andrea Bondavalli, Fabio Duchi and Giuseppe Vella.
    A Testbed for Evaluating Anomaly Detection Monitors Through Fault Injection.
    In ISORCW-SORT 2014. 2014.
    Abstract Amongst the features of Service Oriented Architectures (SOAs), their flexibility, dynamicity, and scalability make them particularly attractive for adoption in the ICT infrastructure of organizations. Such features come at the cost of increased difficulty in monitoring the SOA for error detection: i) faults may manifest themselves differently due to services and SOA evolution, and ii) interactions between a service and its monitors may need reconfiguration at each service update. This calls for monitoring solutions that operate at different layers than the application layer (services layer). In this paper we present our ongoing work towards the definition of a monitoring framework for SOAs and services, which relies on anomaly detection performed at the Application Server (AS) and the Operating System (OS) layers to identify events whose manifestation or effect is not adequately described a priori. Specifically, the paper introduces the key concepts of our work and presents the case study built to exercise and set up our monitor. The case study uses Liferay as application layer and it includes fault injection and data collection instruments to perform extended testing campaigns. BibTeX

    @conference{cecris-2014-00,
    	author = "Ceccarelli, Andrea and Zoppi, Tommaso and Bondavalli, Andrea and Duchi, Fabio and Vella, Giuseppe",
    	booktitle = "ISORCW-SORT 2014",
    	keywords = "sort2014",
    	title = "{A} {T}estbed for {E}valuating {A}nomaly {D}etection {M}onitors {T}hrough {F}ault {I}njection",
    	year = 2014,
    	abstract = {Amongst the features of Service Oriented Architectures (SOAs), their flexibility, dynamicity, and scalability make them particularly attractive for adoption in the ICT infrastructure of organizations. Such features come at the cost of increased difficulty in monitoring the SOA for error detection: i) faults may manifest themselves differently due to services and SOA evolution, and ii) interactions between a service and its monitors may need reconfiguration at each service update. This calls for monitoring solutions that operate at different layers than the application layer (services layer). In this paper we present our ongoing work towards the definition of a monitoring framework for SOAs and services, which relies on anomaly detection performed at the Application Server (AS) and the Operating System (OS) layers to identify events whose manifestation or effect is not adequately described a priori. Specifically, the paper introduces the key concepts of our work and presents the case study built to exercise and set up our monitor. The case study uses Liferay as application layer and it includes fault injection and data collection instruments to perform extended testing campaigns.}
    }
    
  1. D Cotroneo, F Frattini, R Natella and R Pietrantuono.
    Performance Degradation Analysis of a Supercomputer.
    In The 5th International Workshop on Software Aging and Rejuvenation, Supplemental Proceedings of the 2013 IEEE 24th International Symposium on Software Reliability Engineering. 2013, 263–268.
    Abstract We analyze performance degradation phenomena due to software aging on a real supercomputer deployed at the Federico II University of Naples, by considering a dataset of ten months of operational usage. We adopted a statistical approach for identifying when and where the supercomputer experienced a performance degradation trend. The analysis pinpointed performance degradation trends that were actually caused by the gradual error accumulation within basic software of the supercomputer. BibTeX

    @conference{cecris-2013-14,
    	author = "D. Cotroneo and F. Frattini and R. Natella and R. Pietrantuono",
    	title = "Performance Degradation Analysis of a Supercomputer",
    	booktitle = "The 5th International Workshop on Software Aging and Rejuvenation, Supplemental Proceedings of the 2013 IEEE 24th International Symposium on Software Reliability Engineering",
    	abstract = "We analyze performance degradation phenomena due to software aging on a real supercomputer deployed at the Federico II University of Naples, by considering a dataset of ten months of operational usage. We adopted a statistical approach for identifying when and where the supercomputer experienced a performance degradation trend. The analysis pinpointed performance degradation trends that were actually caused by the gradual error accumulation within basic software of the supercomputer.",
    	year = 2013,
    	pages = "263--268",
    	publisher = "IEEE Computer Society, Los Alamitos",
    	address = "Pasadena, CA, USA",
    	month = "4th-7th November",
    	isbn = "978-1-4799-2552-0"
    }
    
  2. Nuno Silva and Marco Vieira.
    Certification of embedded systems: Quantitative analysis and irrefutable evidences.
    In Software Reliability Engineering Workshops (ISSREW), 2013 IEEE International Symposium on. November 2013, 15–16.
    Abstract Electronic/embedded systems are more and more dependent on software flexibility and properties. They can be found in all spheres of our lives at a macro and global scale, ranging from personal and entertainment devices, household appliances, all types of transportation systems, global communication systems, civilian and military systems, energy and banking systems, and so on. Given the importance of all these systems and the safety and security requirements that become associated, national and international regulators require appropriate certification of each characteristic of the referred ubiquitous systems. This abstract presents the initial ideas concerning a quantitative analysis and evaluation of the evidence set forward in safety cases that support and eventually lead to certification of embedded systems with large parts of software. A discussion about the current industrial practices, limitations and state of the art related to certification evidences is drafted, and ideas concerning how evidences can be improved in terms of completeness, coherency, correctness, coverage, etc., as well as how a quantitative analysis of the certification process can be derived, are introduced for discussion and feedback. Current practices are not perfect, not properly applied, or applied in very different ways, presenting limitations, flaws and simplifications that jeopardize systems safety; this is why we intend to initiate this research work. DOI BibTeX

    @inproceedings{cecris-2013-13,
    	author = "Silva, Nuno and Vieira, Marco",
    	booktitle = "Software Reliability Engineering Workshops (ISSREW), 2013 IEEE International Symposium on",
    	title = "Certification of embedded systems: Quantitative analysis and irrefutable evidences",
    	year = 2013,
    	month = "Nov",
    	pages = "15--16",
    	abstract = "Electronic/embedded systems are more and more dependent on software flexibility and properties. They can be found in all spheres of our lives at a macro and global scale, ranging from personal and entertainment devices, household appliances, all types of transportation systems, global communication systems, civilian and military systems, energy and banking systems, and so on. Given the importance of all these systems and the safety and security requirements that become associated, national and international regulators require appropriate certification of each characteristic of the referred ubiquitous systems. This abstract presents the initial ideas concerning a quantitative analysis and evaluation of the evidence set forward in safety cases that support and eventually lead to certification of embedded systems with large parts of software. A discussion about the current industrial practices, limitations and state of the art related to certification evidences is drafted, and ideas concerning how evidences can be improved in terms of completeness, coherency, correctness, coverage, etc., as well as how a quantitative analysis of the certification process can be derived, are introduced for discussion and feedback. Current practices are not perfect, not properly applied, or applied in very different ways, presenting limitations, flaws and simplifications that jeopardize systems safety; this is why we intend to initiate this research work.",
    	keywords = "certification;embedded systems;safety-critical software;banking systems;civilian systems;electronic systems;embedded systems certification;energy systems;entertainment devices;global communication systems;household appliances;military systems;personal devices;safety requirements;security requirements;software flexibility;software properties;transportation systems;ubiquitous systems;Certification;Evidences;Safety Case;Safety standards;Software Safety",
    	doi = "10.1109/ISSREW.2013.6688854"
    }
    
  3. N Antunes, F Brancati, A Ceccarelli, A Bondavalli and M Vieira.
    A monitoring and testing framework for critical off-the-shelf applications and services.
    In Software Reliability Engineering Workshops (ISSREW), 2013 IEEE International Symposium on. November 2013, 371–374.
    Abstract One of the biggest verification and validation challenges is the definition of approaches and tools to support system assessment while minimizing costs and delivery time. This includes the integration of OTS software components in critical systems that must undergo proper certification or approval processes. In the particular case of testing, due to the differences and peculiarities of components, developers often build ad-hoc and poorly-reusable testing tools, which results in increased time and costs. This paper introduces a framework for testing and monitoring of critical OTS applications and services. The framework includes i) a box that is instrumented for monitoring OS and application level variables, ii) an adaptable toolset for testing the target components, and iii) tools for data storing, retrieval and analysis. A prototype of the framework is under development, and future testing scenarios are designed to show the applicability and effectiveness of the framework. DOI BibTeX

    @conference{cecris-2013-12,
    	author = "Antunes, N. and Brancati, F. and Ceccarelli, A and Bondavalli, A and Vieira, M.",
    	booktitle = "Software Reliability Engineering Workshops (ISSREW), 2013 IEEE International Symposium on",
    	title = "A monitoring and testing framework for critical off-the-shelf applications and services",
    	year = 2013,
    	month = "Nov",
    	pages = "371--374",
    	abstract = "One of the biggest verification and validation challenges is the definition of approaches and tools to support system assessment while minimizing costs and delivery time. This includes the integration of OTS software components in critical systems that must undergo proper certification or approval processes. In the particular case of testing, due to the differences and peculiarities of components, developers often build ad-hoc and poorly-reusable testing tools, which results in increased time and costs. This paper introduces a framework for testing and monitoring of critical OTS applications and services. The framework includes i) a box that is instrumented for monitoring OS and application level variables, ii) an adaptable toolset for testing the target components, and iii) tools for data storing, retrieval and analysis. A prototype of the framework is under development, and future testing scenarios are designed to show the applicability and effectiveness of the framework.",
    	keywords = "formal verification;program testing;OS variables;OTS software components;ad-hoc poorly-reusable testing tools;application level variables;costs minimization;data analysis;data retrieval;data storing;delivery time minimization;monitoring framework;off-the-shelf applications;off-the-shelf services;operating systems;testing framework;verification and validation challenges;Certification;Logic gates;Monitoring;Robustness;Software;Standards;Testing;assessment;certification;critical applications;monitoring;testing;verification and validation",
    	doi = "10.1109/ISSREW.2013.6688923"
    }
    
  4. A Ceccarelli and N Silva.
    Qualitative comparison of aerospace standards: An objective approach.
    In Software Reliability Engineering Workshops (ISSREW), 2013 IEEE International Symposium on. November 2013, 331–336.
    Abstract Aerospace development processes are regulated by hardware, software or system-level standards. These standards describe the phases of the life-cycle, and the techniques that guarantee or assess the safety of systems and components. Standards are mostly written independently of one another, and despite major similarities, they also include several distinctions which force companies to apply different expertise, training, personnel and procedures for each of them. This increases the difficulty in adopting new or different standards, ultimately resulting in increased costs. This paper investigates the differences between relevant aerospace standards, namely ARP4754A/4761, DO-178B/C, DO-254, ED-153, FAA HBK006A, Galileo Software Standard (GSWS) and the ECSS series, through comparison of lifecycle and major requirements. Evidence is given of main commonalities between the standards, but also of several non-negligible specificities, which make it more challenging to define a unique development process and the set of activities and competences required to achieve standards compliance. DOI BibTeX

    @conference{cecris-2013-11,
    	author = "Ceccarelli, A and Silva, N.",
    	booktitle = "Software Reliability Engineering Workshops (ISSREW), 2013 IEEE International Symposium on",
    	title = "Qualitative comparison of aerospace standards: An objective approach",
    	year = 2013,
    	month = "Nov",
    	pages = "331--336",
    	abstract = "Aerospace development processes are regulated by hardware, software or system-level standards. These standards describe the phases of the life-cycle, and the techniques that guarantee or assess the safety of systems and components. Standards are mostly written independently of one another, and despite major similarities, they also include several distinctions which force companies to apply different expertise, training, personnel and procedures for each of them. This increases the difficulty in adopting new or different standards, ultimately resulting in increased costs. This paper investigates the differences between relevant aerospace standards, namely ARP4754A/4761, DO-178B/C, DO-254, ED-153, FAA HBK006A, Galileo Software Standard (GSWS) and the ECSS series, through comparison of lifecycle and major requirements. Evidence is given of main commonalities between the standards, but also of several non-negligible specificities, which make it more challenging to define a unique development process and the set of activities and competences required to achieve standards compliance.",
    	keywords = "aerospace computing;software standards;ARP4754A/4761 standard;DO-178B/C standard;DO-254 standard;ECSS series;ED-153 standard;FAA HBK006A standard;GSWS;Galileo software standard;aerospace development processes;aerospace standards;component safety;life-cycle;objective approach;personnel;qualitative comparison;system safety;system-level standards;training;Aerospace electronics;FAA;Maintenance engineering;Safety;Software;Standards;Testing;ARP4754A;DO-178B;DO-178C;DO-254;ECSS;ED-153;FAA HBK006A;GSWS;aerospace;standards comparison",
    	doi = "10.1109/ISSREW.2013.6688916"
    }
    
  5. N Silva, A Esper, R Barbosa, J Zandin and C Monteleone.
    Reference architecture for high dependability on-board computers.
    In Software Reliability Engineering Workshops (ISSREW), 2013 IEEE International Symposium on. November 2013, 381–386.
    Abstract The industrial process in the area of on-board computers is characterized by small production series of on-board computer (hardware and software) configuration items with little recurrence at unit or set level (e.g. computer equipment unit, set of interconnected redundant units). These small production series result in a reduced amount of statistical data related to dependability, which influences the way on-board computers are specified, designed and verified. In the context of the ESA harmonization policy for the deployment of enhanced and homogeneous industrial processes in the area of avionics embedded systems and on-board computers for the space industry, this study aimed at rationalizing the initiation phase of the development or procurement of on-board computers and at improving dependability assurance. This aim was achieved by establishing generic requirements for the procurement or development of on-board computers with a focus on well defined reliability, availability, and maintainability requirements, as well as a generic methodology for planning, predicting and assessing the dependability of on-board computer hardware and software throughout their life cycle. It also provides guidelines for producing evidence material and arguments to support dependability assurance of on-board computer hardware and software throughout the complete lifecycle, including an assessment of feasibility aspects of the dependability assurance process and how the use of a computer-aided environment can contribute to on-board computer dependability assurance. DOI BibTeX

    @inproceedings{cecris-2013-10,
    	author = "Silva, N. and Esper, A. and Barbosa, R. and Zandin, J. and Monteleone, C.",
    	booktitle = "Software Reliability Engineering Workshops (ISSREW), 2013 IEEE International Symposium on",
    	title = "Reference architecture for high dependability on-board computers",
    	year = 2013,
    	month = "Nov",
    	pages = "381--386",
    	abstract = "The industrial process in the area of on-board computers is characterized by small production series of on-board computer (hardware and software) configuration items with little recurrence at unit or set level (e.g. computer equipment unit, set of interconnected redundant units). These small production series result in a reduced amount of statistical data related to dependability, which influences the way on-board computers are specified, designed and verified. In the context of the ESA harmonization policy for the deployment of enhanced and homogeneous industrial processes in the area of avionics embedded systems and on-board computers for the space industry, this study aimed at rationalizing the initiation phase of the development or procurement of on-board computers and at improving dependability assurance. This aim was achieved by establishing generic requirements for the procurement or development of on-board computers with a focus on well defined reliability, availability, and maintainability requirements, as well as a generic methodology for planning, predicting and assessing the dependability of on-board computer hardware and software throughout their life cycle. It also provides guidelines for producing evidence material and arguments to support dependability assurance of on-board computer hardware and software throughout the complete lifecycle, including an assessment of feasibility aspects of the dependability assurance process and how the use of a computer-aided environment can contribute to on-board computer dependability assurance.",
    	keywords = "avionics;embedded systems;software reliability;ESA harmonization policy;REFARCH;avionics embedded systems;complete lifecycle;high dependability on-board computers;on-board computer dependability assurance;reference architecture;space industry;statistical data;Aerospace electronics;Availability;Computers;Hardware;Software;Software reliability;SAVOIR;assurance;availability;dependability;maintainability;on-board computer;prediction;reliability",
    	doi = "10.1109/ISSREW.2013.6688925"
    }
    
  6. N Nostro, A Ceccarelli, A Bondavalli and F Brancati.
    A Methodology and Supporting Techniques for the Quantitative Assessment of Insider Threats.
    In Proceedings of the 2nd International Workshop on Dependability Issues in Cloud Computing (DISCCO 2013). 2013, 263–268.
    Abstract Security is a major challenge for today's companies, especially ICT ones which manage large scale cyber-critical systems. Amongst the multitude of attacks and threats to which a system is potentially exposed, there are insider attackers, i.e., users with legitimate access who abuse or misuse their power, thus leading to unexpected security violations (e.g., acquiring and disseminating sensitive information). These attacks are very difficult to detect and mitigate due to the nature of the attackers, who often are the company's employees motivated by socio-economical reasons, and to the fact that attackers operate within their granted restrictions: as a consequence, insider attackers constitute an actual threat for ICT organizations. In this paper we present our ongoing work towards a methodology and supporting libraries and tools for insider threats assessment and mitigation. The ultimate objective is to quantitatively evaluate the possibility that a user will perform an attack, the severity of potential violations, the costs, and finally select the countermeasures. The methodology also includes a maintenance phase during which the assessment is updated on the basis of system evolution. The paper discusses future works towards the completion of our methodology. DOI BibTeX

    @conference{cecris-2013-09,
    	author = "N. Nostro and A. Ceccarelli and A. Bondavalli and F. Brancati",
    	title = "A Methodology and Supporting Techniques for the Quantitative Assessment of Insider Threats",
    	booktitle = "Proceedings of the 2nd International Workshop on Dependability Issues in Cloud Computing (DISCCO 2013)",
    	abstract = "Security is a major challenge for today's companies, especially ICT ones which manage large scale cyber-critical systems. Amongst the multitude of attacks and threats to which a system is potentially exposed, there are insider attackers, i.e., users with legitimate access who abuse or misuse their power, thus leading to unexpected security violations (e.g., acquiring and disseminating sensitive information). These attacks are very difficult to detect and mitigate due to the nature of the attackers, who often are the company's employees motivated by socio-economical reasons, and to the fact that attackers operate within their granted restrictions: as a consequence, insider attackers constitute an actual threat for ICT organizations. In this paper we present our ongoing work towards a methodology and supporting libraries and tools for insider threats assessment and mitigation. The ultimate objective is to quantitatively evaluate the possibility that a user will perform an attack, the severity of potential violations, the costs, and finally select the countermeasures. The methodology also includes a maintenance phase during which the assessment is updated on the basis of system evolution. The paper discusses future works towards the completion of our methodology.",
    	year = 2013,
    	pages = "263--268",
    	publisher = "ACM",
    	address = "Braga, Portugal",
    	month = "30th September",
    	isbn = "978-1-4503-2248-5",
    	doi = "10.1145/2506155.2506158"
    }
    
  7. N Silva and R Lopes.
    Reliability Prediction Analysis: Airborne System Results and Best Practices.
    In IASS 2013. 2013. BibTeX

    @conference{cecris-2013-08,
    	author = "N. Silva and R. Lopes",
    	title = "Reliability Prediction Analysis: Airborne System Results and Best Practices",
    	booktitle = "IASS 2013",
    	year = 2013,
    	address = "Montreal, Canada",
    	month = "May"
    }
    
  8. D Cotroneo and R Natella.
    Fault Injection for Software Certification.
    In IEEE Security & Privacy, special issue on Safety-Critical Systems: The Next Generation. 2013, 11(4), 38-45.
    Abstract As software becomes more pervasive and complex, it's increasingly important to ensure that a system will be safe even in the presence of residual software faults (or bugs). Software fault injection consists of the deliberate introduction of software faults for assessing the impact of faulty software on a system and improving its fault tolerance. SFI has been included as a recommended practice in recent safety standards and has therefore gained interest among practitioners, but it's still unclear how it can be effectively used for certification purposes. In this article, the authors discuss the adoption of SFI in the context of safety certification, present a tool for the injection of realistic software faults, and show the usage of that tool in evaluating and improving the robustness of an operating system used in the avionic domain. DOI BibTeX

    @conference{cecris-2013-07,
    	author = "D. Cotroneo and R. Natella",
    	title = "Fault Injection for Software Certification",
    	booktitle = "IEEE Security \& Privacy, special issue on Safety-Critical Systems: The Next Generation",
    	abstract = "As software becomes more pervasive and complex, it's increasingly important to ensure that a system will be safe even in the presence of residual software faults (or bugs). Software fault injection consists of the deliberate introduction of software faults for assessing the impact of faulty software on a system and improving its fault tolerance. SFI has been included as a recommended practice in recent safety standards and has therefore gained interest among practitioners, but it's still unclear how it can be effectively used for certification purposes. In this article, the authors discuss the adoption of SFI in the context of safety certification, present a tool for the injection of realistic software faults, and show the usage of that tool in evaluating and improving the robustness of an operating system used in the avionic domain.",
    	year = 2013,
    	volume = 11,
    	number = 4,
    	pages = "38-45",
    	publisher = "IEEE Computer Society",
    	issn = "1540-7993",
    	doi = "10.1109/MSP.2013.54"
    }
    
  9. N Silva and R Lopes.
    Practical Experiences with real-world systems: Security in the World of Reliable and Safe Systems.
    In Conference on Dependable Systems and Networks Workshop (DSN-W), 2013 43rd Annual IEEE/IFIP. 2013, 1-5.
    Abstract Reliability and Safety have always been associated with Safety-Critical Systems. Since the failure of a Safety-Critical System may lead to loss of human lives or large economic losses, the standards that guide the development of these systems have always focused on these two aspects, independently of the applicable domain. By considering Reliability and Safety in isolation, one can design a system that is highly reliable and safe without Security concerns. However, Security plays a major role in the achievement of both Reliability and Safety: a system cannot be reliable and safe if it is not secure. Therefore, the current processes to certify a Safety-Critical System also address Security aspects, together with Reliability and Safety. This work presents the activities that have been performed in the scope of the certification of a Safety-Critical System in the railway domain, and how Security is tackled without jeopardizing Reliability and Safety. The data collected, and its importance for guaranteeing safety, reliability and security, is presented and discussed. A relationship between the activities performed and the standards' concerns is established, and examples of architectural decisions that could provide more Reliability and Safety but less Security are presented. DOI BibTeX

    @conference{cecris-2013-06,
    	author = "N. Silva and R. Lopes",
    	title = "Practical Experiences with real-world systems: Security in the World of Reliable and Safe Systems",
    	booktitle = "Conference on Dependable Systems and Networks Workshop (DSN-W), 2013 43rd Annual IEEE/IFIP",
    	abstract = "Reliability and Safety have always been associated with Safety-Critical Systems. Since the failure of a Safety-Critical System may lead to loss of human lives or large economic losses, the standards that guide the development of these systems have always focused on these two aspects, independently of the applicable domain. By considering Reliability and Safety in isolation, one can design a system that is highly reliable and safe without Security concerns. However, Security plays a major role in the achievement of both Reliability and Safety: a system cannot be reliable and safe if it is not secure. Therefore, the current processes to certify a Safety-Critical System also address Security aspects, together with Reliability and Safety. This work presents the activities that have been performed in the scope of the certification of a Safety-Critical System in the railway domain, and how Security is tackled without jeopardizing Reliability and Safety. The data collected, and its importance for guaranteeing safety, reliability and security, is presented and discussed. A relationship between the activities performed and the standards' concerns is established, and examples of architectural decisions that could provide more Reliability and Safety but less Security are presented.",
    	year = 2013,
    	pages = "1-5",
    	publisher = "IEEE Computer Society",
    	address = "Budapest, Hungary",
    	month = "24th-27th June",
    	issn = "2325-6648",
    	doi = "10.1109/DSNW.2013.6615515"
    }
    
  10. C Areias.
    A Framework for Runtime V&V in Business-Critical Service Oriented Architectures.
    In Student Forum, The 43rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2013). 2013, 1-4.
    Abstract Service Oriented Architectures (SOA) present features that allow companies to react quickly to changes through the provision of new or modified services in their environments. However, the application of verification and validation (V&V) techniques in such environments is a very challenging task, due to their complexity and dynamic nature, hampering the application of traditional V&V. In this context, this paper justifies the need for a runtime V&V approach, presents a framework for its implementation in critical SOA systems, and discusses some important challenges that must be addressed. DOI BibTeX

    @conference{cecris-2013-05,
    	author = "C. Areias",
    	title = "A Framework for Runtime V\&V in Business-Critical Service Oriented Architectures",
    	booktitle = "Student Forum, The 43rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2013)",
    	abstract = "Service Oriented Architectures (SOA) present features that allow companies to react quickly to changes through the provision of new or modified services in their environments. However, the application of verification and validation (V\&V) techniques in such environments is a very challenging task, due to their complexity and dynamic nature, hampering the application of traditional V\&V. In this context, this paper justifies the need for a runtime V\&V approach, presents a framework for its implementation in critical SOA systems, and discusses some important challenges that must be addressed.",
    	year = 2013,
    	address = "Budapest, Hungary",
    	month = "24th-27th June",
    	pages = "1-4",
    	publisher = "IEEE Computer Society",
    	issn = "2325-6648",
    	doi = "10.1109/DSNW.2013.6615518"
    }
    
  11. N Silva, R Barbosa, J C Cunha and M Vieira.
    A View on the Past and Future of Fault Injection.
    In Supplemental Proceedings of The 43rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2013). 2013, 1-2.
    Abstract Fault injection is a well-known technology that enables assessing dependability attributes of computer systems. Many works on fault injection have been developed in the past, and fault injection has been used in different application domains. This fast abstract briefly reviews previous applications of fault injection, especially for embedded systems, and puts forward ideas on its future use, both in terms of application areas and business markets. DOI BibTeX

    @conference{cecris-2013-04,
    	author = "N. Silva and R. Barbosa and J. C. Cunha and M. Vieira",
    	title = "A View on the Past and Future of Fault Injection",
    	booktitle = "Supplemental Proceedings of The 43rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2013)",
    	abstract = "Fault injection is a well-known technology that enables assessing dependability attributes of computer systems. Many works on fault injection have been developed in the past, and fault injection has been used in different application domains. This fast abstract briefly reviews previous applications of fault injection, especially for embedded systems, and puts forward ideas on its future use, both in terms of application areas and business markets.",
    	year = 2013,
    	pages = "1-2",
    	publisher = "IEEE Computer Society",
    	address = "Budapest, Hungary",
    	month = "24th-27th June",
    	isbn = "978-1-4673-6471-3",
    	doi = "10.1109/DSN.2013.6575332"
    }
    
  12. R Barbosa, N Silva and J M Cunha.
    csXception - First Steps for Providing Fault Injection for the Development of Safe Systems in the Automotive Industry.
    In Proceedings of 14th European Workshop, EWDC 2013. 2013, 202-205.
    Abstract The increasing complexity of vehicles' electrical and/or electronic components has introduced a challenge to automotive safety. Standardization efforts have already been made, leading to the ISO 26262 functional safety standard and the AUTOSAR architecture definition, which provide a development process that addresses safety and quality issues. With the goal of ensuring safety properties, this paper presents a fault injection tool (csXception), developed by Critical Software, and the first steps towards injecting faults into an ARM Cortex-M3 microcontroller using the SCIFI technique for assessing AUTOSAR systems. DOI BibTeX

    @conference{cecris-2013-03,
    	author = "R. Barbosa and N. Silva and J. M. Cunha",
    	title = "csXception - First Steps for Providing Fault Injection for the Development of Safe Systems in the Automotive Industry",
    	booktitle = "Proceedings of 14th European Workshop, EWDC 2013",
    	abstract = "The increasing complexity of vehicles' electrical and/or electronic components has introduced a challenge to automotive safety. Standardization efforts have already been made, leading to the ISO 26262 functional safety standard and the AUTOSAR architecture definition, which provide a development process that addresses safety and quality issues. With the goal of ensuring safety properties, this paper presents a fault injection tool (csXception), developed by Critical Software, and the first steps towards injecting faults into an ARM Cortex-M3 microcontroller using the SCIFI technique for assessing AUTOSAR systems.",
    	year = 2013,
    	pages = "202-205",
    	publisher = "Springer Berlin Heidelberg",
    	address = "Coimbra, Portugal",
    	month = "15-16 May",
    	isbn = "978-3-642-38788-3",
    	doi = "10.1007/978-3-642-38789-0_21"
    }
    
  13. N Silva, R Lopes, A Esper and R Barbosa.
    Results from an independent view on the validation of safety-critical space systems.
    In DASIA 2013. 2013, 202-205. BibTeX

    @conference{cecris-2013-02,
    	author = "N. Silva and R. Lopes and A. Esper and R. Barbosa",
    	title = "Results from an independent view on the validation of safety-critical space systems",
    	booktitle = "DASIA 2013",
    	year = 2013,
    	pages = "202-205",
    	address = "Porto, Portugal",
    	month = "14-16 May"
    }
    
  14. C Areias, N Antunes, J C Cunha and M Vieira.
    Towards Runtime V&V for Service Oriented Architectures.
    In Fast Abstract, Sixth Latin-American Symposium on Dependable Computing (LADC 2013). 2013, 75-76.
    Abstract The widespread use of SOAs and their specific characteristics raise new challenges for V&V practices. This paper presents some of these challenges and introduces Runtime V&V as a possible future solution. BibTeX

    @conference{cecris-2013-01,
    	author = "C. Areias and N. Antunes and J. C. Cunha and M. Vieira",
    	title = "Towards Runtime V\&V for Service Oriented Architectures",
    	booktitle = "Fast Abstract, Sixth Latin-American Symposium on Dependable Computing (LADC 2013)",
    	abstract = "The widespread use of SOAs and their specific characteristics raise new challenges for V\&V practices. This paper presents some of these challenges and introduces Runtime V\&V as a possible future solution.",
    	year = 2013,
    	address = "Rio de Janeiro, Brazil",
    	pages = "75-76",
    	isbn = "978-85-7669-274-4",
    	month = "1st-5th April"
    }
    
