Wednesday 1 June
We.1.A.1 - 11:30
Real-time high performance computing platform using a Jetson Xavier AGX
While general-purpose graphics processing units now pack a tremendous amount of computing power, their use in real-time applications remains a challenge. The COSMIC platform, developed in the context of adaptive optics control for giant astronomical telescopes, is a demonstrated solution for performing real-time computations on discrete GPUs while maintaining a high level of abstraction and modularity. An implementation on embedded platforms reaching an acceptable level of time-determinism would enable new real-time use cases in other application domains. In this regard, NVIDIA offers a broad range of embedded systems-on-chip that deliver great performance and are compatible with the CUDA ecosystem. However, specific hardware and software features bring uncertainties regarding real-time performance. The approaches presented in this paper rely on COSMIC recipes to expose part of the underlying unconventional GPU programming model and reach real-time performance. The paper shows how the Jetson Xavier performs on sub-millisecond complex pipelines made of several compute kernels, considering the limitations engendered by CUDA features that are missing compared to discrete devices, leveraging unified memory to work around these hurdles, and enabling several strategies for implementing real-time workflows on embedded GPU platforms.
We.1.A.2 - 12:00
PasTiS: building an NVIDIA Pascal GPU simulator for embedded AI applications
In this paper, we present how we proceeded to understand the behavior of an NVIDIA Pascal GPU, and how we used this knowledge to develop PasTiS (Pascal Timing Simulator), a cycle-level simulator for the NVIDIA Pascal GPU architecture family with a focus on timing simulation. PasTiS supports a subset of the Pascal ISA sufficient to simulate the execution of neural networks. We present this subset, as well as the underlying microarchitecture that we modelled using information available from NVIDIA, from scientific publications, and from our own experiments. We demonstrate the precision of the simulator by comparing it to measurements on the NVIDIA Jetson TX2 development board, on neural network applications.
We.1.B.1 - 11:30
Sizing a Drone Battery by coupling MBSE and MDAO
Drones are increasingly being used to assist humans in rescue and surveillance missions. Their design turns out to be a challenging issue, in particular when a given autonomy is expected, which drives the sizing of batteries. Thus, solutions must be sought to engineer a drone rigorously while specifying its main features. This paper introduces a work in progress aiming to bridge the gap between two engineering disciplines that have so far been developed separately: Model-Based Systems Engineering (MBSE) and Multidisciplinary Design Analysis and Optimization (MDAO). The coupling of MBSE and MDAO is addressed in terms of languages, tools, and methods.
We.1.B.2 - 12:00
Adaptation of an auto-generated code using a model-based approach to verify functional safety in real scenarios
Almost every mode of transportation is becoming autonomous, and the automotive industry is investing heavily in deploying self-driving systems. One difficult hurdle in the automotive domain is that system and software safety is not guaranteed. The public needs to be involved in safety decisions about adopting these systems to avoid their rejection. We propose a Model-Based Systems Engineering (MBSE) approach to verify the safety of Autonomous Vehicles (AVs) online. It is a simulation environment that creates a resilient and safe driver monitoring system: it continuously checks previously defined safety rules, triggers alarms when violations occur, and detects inconsistencies between the rules.
We.1.C.1 - 11:30
Formal Hardware Modeling for Analyzing Safety and Security Properties
The design of complex computational systems, such as Cyber-Physical Systems (CPSs) or the Internet of Things (IoT), is facilitated by the emergence of open hardware initiatives. Such initiatives propose software-like development workflows, from complex high-level Hardware Design Languages (HDLs) down to circuits, using sophisticated compilation chains. These approaches thus favor the availability of hardware designs, replacing the standard manual reference, where only certain design details are provided. Moreover, they enable manipulation of the design, as designers can insert their own transformation passes into those flows. In this paper, we study how to design formal architecture models so as to prove safety and security properties of complex computational systems.
We.1.C.2 - 12:00
An Automated Framework Towards Widespread Formal Verification of Complex Hardware Designs
Verification is an essential step of the critical systems design flow with regard to safety and security, supporting fault and vulnerability removal respectively. Model-checking ensures that a design meets its specifications by using an exhaustive state exploration approach, and has been adopted to design critical and/or secure-by-design embedded hardware systems. On the one hand, model-checking is fully automated; on the other hand, it does not scale and faces state-space explosion when working with large industrial circuits and complex specifications. Compositional model-checking and theorem proving make it possible to verify large designs, at the cost of finding abstractions and proving an implication of the specifications. In this paper, we present a method along with a framework to reduce this cost and improve hardware verification performance. We formally verified a hardware security monitor involved in a remote attestation scheme for microprocessors. This verification could not be achieved using classical approaches as it faces state-space explosion: the monitor is complex enough to be unfit for model-checking. We then applied our method to the verification and successfully proved the formal definition of remote attestation security using symbolic model-checking and automatic theorem proving. Our verified security monitor is described in a hardware description language and its specifications are written in a property specification language. Our method makes extensive use of compositional model-checking techniques to leverage the modular and partitioned aspects of most automata involved in hardware modelling. The method is fully automated in a framework built on top of free model checkers and automatic theorem provers. Automation relies on synthesis and translation tools which exploit the modular structure of sequential circuits and avoid manual re-design.
We.2.PO.1 - 14:00
Structural consistency of MBSE and MBSA models using Consistency Links
The systems designed in industrial fields such as aeronautics or aerospace are increasingly complex. To handle this complexity as well as the growing need for digital continuity, model-based solutions are more and more introduced for system design (MBSE). In this picture, in order to ensure consistency between the system design and the safety analysis, and thus increase confidence in safety analysis results, our proposal is to ensure consistency between MBSE and MBSA (Model-Based Safety Analysis) models, which represent different views of the same system. To do so, we define Consistency Links (CL) that bridge the structural items of each model. Associated with dedicated rules that can be systematically checked, the CLs can be used to drive the cross-review of models done by system engineers (SE) and safety specialists (SA), increasing the detection of inconsistencies between models. The work presented here is part of the S2C project and involves industrial partners from the space and aeronautical industries. It is led jointly by IRT Saint Exupéry and IRT SystemX.
We.2.PO.2 - 14:05
Experimenting with Dynamic Cache Allocation to Improve Linux Real-Time Behaviour
Embedded systems have an increasing need for computing power. To address this need, critical and non-critical tasks can be implemented on the same multicore processor. A disadvantage of this kind of processor is the non-determinism its complexity involves. New technologies, like dynamic allocation of cache memory, reduce the impact of this non-determinism. In this article, we provide an experimental approach to verify whether Intel's dynamic cache allocation (Intel CAT, Cache Allocation Technology) is effective.
We.2.PO.3 - 14:10
Model-based design of high-performance computer-based architectures
This paper focuses on a domain-specific architecture description language (ADL) that can be used in a model-based tool to design automotive Electric/Electronic (E/E) architectures. This ADL should be able to describe the relevant aspects of modern E/E-architectures holistically, with sufficient precision, clarity, and visual expressiveness to be understood by professionals of different disciplines in the automotive industry. Due to the pursued completeness of models, it should enable transformations into various standardized formats and allow metric calculation for optimization purposes. To cover innovations related to centralized computers in the vehicle, both regarding topology and the implementation of service-oriented architectures (SOA), the problem of the design and deployment of applications in heterogeneous E/E-architectures at the system level is addressed. For this purpose, the existing domain-specific language Electric/Electronic-Architecture-ADL (EEA-ADL), used in the tool PREEvision, is extended. Furthermore, an integrated workflow using the EEA-ADL is proposed, which could help establish a comprehensive model of various deployment scenarios.
We.2.PO.4 - 14:15
Towards Model-Based Support for STPA as a Capella Add-On
STPA (System-Theoretic Process Analysis) is a risk analysis method which is gaining momentum. It allows identifying design problems or hidden assumptions that may lead to hazards through inappropriate interactions even in the absence of breakdowns and cascading failures. It is a possible response, when Safety is at stake, to the evolution of the nature and the complexity of systems: the emergence of Systems of Systems, the increase of autonomy, the pervasive presence of software and connectivity. Meanwhile, when it comes to designing complex systems, Model-Based Systems Engineering (MBSE) provides well-established engineering practices, technologies and tools that help deal with complexity. STPA being a method, it can greatly benefit from dedicated tool support; given its scope, it is of particular interest that this support integrates with an MBSE environment. We propose tool support to STPA in the form of an add-on to the open source Capella MBSE tool. We present the rationale we followed to design and develop the tool, its main capabilities, the experiments we carried out on a case study, and methodological questions that remain open regarding the integration of STPA with MBSE.
We.2.PO.5 - 14:20
PLATO N-DPU On-Board Software: An Ideal Candidate for Multicore Scheduling Analysis
Each day, space missions require ever more computing power. The PLATO mission, part of ESA's Cosmic Vision program, is no exception. To answer this demand, multiprocessor chips are now used in the space industry. This impacts our space engineering habits in several ways: architects have to rethink their software designs to benefit from these extra processing resources while considering shared-access constraints. Their real-time applications are based on complex dynamic architectures and require a schedulability proof for flight. A model-based analysis approach has been selected for the PLATO N-DPU ASW. An AADL model representative of the application has been designed and analysed with the AADLInspector and Cheddar tools. In this paper, we present the PLATO N-DPU ASW architecture and the LESIA GERICOS framework it relies on. We also discuss the model and the tools used to perform the schedulability analysis.
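For readers unfamiliar with the kind of schedulability check the abstract refers to, classic response-time analysis for fixed-priority preemptive scheduling on a single core can be written as a small fixed-point iteration. This is only an illustrative sketch with a made-up task set, not the paper's AADLInspector/Cheddar analysis:

```python
import math

def response_time(tasks):
    """Fixed-point response-time analysis for preemptive fixed-priority
    scheduling on one core. tasks: list of (C, T) pairs (WCET, period),
    sorted by decreasing priority (rate-monotonic: shortest period first).
    Returns each task's worst-case response time, or None if its
    deadline (taken equal to the period) is missed."""
    results = []
    for i, (ci, ti) in enumerate(tasks):
        r = ci
        while True:
            # Interference from all higher-priority tasks released in [0, r)
            interference = sum(math.ceil(r / tj) * cj for cj, tj in tasks[:i])
            r_next = ci + interference
            if r_next == r:
                break
            if r_next > ti:          # response time exceeds the deadline
                r_next = None
                break
            r = r_next
        results.append(r_next)
    return results

# Hypothetical task set (C, T); all three tasks are schedulable here.
print(response_time([(1, 4), (2, 6), (3, 12)]))   # → [1, 3, 10]
```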
We.2.PO.6 - 14:25
Unboxing the Sand: on Deploying Safety Measures in the Programmable Logic of COTS MPSoCs
The lack of sufficient hardware support precludes the adoption of many Commercial Off-the-Shelf (COTS) MPSoCs in safety-related systems, such as those in the aerospace industry. Some recent MPSoCs come with programmable logic (PL), primarily intended to offload specific complex functions that can be implemented much more efficiently in hardware than in software; such PL is thus a kind of sandbox fully mastered by the ASIC cores outside the PL. This paper proposes using the PL in those COTS MPSoCs to deploy the support needed to implement safety measures efficiently, enabling the use of those MPSoCs in systems needing high assurance levels. Hence, the goal is not only to master the PL from the cores, but also to allow the PL to provide monitoring (e.g. contention, diversity, watchdogs) and control (e.g. configuring QoS features) capabilities that enable the realization of a safety concept atop. While the work presented in this paper is at a preliminary stage, it already provides clear strategies and the final goals to be achieved.
We.2.PO.9 - 14:40
Towards a Novel UAV Position Tracking and Reporting System for Very Low Level Airspace
The integration of Unmanned Aircraft Systems (UAS) into the airspace is a challenging, complex task for both operators and regulators. To ensure safe and secure UAS integration, they should rely on a wide variety of dependable sub-systems, including a tracking and position reporting system. This paper proposes a system architecture for tracking and position reporting of UAS in very low-level airspace. The system leverages low-latency communications and decentralized computing to offer highly available, consistent tracking information. Preliminary results suggest that our approach is especially useful for traffic management and operations in densely populated areas.
We.2.PO.10 - 14:45
A cross-domain framework for Operational Design Domain specification
This paper presents a method to generalize the concept of "Operational Design Domain" (ODD), used in the automotive domain, to any cyber-physical system. The approach proposes to use domain-level taxonomies and meta-theories to develop a cross-domain ontology for the definition of the ODD.
We.2.PO.11 - 14:50
Towards Real-time Adaptive Approximation
Cyber-physical systems (CPS) are real-time systems that operate in dynamic and non-deterministic environments. Models are often used for control and prediction; however, they do not reason about the trade-off between real-time constraints and uncertainty. This paper presents a conceptual model for reasoning about adaptive approximation in such systems. Furthermore, we envision a framework that allows models to adapt, balancing uncertainty against the real-time behavior of the system.
We.2.PO.12 - 15:55
STARTREC: Verification of a safety-critical system for autonomous vehicles
In this paper, we present our ongoing work on verification activities for the embedded software used in a safety-critical embedded system dedicated to autonomous vehicles. These activities focus on the use of formal methods for the verification of functional properties of the embedded code, and on statistical methods for the analysis of its Worst-Case Execution Time (WCET). The project's goal is to address technical barriers of software verification that will impact the safety demonstration of future autonomous driving systems. These barriers are challenging because of the high complexity of the embedded hardware and software, and call for methods and tools reaching the highest level of rigour.
We.3.A.1 - 15:00
Dynamic Memory Management in Critical Embedded Software
Memory management has always been a delicate issue in critical embedded software, because memory is often a scarce resource and many of the typical software errors jeopardizing the integrity of software execution are related to memory mismanagement. Furthermore, critical software has tended to grow in size and complexity in recent years, because more and more complex algorithms are used in the critical parts of a system. The push towards autonomous mobility is a good example of the drivers for complexity reaching the most critical parts of the software controlling such systems. This added complexity requires added flexibility in memory management that is not compatible with the traditional memory management techniques used for critical embedded software. In this paper we first go over the traditional memory management limitations and the reasons behind them; we then explore possibilities for going beyond them while still providing a high level of guarantees of correctness with regard to memory usage. We present the ownership approach to dynamic memory management, and two programming languages used in critical software, Rust and SPARK, that have implemented this approach.
We.3.A.2 - 15:30
Certifiable Memory Management System for Safety Critical Partitioned System
Aerospace systems are safety-critical systems that need to respect tight constraints in terms of execution time, resource usage and predictability. This industry is currently transitioning from predictable single-core processors to less predictable multi-core architectures. This transition reveals multiple challenges due to interference: the contention of different cores on shared resources prevents the required isolation between applications and the estimation of their worst-case execution time. To prevent interference and allow certification of multi-core systems, guidance documents such as the CAST-32A require resource isolation but do not provide any implementation guidelines. In this paper, we propose a memory manager to mitigate memory interference generated in the shared cache, main memory and memory bus. Our results show an increase in timing predictability of 68.1%. Alongside the memory manager, and based on the obtained results, we provide a set of recommendations to assist system integrators' decisions and ease the certification process by conforming to the current guidance.
We.3.A.3 - 16:00
Whole-System Analysis for Memory Protection and Management
Software development without the reuse of software components is no longer economically feasible in the automotive industry, due to the increasing complexity of vehicle functionality, software and hardware in a market that is heavily cost-driven. To facilitate this reuse, control system software development in particular is done at a functional level using domain-specific languages (DSLs) and model-based software development (MBSD). DSLs and models are later translated into a general-purpose language such as C or C++. While such software is likely to meet functional requirements, it needs to be tailored to the specific application being targeted and the quality requirements posed by that application. One aspect that needs to be taken into consideration during this tailoring is the management of memory and its direct impact on quality requirements such as functional safety, security and performance. Proper memory management is a non-trivial task and has to embrace not only the targeted hardware but also infrastructure software such as an operating system. In this paper we present cAMP (constructive and Adaptive Management Partitioning), a tool that influences model-based code generation to respect functional safety and performance requirements by performing trait-based data classification. The trait-based data classification is based on the output of concurrency-aware sound static code analysis performed by Astrée.
We.3.B.1 - 15:00
Automatic Test Generation - An Industrial Feedback
This paper presents an industrial feedback about the use of automatic test generation in the development of a critical real-time embedded system (a space launcher).
We.3.B.2 - 15:30
ROS communications profiling for bus load analysis from AADL
This paper presents a case study that lays the basis for a modeling approach dedicated to robotic applications built upon ROS, the Robot Operating System. The Architecture Analysis and Design Language (AADL) is used with specific properties that allow analysis of the bus load and CPU load of the embedded hardware. This paper focuses on bus load analysis: profiling of ROS communication services, based on standard performance reporting tools from Linux, makes it possible to determine the values of those properties. A library of AADL models is proposed, both for ROS services and for robots and embedded computer boards. We explain how this library is built, and show how to use components from it to model the robotic system, including the software, the hardware, and the deployment of the software components on the hardware platform. We then demonstrate bus load analysis using the analysis tool included in OSATE2 (Open Source AADL2 Tool Environment). Combined with CPU load analysis, exploration of deployment solutions can then be achieved.
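For intuition, a bus load figure of the kind such an analysis computes is essentially a utilization sum: each periodic message contributes its size divided by its period, and the total is compared to the link bandwidth. The sketch below uses invented topic sizes and periods and is not the paper's AADL/OSATE2 tooling:

```python
def bus_load(messages, bandwidth_bps):
    """Utilization of a shared bus. Each message is a (size_bytes, period_s)
    pair contributing size/period bytes per second; the load is the ratio
    of the aggregate throughput to the available bandwidth."""
    throughput_bps = sum(8 * size / period for size, period in messages)
    return throughput_bps / bandwidth_bps

# Hypothetical ROS topics: (payload in bytes, publishing period in seconds),
# e.g. an odometry topic at 100 Hz, a status topic at 10 Hz, a camera at 1 Hz.
topics = [(64, 0.01), (512, 0.1), (1_000_000, 1.0)]
print(f"{bus_load(topics, 100e6):.4f}")   # fraction of a 100 Mbit/s link used
```

A deployment is then rejected when the load (plus protocol overhead, ignored here) exceeds a chosen threshold.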
We.3.B.3 - 16:00
STPA Analysis of Automotive Safety and Security Using Arcadia and Capella
This paper demonstrates the use of the Arcadia methodology and the open source Capella tool to implement an STPA-based analysis technique that augments the conventional HARA and HAZOP methods. The STPA approach extends the conventional methods with a holistic perspective, considering hardware, software, humans, and control failures in a balanced manner.
We.3.C.1 - 15:00
Static Data and Control Coupling Analysis
All current safety norms require determining the data and control flow in the source code and making sure that it complies with the intended control and data flow as defined in the software architecture. In traditional static code analysis, data accesses via pointer variables and control flow through function pointer calls might be missed. Using sound static analysis based on abstract interpretation, it is possible to guarantee the absence of run-time errors that could cause memory corruption and control flow corruption. Furthermore, it is possible to guarantee that the analysis considers all data and function pointer targets and fully captures the possible data and control coupling. In this article we propose a comprehensive methodology for statically computing a safe approximation of the data and control coupling between software components. Our approach incorporates global static data and control flow analysis, taint analysis and program slicing. It can detect critical data and control flow errors and makes it possible to complement traditional code coverage criteria with the degree of data and control coupling covered by the testing process, helping to identify relevant, previously untested scenarios. It can also demonstrate freedom from spatial interference between software components at the source code level.
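As a toy illustration of the over-approximation idea behind such coupling analyses (the paper's actual approach relies on sound abstract interpretation, taint analysis and slicing, not on this sketch), coupling between two components can be conservatively approximated as reachability in a dependence graph; the component names below are invented:

```python
from collections import deque

def coupled(dep_graph, src, dst):
    """Over-approximate data/control coupling as graph reachability.
    An edge u -> v means 'v may read data written by u' or 'u may
    transfer control to v'. Breadth-first search from src."""
    seen, todo = {src}, deque([src])
    while todo:
        n = todo.popleft()
        if n == dst:
            return True
        for m in dep_graph.get(n, ()):
            if m not in seen:
                seen.add(m)
                todo.append(m)
    return False

# Hypothetical dependences extracted by a static analysis front-end
g = {"sensor_drv": ["filter"],
     "filter": ["controller"],
     "controller": ["actuator_drv"],
     "logger": []}
print(coupled(g, "sensor_drv", "actuator_drv"))   # → True
print(coupled(g, "logger", "controller"))         # → False
```

A sound analysis must ensure this graph includes every possible pointer target, which is exactly where abstract interpretation comes in.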
We.3.C.2 - 15:30
Automatic Support for Requirements Validation
The automotive industry is currently going through rapid changes, from a mechanical industry to one driven by innovation in electronics and embedded software. This significant change also creates significant challenges for the industry. One of the most important is the ability to create safe vehicles, emphasizing the importance of safety by design. This paper is intended to contribute to current activities working towards industry-wide development of reliable and secure systems. Correct-by-design methodologies, including formal methods, have the potential to improve the dependability of systems in this domain, and their use at an early stage of the development process ensures faster time to market. In this paper, we present tool support for our approach, which aims at integrating formal analysis and verification of functional requirements from the early stages of the development life cycle using model checking. From informal requirement specifications, the tool delivers models that are used to produce evidence that the requirement specifications are realizable; otherwise, it can guide their revision. The approach is illustrated by a case study based on a specific function of autonomous vehicles.
We.3.C.3 - 16:00
Property Expression and Verification in an Incremental Model Development Framework: a Case Study
The IDCM framework (Incremental Development of Conforming Models) supports the incremental construction and evaluation of UML behavioral models (architectures of components and state machines). This framework evaluates models with respect to implicit temporal safety and liveness properties: specifiers and designers only describe models, and do not need to write down explicit temporal logic properties. In this paper, we show how explicit safety and liveness properties can also be described using semi-formal boilerplates, translated into classical temporal logic formulas, which are themselves translated into testing transition systems. Adding evaluation means to model-based development (relation checking) is a way to develop models incrementally, following a spiral-based development cycle.
We.4.A.1 - 17:00
Programming Neural Networks Inference in a Safety-Critical Simulation-based Framework
This paper presents a framework for the development and validation of a neural network in the context of a safety-critical function implemented in SCADE. It consists of a simulation-based training process to build a model in a deep reinforcement learning environment, to transform it into a SCADE model, and to perform runs again with the same simulation environment to validate the safety-critical function. The approach is illustrated by a use case regarding an unmanned aerial system that calculates a maneuver to avoid collisions with other aircraft, infrastructure, and the ground. The training and validation phases, including sensitivity analysis and reliability calculation, have been performed successfully, as well as the implementation of the function in SCADE.
We.4.A.2 - 17:30
Leveraging Influence Functions for Dataset Exploration and Cleaning
In this paper, we tackle the problem of finding potentially problematic samples and complex regions of the input space in large pools of data without any supervision, with the objective of relaying them to a domain expert for validation. This information can be critical, as even a low level of noise in the dataset may severely bias the model through spurious correlations between unrelated samples, and under-represented groups of data points will exacerbate this issue. We present two practical applications of influence functions in neural network models to industrial use-cases: exploration and clean-up of mislabeled examples in datasets. This robust-statistics tool allows us to estimate how different an estimator would be if we slightly changed the training dataset. In particular, we apply this technique to an ACAS Xu neural network surrogate model use-case for complex region exploration, and to the CIFAR-10 canonical RGB image classification problem for mislabeled sample detection, with promising results.
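To get a feel for what influence-based clean-up does, the minimal sketch below performs exact leave-one-out refitting on a toy 1-D linear model and flags the training point whose removal most reduces the test loss. This brute-force version is only conceptually related to the paper's work: influence functions exist precisely to *approximate* this quantity without retraining, and the sign convention and toy data here are our own:

```python
import numpy as np

def loo_influence(x, y, x_test, y_test):
    """Exact leave-one-out effect of each training point on the test loss
    of a least-squares line fit: loss(without i) - loss(with all points).
    Strongly negative values mean removing the point helps the model."""
    def fit_loss(xs, ys):
        a, b = np.polyfit(xs, ys, 1)               # slope, intercept
        return (a * x_test + b - y_test) ** 2      # squared test error
    base = fit_loss(x, y)
    return np.array([fit_loss(np.delete(x, i), np.delete(y, i)) - base
                     for i in range(len(x))])

# Toy data on y = 2x with one corrupted label at index 3
x = np.array([0., 1., 2., 3., 4., 5.])
y = np.array([0., 2., 4., 60., 8., 10.])           # y[3] is mislabeled
infl = loo_influence(x, y, x_test=10.0, y_test=20.0)
print(int(np.argmin(infl)))                        # index of the suspect point → 3
```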
We.4.A.3 - 18:00
Towards the certification of vision based systems: modular architecture for airport line detection
This paper addresses certification issues for vision-based systems embedded on civil aircraft to assist pilots during the ground navigation phase. We propose a design methodology, through the example of a modular architecture to detect ground mark lines in a color image. Our detection method is based on a combination of classical image processing algorithms, deep learning methods and a particle filter algorithm. We argue that the main interest of this architecture is to ease the certification problem when compared to an end-to-end neural network. We discuss the difficulties around certification and propose some arguments.
We.4.B.1 - 17:00
Combining Real and Virtual Electronic Control Units in Hardware in the Loop Applications for Passenger Cars
In this paper, we present a novel approach where virtual and real electronic control units (ECUs) are combined in a hardware-in-the-loop test system for component testing. For the testing of modern ECUs, multiple test environments exist; in the early stages, these include software-in-the-loop (SIL) and hardware-in-the-loop (HIL) testing. In SIL environments, the ECU code is executed on a standard PC, which requires virtualization of the ECU. In HIL testing, the ECU code is executed... (Full abstract: see attached file.)
We.4.B.2 - 17:30
Investigation of Scheduling Algorithms for DAG Tasks through Simulations
In the real-time systems sector, various task models and corresponding schedulability tests exist to model and verify the schedulability of task sets on the system at hand. While those models and tests have been studied intensively from a theoretical point of view, it is hard to use them to compare the actual execution behavior of scheduling algorithms on a real system. In contrast to schedulability tests, simulators can help to investigate the performance of specific scheduling algorithms. One of the most general task models for describing parallel tasks is the Directed Acyclic Graph (DAG) model, which represents a task as a set of subtasks depicting the parallel computations, together with precedence constraints denoting the order in which the subtasks are allowed to execute. In this paper, we investigate various scheduling algorithms for the DAG model. For that, we first recapitulate the examined scheduling algorithms in detail and point out relevant differences. Subsequently, we present the evaluation of different global and federated scheduling algorithms. To this end, we generate random DAG tasks and simulate their execution on multiprocessor systems using scheduling algorithms such as global rate-monotonic (G-RM) and semi-federated (SFED) scheduling, as well as global scheduling policies using the thread pool model.
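The kind of simulation described above can be sketched in a few lines. The event-driven toy below runs one DAG job under a work-conserving global policy (ready subtasks grab any idle core, FIFO) on m identical cores and returns the makespan; it is a conceptual sketch with an invented fork-join DAG, not the authors' simulator:

```python
import heapq

def simulate_dag(wcet, preds, m):
    """Simulate one DAG job on m cores. wcet: subtask -> execution time;
    preds: subtask -> list of predecessors. A subtask becomes ready when
    all predecessors finish; ready subtasks are dispatched FIFO to idle
    cores. Returns the completion time of the last subtask."""
    indeg = {v: len(preds.get(v, ())) for v in wcet}
    succs = {v: [] for v in wcet}
    for v, ps in preds.items():
        for p in ps:
            succs[p].append(v)
    ready = [v for v in wcet if indeg[v] == 0]   # source subtasks
    running = []                                 # heap of (finish_time, subtask)
    t, idle = 0.0, m
    while ready or running:
        while ready and idle:                    # dispatch onto idle cores
            v = ready.pop(0)
            heapq.heappush(running, (t + wcet[v], v))
            idle -= 1
        t, v = heapq.heappop(running)            # advance to next completion
        idle += 1
        for s in succs[v]:                       # release successors
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return t

# Fork-join DAG: A -> {B, C, D} -> E, simulated on 2 cores
wcet = {"A": 1, "B": 2, "C": 2, "D": 2, "E": 1}
preds = {"B": ["A"], "C": ["A"], "D": ["A"], "E": ["B", "C", "D"]}
print(simulate_dag(wcet, preds, 2))   # → 6.0
```

With a third core the three parallel subtasks no longer queue, and the makespan drops to the critical-path length of 4.0.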
We.4.B.3 - 18:00
SytHIL: A System Level Hardware-in-the-Loop Framework for FPGA, SystemC and QEMU-based Virtual Platforms
The exception to the rule within the virtual platform (VP) and simulation community that you can't have both accuracy and speed has always been the use of FPGAs. The ability to run a VP at considerable speed, while being sure that the critical IP in question is emulated with total accuracy, opens up the scope of functional verification to include the full software stack. Hitherto, this has typically been limited to very specialized and expensive emulators that are often difficult to scale. Being able to make use of this technology in a fashion conducive to continuous integration and test would be a game changer for many embedded systems groups. Recently this has become more feasible due to two developments. First, Amazon has made available its 'F1' FPGA-enabled cloud instances; they are not the only ones to go this route, and using FPGA cards within an in-house server farm is also becoming feasible. Second, the technology needed to make use of these devices within the context of standard simulation frameworks is now better understood. In this paper we report on activity in this domain, specifically extending the reach of the FPGA not only to emulate accelerators, but to emulate the core IP in a device, namely the processors themselves. In doing so, the 'rest' of the system can still be emulated in a standard simulation framework such as SystemC, meaning that Ethernet adapters, disks, video, etc. can all be modeled outside the confines of the FPGA. Through the SystemC layer, we can connect the model to the 'real world' or to other simulated environments such as QEMU. This gives the user a large degree of flexibility in how the system is modeled and used.
We.4.C.1 (17:00)
Checking validity of the min-plus operations involved in the analysis of a real-time embedded network
Network calculus theory is widely used to check that a network satisfies its real-time requirements. Such checking involves many computations in the min-plus dioid. This paper shows how such computations can be formally checked using the Coq proof assistant in a realistic industrial context.
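The min-plus computations the abstract refers to can be illustrated with a small sketch (a plain numerical example, not the paper's Coq formalization): the central operator of network calculus is the min-plus convolution (f ⊗ g)(t) = min over 0 ≤ s ≤ t of f(s) + g(t − s), here sampled at integer instants with illustrative curve parameters.

```python
# Min-plus convolution sampled at integer instants t = 0 .. horizon-1:
# (f ⊗ g)(t) = min over 0 <= s <= t of f(s) + g(t - s).
def min_plus_conv(f, g, horizon):
    return [min(f(s) + g(t - s) for s in range(t + 1)) for t in range(horizon)]

# Token-bucket arrival curve alpha(t) = b + r*t (burst b=2, rate r=1) and
# rate-latency service curve beta(t) = R * max(0, t - T) (R=3, latency T=2);
# the parameter values are purely illustrative.
alpha = lambda t: 2 + 1 * t
beta = lambda t: 3 * max(0, t - 2)

print(min_plus_conv(alpha, beta, 6))  # [2, 2, 2, 3, 4, 5]
```

Formally verifying exactly this kind of pointwise minimization over the dioid (min, +) is what the Coq development in the paper addresses.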
We.4.C.2 (17:30)
Assessing a precise gPTP simulator with IEEE802.1AS hardware measurements
TSN (Time Sensitive Networking) standards are arousing growing interest in the critical on-board network community because of their promise of deterministic Ethernet networking. Among these standards, IEEE 802.1AS allows the synchronization of network devices. This protocol achieves much greater precision than other Ethernet synchronization standards such as NTP or PTP. Thus, it paves the way for new network mechanisms, such as the Time Aware Shaper, and allows the use of applications that are more constrained in terms of synchronization. In order to study the mechanisms that make this precision achievable, precise simulations are mandatory. In this paper, we review the different existing tools, extend the most promising one to incorporate the most advanced features of IEEE 802.1AS, and assess its behavior with respect to measurements. More specifically, we extend an OMNeT++ simulation library and calibrate its results against an extensive measurement campaign on switched Ethernet devices supporting IEEE 802.1AS, in order to assess its precision.
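The synchronization that IEEE 802.1AS provides rests on a two-way timestamp exchange; a minimal sketch (simplified from the standard's Sync/Follow_Up and peer-delay machinery, with illustrative values) shows how the offset and the mean link delay are recovered from four timestamps under the standard's symmetric-path assumption.

```python
# Sketch of the two-way time exchange underlying gPTP/PTP synchronization.
# Given the four timestamps of a message exchange (t1: master send,
# t2: slave receive, t3: slave send, t4: master receive), and assuming
# symmetric link delay:
#   delay  = ((t2 - t1) + (t4 - t3)) / 2
#   offset = ((t2 - t1) - (t4 - t3)) / 2
def offset_and_delay(t1, t2, t3, t4):
    delay = ((t2 - t1) + (t4 - t3)) / 2
    offset = ((t2 - t1) - (t4 - t3)) / 2
    return offset, delay

# Illustrative values in ns: slave clock ahead by 50 ns, link delay 100 ns.
offset, delay = offset_and_delay(t1=0, t2=150, t3=1000, t4=1050)
print(offset, delay)  # 50.0 100.0
```

Real 802.1AS implementations add rate-ratio estimation and per-hop correction fields on top of this; modeling those hardware-dependent effects precisely is what motivates the calibration campaign described in the abstract.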
We.4.C.3 (18:00)
Smart Management of Virtualized Network Service Chains in 5G Infrastructure
Future 5G infrastructures promise to deliver unprecedented Quality of Service (QoS) guarantees through ultra-low latency and high data rate specifications, enabling many critical applications and services. Network Function Virtualization (NFV) is the paradigm that enables the implementation of network functions and capabilities as software components executed in virtual entities provisioned on general-purpose hardware. This paradigm plays a pivotal role in achieving management flexibility for 5G services and in reducing infrastructure investment and operational costs. Recent advances in artificial intelligence (AI) and the large amounts of data collected on orchestration platforms offer new perspectives. Many recent publications advocate the use of AI in the orchestration of virtualized networks, and many algorithms have been proposed. However, few works in the literature implement or test these concepts on a real platform using network service chain data. In this work, we present a framework for Service Function Chain (SFC) profiling and management through machine-learning approaches. We propose an extended network orchestration platform based on OpenStack, Kubernetes (K8s), and the Open Source MANO (OSM) orchestrator. The benefits of this architecture are shown by an algorithm that realizes a proactive auto-scaling procedure for service chains. It considers feature importance per service type while achieving a trade-off between the services' stringent QoS requirements and the cost of resource usage.
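The proactive auto-scaling idea can be sketched in a few lines (a hypothetical illustration, not the paper's algorithm or the OSM/K8s APIs): rather than reacting after a threshold is crossed, extrapolate the next-interval load from recent samples and size the replica count against an assumed per-replica capacity. All names and parameters below are illustrative.

```python
# Hypothetical proactive auto-scaling decision for one VNF in a chain:
# predict the next-interval load from the recent trend, then choose the
# smallest replica count whose aggregate capacity covers the prediction.
import math

def scale_decision(load_history, capacity_per_replica, min_replicas=1):
    # Linear extrapolation from the last two samples as a simple predictor.
    trend = load_history[-1] - load_history[-2]
    predicted = max(0.0, load_history[-1] + trend)
    return max(min_replicas, math.ceil(predicted / capacity_per_replica))

# Rising load 60 -> 80 requests/s, each replica handling 50 req/s:
print(scale_decision([60, 80], capacity_per_replica=50))  # ceil(100 / 50) = 2
```

The paper's machine-learning approach replaces the naive trend predictor with per-service-type models trained on orchestration data, which is where the QoS-versus-cost trade-off is actually decided.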
ERTS 2022 - IMPORTANT DATES
Abstract of Regular & Short Paper submission (4 pages): Sept. 5th, 2021, extended to October 3rd, 2021 (any time on earth)
Acceptance Notification: Nov. 18th, 2021
Regular Paper for review (10 pages): Jan. 9th, 2022
Final Paper (Short and Regular): Jan. 30th, 2022
Congress (new dates): June 1st to 2nd, 2022