
MBSE

In the overall digitalisation context, i.e. with computers assisting all our tasks, it is difficult for a computer to understand the content of a document. To do so, the information in the document must be expressed differently: in a more structured way (diagrams, tables), or with less expressive power but stronger grammatical rules than natural language. In this context, such an expression of information is called a model. The goal of MBSE is to replace most documents with models, and therefore to replace document editors with model editors, which are specific to the topic the model is about; and finally to transfer these models from tool to tool all along the project development, and to enrich them, e.g. from requirements models, to architectural models, to design models, to test models, etc.

The ability to create a flow of information transported from model to model, and from tool to tool, is called digital continuity. The ability of a tool to exchange models with, and understand the models of, another tool is called tool interoperability. The information included in a document has a meaning, which is called its semantics; likewise, the meaning of models is also called their semantics. Semantics can itself be represented by a model, in which case it is called a "conceptual data model", or an ontology. An ontology is a representation of the concepts used by information, and of the relationships between those concepts. It gives a (semantic) structure to the information. It is not possible to create a simple and complete ontology able to describe the information contained in a document written in free natural language. But, as a model covers a specific part of the information, it is possible to create an ontology for a particular model or set of models.
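As an illustration, an ontology can be sketched as a small data structure: a set of concepts plus typed relationships between them. The following Python sketch is purely illustrative; the concept and relation names are hypothetical and not taken from any MBSE standard or tool.

```python
# Illustrative sketch only: an ontology as concepts plus typed relationships.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Relation:
    source: str  # concept name
    kind: str    # relationship type, e.g. "satisfies", "implements"
    target: str  # concept name

@dataclass
class Ontology:
    concepts: set = field(default_factory=set)
    relations: list = field(default_factory=list)

    def add_relation(self, source, kind, target):
        # Relations may only connect concepts the ontology already knows.
        if source not in self.concepts or target not in self.concepts:
            raise ValueError("unknown concept")
        self.relations.append(Relation(source, kind, target))

    def related(self, concept, kind):
        # All concepts reached from `concept` via relations of type `kind`.
        return [r.target for r in self.relations
                if r.source == concept and r.kind == kind]

# A tiny (hypothetical) ontology for a requirements model:
onto = Ontology(concepts={"Requirement", "Function", "Component", "TestCase"})
onto.add_relation("Function", "satisfies", "Requirement")
onto.add_relation("Component", "implements", "Function")
onto.add_relation("TestCase", "verifies", "Requirement")

print(onto.related("Function", "satisfies"))  # ['Requirement']
```

The point of the sketch is that the structure, not the prose, carries the meaning: a tool can query the relationships mechanically.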

Tools can exchange information by pure item-to-item translation of the content. By analogy with language translation, the English idiom "shooting oneself in the foot" would be rendered in a foreign language as a gun sending a bullet into a foot: the tools would not be interoperable. Instead, if the two tools share the concept (in their own ontologies) of "making a mistake that backfires on yourself", then the translation goes up to the ontology level to translate back and forth the right expression in the right language. This is called semantic interoperability.
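This concept-level translation can be sketched in a few lines of Python. The vocabularies and the shared concept identifier below are hypothetical, chosen only to mirror the idiom example above.

```python
# Illustrative sketch of semantic interoperability (hypothetical names):
# each tool maps its local term to a shared concept, and the concept maps
# back to the other tool's term, instead of translating item-to-item.

# Each tool's local vocabulary, keyed by shared concept identifiers.
TOOL_A_TERMS = {"SELF_DEFEATING_MISTAKE": "shooting oneself in the foot"}
TOOL_B_TERMS = {"SELF_DEFEATING_MISTAKE": "se tirer une balle dans le pied"}

def to_concept(term, vocabulary):
    """Lift a tool-local term up to the shared concept level."""
    for concept, local_term in vocabulary.items():
        if local_term == term:
            return concept
    raise KeyError(f"no shared concept for {term!r}")

def translate(term, src_vocab, dst_vocab):
    """Translate via the shared concept, not item-to-item."""
    return dst_vocab[to_concept(term, src_vocab)]

print(translate("shooting oneself in the foot", TOOL_A_TERMS, TOOL_B_TERMS))
# -> 'se tirer une balle dans le pied'
```

A literal word-by-word mapping has no such intermediate level, which is why it fails on anything idiomatic, in natural language and in engineering models alike.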

 

THE CHALLENGE

ESA has set out strong ambitions for the European space industry in its latest Technology Strategy, including a 30% schedule reduction and an order-of-magnitude improvement in cost efficiency. These targets cannot be reached through product technology improvement alone. Process change is needed!

Digital continuity is considered to be a major technique to enable:

  • a continuous flow of information to reduce sources of error throughout the life cycle;
  • the connection of all the engineering data to improve traceability, allowing change impact analysis and assessment of the impact of non-compliance to be performed more quickly;
  • the availability of a single source of truth for each item of engineering data, allowing system engineers to make accurate and up-to-date system trade-offs, avoiding late system changes and reducing cost;
  • the interface to many system design analysers to increase the proportion of verification by design, reducing the amount of testing required and shortening the overall project timeline.

 

WHAT IS MODEL-BASED SYSTEM ENGINEERING?

Model-based system engineering is system engineering that preferably uses models rather than documents. It is a set of modelling techniques (methods, languages, tools) used to build a product and to describe all of its facets. As a result, all the elements describing the product (its artefacts) can be manipulated by computers at a low level of detail. The computer can therefore establish all sorts of detailed relationships between the product's artefacts: it can relate two different disciplines using the same data (mass is defined by mechanical engineers and used by software engineers), it can relate a requirement with its implementation and with its test, and it can relate a customer need with a supplier solution.
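Such relationships can be pictured as a small traceability store. The following Python sketch is illustrative only; the artefact identifiers and link types are hypothetical.

```python
# Illustrative sketch (hypothetical identifiers): a tiny traceability store
# linking product artefacts, so a change can be traced across disciplines.
links = [
    ("REQ-042", "is-implemented-by", "SW-MODULE-7"),
    ("REQ-042", "is-verified-by", "TEST-113"),
    ("PARAM-mass", "is-defined-by", "MECH-MODEL-1"),
    ("PARAM-mass", "is-used-by", "SW-MODULE-7"),
]

def impacted_by(artefact):
    """All artefacts directly linked to `artefact`, in either direction."""
    out = set()
    for src, _, dst in links:
        if src == artefact:
            out.add(dst)
        if dst == artefact:
            out.add(src)
    return out

# Changing the mass definition flags the software module that uses it:
print(sorted(impacted_by("PARAM-mass")))  # ['MECH-MODEL-1', 'SW-MODULE-7']
```

Once the links are explicit data rather than prose, change impact analysis becomes a query instead of a document review.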

There are existing tools called Product Lifecycle Management (PLM) and Product Data Management (PDM) tools. They manage products through their artefacts (“business objects”), but only globally. These tools do not enter into the artefacts, nor capture their relationships. For example, a functional model is identified as an artefact, but its relationship with, e.g., the Computer Aided Design model or the avionics architecture is not addressed.

It is the role (and added value) of MBSE to make these links and to look inside the artefacts. By doing so, MBSE transforms the product data into information.

Digital Continuity

Commercial tools start to address some of these problems through PLM and PDM, but in their own context, which does not offer the required industrial interoperability across tools, and may create unwanted vendor lock-in. The challenge is to implement digital continuity, which aims to create the above-mentioned links between the artefacts, across heterogeneous tools, by use of model-based techniques. These links need to be created in the three dimensions of system engineering:

  • across disciplines
  • throughout the life cycle
  • along the supply chain

It is important to note that, in MBSE, SE (system engineering) must remain the objective; MB (model-based) is ‘only’ an enabler. Therefore, the technology must be thought of as ‘MB for SE’, considering model-based engineering as a lever for system engineering.

 

GETTING TO THE SYSTEM ENGINEERING OF TOMORROW

System engineering is the art of defining, designing, realising and verifying a system in a three-dimensional space: across disciplines, along the life cycle, through the supply chain (ECSS-E-ST-10C)[1].

Mastering system complexity is at the root of our needs. Initially manageable by a single system engineer or a small team, the complexity of some of our space systems is starting to exceed what can be thoroughly encompassed by a human team. Moreover, some traditional assumptions (e.g. the a priori separation into two predefined space and ground segments) may no longer hold, thus creating even bigger systems. System complexity includes architectural complexity (due to multiple configured, tightly coupled elements), functional complexity (due to a high number of interrelated mission requirements), and organisational complexity (due to the number of distributed actors). This complexity can no longer be handled via documents.

 

CHALLENGES FACED BY OUR SYSTEM ENGINEERS

Because the functions become extremely complex, entangled, and full of exceptions and dependencies, or because the concept of operations has significant system implications, textual descriptions quickly become inadequate to describe the system behaviour completely and consistently.

-> The system engineer needs formal representations beyond textual descriptions.

The amount of information contained in recent systems cannot be manually distributed across disciplines and along the supply chain.

-> The system engineer needs a controlled exchange of data supported by automated mechanisms.

Change impact, traceability and relationships between the various elements or aspects of a system cannot be achieved through an unorganised set of information.

-> The system engineer needs a structured knowledge of their system.

The exchange of system data by documentation, email, or individual files makes it impossible to provide the right version of the right information to the right person at the right time.

-> The system engineer needs a single source of truth, consistent by construction, from which appropriate viewpoints may be extracted to communicate about the system.

The evolution of the system becomes more and more difficult to predict, as the evolution of key data may not be visible.

-> The system engineer needs, for progress monitoring, a dashboard with system engineering information such as main budgets (mass, power, link, etc.), key parameters, coverage statistics, trends, etc.
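As a minimal illustration of one such dashboard element, the following Python sketch computes a mass budget with per-item maturity margins. All item names, masses, margins and the allocation figure are hypothetical.

```python
# Illustrative sketch (hypothetical figures): a minimal mass-budget entry
# for a system engineering dashboard, applying a maturity margin per item.
ITEMS = [
    # (name, current best estimate in kg, maturity margin)
    ("structure", 120.0, 0.10),
    ("avionics",   45.0, 0.20),
    ("payload",    80.0, 0.05),
]
ALLOCATION_KG = 280.0  # hypothetical system-level mass allocation

# Predicted mass = sum of best estimates inflated by their maturity margins.
predicted = sum(mass * (1.0 + margin) for _, mass, margin in ITEMS)
remaining = ALLOCATION_KG - predicted

print(f"predicted mass: {predicted:.1f} kg")          # 270.0 kg
print(f"system margin remaining: {remaining:.1f} kg") # 10.0 kg
```

Fed automatically from the single source of truth, such a computation turns a periodic manual budget report into a continuously up-to-date indicator.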

The history of the system elaboration is sometimes lost, and the lack of traceability does not allow, at the end of a project, requirement rationales or trade-off justifications to be recalled.

-> The system engineer needs traceability across the system of the formal derivation, satisfaction and justification of requirements versus design, as well as links to collaborative and informal aspects such as design and trade-off decisions at review milestones.

The number of interfaces with other stakeholders, customers and suppliers is increasing and cannot be handled only by loosely coupled documents.

-> The system engineer needs to create consistent baselines of system engineering data for communicating with customers (e.g. at reviews) and suppliers (e.g. when subcontracting a system element), and a mechanism allowing the distribution of reduced models to external parties to protect intellectual property.

The reuse of elements from previous projects is difficult when done from documentation, and the changes needed to adapt products to new requirements are spread over unidentified pages of documentation.

-> The system engineer must be able to initialise a system with a reusable set of consistent data (libraries), or add reused data during the project, with the capability to identify the required deviations and branches.

In the early stages of a project (i.e. Phases A, B1 and B2), the lack of an (as far as possible) exhaustive model of the relationships (dependencies) among all entities of the project and system, and the lack of maturity of the represented/available information, quite often push the decision-making process towards expensive and complex solutions. Such solutions might even be revisited and/or discarded in later phases of the project (Phases C/D).

-> The system engineer must be able to grasp all relationships (even when they are not yet defined/consolidated) among all entities of the system (incl. entities of the project) and their maturity, in order to build a proper risk assessment and decision-making process.

 

WHAT IS SPECIFIC TO THE SPACE SECTOR?

The space domain has some particularities that can make the deployment of MBSE difficult:

  • The projects are performed by wide consortia involving various industries, themselves sometimes made up of other consortia (for example, the Galileo 2G project involves 11 consortia). This may be due to geographical return constraints, the European Commission context, ‘best practice’ constraints, industrial product policy for dual sourcing, etc.
  • Each space actor may or may not be trained in MBSE.
  • Each space actor may or may not have its own MBSE standard/formalism.

Integrators and suppliers do not necessarily share the same modelling levels, the same modelling goals, nor the same engineering constraints. This results in possibly large differences and needs in terms of methods, semantics, model usage, modelling concepts, rules, practices, and tooling.

Sharing technology would obviously be the most efficient way, as all the engineering stakeholders would use the same methods and tools to fulfil their needs. Unfortunately, the diversity of engineering needs and the heritage technology investments of all the stakeholders make such a situation unrealistic.

The next technical solution is to bring together formalisms, languages, and even meta-models. But this is not sufficient, because it often happens that the model-building and conformance rules differ, the semantics have discrepancies, and the processes or the tools generate constraints. In addition, the space engineering community will never be able to converge on a ‘one size fits all’ metamodel that covers all concepts and is accepted by all space stakeholders.

Instead, semantic interoperability should be targeted rather than shared formalisms, standards or languages, which can lead to an impoverishment of engineering quality. It is ultimately more efficient to rely on automatic transformation tools than on modelling constraints on both sides. Native entity practices and methods should be preserved so as not to hinder engineering quality.

COTS

What are COTS?

Commercial Off The Shelf (COTS) components and modules are assemblies, modules or parts designed for commercial applications for which the item manufacturer or vendor solely establishes and controls the specifications for performance, configuration, and reliability (including design, materials, processes, and testing) without additional requirements imposed by users and external organisations.

COTS for space

Space is becoming an increasingly competitive sector, continuously asking for higher performance while reducing the overall cost from mission inception up to end-of-life decommissioning. This has consequences at all levels, down to the selection and procurement of individual building blocks and components.

In parallel to this trend, electrical, electronic and electromechanical (EEE) parts designed for terrestrial applications, such as the automotive and other industrial sectors, show high reliability when produced in massive quantities and subjected to other industrial qualification schemes (for example, the AEC-Q automotive standards).

Although some solutions matching space needs already exist, there is still a gap between space and terrestrial applications of components, and proper methodologies have yet to be developed and approved to allow a more systematic usage of COTS components and modules for space applications.

COTS for institutional space applications — the expected advantages

The expected advantages of COTS for institutional space applications are:

  1. Performance: where equivalent performance is not obtainable by classical high-reliability (Hi-Rel) components
  2. New Capabilities: where Hi-Rel components for performing a specific function don't exist yet
  3. Cost: typically for large volumes or low-reliability/low-radiation applications where significant risks might be taken
  4. Availability: benefiting from the production capability of supply chains for terrestrial use (in terms of modules)
  5. Time: shorter lead times and lower risk of part unavailability (though this advantage may not always be clear, depending on the procurement scheme and taking into account the quick obsolescence cycle of COTS components and their limited shelf life)

COTS for institutional space applications — the challenge

The volume of COTS procurement for a specific spacecraft project is typically low, and it is difficult to guarantee full traceability of the components. Radiation performance in particular can differ substantially between procurement lots even if the component type is the same. In this scenario, reproducibility of radiation test results is challenging.

__________


Marine litter

Following some preparatory studies, in 2019 ESA launched a Discovery Element campaign on marine litter, asking for ideas to detect and track seaborne plastic litter using satellites. Based on these ideas, all submitted via the then brand-new Open Space Innovation Platform, 26 innovative projects were kicked off in 2020 to jointly map the direction humanity should take to achieve this ambition.

The campaign was overseen by ESA's Paolo Corradi, Moritz Fontaine and Leopold Summerer, and the resulting activities by ESA's Alessandra Ciapponi, Paolo Corradi, Erio Gandini and Peter de Maagt.

The activities list provides the status and outcome of these activities.

Quantum Technologies

Quantum Technology for Space

Quantum mechanics is both a pillar of modern physics and a powerful tool to explore its boundaries. Over the past century our evolving understanding and control of quantum mechanical effects has allowed us to probe and manipulate the fundamental building blocks of the world around us in increasingly sophisticated ways.

Key enabling technologies have followed, such as lasers, transistors, solar cells and atomic clocks, that form the foundation on which many elements of our modern society are built. Recent progress on robustly exploiting quantum effects, such as entanglement and superposition, has led to the development of quantum technologies that enable the processing of quantum information in various ways (quantum computing, sensing, simulation, cryptography and communication) and have the potential for far-reaching impact on our society and economy.

Parallel developments in space science and technology mean that the space environment, with its unique vantage and conditions, provides a powerful framework for novel applications of these technologies, as well as for tests of the foundations of physics.

Within this topical cluster you can find the different Discovery element activities related to quantum technologies. The aim of the cluster is to bring together researchers across activities relating to quantum technologies, to explore synergies with space science, and to spur research on new and innovative scientific and technical concepts. It is related to several ongoing and past activities within ESA with the same goal.

Quantum Technology Cross-Cutting Initiative

Quantum technologies and innovative concepts for quantum information processing have been the subject of ESA research activities since 2002. The Quantum Technology Cross-Cutting Initiative (QT-CCI) will build on this heritage and coordinate future ESA quantum technology activities through a broad ESA-wide consultative process, in very close coordination with the needs of industry, research and science. The implementation will later be executed in the most appropriate ESA programmes. This initiative will bridge different programmes, technical disciplines and technology levels, encouraging external partnerships.

The main QT-CCI objectives are:

  • Identify and support the strategic interest of ESA member states, industry and academia.
  • Stimulate ESA internal and external collaboration.
  • Boost activities to raise TRL; implement in-orbit validation/demonstration (IOV/IOD) missions; increase research, development and testing capabilities; demonstrate applications and services.
  • Provide increased visibility (internal and external) on quantum activities for space.

Quantum Information Processing Campaign

Quantum computing has the potential to improve performance, decrease computational costs and solve previously intractable problems. In particular, it could become a key capability for demanding tasks such as data storage and retrieval, image processing, artificial intelligence and machine learning, and the optimisation or simulation of complex systems; ESA’s Agenda 2025 highlights quantum computing as key for translating big data into smart information and services.

The Quantum Information Processing campaign was a call for ideas focusing on novel quantum information processing technologies or major advancements in known concepts that could be applied to ESA's activities. Activities that arose from this call can be found in this topical cluster and on the Open Space Innovation Platform (OSIP).