Formal testing methods applied to system and software models can enable early verification and detection of potential problems that would otherwise only be discovered in later development or integration phases. To execute formal tests, the test criteria or requirements must be expressed as formal properties rather than in the typical textual form. A number of previous studies have been conducted in this area, both for ground segment systems (e.g. the investigation into the automated derivation of tests from specifications (FORDAC), and master's thesis work on enhancing test automation via Natural Language Processing, in particular to link requirements specifications with the test code used by the TEMPPO Designer for E2E Ground Segment Reference Facility tests) and, more extensively, in the embedded onboard software domain (e.g. COMPASS, TASTE, Model Checking for Formal Verification of Space Systems, etc.).
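To illustrate the step from textual requirements to formal properties, the following is a minimal sketch, not taken from any of the tools mentioned above: a hypothetical requirement such as "every telecommand shall be acknowledged within three steps" restated as a bounded-response property that can be checked mechanically over execution traces. All names (event labels, `check_bounded_response`) are illustrative assumptions.

```python
def check_bounded_response(trace, trigger, response, bound):
    """Return True iff every `trigger` event in `trace` is followed
    by a `response` event within `bound` subsequent steps."""
    for i, event in enumerate(trace):
        if event == trigger:
            # Look only at the bounded window after the trigger.
            window = trace[i + 1 : i + 1 + bound]
            if response not in window:
                return False
    return True

# A nominal trace satisfies the property ...
ok_trace = ["TC_SEND", "PROC", "TC_ACK", "TC_SEND", "TC_ACK"]
# ... while a trace with a missing acknowledgement violates it.
bad_trace = ["TC_SEND", "PROC", "PROC", "PROC", "IDLE"]

print(check_bounded_response(ok_trace, "TC_SEND", "TC_ACK", 3))   # True
print(check_bounded_response(bad_trace, "TC_SEND", "TC_ACK", 3))  # False
```

In an actual model-checking setting the same property would be stated once, in a temporal logic such as LTL, and verified against all behaviours of the model rather than against individual traces.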
The idea is to assess the outputs of these activities in the context of their application to the ground segment software engineering domain. Model checking and theorem proving, together with behavioural modelling languages, shall be assessed against existing software engineering workflows, including automated testing during development, test case generation from models (e.g. in the context of EG(OS)-CC and the Paperless Ground Segment Engineering framework), and automated E2E test case development for the Ground Segment Reference Facility.
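The notion of test case generation from models can be sketched as follows: given a behavioural model of the system, a shortest event sequence exercising each transition is derived by breadth-first exploration of the state space. The toy monitoring-and-control session model and all identifiers below are assumptions for illustration only; real activities would use a dedicated modelling language and tool.

```python
from collections import deque

# Hypothetical behavioural model: (state, event) -> next state.
MODEL = {
    ("IDLE", "connect"): "CONNECTED",
    ("CONNECTED", "send_tc"): "AWAITING_ACK",
    ("AWAITING_ACK", "ack"): "CONNECTED",
    ("AWAITING_ACK", "timeout"): "CONNECTED",
    ("CONNECTED", "disconnect"): "IDLE",
}

def generate_tests(model, initial):
    """Breadth-first exploration returning, for every transition,
    a shortest event sequence (a test case) that exercises it."""
    tests = {}
    queue = deque([(initial, [])])
    seen = {initial}
    while queue:
        state, path = queue.popleft()
        for (src, event), dst in model.items():
            if src != state:
                continue
            step = path + [event]
            # Record the first (hence shortest) sequence per transition.
            tests.setdefault((src, event, dst), step)
            if dst not in seen:
                seen.add(dst)
                queue.append((dst, step))
    return tests

for transition, steps in generate_tests(MODEL, "IDLE").items():
    print(transition, "->", steps)
```

Transition coverage is only one possible criterion; tools in this domain typically also support state, path, or requirement-annotation coverage over far richer modelling formalisms.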
The goal shall be to identify the model properties, modelling tools and languages required to conduct formal model-based tests and to generate test cases from models. The associated workflow, including transitions across lifecycle phases, shall be defined, its benefits and drawbacks identified, examples prototyped, and recommendations derived for future work in the area.