Today’s satellites are inherently inflexible, purpose-built for a single mission. In contrast, the community is exploring disruptive approaches to in-orbit reconfiguration and on-board data processing. One of the most promising is the use of COTS components and open-source frameworks, which shorten time-to-market and can broaden the pool of developers and users able to target the satellite. Following the success of AI/ML in terrestrial applications, our main interest lies in AI-oriented hardware devices that facilitate deep-learning inference and pre-processing of sensor data in orbit. To adopt this technology in space, several focused studies are needed on the suitability of AI accelerators that could be placed next to a qualified on-board computer. The candidate COTS accelerators are:

- Xilinx Zynq UltraScale+ (Xiphos Q8S)
- Xilinx Zynq 7020 (Xiphos Q7S)
- Intel/Movidius Myriad2 (UB0100)
- Nvidia Jetson Nano GPU
- Google Coral TPU

Among these, the Google Coral TPU is the most novel candidate for space use. Coral is designed specifically for neural-network inference and supports AI methods at relatively low power and high performance. Integrating it into the avionics could give many future customers the opportunity to develop AI for use in orbit via widely used AI/ML frameworks such as TensorFlow and PyTorch.

Microlab and OHB-Hellas will join forces to evaluate the suitability of such COTS devices in the frame of a Greek National Space Program, as a first step towards an intended Greek mission, with the collection of public-institution needs deferred to a later stage. The current study will focus on performance evaluation and software support, with particular emphasis on the TPU. Furthermore, it will examine relevant mitigation techniques and mixed-criticality avionics architectures that allow new AI/ML models to be uploaded efficiently to the satellite during flight. OHB-Hellas will contribute its flight heritage and satellite designs, while Microlab will build on its past ESA activities in COTS benchmarking and high-performance avionics.
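
Since the planned performance evaluation centres on TPU inference, the following is a minimal benchmarking sketch of the kind of workload the study would run. It assumes a network already quantized and compiled for the Edge TPU (the file name `model_edgetpu.tflite` is a placeholder) and uses the standard TensorFlow Lite runtime with the Edge TPU delegate; on-board integration details (sensor data sources, telemetry, fault handling) are out of scope here.

```python
import time
import numpy as np
import tflite_runtime.interpreter as tflite

# Hypothetical model file: a network quantized and compiled for the Edge TPU.
MODEL_PATH = "model_edgetpu.tflite"

# Create an interpreter that offloads supported ops to the Coral Edge TPU.
# The delegate library name is platform-specific (libedgetpu.so.1 on Linux).
interpreter = tflite.Interpreter(
    model_path=MODEL_PATH,
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Synthetic uint8 input matching the model's expected tensor shape,
# standing in for pre-processed sensor data.
dummy_input = np.random.randint(
    0, 256, size=input_details["shape"], dtype=np.uint8
)

# Warm-up run, then a simple latency measurement over repeated inferences.
interpreter.set_tensor(input_details["index"], dummy_input)
interpreter.invoke()

runs = 100
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(input_details["index"], dummy_input)
    interpreter.invoke()
elapsed = time.perf_counter() - start

scores = interpreter.get_tensor(output_details["index"])
print(f"Mean inference latency: {1000 * elapsed / runs:.2f} ms")
```

The same measurement loop can be retargeted at the other candidate devices (for example via TensorRT on the Jetson Nano or the OpenVINO toolchain on the Myriad2) so that latency and power figures remain comparable across the list above.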