That modern military systems – from soldier radios to jet fighters and warships – are computer-intensive goes almost without saying. Furthermore, the military has not been in the driving seat of the core technologies – microprocessors, graphics processing units, random access memory chips, hard drives, system-on-chip devices and so on – for at least a couple of decades. Thanks to Moore's Law, the steady growth in transistor density on processors, the life-cycles of computer hardware are orders of magnitude shorter than those of military systems, particularly major platforms. Consequently, defence has long embraced Commercial Off-The-Shelf (COTS) computing and learned to manage obsolescence through form, fit and function hardware upgrades in technology refreshes on timelines considerably shorter than old-school mid-life updates – and to manage software updates even more frequently.

Today, other pressures are driving military systems designers towards Modular Open Systems Architectures (MOSA). One is the desire for longevity. "I've heard people say they want to architect systems that have 50 years of viability," Ian Ferguson of Lynx Software Technologies reports, "although I think that's a bit of a stretch." Another is the desire to exploit new technologies, including artificial intelligence (AI) in areas such as real-time image and voice recognition, which demand high-performance hardware. "Whose chip are you going to pick?" he asks rhetorically. "I can guarantee you that, in five years' time, 90% of the companies building chips today will be gone." Thirdly, rising international tensions are making governments acutely aware of the national origins of critical components, raising the risk that the US, for example, will insist that every piece of technology in a US aircraft comes from a US supplier. Systems architects have to be ready to reshape computer systems to accommodate such changes.
In theory, MOSA enables anyone who conforms to widely disseminated interface standards to provide modular components. These standards determine how computer system modules – hardware, software or both – connect with and communicate over networks. In practice, "anyone" is generally limited to trusted suppliers, but modern systems are so complex and contain so many components that it is not possible to vet every sub-component, particularly if the system is to take advantage of the very latest technologies. Just because a system uses open standards and all its components comply with a set of Application Programming Interfaces (APIs) does not mean that the system is secure, Ferguson emphasises: bad actors can implement the same interfaces and insert exploits that work on many systems.

The approach to securing MOSAs involves compartmentalisation, monitoring and redundancy, he says. Borrowing a household metaphor from Microsoft, he notes that, while most of us lock our front and back doors when we go out, we should be locking all the internal doors as well. That is the approach Lynx has taken with its MOSA.ic environment (recently selected for the F-35's next major technology refresh, see www.monch.com news for 17 March) and a number of other technologies it has built. "At a very high level, we lock all the rooms so that if somebody gets into the bedroom, for example, we recognise that it has been compromised and do something about it, but it doesn't impact what's going on in the living room, the dining room or the hallway."

In real, rather than metaphorical, systems, the subsystems are restricted in which other components they can talk to; some designs, for example, only allow subsystems to communicate with the main computer. In addition, traffic in and out of each functional module is closely monitored for abnormal behaviour, as, in future, activities within each module might be.
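The "locked internal doors" approach described above can be sketched in code as a per-module communication allowlist plus a simple traffic monitor. To be clear, the module names, policy structure and thresholds below are invented for illustration – this is not how MOSA.ic or any particular product implements it:

```python
# Illustrative sketch only: module names, policy and thresholds are
# assumptions made for this example, not details of any real system.

# Compartmentalisation: each module may talk only to an explicit
# allowlist of peers - every "internal door" is locked by default.
POLICY = {
    "radio":         {"main_computer"},   # radio may only reach the main computer
    "sensor":        {"main_computer"},
    "main_computer": {"radio", "sensor", "display"},
    "display":       {"main_computer"},
}

def may_communicate(src: str, dst: str) -> bool:
    """Return True only if the policy explicitly allows src -> dst."""
    return dst in POLICY.get(src, set())

# Monitoring: flag traffic that deviates sharply from a per-module baseline,
# so a compromised module can be investigated without affecting the others.
class TrafficMonitor:
    def __init__(self, baseline_msgs_per_sec: float, tolerance: float = 3.0):
        self.baseline = baseline_msgs_per_sec
        self.tolerance = tolerance  # flag anything above tolerance x baseline

    def is_abnormal(self, observed_msgs_per_sec: float) -> bool:
        return observed_msgs_per_sec > self.baseline * self.tolerance

# A compromised radio module trying to reach the display directly is
# blocked, and a sudden traffic spike is flagged for investigation.
assert may_communicate("radio", "main_computer")
assert not may_communicate("radio", "display")   # internal door stays locked

monitor = TrafficMonitor(baseline_msgs_per_sec=100.0)
assert not monitor.is_abnormal(120.0)            # within normal variation
assert monitor.is_abnormal(500.0)                # "something weird going on"
```

The key design choice in such a scheme is default-deny: any module-to-module path not explicitly listed is refused, so an exploit inside one compartment cannot reach the others.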
In Lynx's systems, this monitoring is one of the jobs of the hypervisor that governs the rest of the system's access to computing resources. "If the behaviour changes, then it says, 'Hey, there is something weird going on. We are seeing non-regular memory accesses, we are seeing input/output calls that aren't normal. Go and take a look to see whether this system has been compromised, or a piece of hardware has gone wrong.'"

Here, established redundancy and dissimilarity approaches to critical computer systems – such as flight control systems – can be applied in a new context. "What has changed with these connected systems is they are now looking to bring multiple functions onto a mission control panel that might start to blend some of these open standards," Ferguson says. "I would regard it as an extra step on from what we have done before. You have different code, you have redundant hardware, you have different software from different teams to mitigate risk. You may see over-provisioning; with these multi-core systems, you might see extra resources with parallel sets of functionality running."

Security of open systems remains a very challenging problem, and it is likely that some particularly critical areas will never use open standards. "They will be completely locked down, encrypted, perhaps with custom encryption code, and will likely only talk through a very controlled set of APIs to the outside world in a way that simply can't be compromised," Ferguson concluded.

Peter Donaldson

Lock All the Doors! The Key to Security in Open-System Computing · MT 5/2020 (Comment)

Peter Donaldson, with 25 years' experience as a journalist and writer covering aerospace and defence technology and operations, is a regular contributor to MT.