Former US Secretary of Defense Donald Rumsfeld once famously ruminated on the difference between ‘known unknowns’ and ‘unknown unknowns.’ What would he make, one wonders, of the US Air Force’s (USAF) recent reversal of policy, which now encourages pilots and aircrew to report what they believe to be unidentified flying objects (UFOs)?
Sightings of UFOs once peaked at 8,000 per annum – not counting sightings by military personnel, which remain a mystery as far as the public domain is concerned. The USAF has recently issued new rules intended to ensure the service tracks (and explains) as many sightings as possible.
Writing for phys.org on 17 May, former science advisor to the USAF Ian Boyd attributes the initiative to the service’s need for clarity and its desire to avoid confusion. He is almost certainly correct: uncertainty is anathema to the military, especially in times of heightened tension and potential kneejerk overreaction such as seem to characterise today’s world. The inability to identify an airborne object exhibiting erratic or unpredictable behaviour could easily lead to an incident that – given the circumstances at the time – escalates rapidly. Recent Russian incursions into Baltic airspace have prompted quick reaction alert scrambles on several occasions, during which aircrew nerves must be wound tightly. How much greater might the danger of fingers twitching inappropriately be if UFOs were added to the mix?
Aircrews are disciplined and trained, of course; there is no doubt that the great majority of pilots would think at least twice before acting. But what about automated systems, designed to react rapidly to emerging threats – and taught, in the case of artificial intelligence, to consider unidentified objects as threats until proven otherwise? Who knows what the aircraft’s electronic brain might decide to do without human intervention? 737MAX, anyone?
The USAF’s initiative is laudable, since it removes the potential for ridicule that reporting UFO sightings has always carried. And the military have routine access to sensors and technologies that should make identification – whether through physical characteristics or behavioural analysis – simpler, quicker and more accurate. Hopefully.
There remains a question, though, as to how effective some of those technologies might be. MONCh recently reported on artificial intelligence and the problems researchers are encountering in moving it forward. The Defense Advanced Research Projects Agency (DARPA) is looking to AI to automate air-to-air combat (see MONCh story Can AI win a Dogfight? on 22 May). Pushing the boundaries too fast may create hidden vulnerabilities that reveal themselves in catastrophic fashion at times of stress. On the other hand, AI may prove more reliable, more accurate and less volatile than Homo sapiens. The problem is, as Donald Rumsfeld might have said, “we don’t know what we don’t know.”