US Navy Collisions Point to the Risks of Automation on Sea, Air and Land


By Amy Fraher

Four collisions involving US Navy ships this year have resulted in the deaths of 17 sailors and the injury of several more, along with millions of dollars in equipment damage. Several senior officers have lost their jobs as a result.

The worst of these accidents occurred in high-traffic areas in Asia: the USS Fitzgerald collided with a Philippine-flagged container ship close to Tokyo, and the USS John S McCain collided outside Singapore with a Liberian-registered tanker.

Both accidents involved modern, technologically sophisticated military ships colliding with much larger, heavier commercial vessels, which were most likely being steered by autopilot at the time of the crash. After hitting the Fitzgerald, the Philippine container ship curiously continued steadily on course for another 30 minutes, raising the distinct possibility that no human was steering the ship – or possibly even awake – when the collision occurred at about 1:30am. One expert compared these collisions to a crash between a state-of-the-art race car and a fully loaded garbage truck.

We will not know the full details for quite some time, as both the US Navy and the US National Transportation Safety Board continue their in-depth investigations. But these accidents share commonalities that I believe can tell us something about troubling automation trends in our rapidly evolving transportation system. There is a fundamental problem with the industry’s reliance on technology to save the day when a collision becomes imminent, often in complex environments.

For example, consider the role of automation in these maritime collisions. A navy destroyer such as the Fitzgerald has a crew of about 300 officers and sailors standing watch around the clock, equipped with some of the most sophisticated equipment available to support them. In contrast, commercial cargo vessels tend to have just 20 to 30 people on board, a number dictated by minimum legal requirements.

As a result, many duties are automated, and the use of an autopilot for navigation is common. But autopilot has limitations – it is neither safe nor recommended to rely on automation when navigating dense, high-traffic areas, which may demand swift responses and manoeuvres to avoid a collision.
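To make that limitation concrete, watchkeepers judge collision risk with a closest-point-of-approach (CPA) calculation. The sketch below (in Python, with invented positions, speeds and alert thresholds – these are illustrative numbers, not COLREGS rules or any real bridge system’s logic) shows how a routine crossing situation can close to an alarming CPA within minutes, exactly the kind of development an unattended autopilot will hold course straight through.

```python
import math

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Closest point of approach between two vessels on straight tracks.

    Positions are in nautical miles (east, north); velocities in knots.
    Returns (time_to_cpa_hours, distance_at_cpa_nm).
    """
    # Target's position and velocity relative to own ship
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0:                  # same course and speed: range never changes
        return 0.0, math.hypot(rx, ry)
    t = max(0.0, -(rx * vx + ry * vy) / v2)   # time at which range is smallest
    return t, math.hypot(rx + vx * t, ry + vy * t)

# Own ship steaming north at 20 knots; crossing traffic 4 nm off, heading west at 15 knots
t_cpa, d_cpa = cpa(own_pos=(0, 0), own_vel=(0, 20), tgt_pos=(4, 4), tgt_vel=(-15, 0))
if d_cpa < 1.0 and t_cpa < 0.5:   # illustrative thresholds only
    print(f"ALERT: CPA {d_cpa:.2f} nm in {t_cpa * 60:.0f} min - take the helm and manoeuvre")
```

Run as written, this prints an alert for a CPA of 0.8 nautical miles about 13 minutes away – ample time for an alert human to act, and none at all on a bridge where nobody is watching.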

There are lessons here not just for the sea lanes, but also for other key developments in transportation: driverless cars, trucks and drones. In these industries, innovation frequently outpaces regulators’ ability to supervise it. In this void, unsafe operational practices emerge.

Regulatory catch-up

The race is on to perfect self-driving technology. Tesla has adopted the term “autopilot”, in contrast to more conservative labels such as Volvo’s “semi-autonomous tech” or Mercedes’ “driver assistance” package. Google’s design eliminates the steering wheel and brakes entirely, offering humans just one option when a crash seems imminent: pushing the red “e-stop” button.

The UK recently announced plans to test driverless trucks travelling in convoy. For now, regulations require a driver on board, ready to take control at any time. But design standards for self-driving technology vary widely – how can this be the case when the risks are potentially so high?

The first death in a partially autonomous car took place in May 2016, while the car was in “autopilot” mode and the driver was reportedly watching a movie. Yet it wasn’t until 2017 that the US government began to sketch out regulatory oversight for autonomous vehicles.

Perhaps the car industry can learn some lessons from aviation about autopilot design and operations. Like the car industry today, aviation went through a similar push to automate in the 1970s and 1980s. Automation, it was argued, would deliver two fundamental benefits: better performance and enhanced safety.

To some extent, this occurred. But several aviation researchers have identified ways that technology solved some problems while creating others – issues that came to be called the “ironies of automation”. For example, pilots found their roles shifting from active operator to passive observer, and struggled to diagnose what the plane was doing. “What’s it doing now?” and “I’ve never seen that before” became frequently reported comments in cockpit studies.

Results revealed that replacing easy tasks with technological solutions did not lessen the human operator’s workload. In fact, it made the difficult parts of their task even more difficult. Over time it became clear that the airline industry trend towards higher levels of autonomy created new opportunities for confusion and mistakes – a situation called an “automation surprise”.
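The pattern is easier to see in miniature. The toy model below is purely illustrative – the Python class, mode names and trigger speed are invented for this sketch and are not taken from any real autoflight system – but it captures the shape of an automation surprise: the automation changes mode on its own, correctly by its own logic, while the passively monitoring human discovers the change only after the fact.

```python
class ToyAutopilot:
    """A deliberately simplistic autopilot that can spring an 'automation surprise'.

    Modes and the 180-knot trigger are invented for illustration only.
    """

    def __init__(self):
        self.mode = "ALTITUDE_HOLD"

    def update(self, airspeed_kts):
        # The automation quietly protects the flight envelope by switching
        # modes on its own - correct behaviour in isolation, but in this toy
        # model there is no salient annunciation to the crew.
        if airspeed_kts < 180 and self.mode == "ALTITUDE_HOLD":
            self.mode = "SPEED_PROTECT"
        return self.mode

ap = ToyAutopilot()
for speed in (250, 210, 175):   # the aircraft slowly decelerates
    ap.update(speed)

# A pilot who last checked while the mode was ALTITUDE_HOLD now finds
# SPEED_PROTECT instead - the classic "What's it doing now?" moment.
print(ap.mode)   # -> SPEED_PROTECT
```

The surprise is not that the automation misbehaved – it did exactly what it was designed to do – but that its design assumed a level of human monitoring that passive observers rarely sustain.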

In another irony of automation, this cognitive dissonance often occurred in exactly the kind of unusual situation where advanced technology could have proven most valuable to its human operators. Yet instead, they were doubly burdened, left to sort through a confusing, dangerous and potentially escalating situation.

Visions of an unmanned future

Drones, according to the Federal Aviation Administration (FAA), are “the most dynamic growth sector within aviation”. The agency estimates that more than 7m unmanned aircraft will be flying by 2020, serving areas such as agriculture, real estate, industrial inspection, scientific expeditions, civilian maritime work, entertainment and photography. Designers even envision personal transport drones that quickly whisk passengers and deliveries around dense cities.

Some experts believe that all aircraft, including commercial airliners, will someday be unmanned – the cost savings for airlines and passengers alike are simply too persuasive. But what happens to an airliner full of passengers if drone automation fails and there is no pilot on board to take over manually and land the plane safely?

Serious questions remain about how manned and unmanned aircraft can safely operate in the same airspace, communicate with each other and avoid collisions – particularly in dense areas with limited room to manoeuvre, such as those where the recent Navy collisions occurred. Commercial airline pilots go through specific certification processes – but it is unclear how drone pilots will be trained, licensed, insured and supervised.
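One building block in this discussion is a “well clear” separation test for detect-and-avoid systems: an intruder threatens separation only when it is close both horizontally and vertically. The sketch below is a simplified Python illustration – the thresholds loosely echo figures discussed in detect-and-avoid research but are assumed here for illustration, not quoted from any regulation.

```python
def well_clear(horiz_nm, vert_ft, h_min=0.66, v_min=450):
    """Illustrative 'well clear' test between a drone and another aircraft.

    The aircraft remain well clear as long as at least one separation
    dimension - horizontal (nautical miles) or vertical (feet) - holds.
    Thresholds here are assumptions for this sketch, not regulatory values.
    """
    return horiz_nm >= h_min or vert_ft >= v_min

# A drone 0.5 nm away laterally but 1,000 ft below an airliner stays well clear;
# the same drone with only 200 ft of vertical offset does not.
print(well_clear(0.5, 1000))   # True
print(well_clear(0.5, 200))    # False
```

Even a check this simple raises the open questions above: who certifies the sensor data feeding it, who sets the thresholds, and who supervises the operator acting on its result.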

Differences in operator experience and in vehicle size, speed and handling characteristics between manned and unmanned aircraft mean the risk of accidents will escalate – just as it did in the recent collisions between race car-like military ships and garbage truck-like commercial ships.

These accidents signal that overconfidence in the benefits of advanced technology can lull people into a sense of complacency about the associated risks. I am not arguing against the use of automation in transportation. But I do believe that there needs to be an active, alert human available with the ability to intervene when situations require it. And transportation regulators need to ensure that designers make that option available.
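In software terms the principle is simple to state, even if real systems need far more machinery around it. The fragment below is a minimal sketch of that design rule – a hypothetical interface invented for this article, not any vendor’s API – in which a human command, whenever one is present, pre-empts the automation’s.

```python
def steering_command(auto_cmd, human_cmd=None):
    """Return the command to act on: the human's input, when present,
    always pre-empts the automation's.

    Hypothetical interface for illustration - real systems layer
    arbitration, alerting and graceful-handover logic on top.
    """
    return human_cmd if human_cmd is not None else auto_cmd

print(steering_command(auto_cmd=5.0))                   # automation steers: 5.0
print(steering_command(auto_cmd=5.0, human_cmd=-20.0))  # human overrides: -20.0
```

The hard part, as the aviation experience above shows, is not the override path itself but keeping the human alert and informed enough to use it.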


Source: The Conversation
