How AI Is Automating Navigating Systems


  • AI will bring considerable benefits to ship navigation.
  • For AI to be effective, it needs to pass the Turing test to demonstrate the computer can act like a human.
  • There are many challenges – technical, legal and operational – to overcome in developing autonomous ships, but progress is being made.

Artificial intelligence (AI) will bring considerable benefits to ship navigation, but there are still many challenges to overcome, according to the expert panel at Riviera Maritime Media’s How AI is automating navigating systems webinar, as reported in an article published on the company’s website.

When was the webinar held? 

This event, held 26 January 2022 during Riviera’s Vessel Optimisation Webinar Week, brought together academics and experts from around the world to debate how AI will improve navigational safety and lead to autonomous ships.

On the panel were World Maritime University associate professor Dimitrios Dalaklis, GMATEK president Glenn Wright, Fraunhofer-Center for Maritime Logistics and Services’ head of department for sea traffic and nautical solutions Hans-Christoph Burmeister and Northumbria University PhD researcher Eva Szewczyk.

AI, Fourth industrial revolution

Professor Dalaklis defined AI and outlined its capabilities to problem solve by learning how to mimic human brains and reactions. He said AI developments could lead to autonomous systems and eventually unmanned ships. “We are in the fourth industrial revolution,” he said. “With AI we are in a new operating paradigm.”

For AI to be effective, it needs to pass the Turing test to demonstrate the computer can act like a human. This means machines need to operate rationally and within task confines.

“Autonomous vessels should abide by the collision regulations,” said Professor Dalaklis. “AI systems should make decisions by themselves with no human input.”
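Professor Dalaklis’s point that autonomous decisions must stay within the collision regulations is, in practice, often implemented as a rule layer on top of the AI. The sketch below is purely illustrative (the article does not describe any specific system): it encodes simplified versions of COLREGs Rules 14 (head-on), 15 (crossing) and 17 (stand-on) as bearing tests; the function name and thresholds are assumptions for illustration.

```python
# Illustrative sketch: a minimal rule-based COLREGs check for one target vessel.
# Function name and bearing thresholds are hypothetical, not from the article.

def colregs_action(relative_bearing_deg: float, reciprocal_headings: bool) -> str:
    """Suggest a manoeuvre for a single target vessel.

    relative_bearing_deg: bearing of the target from own ship, 0-360 degrees.
    reciprocal_headings: True if the two ships are on nearly opposite courses.
    """
    bearing = relative_bearing_deg % 360
    # Rule 14: head-on situation -> both vessels alter course to starboard.
    if reciprocal_headings and (bearing < 6 or bearing > 354):
        return "alter course to starboard"
    # Rule 15: crossing with the target on own starboard side -> give way.
    if 6 <= bearing <= 112.5:
        return "give way (avoid crossing ahead)"
    # Rule 17: otherwise stand on, while monitoring the situation.
    return "stand on"
```

A real system would combine such rules with closest-point-of-approach calculations and handle multiple targets; the value of an explicit rule layer is that its decisions remain auditable against the written regulations.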

Challenges to overcome 

There are many challenges – technical, legal and operational – to overcome in developing autonomous ships, but Professor Dalaklis believes progress is being made.

“Change is here to stay,” said Professor Dalaklis. “We are in an era of reduced crew on board and unmanned enginerooms, so we should concentrate on finding solutions to the challenges.”

Mr Burmeister agreed legal issues and technical details need to be solved and also said human-machine interfaces should be improved. “In the next three years the robustness of these systems should improve so they can be used in navigation to prevent groundings,” he said.

Mr Burmeister said there was a need to deal with “interfaces between machines using AI and humans” for remote control and autonomous systems.

Tactical voyage optimisation & hazard avoidance

Fraunhofer-Center for Maritime Logistics and Services is researching how AI is used for “tactical voyage optimisation and hazard avoidance, using classical algorithms” for safer navigation.
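The article does not say which classical algorithms Fraunhofer uses, but a common baseline for hazard avoidance on a chart is graph search over an occupancy grid. As one hedged example, the sketch below finds a shortest hazard-free route by breadth-first search, where `1` marks a hazard cell; all names are illustrative assumptions, not Fraunhofer’s implementation.

```python
# Illustrative sketch only: breadth-first search on an occupancy grid,
# one classical approach to routing around charted hazards (cells marked 1).
from collections import deque

def shortest_safe_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal avoiding hazards,
    or None if no hazard-free route exists."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # cell -> predecessor, for path reconstruction
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

In practice, tactical voyage optimisation would weight edges by distance, weather and traffic rather than treating all cells equally, typically with A* or Dijkstra instead of plain BFS.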

Ms Szewczyk explained what could go wrong with AI technology and who would be to blame. She said there were “varying levels of autonomy” from having crew on board remotely operated vessels, to semi-autonomous and fully autonomous ships.

If there is a collision or a grounding, there could be legal issues. “If something goes wrong, investigators would be looking at standards, departure from collision regulations, seamanship and shipmanagement,” said Ms Szewczyk. “There could be liability issues for equipment manufacturers if there is an accident.”

Although there could be many causes of a collision, product providers need to “explain the AI model and standards for the technology, its integration and interaction with operations,” said Ms Szewczyk.

“As technology has a growing influence on decisions, humans must not be made the scapegoat.” She also asked if classification societies and technology verification organisations would have liabilities if there was an accident.

To be their eyes & ears

Mr Wright explained how AI systems rely on sensors “to be their eyes and ears” for observing the surrounding environment, identifying hazards and making better-informed decisions.

But inaccurate information could reach the decision processing modules and programs from sensors. “Sensors will degrade over time,” said Mr Wright. “Sensors have failures and there is interference.”

Observational sensors, cameras and navigation aids above the water line and those focused underwater such as sonar, could all be affected, as could sensors on board measuring engineroom systems and those observing the outside environment.
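Mr Wright’s warning about degrading and failing sensors is commonly addressed with plausibility monitoring before readings reach the decision modules. The sketch below is a hypothetical illustration, not anything described in the webinar: it flags a sample that falls outside its physical range or changes faster than physics allows; the names and thresholds are assumptions.

```python
# Hypothetical sketch of a sensor plausibility monitor. It classifies each
# sample before it is passed on to decision-processing modules.

def check_reading(value, prev_value, dt, valid_range, max_rate):
    """Classify one sensor sample.

    value: current reading; prev_value: previous reading (or None).
    dt: seconds since the previous reading.
    valid_range: (low, high) physically possible values.
    max_rate: maximum plausible change per second.
    """
    lo, hi = valid_range
    if not (lo <= value <= hi):
        return "out_of_range"          # possible failure or interference
    if prev_value is not None and dt > 0:
        if abs(value - prev_value) / dt > max_rate:
            return "implausible_jump"  # possible interference or misalignment
    return "ok"
```

Flagged readings can then be excluded from fusion or escalated to the human monitors Mr Wright describes, rather than silently corrupting the decision process.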

Possible solutions would be to keep humans in the loop for monitoring and decision support. “We need help from labour, keeping seafarers in the loop,” said Mr Wright. “But their roles will be changing and evolving.”

Attendees of the webinar agreed that humans would remain in the loop and would need retraining. They were asked a series of questions in a poll of their opinions on how AI is automating navigating systems.

Questions to attendees

In the first of these questions, 94% of those who responded agreed that monitoring and shore-based personnel supervising AI applications in maritime navigation should be trained according to (to be developed) STCW principles, while 6% disagreed.

Webinar delegates were asked if maritime regulators will agree in the coming three years on how to evaluate safety for maritime AI-applications that are independently making safety-critical decisions, such as for collision avoidance or anti-grounding. 61% agreed and 39% disagreed.

Attendees were then asked: what is the main driver for today’s maritime AI applications in navigation? 52% said it was for better ship efficiency, 22% for better safety, 19% for looking or being innovative and 7% voted for new logistics offers.

In another poll, delegates were asked: how could automation go beyond collision regulations (COLREGs) Rule 5 that merely requires sight and hearing? The majority (81%) of responders said through greater fusion of multiple sensing capabilities to form a more comprehensive picture. 

Another 19% said bandwidth could be increased to include night vision and other sensors such as radar, sonar, microwave and lidar. None voted for extending the range of sight and hearing through 360º and to greater distances.
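The “greater fusion of multiple sensing capabilities” favoured by most respondents can be illustrated with a textbook technique: inverse-variance weighting, which combines independent range estimates (say from radar, lidar and a camera) so that more reliable sensors count for more. This is a generic sketch, not a method discussed in the webinar; the function name is an assumption.

```python
# Illustrative sketch of multi-sensor fusion by inverse-variance weighting.
# Each estimate is a (value, variance) pair; lower variance = more trusted.

def fuse_estimates(estimates):
    """Fuse independent estimates of one quantity.

    estimates: list of (value, variance) pairs, variance > 0.
    Returns (fused_value, fused_variance); the fused variance is always
    smaller than any individual variance, reflecting the gain from fusion.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return fused, 1.0 / total
```

For example, fusing radar and lidar ranges of 100 m and 102 m, each with variance 4, yields 101 m with variance 2, a more confident estimate than either sensor alone.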

Attendees were also asked how sensors, like seafarers required to pass physical and medical exams, should be shown fit for duty. 55% of those responding voted for brittleness tests (can sensors be relied upon to consistently perform as expected?). Another 36% said visual acuity tests (are the sensors seeing what I am seeing?), while 9% thought it should be resolution tests (can sensors see as well as or better than I can?).

In another poll question, attendees were asked what distinctions are made between sensor degradation and malfunction. 35% pointed to normal operation with natural degradation, for example from fog, rain, ice and salt encrustation.

30% voted for normal operation with abnormal degradation (interference and misalignment), and another 30% said abnormal operation due to fault or failure (such as power or communication loss or failed internal components). Just 5% voted for abnormal operation due to accident or nefarious activity (damage and destruction).

Finally, attendees were asked if voice recognition applications seem the most promising interface between humans and machines, in the case of a mixed operations environment. 32% disagreed, 20% strongly disagreed, while 28% agreed and another 20% strongly agreed.


Source: Riviera Maritime Media