As your business looks to save time and money by employing automation strategies, be sure to consider the role humans play in the entire picture.
Humans in the AI Loop
Article Sep 12, 2020
Lisa Douglas
In 2016, Fincantieri, a shipbuilding company based in Trieste, Italy, built the MV Viking Sky for Viking Ocean Cruises. If you recognize the name, that's because on March 23, 2019, the cruise ship suffered engine failure off Norway's coast with 1,373 passengers and crew on board. A partial air evacuation, followed by a sea evacuation, got everyone to safety, but the excursion was likely more adventurous than anyone had planned when booking their vacation.
And why did this happen?
According to media reports, the ship's oil levels were relatively low. Oil lubricates the engines, so when levels drop dangerously low, the engines risk overheating or seizing up; at worst, a fire could break out on board. During that ill-fated voyage, a storm hit, and the heaving and sloshing triggered the oil-level sensors, which then signaled the engines to shut down.
Human in the Loop or Out of the Loop?
For the MV Viking Sky, the onboard sensing system initiated engine shutdown when oil levels dropped dangerously low – humans were removed from the process. At a cursory glance, this safety mechanism may seem prudent. However, as that night demonstrated, shutting the engines down in the middle of a massive storm puts every life on board at risk.
The captain did manage to drop anchor to keep the ship from drifting ashore and striking the rocks. The passengers and crew were reduced to spectators, deathly afraid of the worst eventuality – the cruise ship capsizing – as their vessel bobbed like a cork on the surface.
These are some of the inherent risks of human-out-of-the-loop (HOTL) systems, in which an AI follows a set of criteria and takes specific actions without deferring to a human expert. The opposite is the human-in-the-loop (HITL) system, which operates on a different overarching principle: it refers to a human at critical junctures before initiating any action.
Process of HITL for AI Systems
Experts assert that AI models without any human involvement are flawed. That involvement can take two forms: the AI defers to a human for key decisions whether or not it 'knows' what to do, or the AI defers to a human only when it cannot decide or offer an answer to a problem. In the latter case, the AI logs the human's response and may act automatically under similar circumstances in the future.
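As a rough illustration, these two forms of involvement can be thought of as routing policies. The Python sketch below is purely hypothetical – the Involvement modes, the decide routine, and the feedback_log are illustrative names for the pattern described above, not any particular product's API:

```python
from enum import Enum, auto

class Involvement(Enum):
    ALWAYS_DEFER = auto()        # a human signs off on every key decision
    DEFER_WHEN_UNSURE = auto()   # a human is consulted only when the AI has no answer

# Hypothetical log of human answers the system can learn from later.
feedback_log: list[tuple[str, str]] = []

def decide(case: str, model_answer: str | None, mode: Involvement) -> str:
    """Route a decision to a human according to the chosen involvement mode."""
    if mode is Involvement.ALWAYS_DEFER or model_answer is None:
        human_answer = input(f"Decision needed for {case!r}: ")
        feedback_log.append((case, human_answer))  # recorded so the AI can act on similar cases later
        return human_answer
    return model_answer
```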
How Does the Process of HITL Work?
First, an AI system receives relevant data and stores it according to a specific indexing scheme or functional design. The software defines which actions to take under which circumstances. For complex decisions, before taking action, the system assigns a confidence score to its output – a measure of how accurate the AI judges its own answer to be.
A software engineer sets the required confidence level for specific judgment calls. When the confidence value drops below that threshold, the system defers the decision to a human. The human's answer is applied to that instance and also helps make the algorithm smarter – the machine learns by interacting with humans.
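Here is a minimal sketch of that threshold mechanism, assuming a toy classify stand-in, a judge routine, and a training_queue – all illustrative names, not a specific framework's API:

```python
import random

CONFIDENCE_THRESHOLD = 0.90  # the engineer's required confidence for this judgment call
training_queue = []          # human corrections queued up for future retraining

def classify(item: str) -> tuple[str, float]:
    """Stand-in for the deployed model: returns a label and a confidence score."""
    confidence = random.random()  # a real system would score the model's actual output here
    return ("approve" if confidence > 0.5 else "reject", confidence)

def judge(item: str, ask_human) -> str:
    label, confidence = classify(item)
    if confidence < CONFIDENCE_THRESHOLD:
        # Below the engineer's threshold: defer to the human expert and keep
        # the answer, so the machine learns from the interaction.
        label = ask_human(item, label)
        training_queue.append((item, label))
    return label

# Example wiring: any human interface will do; here, a console prompt.
verdict = judge("claim-1042", lambda item, guess: input(f"{item} (model guessed {guess!r}): "))
```

The key design choice is that the threshold lives outside the model: raise it and more cases go to people; lower it and the system automates more aggressively.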
Many businesses apply the HITL procedure, from Google's review and web-page indexing processes to Pinterest's approach to screening which posts to display.
Why HITL?
The consensus among software developers and computer scientists is that humans should remain involved when automating critical business functions. Even as AI systems get smarter, they lack the capacity to integrate long-term qualitative and quantitative objectives. In complex business situations, accuracy improves when machine learning is combined with human intervention.
Over the long haul, this means that although AI systems keep getting smarter, there will always be a place for humans working alongside them. Unlike humans, AI systems are limited in their grasp of long-term strategy and nuanced business objectives.
It is therefore critical to involve humans during software design so that they understand what the AI can and cannot do. That design may also include safety mechanisms that let the AI override humans, and vice versa, depending on the circumstances.
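To make that concrete, here is one way such a mutual-override path might be sketched, loosely inspired by the Viking Sky scenario. The EngineController class and its behavior are hypothetical, not drawn from any real marine control system:

```python
class EngineController:
    """Hypothetical controller in which the automated safety path and the
    human path can each countermand the other."""

    def __init__(self) -> None:
        self.running = True
        self.human_override = False  # set when the captain forces the engines to stay on

    def on_low_oil_alarm(self) -> None:
        # Automated path: shut down on a low-oil alarm unless a human has overridden it.
        if self.human_override:
            print("Low-oil alarm logged; shutdown suppressed by human override.")
        else:
            self.running = False
            print("Engines shut down automatically.")

    def captain_override(self) -> None:
        # Human path: restart the engines despite the alarm, accepting the
        # engine-damage risk to avoid a greater danger.
        self.human_override = True
        self.running = True
        print("Captain override engaged; engines running.")
```

In a storm, an automatic shutdown trades a possible engine fire for a certain loss of propulsion; an override hands that trade-off back to the person on the bridge.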
Conclusion
In the case of the MV Viking Sky, it should have been possible for the captain to override the safety sensors and restart the engines, if only to get the cruise ship to a safe docking spot. Such is the danger of building systems that exclude the very humans they are supposed to work for.
Software developers currently operate on the 80-20 principle when incorporating AI into business functions. Eighty percent accuracy is far too low for systems deployed in real-world applications where thousands of potentially life-threatening decisions must be made – self-driving cars, for instance.
The world is still a long way from entirely autonomous AI systems. Even as businesses look to save time and staffing costs by employing automation strategies, they must always consider the role humans play in the entire picture.
Looking for a guide on your journey?
Ready to explore how human-machine teaming can help to solve your complex problems? Let's talk. We're excited to hear your ideas and see where we can assist.
Let's Talk