Responding to a city 50% faster with artificial intelligence

In the world of emergency response services, time is of the essence.

Add to that the very demanding, high-stress nature of emergency events, and the personnel who handle the whole lifecycle of emergency response for public safety could use every bit of help they can get.

Solutions providers like Motorola Solutions believe that technologies like video and artificial intelligence have crucial roles to play.

Dr. Mahesh Saptharishi, who joined Motorola Solutions as Chief Technology Officer and Senior VP through the company’s acquisition of Avigilon, shared Motorola’s vision of the integrated command centre workflow.

It’s one which typically starts with a call taker; from there it goes to a dispatcher working with computer-aided dispatch (CAD) software, and then to the frontline responder.

There are products that correspond to each role, and one gets the sense that the products which make it all happen have to be integrated to allow a seamless flow.

Above all, the handover of information from one role to the next has to be seamless, timely and informed, so that first responders on the scene can take the appropriate action.

Dr. Mahesh, however, introduces the idea that what we know as ‘integrated’ today may not actually be integrated enough.

Timely knowledge for timely action with Artificial Intelligence

“It isn’t about a bunch of standalone components that are integrated at a very basic level,” he cautioned.

“These solutions interact in simple ways. They will ask for information, and there may be some logic that combines different sources of information for you,” he said, describing the data-based querying technologies that exist today.

These current solutions may not be keeping up with the reality of emergency events.

This is where artificial intelligence (AI) may be able to help.

“If I get a report about an incident from my dispatch software, I would also like a little bit of context, for example how significant the accident is, and whether anyone is injured and so on,” Dr. Mahesh theorised.

So, as opposed to data-based queries, there can be knowledge-based interaction that leverages context from outside the sphere of that single interaction.
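To make the distinction concrete, here is a loose sketch in Python. It is not Motorola’s actual data model; the records, fields and sources are invented purely for illustration. A data-based query returns only the record that was asked for, while a knowledge-based interaction enriches it with context drawn from other sources around the same incident.

```python
# Hypothetical sketch: data-based query vs knowledge-based interaction.
# All records, fields and sources are invented for illustration.

INCIDENTS = {
    "INC-1042": {"type": "traffic accident", "location": "Main St & 5th Ave"},
}

# Context living outside the dispatch record itself (e.g. call audio analysis).
CALL_CONTEXT = {
    "INC-1042": {"injuries_reported": True, "background": "multiple people yelling"},
}

def data_based_query(incident_id: str) -> dict:
    """Returns only what was asked for: the raw dispatch record."""
    return INCIDENTS[incident_id]

def knowledge_based_interaction(incident_id: str) -> dict:
    """Returns the record enriched with context from other sources,
    so the dispatcher sees severity cues without asking follow-ups."""
    record = dict(INCIDENTS[incident_id])
    record["context"] = CALL_CONTEXT.get(incident_id, {})
    return record

print(data_based_query("INC-1042"))
print(knowledge_based_interaction("INC-1042"))
```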

Knowledge-based interaction vs data-based queries

Here is an example of context-gathering, and how it can save time for the next role in the workflow.

The call taker may be a different individual from the one handling dispatch duties, and the call taker may have picked up ambient noises in the background or other pertinent information, for example lots of people yelling.

The information exchange between call taker and dispatcher is oftentimes limited. There is a small window of time to convey the information that kickstarts urgent emergency action: what knowledge should be passed on, and what should be recorded as pending?

Dr. Mahesh said, “In an ideal and truly integrated environment, we should automatically be able to look at the audio that was part of the call, and determine the tone of the caller to further understand the events in the background. These can be auto-extracted.

“It could be something that the person who took the call would have heard, but not necessarily had time to transcribe.”

Typically, there is a standard set of questions, and Dr. Mahesh believed that some of the answers can be determined automatically, for example by processing the audio that is part of the original call.
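As a loose illustration of that idea (not Motorola’s actual pipeline), the Python sketch below pre-fills standard yes/no intake questions from a call transcript, assuming a speech-to-text step has already run upstream. The question list and keyword rules are invented.

```python
# Hypothetical sketch: auto-answering standard intake questions from a
# call transcript. Assumes speech-to-text has already run upstream; the
# question list and keyword rules are invented for illustration.

STANDARD_QUESTIONS = {
    "anyone_injured": ["hurt", "injured", "bleeding", "unconscious"],
    "fire_present": ["fire", "smoke", "burning", "flames"],
    "weapons_involved": ["gun", "knife", "weapon", "shots"],
}

def auto_answer(transcript: str) -> dict:
    """Pre-fills yes/no intake questions by scanning the transcript,
    keeping the matched words as evidence for the dispatcher to verify."""
    text = transcript.lower()
    answers = {}
    for question, keywords in STANDARD_QUESTIONS.items():
        hits = [k for k in keywords if k in text]
        answers[question] = {"answer": bool(hits), "evidence": hits}
    return answers

transcript = "There is a car burning on the highway and I think the driver is hurt"
for question, result in auto_answer(transcript).items():
    print(question, result)
```

Anything the rules cannot determine stays blank for the dispatcher, so automation only removes questions that the audio has already answered.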

It’s little tasks like these, automatically executed by artificial intelligence, that add up and save precious minutes for the emergency response community.

Too easy to manipulate?

The automatic nature of the whole workflow throws into stark relief the areas that could be susceptible to manipulation.

For example, the audio context that is auto-extracted from the call could be faked to obfuscate the real situation at the scene of the event.

On top of that, there is voice sentiment technology to determine whether the person calling in is genuine.

Dr. Mahesh said, “Results of voice sentiment tech in healthcare and other industries are promising. You are able to judge if the person is being manipulative, honest or stressed. These are qualitative metrics that can be extracted using AI, by just studying the person’s voice.

“Obviously it is not foolproof; there is lots of uncertainty. But ultimately, the person in dispatch can either ask the caller to describe the situation again or, based on the info that was automatically extracted, ask confirming yes or no questions.

“This reduces the time necessary for them to take action.”
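Production voice sentiment systems rely on trained acoustic models, but a toy sketch conveys the shape of the idea. Every feature, weight and threshold below is invented; the point is only that prosodic cues can be mapped to a stress score that prompts the dispatcher to ask confirming questions.

```python
# Toy stand-in for voice sentiment scoring. Real systems use trained
# acoustic models; the features, weights and threshold here are invented
# purely to illustrate the shape of the idea.
import math

def stress_score(pitch_variance: float, speech_rate_wps: float,
                 pause_ratio: float) -> float:
    """Maps simple prosodic features to a 0..1 'stressed caller' score
    via a hand-set logistic function."""
    z = 1.5 * pitch_variance + 0.8 * (speech_rate_wps - 2.5) - 2.0 * pause_ratio
    return 1.0 / (1.0 + math.exp(-z))

score = stress_score(pitch_variance=1.2, speech_rate_wps=4.0, pause_ratio=0.05)
print(f"stress score: {score:.2f}")
if score > 0.7:
    # High stress or uncertainty: prompt confirming yes/no questions
    # rather than acting on the extracted info alone.
    print("Suggest dispatcher ask confirming yes or no questions")
```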

At what point does the human hand-holding of the AI “robot” stop?

As with any machine system that learns, there is a feedback loop between human operators and the robot, or in this case, the AI-based system.

Some of the feedback is explicit, but there is also confirmation implied by the operator. The latter happens when the operator decides to take the next step based upon what the AI has suggested.

For example, if the AI suggests to dispatch a fire engine to the site of a burning car and the operator acts upon it, this is implicit confirmation that what the AI suggested is correct.

Over time, by just recording how the operator acts during the whole workflow of an event, via explicit commands and implicit actions, the AI learns.
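A minimal sketch of what such a feedback log might look like, with every name and field hypothetical: acting on a suggestion is recorded as implicit confirmation, and the labelled pairs become training data for the next model update.

```python
# Hypothetical sketch of the feedback loop: each time the operator acts
# on (or overrides) an AI suggestion, the decision is logged as a label.
from dataclasses import dataclass

@dataclass
class FeedbackEvent:
    event_type: str        # e.g. "vehicle fire"
    ai_suggestion: str     # what the AI proposed
    operator_action: str   # what the operator actually did
    explicit: bool         # True if the operator rated the suggestion directly

    @property
    def confirmed(self) -> bool:
        # Implicit confirmation: acting on the suggestion counts as agreement.
        return self.operator_action == self.ai_suggestion

log = [
    FeedbackEvent("vehicle fire", "dispatch fire engine",
                  "dispatch fire engine", explicit=False),
    FeedbackEvent("noise complaint", "dispatch patrol unit",
                  "log and monitor", explicit=False),
]

# These labelled pairs become training data for the next model update.
for event in log:
    print(event.event_type, "->", "confirmed" if event.confirmed else "overridden")
```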

Muscle memory

All of this is building up towards something which Dr. Mahesh termed ‘muscle memory.’

Before this happens, there would be a phase in which the overarching powers-that-be try to figure out whether operator judgement is similar to the judgement of the AI running behind the scenes.

If there is correlation between the two, then the AI is allowed to suggest more things. “Because there is now some level of trust that the AI is doing something that the operator mostly agrees with.”

This may involve tweaking the machine learning algorithms until the system is able to anticipate what operators would ordinarily do when a given type of event is reported.

The machine would anticipate and proactively present the action to take.

“That’s what I call muscle memory.

“It’s certain things humans do without thinking about it. We want the machine to also be able to react to the human without the human asking it.”

In the scenario of a burning car, the system would not dispatch the fire department, but only suggest it and present evidence for the operator to make the ultimate judgement call.
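One way to picture this trust-building gate, again with invented names, data and threshold: the system measures how often its shadow suggestions matched what operators actually did, and only begins pre-surfacing suggestions, never acting on its own, once agreement clears the bar.

```python
# Hypothetical sketch of the trust gate behind "muscle memory". The
# history, threshold and action names are invented for illustration.

def agreement_rate(pairs: list) -> float:
    """Fraction of events where the AI's shadow suggestion matched the
    operator's real action."""
    matches = sum(1 for ai, operator in pairs if ai == operator)
    return matches / len(pairs)

history = [
    ("dispatch fire engine", "dispatch fire engine"),
    ("dispatch ambulance", "dispatch ambulance"),
    ("dispatch patrol unit", "log and monitor"),
    ("dispatch fire engine", "dispatch fire engine"),
]

rate = agreement_rate(history)
print(f"agreement: {rate:.0%}")

# The system may pre-surface suggestions only once operators and the
# model agree often enough; the operator still makes the final call.
PROACTIVE_THRESHOLD = 0.75
print("proactive suggestions enabled:", rate >= PROACTIVE_THRESHOLD)
```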

Responding to a city

According to Dr. Mahesh, it is very early days, but the first step involves integrating all the different pieces and making them talk to each other, which is what the integrated command centre offers as a foundation.

“The next step is the process, and our goal is to incrementally roll out these capabilities within the next couple of years,” he said, adding that Motorola has a human factors research team that sits next to operators who use its systems today.

“One of the things the team actively does today is measure the average response time for the most common types of calls that come into dispatch. We want to show that via automation and changing the user experience for operators, we can substantially reduce the time to respond to common emergencies.

“Our hope is to cut (response time) in half. That’s the outcome we are looking for,” Dr. Mahesh emphasised.

When they succeed, the same operator would be able to process twice as many events as they do today.
