Safety: Artificial additives

Date: Wednesday, August 2, 2023   |   Author: Jack Carfrae

VisionTrack’s latest AI-powered software can save umpteen hours of trawling through camera footage.

“I think the technical term is shedloads of data,” says Charles Morriston, VisionTrack’s head of professional services. He’s describing the number of clips generated by the average video telematics system. Now common among commercial vehicle fleets, these systems typically capture just about everything from an innocuous brush of a kerb to a full-blown shunt.

They serve up multiple recordings – normally footage of the incident itself, plus a few seconds either side – as irrefutable proof of what took place. Hard to beat for the likes of targeted driver training and settling insurance claims. 

The trouble is, they create an awful lot of alerts, and even relatively small operators can wind up with hours of footage to sift through before they know whether any of the clips amount to more than trundling over a speedbump. Morriston describes these false positives as “the bane of any fleet manager’s life” – assuming, of course, they have a video telematics system in the first place.

“They’ll get these videos – and it might be a shock event or a high kerb mounting – and they have to watch them. It takes them a minute or so to find the video, watch it, and then they have to dismiss it as a false positive,” he explains. 

VisionTrack’s latest piece of software, NARA – Notification, Analysis and Risk Assessment – is designed to sort the videographic wheat from the chaff. Launched in February, it is essentially an analysis tool, which uses artificial intelligence to determine what is and isn’t worth a fleet manager’s attention. 

“What we’re trying to do is prove that watching video is going to become fairly redundant, particularly with the advent of AI,” says Morriston, “it analyses footage automatically, in pretty much near real-time… strips out all the false positives, and what you are left with is primarily actual incidents or collisions that you can then, in real time, inform the insurance company, or inform a fleet or risk manager.”

The system makes its decisions by absorbing “billions of data points” and weighs parameters such as distance, speed and vehicle type. It can, for example, correlate sharp deceleration with the proximity of another road user and deduce that, even though there was no physical contact, there was a near miss; the AI element helps it to spot common issues.

During our video call, Morriston shares an image of a “typical fleet manager’s dashboard” displaying the results from two days of video telematics at work. The fleet of 1,149 vehicles had covered 381,000 miles in that time and generated 576 ‘red’ events, which are harsh manoeuvres (a ‘black’ event is the most severe and involves an actual collision). This is our own calculation, but if you assume the clips last for an average of eight seconds, that works out at just under an hour and 20 minutes of footage to review. 

He then shows us a dashboard from a fleet of 1,272 vehicles using NARA, which had covered close to 2.2 million miles over a week and generated 2,892 red events. 

“It’s really skimmed those videos down and left 26 that might just need a double check by a human eye,” he explains, “the rest of those videos have automatically been categorised or dismissed.”

Assuming the same average clip time, that’s about three and a half minutes to review. 

The system also catalogues less severe clips which don’t depict accidents but might warrant a chase-up. “I’ve also got 379 [in the same example] that potentially require intervention,” adds Morriston, “it could be that it’s a near miss – no collision, nothing to do from an insurance perspective – but it could be a driver training opportunity.” 

Assuming the same eight-second clip average, these amounted to around 50 minutes of review time.
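The back-of-the-envelope sums above can be reproduced in a few lines of Python. As noted, the eight-second average clip length is our own working assumption, not a VisionTrack figure:

```python
# Estimated review time for telematics clips, assuming an average
# clip length of eight seconds (our own working assumption).
CLIP_SECONDS = 8

def review_minutes(num_clips: int, clip_seconds: int = CLIP_SECONDS) -> float:
    """Total footage review time in minutes."""
    return num_clips * clip_seconds / 60

# Without NARA: 576 red events over two days
print(review_minutes(576))   # 76.8 -> just under an hour and 20 minutes

# With NARA: 26 clips flagged for a human double check
print(review_minutes(26))    # roughly three and a half minutes

# Plus the 379 "potential intervention" clips
print(review_minutes(379))   # around 50 minutes
```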

Beyond the obvious efficiency and admin benefits, the tech’s two other big paybacks are said to be insurance and safety. At the operator’s discretion, it can automatically send footage of the severest events to the insurer when it issues the alert to the fleet, dramatically speeding up the first notification of loss – FNOL as it’s known in the insurance industry. 

“If NARA says, ‘actually, this is a black event because it involves an actual collision,’ then the insurance company can be emailed,” explains Morriston, “we’ve got APIs [Application Programming Interface: intermediary software that allows two applications to talk to each other] where we can push data out to insurance systems, or they can request that data on, say, an hourly or a five-minute basis.

“Even if your driver is at fault, it’s always better to be on the front foot with that claim. Launch that claim within that golden hour, and there are huge savings in time and cost.”
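VisionTrack hasn’t published its API schema, but the push arrangement Morriston describes – a collision detected, a payload assembled, an insurer notified – might look something like the sketch below. All field names and the function are our own illustration, not the company’s actual interface:

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch of the kind of FNOL (first notification of loss)
# payload a telematics platform might push to an insurer when a
# "black" (actual collision) event is detected. Field names are our
# own illustration - VisionTrack has not published its API schema.
def build_fnol_payload(vehicle_id: str, severity: str, clip_url: str) -> str:
    event = {
        "vehicle_id": vehicle_id,
        "severity": severity,            # e.g. "black" = actual collision
        "detected_at": datetime.now(timezone.utc).isoformat(),
        "footage": clip_url,             # link to the supporting clip
    }
    return json.dumps(event)

payload = build_fnol_payload("VAN-042", "black", "https://example.com/clip/123")
print(payload)
```

The alternative Morriston mentions – insurers polling on an hourly or five-minute schedule – would simply see the insurer’s system request any payloads queued since its last call.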

The company says it can settle fault claims within 72 hours and boasts of “proven claims savings on average of £2,000 for each collision detected”. The latter is said to be based on a 7,000-vehicle UK supermarket fleet, which generated an alleged £20m saving.

It can also be applied to just about any kind of vehicle you might encounter on a fleet, even down to the very smallest. “Because it’s all cloud based, I can take the footage from an HGV, from a light commercial vehicle, from a car, or from an e-scooter, and analyse all of that on the platform,” says Morriston, “it works with any kind of vehicle and any kind of camera. We can take third party footage if needed and analyse that post the event as well.”

The safety benefit is the speed at which the tech can alert HQ and provide supporting footage if something bad has happened. 

“It could also be from a duty of care safety perspective,” adds Morriston, “it may be a serious issue, and the driver might be in a ditch.” 

The firm is also trumpeting the system’s “device agnostic” nature. That means it can process video feeds from just about any connected camera, and you don’t have to be an existing VisionTrack customer to buy into it.  

When asked about the AI’s margin for error, Morriston admits that it is “never perfect,” but claims NARA “started at about 97% accuracy, and within less than six months, we got that up to about 98%”. The argument is that a human performing the same function would go cross-eyed long before the tech.

When we asked, the company wouldn’t tell us how much NARA costs, but such advanced tech is unlikely to come cheap. Morriston admits it isn’t aimed at the smallest operators but says it is also not reserved for gigantic fleets. He suggests that those with a history of poor driving would benefit most.

“From a prevention perspective, anybody would benefit from a camera, regardless of the scale of the organisation. Coming back to NARA… there’s a valid point there that, if I only have five vehicles that only generate one or two events per week, then yes, you could say that that’s a little bit more manageable. 

“But as soon as you get to 50 to 100 vehicles, especially if they are being driven badly, then you’ve now got to watch a lot more videos. I think the benefits kick in quite quickly, even with fleets of 50 or so, depending on how badly they are driven and on how critical an insurance claim is.”


