Kyle Vogt, co-founder and CTO of GM Cruise, recently wrote an opinion post on Medium arguing for new metrics for autonomous vehicle (AV) safety reporting.
GM Cruise, a subsidiary of General Motors, is developing self-driving technology for a future AV taxi service. The company is currently conducting real-world testing in San Francisco.
Like other AV technology companies, GM Cruise reports “disengagement” data to the California DMV, which in turn makes that data public. A disengagement is an incident during AV testing in which the human safety driver behind the wheel must take over from the self-driving software.
“The data is really great for giving the public a sense of what’s happening on the roads,” Vogt writes. “Unfortunately, it has also been used by the media and others to compare technology from different AV companies or as a proxy for commercial readiness.”
As Vogt points out, disengagement data is currently the only publicly available AV safety metric, but it fails to provide a complete picture of the technology’s readiness for widespread commercial deployment.
“After extensive testing in complex urban environments, we’ve come to realize there’s a threshold of environmental complexity above which it’s nearly impossible for even a well-trained, attentive, and responsive human to avoid touching the wheel,” Vogt says.
Essentially, disengagements do not necessarily indicate AV tech readiness, especially given the environment in which testing takes place. Factors such as inclement weather, pedestrians, other drivers, and cyclists can dramatically affect disengagement rates.
In his post, the GM Cruise co-founder goes into some of the more common scenarios where disengagement occurs, including poor driving on the part of human drivers, as well as limitations of the technology itself.
Vogt reiterates that disengagements shouldn’t be ignored, but rather that further metrics should also be explored. He argues these should include hard data showing the technology can outperform a human driver, plus proof of an overall positive impact on safety and public health.
“This requires a) data on the true performance of human drivers and AVs in a given environment and b) an objective apples-to-apples comparison with statistically significant results,” Vogt writes. “We will deliver exactly that once our AVs are validated and ready for deployment.”