Challenges of an AV Regulator
May 26, 2021—The technology economics outlet EE Times recently published an article about the unregulated environment of autonomous vehicle development. The story focuses on developers' departure from, or indifference to, traditional automotive industry standards for safe testing.
The article speaks to the difficulty regulators, primarily at the National Highway Traffic Safety Administration, face in producing industry-wide standards for a technology unlike any previous vehicle advancement.
Reporter Junko Yoshida writes in EE Times that AV developers self-certify their vehicles using their own standards, and they voluntarily report whatever safety messaging they like. Those reports, Yoshida writes, can feel more like marketing material.
The NHTSA is on its way to writing standardized safety protocols that will carry the weight of federal enforcement. But the agency’s grand challenge is to produce rules that apply to an incredibly complex kind of technology.
In many ways, the NHTSA's challenge mirrors the one facing AV developers themselves: determining how artificial intelligence can replace human perception in driving.
The organization Partners for Automated Vehicle Education formed to inform stakeholders that grapple with issues like these. Director of Communications Ed Niedermeyer spoke to the challenge facing regulators.
“I think one of the pieces that's really important to understand is that it’s very difficult to write regulation for this kind of technology,” he says. “There’s not a lot of precedent for the kinds of challenges this represents.”
The difficulty lies in autonomous technology replacing the driver. In a traditional vehicle, a human driver passes a license test that gauges general ability. But with time on the road, the driver learns to take in a complex stream of perceptual information and react to everyday road situations, often in an instant.
When the technology in question is the replacement of the human element, how do regulators create a replicable test that covers such a complex web of possible situations? Create a safety test with parameters that are too narrow, like completing a closed course loop, and the technology might tailor itself just for that loop and little more.
On the other hand, the task of building a test that accounts for all possible driving scenarios and ensures that artificial intelligence recognizes and reacts properly each time seems impossible in its complexity.
“It’s important to understand that the hard part of AVs is the randomness that you see on the roads,” Niedermeyer says.
The challenge reflects what developers of autonomous driving technology face. Even with advanced artificial intelligence and a suite of sensors, autonomous vehicles still aren’t able to perceive and react on the fly the way that human drivers can.
“These vehicles, they can only do what they are trained to do,” Niedermeyer says. “So if they don’t see something in their training data, they basically don't know how to handle it. The goal of development is to develop a system that is robust enough that can generalize to some extent. But what AVs are not doing is the same reasoning about things that we do.”
Yoshida writes in EE Times that AV developers currently issue reports that might recognize or casually mention current automotive standards, like ISO 26262 or ISO 21448. There is also an SAE standard for autonomous vehicle on-road testing, SAE J3018.
But the full rigor of traditional reporting isn’t present or required. The result can be a veneer of ISO compliance without actual adherence to the standards. Yoshida calls this “ISO-washing,” akin to the autonowashing phenomenon that creeps into ADAS feature marketing.
Those ISO standards have long been used by traditional vehicle OEMs and apply mostly to mechanical and electrical components. ISO 26262, for example, deals with “functional safety”: identifying what happens when parts fail and what measures can be taken to avoid catastrophe.
Vehicle components are numerous, but they are finite enough to be covered by safety standards. Could regulators conceive of a similar standard that assesses causes and effects for every potential situation a human driver encounters?
“What it does not cover is the decisions, essentially,” Niedermeyer says of traditional standards. “If you think about failure mode analysis, you have to know how all the different pieces of the system are going to directly affect the others.”
Safety at the Heart of Progress
The NHTSA is looking to address the issue and has begun developing a safety framework for automated driving. In summarizing the challenge at hand, the agency recognized that previous rulemaking focused on the designs of these vehicles and “not necessarily the performance of the ADS [automated driving system] itself.”
The NHTSA says that the new safety framework could incorporate elements of existing ISO standards, as well as another that was developed for automated systems.
UL 4600 was developed by Edge Case Research and published by Underwriters Laboratories to analyze and document the ability of fully autonomous systems to function safely.
Rather than creating a list of benchmarks to check off, this standard “uses a claim-based approach which prescribes topics that must be addressed in creating a safety case.” That is, the technology developer must set real safety goals and then thoroughly explain how they are being met.
The NHTSA appears interested in incorporating UL 4600 and other current standards into its own rulemaking:
“NHTSA requests comment on the specific ways in which Functional Safety, SOTIF, and/or UL 4600 could be adopted, either modified or as-is, into a mechanism that NHTSA could use to consider the minimum performance of an ADS or a minimum risk threshold an ADS must meet within the context of Vehicle Safety Act requirements,” according to official documents.
As federal regulators move toward a safety framework, Niedermeyer says there appears to be a genuine desire for, and culture of, safety among AV developers. Their business models, after all, depend on public trust to operate.
“Fundamentally, we are talking about artificial intelligence making life or death decisions,” he says. “So I think there is more public concern around that because it’s unique in that way. It’s also why we have those challenges in testing.”