Car companies battling for their share of $2 trillion in annual global auto sales increasingly lean on shiny tech that takes over some of the driving from humans.
Boasting names such as Autopilot, Super Cruise and ProPilot Assist, these systems — whose radar and cameras are the building blocks of self-driving cars — are part of a growing effort by manufacturers to woo buyers with computing power rather than horsepower.
But on the heels of two Tesla crashes that occurred while Autopilot was engaged, automakers find themselves increasingly torn between hyping the tech and warning owners about its limitations.
“It’s on us to educate people about what’s allowable and what’s not allowable,” says Andy Christensen, lead engineer on Nissan’s ProPilot Assist. “We don’t want drivers to be overly confident. (The tech) is there to assist you, it’s not driving for you.”
More than a dozen manufacturers from Audi to Volvo now offer so-called ADAS options (Advanced Driver Assistance Systems) on their cars. As questions resurface about whether the very effectiveness of such driving aids lulls drivers into complacency, automakers pack manuals with disclaimers, provide in-car audio and visual warnings, and direct salespeople to remind customers that such features should not be abused.
But the magical perception of driver-assist tech — hammered home in product names, advertising and even executive comments — can obscure an operational reality: drivers must constantly monitor the system and be prepared to take over in a split second.
Consider Nissan’s new television ad for ProPilot Assist. It opens with a Star Wars spaceship threading the needle between two enemy cruisers, then cuts to a woman driving a Rogue SUV on a bridge as it heads between two semi trucks.
“I’ve got this,” she tells her passenger, and the vehicle is then shown driving itself between the trucks as her hands hover just above the steering wheel — but not on it. The car uses its sensors to detect the lane markings and center itself, but the system stays engaged only if the driver provides frequent input to the wheel.
When Mercedes-Benz introduced its 2017 E-Class, it touted the car’s tech chops in a TV spot that asked whether the world was ready for a vehicle “that can drive itself.”
Although the commercial offered disclaimers at the bottom of the screen — “Vehicle cannot drive itself, but has automated driving features” — Mercedes quickly pulled the ad in response to criticism it was misleading to consumers.
Cadillac’s Super Cruise system is billed in ads as “the world’s first true hands-free system for the highway … no need to tap the wheel to show you’re still there.” But, the ad quickly adds, “that doesn’t mean you can check out.”
The system uses a head-tracking camera to detect whether the driver is looking away from the road ahead. If necessary, the top of the steering wheel will flash red, alerts will sound and the seat will vibrate to get the driver to take over. Super Cruise works only on pre-mapped highways, and on those roads the ads show drivers with their hands in their laps.
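For readers curious how such an attention monitor might be structured in software, here is a minimal, purely illustrative Python sketch. The function names, timing thresholds and escalation steps are assumptions invented for this example, not Cadillac's actual implementation.

    import time

    # Hypothetical escalation ladder for a camera-based attention monitor.
    # Every name and threshold below is an assumption, not GM's real code.
    ESCALATION_STEPS = [
        (2.0, "flash_steering_wheel_red"),  # eyes off road ~2 s: visual cue
        (4.0, "sound_audible_alert"),       # still inattentive: chime
        (6.0, "vibrate_seat"),              # last prompt to retake control
    ]

    def monitor_driver(gaze_is_on_road, trigger):
        """Poll a head-tracking camera and escalate warnings the longer
        the driver's gaze stays off the road ahead."""
        eyes_off_since = None
        fired = set()
        while True:
            if gaze_is_on_road():
                eyes_off_since = None   # driver looked back: reset the ladder
                fired.clear()
            else:
                if eyes_off_since is None:
                    eyes_off_since = time.monotonic()
                elapsed = time.monotonic() - eyes_off_since
                for threshold, action in ESCALATION_STEPS:
                    if elapsed >= threshold and action not in fired:
                        trigger(action)
                        fired.add(action)
            time.sleep(0.1)  # real camera frames would arrive far faster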
And not long after Tesla launched its boldly named Autopilot system in 2015, CEO Elon Musk declared “it’s probably better than a person right now,” adding that it soon would “drive virtually all roads at a safety level significantly better than humans.”
Musk more recently said Autopilot will “never be perfect,” and after two recent crashes, his company has issued a litany of statements reminding drivers of their responsibility when engaging Autopilot.
In March, the driver of a Model X died after his car steered into a highway divider in Mountain View, Calif. Tesla says the driver ignored repeated warnings — which it knows from the car’s computer logs — to retake control of the vehicle.
And a few weeks ago, post-crash data revealed that a woman driving her Model S south of Salt Lake City repeatedly engaged Autopilot and each time didn’t touch the wheel for many seconds. After one 80-second spell of neglecting the wheel — and by her own admission focusing on her phone — the driver slammed into a stopped fire truck at 60 mph and, somehow, only suffered a broken foot.
In response to questions about its technology, Tesla provided USA TODAY with previously issued statements about Autopilot and excerpts from its owner’s manuals.
Those statements include language warning that while Autopilot “is the most advanced driver assistance system on the road, it does not turn a Tesla into an autonomous vehicle and does not allow the driver to abdicate responsibility.”
The Tesla crashes prompted consumer groups to ask the Federal Trade Commission to investigate the company for having “consistently and deceptively hyped its technology,” said Consumer Watchdog’s John Simpson.
Some experts suggest that regulators might want to consider redrafting licensing rules when it comes to operating cars with driver-assist tech.
“With automation comes more responsibility,” says Bryan Reimer, research scientist with the Massachusetts Institute of Technology’s Advanced Vehicle Technology Consortium. “Every system out there now has benefits and limitations, and none work perfectly in all situations. We as a society have to understand the balancing point.”
Particularly concerning is what automakers call the handoff, that moment when a computer realizes it needs a human to take over. The Catch-22 at play is simple: the better the tech, the more the human will rely on it, making an emergency handoff harder for the brain to process quickly.
“The most dangerous part of this trend is the handoff, because humans will get lax and complacent if a computer seems like it can handle things on its own,” says Karl Brauer, executive publisher of Cox Automotive. “This in-between zone is risky, because it’s tough to plan for bad human behavior.”
But automakers do try. Approaches to engineering around driver inattentiveness vary, ranging from visual and audio alerts to bringing the car to a complete stop if the steering wheel gets no input.
For example, while Nissan’s new Rogue ad shows a woman’s hands hovering over the steering wheel to highlight the car’s lane-centering skills, in practice ignoring the wheel for more than a few seconds triggers an escalating sequence of events.
First, a visual warning appears on the dash, which then begins flashing. If no action is taken, a beeping starts that turns into a siren sound. If the driver still doesn’t respond, the car will tap the brakes, and then — if it is traveling at least 40 mph — will turn on the hazard lights and slow the car to a full stop.
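As a rough mental model of that escalation, the sequence might be expressed as a simple threshold ladder in Python. The timings, the function names and the final 40-mph check below are guesses based on the description in this article, not Nissan's actual code:

    # Hypothetical model of an escalating hands-off-wheel warning sequence,
    # loosely following the steps described above. All timings are invented.
    HANDS_OFF_SEQUENCE = [
        (5,  "show_dash_warning"),   # visual warning appears on the dash
        (10, "flash_dash_warning"),  # warning begins flashing
        (15, "beep"),                # audible beeping starts
        (20, "sound_siren"),         # beeping escalates to a siren
        (25, "tap_brakes"),          # brief brake pulse to rouse the driver
    ]

    def respond_to_hands_off(seconds_hands_off, speed_mph, actuate):
        """Fire every warning step whose threshold has passed; if the driver
        still has not responded, bring the car to a controlled stop."""
        for threshold, action in HANDS_OFF_SEQUENCE:
            if seconds_hands_off >= threshold:
                actuate(action)
        # Final stage: per the description above, the full stop with hazard
        # lights applies only above roughly 40 mph (an assumption here).
        if seconds_hands_off >= 30 and speed_mph >= 40:
            actuate("turn_on_hazards")
            actuate("slow_to_full_stop")

Calling respond_to_hands_off(32, 55, print), for instance, would print every step through the final controlled stop; a production system would also track which alerts had already fired rather than re-issuing them.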
Audi’s PreSense tech includes the usual array of driver-assist features, from lane keeping to adaptive cruise control. As with the Nissan, drivers are expected to provide steering input every 15 seconds, or the system will first alert them and then shut down.
No doubt the temptation to use the marvels of this new tech as a marketing tool is often overwhelming.
In its pitch to consumers on its website, Ford offers occasional small-type footnotes urging caution when using its latest Co-Pilot360 driver-assist tech — debuting this summer on the 2019 Ford Edge — amid glossy images spotlighting new features such as cruise control with lane centering and evasive steering assist.
“It’s navigating the road ahead,” Ford says on its site.
But no matter what consumers think the tech is capable of, they should always assume they’re dealing with systems that are essentially automotive rookies, says Scott Lindstrom, driver assist technologies manager at Ford Motor.
“The analogy I use is that it’s like when I helped my daughter learn to drive. I may have been coaching her, but she was the one driving,” he says. “These systems are great, but they’re not all-powerful.”