For every leap forward in self-driving car technology, there seems to be one step back. The latest hack may be the simplest yet, but it could have dire consequences.
Researchers at the University of Washington have found that simple stickers are enough to confuse a self-driving car. In this case, the researchers used an attack algorithm to generate stickers for a standard-issue stop sign. Autonomous cars rely on cameras feeding an object detector and a classifier. Assuming attackers have access to the classifier, they can craft a perturbation that makes it misread the sign. Here, the self-driving car interpreted a stop sign as a 45 mph speed limit sign.
Furthermore, even small changes to a road sign can confuse the classifier and create dangerous consequences as the software works to identify it. The stickers, printed on a standard printer, carry the attack pattern needed to muck up a self-driving car's classifier, and the car was unable to correctly read the signage from various angles and distances. It's the first time such a hack has been carried out successfully from as far as 40 feet away.
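To make the idea concrete, here is a minimal, self-contained sketch of how such an adversarial perturbation works, using a toy linear classifier in place of the car's real deep network. The weights, the `classify` function, and the 16-"pixel" patch are all invented for illustration; they are not from the researchers' actual attack.

```python
import numpy as np

# Toy stand-in for a sign classifier: a linear model whose score decides
# between "stop" and "speed-limit". Weights are made up for illustration;
# a real attack targets a deep network, but the principle is the same.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # hypothetical classifier weights
b = 0.0

def classify(x):
    return "stop" if w @ x + b > 0 else "speed-limit"

# A clean 16-"pixel" patch the classifier reads correctly as a stop sign.
x = w / np.linalg.norm(w)

# FGSM-style perturbation: step against the gradient of the score with
# respect to the input. For a linear model that gradient is just w, so the
# smallest uniform step that flips the decision is easy to compute.
margin = w @ x + b
eps = 1.1 * margin / np.abs(w).sum()   # just past the decision boundary
x_adv = x - eps * np.sign(w)

print(classify(x), "->", classify(x_adv))   # stop -> speed-limit
```

The same principle scales up: with access to the classifier's gradients, an attacker can compute a small, printable pattern that pushes a stop sign's image across the decision boundary while leaving it recognizable to humans.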
Stickers aren't the only method: the researchers also created a full-size overlay for a stop sign. In this case, the sign looks merely worn or splotchy to the human eye, but the self-driving car once again interpreted it as a 45 mph speed limit sign. Scary stuff.
The consequences are very real. In a self-driving, connected-car world, a vehicle could blow through a stop sign after interpreting it as a speed limit sign. Or, conversely, the car could read a speed limit sign as a stop sign and slam on the brakes at speed.
Discoveries like this should keep automakers and developers wary and vigilant, ensuring the technology is absolutely ready should individuals resort to malicious attacks.
Comments
wouldn’t all of that big data stored in the brain in the sky tell the car that 99.999999% of vehicles came to a stop (more or less) at that location?
with two conflicting pieces of information, i’d think the car would rely on big data more so than a computer vision algorithm.
It seems like the car’s computer always needs to be smart enough to identify things on its own without help from a central server so it can react correctly to special cases like road construction, hand signals from emergency personnel, crossing guards, etc. Maybe there will be an OnStar-like service that would receive images from the cars and have humans (maybe at the local level) help to interpret the markings.
I still wonder how they’ll prevent dishonest people from standing in the roadway to cause self driving cars to come to a stop, then pulling a gun and robbing them while they’re at a complete stop. Right now, drivers would have the option of not slowing down, driving around them, purposely ramming them, taking evasive action, etc. With computers involved, the robber would know for sure the car would follow his or her hand signals and safely slow to a stop to avoid hitting a pedestrian.
The other thing I keep wondering is how self-driving cars will react to small objects (nails, screws, sharp pieces of metal) in the road. I suspect flat tires on self driving cars will be far more common as the computer probably wouldn’t be able to avoid those sorts of small items.
“wouldn’t all of that big data stored in the brain in the sky tell the car that 99.999999% of vehicles came to a stop(more or less) at that location? ”
^This.
The car would have to be able to identify a stop sign, and then compare it against a lengthy history of other autonomous cars that also identified the same stop sign and that same intersection. If 1000 autonomous cars previously all identified the stop sign and stopped at the intersection, the 1001st autonomous car that identifies a defaced stop sign is either going to have to look for other signs or visual clues about the intersection, or check the database for more information about what kind of action the other 1000 autonomous cars did.
I don’t think any autonomous car would rely only on posted signs if it could also rely on the data from other autonomous cars that traveled the road previously. That data would be too valuable not to use in making an informed decision.
who needs the stop sign if the car knows its gps position and knows that 10000 other cars came to a stop at that position?
The stop signs will still be needed for redundancy. The more data the system has (visual cues, past data from other cars, GPS reference points, etc.), the less likely the car would be to treat the sign as a posted speed limit. Getting rid of stop signs is a possibility, but at this early stage, the more data the car can visually detect and respond to, the more informed it can be.
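The weighting idea in these comments can be sketched as a simple blend of the vision classifier's confidence with the stop rate observed in past traversals of the same location. Everything here, including the `fuse_stop_probability` function and its weights, is hypothetical and not taken from any real system:

```python
# Hypothetical fusion of two evidence sources: the (possibly fooled) vision
# classifier's stop-sign confidence, and the stop rate from past traversals
# of the same GPS location. Weights are illustrative, not from a real system.
def fuse_stop_probability(p_vision_stop, past_stops, past_traversals,
                          w_vision=0.4, w_history=0.6):
    # Fall back to an uninformative prior if no history exists for this spot.
    p_history = past_stops / past_traversals if past_traversals else 0.5
    return w_vision * p_vision_stop + w_history * p_history

# Vision was fooled (reads "45 mph", so its stop confidence is only 5%),
# but 998 of 1000 prior cars stopped here: the fused estimate still says stop.
p = fuse_stop_probability(p_vision_stop=0.05, past_stops=998, past_traversals=1000)
print(round(p, 4))   # 0.6188 -> treat the intersection as a stop
```

A real system would fuse far more sources (maps, lidar, V2V messages) with calibrated probabilities, but even this crude blend shows why historical data can override a single spoofed sensor reading.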
You can’t rely on GPS positioning alone; there are multiple ways it may fail. For example: a GPS signal outage caused by issues with the GPS satellites, or someone using a GPS spoofing system that makes the receiver in your car or other device think you’re in some other location (yes, those are available; yes, they are illegal; and NO, that will NOT stop someone from using one for malicious purposes), or GPS jammers (also widely available, also illegal, and also sure to be used by some idiots if autonomous driving systems start relying on GPS too much).
Not to mention the possibility of hackers (or whatever you prefer to call them) gaining access to the system that provides your car with data about the behavior of other cars around you (such as “Car-2-Car”) and removing the location where “10000 other cars came to a stop”, or moving it to a different intersection. Or a bunch of other idiots staging a mass “action”: intentionally stopping at an intersection WITHOUT a “Stop” sign to cause your autonomous car to do the same, or intentionally driving through an intersection WITH a “Stop” sign without stopping, again to make your autonomous car “learn” the improper behavior.
So long story short: optical sensors are “A MUST” on autonomous driving cars since you can NEVER rely on GPS positioning or data sourced from other drivers around you. And, as this story shows, the optical sensors are also pretty easy to abuse. Meaning that fully autonomous driving for EVERYONE is still far, far away.
This is further proof that autonomous systems need more development, and need to be taught to stop the way a human driver would even when the car’s sensors can’t read the sign because of the stickers. Through real-world and computer-simulated training, the autonomous driving system will have to learn to be near-perfect, because if it isn’t, it should never get its “driver’s license.”
It seems like a minor problem. There is a major problem getting humans to recognize stop signs and red lights. Maybe work on the human problem first.
So what happens when there’s snow on a sign, or covering the roads? I think fully autonomous cars need to have a steering wheel and gas/brake pedals, because sometimes you have to make a split-second decision to avoid an accident.
As mentioned before, the car would rely on data from other autonomous cars that stopped at the same intersection in the past. It doesn’t necessarily have to see the stop sign to know that there is an intersection there. With accurate mapping, the car can predictively anticipate and even know where signs and painted lines are without having to see them, in the exact same way that people on a snow-covered two-lane road know that there is a yellow line somewhere. We may not know the exact location, but we know from memory roughly where it is. An autonomous car with V2V technology would do the same thing.