Ever since self-driving cars became a hot topic of interest, methods of exploitation have inevitably crept in. There have been reports of vulnerabilities in the software, and demonstrations of remotely hijacking a car and disabling certain features have also come to light.
However, recent studies show a simple way to confound a self-driving car's algorithm. Small changes to a road sign can cause it to be misidentified and thus lead to accidents.
A team of researchers from the University of Washington demonstrated how one can use home-printed pieces of paper on road signs to trigger such an event. According to the researchers, the image recognition systems used by most autonomous cars fail to read road signs correctly when the signs are wholly or partially covered.
In a research paper titled “Robust Physical-World Attacks on Machine Learning Models,” the researchers explain their findings. Simple alterations, such as placing words above and below a STOP sign, generated an incorrect output in every test case.
By adding “Love” and “Hate” graphics onto a STOP sign, the researchers were able to trick the autonomous car’s image-detecting algorithms into concluding it was a Speed Limit 45 sign.
The researchers also ran the same test on a RIGHT TURN sign and found that the cars misclassified it as a STOP sign two-thirds of the time.
Further, they applied small stickers to a STOP sign, alterations subtle enough to pass for ordinary street art while still fooling the classifier.
“We [think] that given the similar appearance of warning signs, small perturbations are sufficient to confuse the classifier,” the researchers told Car and Driver. “In future work, we plan to explore this hypothesis with targeted classification attacks on other warning signs.”
The minute sign alterations in these experiments might not have affected a human driver, but for an AI, these very small changes had a profound effect. This is because an AI relies heavily on visual data to generate results. The consequences could be fatal, as these small alterations to signs could result in cars skipping junctions and potentially crashing into one another.
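The idea behind such attacks can be illustrated with a small sketch. The example below is not the researchers' method: it uses a toy linear classifier with random, hypothetical weights and a fast-gradient-sign perturbation, one standard way of crafting adversarial inputs. The point it demonstrates is the same one the study makes: a perturbation of tiny, uniform magnitude, chosen adversarially rather than at random, reliably pushes the model's confidence away from the true class.

```python
import numpy as np

# Toy stand-in for a road-sign classifier: class 0 = "STOP",
# class 1 = "Speed Limit 45". Weights and the "image" features are
# random, hypothetical values for illustration only.
rng = np.random.default_rng(0)
n_features = 64
w = rng.normal(size=(2, n_features))   # hypothetical class weights
x = rng.normal(size=n_features)        # hypothetical clean STOP-sign features

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Gradient of the cross-entropy loss (true class = STOP) w.r.t. the input.
p_clean = softmax(w @ x)
grad = (p_clean - np.array([1.0, 0.0])) @ w

# One fast-gradient-sign step: every feature moves by the same tiny
# amount, the digital analogue of a few well-placed stickers.
eps = 0.5
x_adv = x + eps * np.sign(grad)
p_adv = softmax(w @ x_adv)

print(f"P(STOP) clean: {p_clean[0]:.3f}  adversarial: {p_adv[0]:.3f}")
```

Because the step follows the loss gradient, the model's probability for the true class always drops, even though no single feature changed by more than `eps`.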
The worst part is that, as long as the AI is not trained to handle such anomalies, the vulnerability remains. Any change inflicted on road signs, whether deliberate or due to some natural event, could potentially affect a car's judgment.
“Attacks like this are definitely a cause for concern in the self-driving-vehicle community,” said Tarek El-Gaaly, a senior research scientist at autonomous vehicle startup Voyage. “Their impact on autonomous driving systems has yet to be ascertained, but over time and with advancements in technology, they could become easier to replicate and adapt for malicious use.”