Driverless cars recognise dangers lying ahead


A computer vision researcher at KU Leuven thinks self-driving cars are no worse than humans at learning to respond to obstacles on the road

Hit or swerve?

Thanks to companies like Tesla and Google, self-driving technology is quite the buzz these days. But how likely are we to hand over the wheel to a computer anytime soon? One researcher thinks we’re getting there.

Originally from Lithuania, Jonas Kubilius has spent the past six years researching computer vision at the University of Leuven (KU Leuven). Together with colleagues Stefania Bracci and Hans Op de Beeck, he recently published a study that could prove of interest to the self-driving industry.

The researchers have found that current image-recognition technology can make sense of unfamiliar shapes, in addition to those it has been trained on. In other words, computers may be smarter than we give them credit for and can learn to respond to new situations as human beings would.

The study was published in the journal PLOS Computational Biology. “Imagine you have trained your system to recognise bicycles and cows, but not Segways,” says Kubilius. “The biggest concern is that the system will make a random decision, or even crash, when it encounters one on the road.”

The researchers have discovered that current technology is already advanced enough to judge what an unfamiliar object looks like, by noticing, for example, that a Segway has two wheels and moves slowly. They are convinced that a computer can be trained to avoid hitting such objects.
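The intuition behind that kind of generalisation can be sketched as a toy example. The feature values, categories and similarity measure below are invented for illustration and are not the researchers' actual method: an unfamiliar object is matched to the known category whose simple features (here, wheel count and typical speed) it most resembles.

```python
import math

# Toy feature vectors: (number of wheels, typical speed in km/h).
# All values are invented for illustration only.
known = {
    "bicycle": (2.0, 20.0),
    "cow": (0.0, 5.0),
    "car": (4.0, 60.0),
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def closest_category(features):
    """Return the known category most similar to the unfamiliar object."""
    return max(known, key=lambda name: cosine(known[name], features))

# A Segway: two wheels, drives slowly -- never seen during "training".
segway = (2.0, 15.0)
print(closest_category(segway))  # → bicycle
```

Having matched the Segway to the most bicycle-like category, a system could then fall back on its bicycle behaviour (keep a distance, do not hit it) rather than reacting randomly.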

Almost human

In the same way, the system can be taught to treat minor obstacles, such as roadside debris, as negligible and drive over them, instead of carrying out a manoeuvre that could result in an accident.

Behind this ability are so-called deep neural networks: complex algorithms that perform computations in a fashion similar to the neurons in the human brain. Like the brain, these networks are multi-layered, but they exist only in digital form.

Sure, they’ll still make mistakes, but their decisions will be as reasonable as ours

- Jonas Kubilius

Deep neural networks have existed since the 1980s, but were initially used for relatively simple tasks such as reading postal codes on mail or handwritten amounts on cheques. They now deal with more complex matters, thanks to large amounts of available data, increased computing power and cheaper technology.

In his research, Kubilius compared how humans and deep neural networks interpret abstract images. In other words, he tested the capacity of the human and digital brains to intuitively determine what an unfamiliar shape reminds them of. The difference between human and machine turned out to be relatively small.

A learning curve

Does this mean that computers will soon replace us in the driving seat? Not quite yet, says Kubilius, because current image-recognition systems are nowhere near as powerful as human visual perception, and they are not yet very reliable.

“But it’s worth investing in them because machines don’t get tired or distracted like we do, and those are the major causes of car accidents,” he says. “In the long run, self-driving cars can improve safety on the roads. Sure, they’ll still make mistakes, but their decisions will be as reasonable as ours.”

The applicability of Kubilius’ findings extends beyond self-driving cars. Smarter visual systems are also essential for household robots and advanced optical tools like Google Glass.

With funding from the Flemish research foundation FWO and the European Commission, Kubilius was recently accepted as a post-doctoral researcher at the famous Massachusetts Institute of Technology (MIT), but will return to KU Leuven in 2018 to continue his research.

At MIT, he is focusing on improving the intelligence of deep neural networks used for image recognition. He is specifically training the digital brain to analyse images in a more complex way than just recognising what’s in them.

Instead of simply noticing a coffee cup in a picture, Kubilius explains, the computer should also be able to determine where the cup is located in relation to its environment and what colour or shape it is.
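The difference between plain recognition and the richer analysis Kubilius describes can be sketched as two levels of output from a vision system. The field names and values below are illustrative, not the output of any actual model:

```python
from dataclasses import dataclass

@dataclass
class Classification:
    """What today's systems mostly give: a label only."""
    label: str

@dataclass
class SceneDescription:
    """The richer analysis: the label plus where and what the object is like."""
    label: str
    bbox: tuple   # (x, y, width, height) in pixels: where the object sits
    colour: str
    shape: str

# Recognition alone: "there is a coffee cup in this picture".
simple = Classification(label="coffee cup")

# The fuller description: location relative to the scene, colour, shape.
rich = SceneDescription(
    label="coffee cup",
    bbox=(120, 80, 60, 70),  # illustrative position on the table
    colour="white",
    shape="cylindrical",
)
print(rich.label, rich.bbox)  # → coffee cup (120, 80, 60, 70)
```

The extra fields are what makes such output useful to a robot or a navigation system, which needs to act on where an object is, not just know that it exists.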

Photo courtesy KU Leuven