Ghostly 3-D scans of London aim to show 'inner life' of a self-driving car
London design studio ScanLAB produced a video that uses scanning technology to show what self-driving cars 'see' as they navigate city streets.
ScanLAB Projects/The New York Times Magazine
3-D scanning has been used to uncover ancient architectural ruins buried deep below the ground and to make animated models — often of human subjects — for countless movies and TV shows.
Now, one London design studio wants to combine the two approaches, using 3-D scanning technology to look closely at what self-driving cars may “see” as they become more common on city streets.
One October afternoon, architectural designers from ScanLAB Projects drove a specially equipped Honda CR-V through the streets of London. They wanted to use the scanning technology’s imperfections, such as its difficulty seeing through fog and its tendency to be fooled by reflective surfaces, to capture the “inner life” of an autonomous car.
Their purpose was mainly artistic, with the resulting video showing a ghostly city filled with both precisely geometric buildings — the Houses of Parliament stand out — and blurry renderings of more natural elements.
A city bus, scanned repeatedly as it waits on a street corner, appears to take up an entire block. A group of pedestrians appears nearly frozen in time as they cross the street.
The design studio’s goal was to capture what self-driving cars may see only accidentally while cruising city streets. It’s “the peripheral vision of driverless vehicles,” as William Trossell, an architectural designer from ScanLAB, told The New York Times Magazine, which collaborated with the designers on the video.
The designers’ work also reveals the challenges that companies such as Google, BMW, and Ford face while testing self-driving cars, the Times’ Geoff Manaugh notes.
Their self-driving cars use a laser-scanning technology called lidar (a portmanteau of “light” and “radar”), which sends out nearly a million tiny bursts of light per second, invisible to the human eye, that bounce off buildings, people, and objects in the area.
It produces extremely detailed renderings of the surrounding environment, accurate down to the millimeter, making it particularly useful for architects. (ScanLAB has previously collaborated with the BBC, PBS, universities, and museums to produce detailed renderings of environments such as military bunkers from D-Day during World War II and the spaces beneath ancient Roman ruins.)
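The measurement behind those renderings is simple time-of-flight geometry: each pulse’s round-trip travel time, combined with the direction in which it was fired, yields one point in a 3-D cloud. Below is a minimal sketch of that conversion in Python; the function name and parameters are illustrative, not part of any scanner’s actual API.

```python
import math

# Speed of light in meters per second
SPEED_OF_LIGHT = 299_792_458.0

def lidar_point(round_trip_s, azimuth_deg, elevation_deg):
    """Turn one lidar return into a 3-D point (sensor at the origin).

    The pulse travels out and back, so the one-way distance is half
    the round-trip time multiplied by the speed of light.
    """
    distance = SPEED_OF_LIGHT * round_trip_s / 2.0
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    # Standard spherical-to-Cartesian conversion
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return (x, y, z)

# A return after roughly 66.7 nanoseconds puts the target about 10 m away
print(lidar_point(66.7e-9, azimuth_deg=45.0, elevation_deg=2.0))
```

At nearly a million such points per second, a few moments of scanning a street corner produces a dense cloud, which is why the technique can resolve such fine detail.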
But lidar scanning has some limitations, particularly around mirrored surfaces, the Times notes. Even strange and unlikely situations can be genuinely difficult for a computer to interpret.
Consider the example of someone wearing a T-shirt with a stop sign printed on it, Illah Nourbakhsh, a professor of robotics at Carnegie Mellon University, told the Times.
"If they’re outside walking, and the sun is at just the right glare level, and there’s a mirrored truck stopped next to you, and the sun bounces off that truck and hits the guy so that you can’t see his face anymore — well, now your car just sees a stop sign.”
Such a scenario is highly unlikely to actually occur, Professor Nourbakhsh told the paper, “but the problem is we will have millions of these cars. The very unlikely will happen all the time.”
One cyclist in Austin, Texas, also discovered that Google’s cars can have difficulty interpreting common movements, which can lead to unusual “learning” situations.
The self-driving Lexus got to a four-way stop just before the cyclist did, he wrote on a biking forum. Because the car had the right of way, the cyclist stopped and waited, doing a track-stand to maintain his balance.
The car remained stopped, forcing him to keep balancing as he waited for it to react. Then it did something unusual: it began mirroring the biker’s movements, stopping and starting, as if paralyzed by uncertainty about what he would do.
“The two guys inside were laughing and punching stuff into a laptop, I guess trying to modify some code to 'teach' the car something about how to deal with the situation,” he wrote.
Despite describing the situation as strange, he noted that he felt safer with the self-driving car than with a human driver; Google has often said it has taught the cars to be extra careful.
Making use of those imperfections was the goal of the designers at ScanLAB, who told the Times they had turned off some of the scanning technology’s corrective algorithms, such as one that truncates long stretches when the car sits stuck in traffic, for aesthetic reasons.
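The effect of switching off that kind of correction is easy to model. In the hypothetical Python sketch below (the numbers and names are assumptions, not ScanLAB’s actual pipeline), a bus whose scanned position shifts a few meters between sweeps smears across the accumulated point cloud, much like the block-long bus in the video, while truncating the redundant sweeps collapses it back to vehicle size.

```python
# Purely illustrative toy model, not ScanLAB's actual pipeline:
# repeated sweeps of a bus whose position (or position estimate)
# drifts between sweeps smear it across the accumulated cloud.

BUS_LENGTH_M = 12        # assumed bus length
DRIFT_PER_SWEEP_M = 3.0  # assumed shift between successive sweeps

def sweep(offset_m):
    """One sweep of the bus: a point every meter along its side."""
    return [(x + offset_m, 0.0) for x in range(BUS_LENGTH_M + 1)]

def accumulate(n_sweeps, truncate_stationary=False):
    """Merge sweeps into one cloud.

    truncate_stationary=True keeps only the first sweep, mimicking
    the kind of corrective step (dropping redundant scans captured
    while the rig idles) that the designers chose to switch off.
    """
    count = 1 if truncate_stationary else n_sweeps
    return [p for t in range(count) for p in sweep(DRIFT_PER_SWEEP_M * t)]

def span(cloud):
    xs = [x for x, _ in cloud]
    return max(xs) - min(xs)

print(f"uncorrected: {span(accumulate(10)):.0f} m")        # 39 m smear
print(f"truncated:   {span(accumulate(10, True)):.0f} m")  # the 12 m bus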
But while the videos now show what Mr. Trossell of ScanLAB calls “mad machine hallucinations,” these images may also offer a glimpse of what goes on inside the minds of self-driving cars as they learn to navigate an environment that may increasingly change to accommodate their existence.