Stanford engineers design 4D camera for robots
Engineers at Stanford University have developed a 4D camera that captures richer images to help robots navigate the world.
The camera, which generates a four-dimensional image and can capture nearly 140 degrees of information, is expected to be better than current options for close-up robotic vision and augmented reality, according to the engineers involved in the project.
"Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess," explained Donald Dansereau, a postdoctoral fellow in electrical engineering. 
The difference between looking through a normal camera and the new design is like the difference between looking through a peephole and a window.
"A 2D photo is like a peephole because you can't move your head around to gain more information about depth, translucency or light scattering," Dansereau added.
That additional information comes from a type of photography called light field photography, first described in 1996 by Stanford professors Marc Levoy and Pat Hanrahan.
Two 138° light field panoramas and a depth estimate of the second panorama. /Stanford University Photo

Light field photography captures the same image as a conventional 2D camera plus information about the direction and distance of the light hitting the lens, creating what's known as a 4D image.
A well-known feature of light field photography is that it allows users to refocus images after they are taken because the images include information about the light position and direction.
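Refocusing works because each sub-aperture view in the light field sees the scene from a slightly different position: shifting the views against one another and averaging them brings a chosen depth plane into focus. The snippet below is a minimal sketch of that shift-and-add idea, assuming the light field is stored as a 4D NumPy array indexed by view position (u, v) and pixel position (s, t); the array layout, the refocus function and its shift parameter are illustrative, not the Stanford team's implementation.

    import numpy as np

    def refocus(light_field, shift):
        # light_field has shape (U, V, S, T): (u, v) selects one of the
        # sub-aperture views (light direction), (s, t) a pixel in that view.
        U, V, S, T = light_field.shape
        cu, cv = (U - 1) / 2, (V - 1) / 2
        out = np.zeros((S, T))
        for u in range(U):
            for v in range(V):
                # Translate each view in proportion to its offset from the
                # central view; np.roll stands in for real interpolation.
                du = int(round(shift * (u - cu)))
                dv = int(round(shift * (v - cv)))
                out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
        return out / (U * V)

    # Toy example: a 5x5 grid of 64x64 views; positive and negative shifts
    # place the synthetic focal plane nearer or farther.
    lf = np.random.rand(5, 5, 64, 64)
    near = refocus(lf, shift=1.0)
    far = refocus(lf, shift=-1.0)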
Robots might use this to see through rain and other conditions that would otherwise obscure their vision.
As the technology stands now, robots have to move around and gather multiple perspectives to understand certain aspects of their environment, such as the movement and material composition of objects.
This 4D camera could allow them to gather much of the same information in a single image. In addition, the researchers see this being used in autonomous vehicles and augmented and virtual reality technologies.
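The depth estimate shown in the panorama figure can be recovered from that same single exposure: a pixel belongs to the depth plane at which the re-aligned sub-aperture views agree most closely. The sketch below is a hypothetical variance-minimization approach building on the shift-and-add function above; the source does not describe the researchers' actual depth method.

    def depth_from_variance(light_field, shifts):
        # For each candidate shift (depth plane), re-align the views and
        # measure cross-view variance per pixel; the shift with the lowest
        # variance is the best-focused plane, i.e. the depth estimate.
        U, V, S, T = light_field.shape
        cu, cv = (U - 1) / 2, (V - 1) / 2
        best_var = np.full((S, T), np.inf)
        best_shift = np.zeros((S, T))
        for s in shifts:
            views = [np.roll(light_field[u, v],
                             (int(round(s * (u - cu))),
                              int(round(s * (v - cv)))),
                             axis=(0, 1))
                     for u in range(U) for v in range(V)]
            var = np.stack(views).var(axis=0)
            better = var < best_var
            best_var[better] = var[better]
            best_shift[better] = s  # shift maps monotonically to scene depth
        return best_shift

    # Coarse depth map for the toy light field above.
    depth = depth_from_variance(lf, np.linspace(-2.0, 2.0, 9))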
Assistant Professor Gordon Wetzstein (L) and postdoctoral research fellow Donald Dansereau with a prototype of the monocentric camera that captured the first single-lens panoramic light fields. /Stanford University Photo

The camera system's wide field of view, detailed depth information and potential compact size are desirable features for imaging systems incorporated in wearables, robotics, autonomous vehicles and augmented and virtual reality.
Although it can also work like a conventional camera at far distances, the 4D camera is designed to improve close-up images. Examples where it would be particularly useful include robots that have to navigate through small areas, landing drones and self-driving cars.
As part of an augmented or virtual reality system, the camera's depth information could result in more seamless renderings of real scenes and support better integration between those scenes and virtual components.
"It could enable various types of artificially intelligent technology to understand how far away objects are, whether they're moving and what they've made of," said Wetzstein. "This system could be helpful in any situation where you have limited space and you want the computer to understand the entire world around it."
The researchers next plan to create a compact prototype, which they hope will be small and light enough to test on a robot.
(Source: Xinhua)