The L16 Camera’s Computational Photography

[Image: the Light L16 camera]

The L16 camera is the latest consumer-accessible version of a scientific imaging technique called computational photography. To some extent, all digital photography is already computational photography. For example, every CMOS sensor behind a Bayer color filter records only one color value (red, green, or blue) at each photosite, so a computational model has to interpolate the missing color values at every pixel.
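
As a rough illustration of that kind of interpolation, here is a minimal bilinear demosaicing sketch in Python/NumPy. It assumes an RGGB mosaic layout and uses a generic textbook method, not the pipeline any particular camera actually runs.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaic of a single-channel RGGB Bayer mosaic.

    Each photosite records only one color; the two missing values
    at every pixel are interpolated from neighboring photosites.
    """
    h, w = raw.shape
    r_mask = np.zeros((h, w))
    b_mask = np.zeros((h, w))
    r_mask[0::2, 0::2] = 1          # red photosites (RGGB layout)
    b_mask[1::2, 1::2] = 1          # blue photosites
    g_mask = 1 - r_mask - b_mask    # green photosites (checkerboard)

    # Standard bilinear interpolation kernels for a Bayer pattern.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    r = convolve(raw * r_mask, k_rb, mode='mirror')
    g = convolve(raw * g_mask, k_g,  mode='mirror')
    b = convolve(raw * b_mask, k_rb, mode='mirror')
    return np.dstack([r, g, b])
```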

But what the L16 does is much more complicated. Instead of recording the photons collected and focused by a single lens, it has 16 sensors and 16 lenses. There are some significant advantages to this. First of all, the total surface area of the sensor array can be increased by adding more small lenses and sensors, instead of increasing the size and weight of a single big lens.
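
To put rough numbers on the surface-area point (using standard sensor formats for illustration, not Light's published specifications): sixteen smartphone-class sensors add up to a light-gathering area in the same ballpark as a much larger single sensor, without requiring one correspondingly large and heavy lens.

```python
# Back-of-the-envelope comparison of total sensor area.
# All figures are standard format sizes, not the L16's actual spec.
smartphone_sensor = 4.54 * 3.42   # ~1/3.2" format, mm^2 per module
four_thirds       = 17.3 * 13.0   # Four Thirds, mm^2
full_frame        = 36.0 * 24.0   # 35mm full frame, mm^2

array_area = 16 * smartphone_sensor
print(f"16-module array: {array_area:.0f} mm^2")   # ~248 mm^2
print(f"Four Thirds:     {four_thirds:.0f} mm^2")  # ~225 mm^2
print(f"Full frame:      {full_frame:.0f} mm^2")   # 864 mm^2
```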

This is not a new idea; in fact, it is exactly how an insect's compound eye works: many tiny lenses capture many images that the insect's brain instantly comprehends as one image. The L16 does this a bit more slowly, taking the data from 16 different sensors and 16 different lenses and assembling it into a single image that should have greater detail, less noise, and additional information, such as depth, that a single sensor could not have captured on its own.
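
How Light actually registers and fuses its 16 views is proprietary, but the basic noise-reduction benefit of merging multiple exposures is easy to sketch: averaging N aligned frames with independent sensor noise cuts the noise by roughly a factor of the square root of N. The Python sketch below simulates this with 16 noisy copies of the same scene; real frames from 16 different lenses would first need alignment, which is the hard part.

```python
import numpy as np

def merge_frames(frames):
    """Average a stack of already-aligned frames.

    Averaging N frames with independent noise reduces the noise
    standard deviation by roughly sqrt(N). The real L16 pipeline
    must also register 16 slightly different viewpoints first.
    """
    stack = np.stack(frames).astype(np.float64)
    return stack.mean(axis=0)

# Simulated demo: one clean "scene" observed by 16 noisy sensors.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(480, 640))
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(16)]

merged = merge_frames(frames)
print("single-frame noise:", np.std(frames[0] - scene))  # ~10
print("merged noise:      ", np.std(merged - scene))     # ~2.5 (10 / sqrt(16))
```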

Obviously, this involves some pretty intensive computation. When I first read about this camera I was excited, but skeptical. After all, the Lytro camera was technically successful but practically awkward, and consumers really had no use for it. Will this camera be different? It can't capture the same high-quality image as a large single-sensor, single-lens camera, but can it properly simulate that image? This is technically possible, but practically very difficult.

My old digital video buddy Stu Maschwitz was also skeptical, but he traveled to the Light offices and examined the L16 in detail. He came away much more positive about the camera than he was after watching the pitch video. Stu has many years of experience as a digital cinema pioneer, discerning photographer, compositor, and software developer with Red Giant. I trust his judgment.

Yesterday, Light announced that it had raised an additional $30 million from Google Ventures, so it now has the resources to continue developing a compelling computational model, even if its first product has been delayed. Personally, I am pretty excited about this technology. As a videographer, I plan on sticking with conventional camera systems for work for a while, but this type of computational imaging has considerable advantages in other fields. I'm planning on conducting my own experiments with camera arrays for some future projects.