

What Computational Photography Looks Like

June 13, 2018

By Greg Scoblete


The term “computational photography” is much in vogue these days as smartphone makers use algorithms and growing computing power to overcome the physical limitations of their tiny sensors and lenses. But sometimes it’s hard to appreciate just what’s happening when phones “compute” images.

The short answer: math!

The long answer comes courtesy of a new paper, “Synthetic Depth-of-Field with a Single-Camera Mobile Phone,” which Google has just published on arXiv (a repository for academic papers on physics, computer science and other subjects I wasn’t smart enough to major in). It explains, in great technical detail, one popular example of computational photography: artificially creating a shallow depth of field.

If you’re technically inclined, Google’s paper is an interesting exploration of how to artificially simulate a shallow depth of field (and it’s not so full of equations as to be unreadable). The paper’s conclusion points to the power of this technique for future cameras:

“One extension to this work is to expand the range of subjects that can be segmented to pets, food and other objects people photograph. In addition, since the rendering is already non-photorealistic, and its purpose is to draw attention to the subject rather than to realistically simulate depth-of-field, it would be interesting to explore alternative ways of separating foreground and background, such as desaturation or stylization. In general, as users accept that computational photography loosens the bonds tying us to strict realism, a world of creative image-making awaits us on the other side.”
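If you want a feel for the core recipe without reading the paper, here’s a minimal Python sketch of the idea: figure out which pixels belong to the subject, blur everything else, and composite the sharp subject back on top. It assumes a subject mask already exists as a grayscale file (the hypothetical subject_mask.png below); Google’s actual pipeline computes that mask with a neural network and refines it with dual-pixel depth data.

```python
# A minimal sketch of synthetic shallow depth of field, assuming a
# precomputed subject mask (Google's pipeline instead derives one from
# a segmentation network plus dual-pixel depth estimation).
from PIL import Image, ImageFilter

def fake_bokeh(image_path: str, mask_path: str, blur_radius: int = 15) -> Image.Image:
    """Composite the sharp subject over a blurred copy of the scene."""
    img = Image.open(image_path).convert("RGB")
    # Mask convention: white = subject (keep sharp), black = background (blur).
    mask = Image.open(mask_path).convert("L").resize(img.size)
    # Feather the mask edge slightly so the subject doesn't look cut out.
    mask = mask.filter(ImageFilter.GaussianBlur(3))
    # Blur the whole frame, then blend: the mask picks the sharp image
    # where it's white and the blurred copy where it's black.
    blurred = img.filter(ImageFilter.GaussianBlur(blur_radius))
    return Image.composite(img, blurred, mask)

# Hypothetical file names, for illustration only.
result = fake_bokeh("portrait.jpg", "subject_mask.png")
result.save("portrait_fake_bokeh.jpg")
```

The uniform Gaussian blur here is just a stand-in: the paper renders a disc-shaped blur whose size varies with estimated depth, which is what makes the result look like real lens bokeh rather than a smudged background.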

It should be an interesting ride.

Don’t Miss: What Photographers Need to Know About Computational Photography