

What You Should Know About Photography’s Computational Future

January 4, 2018

By Ross Rubin


The first few decades of digital photography saw images evolve from grainy, low-resolution photos into high-resolution stills and, later, video fit for virtually any assignment. More recently, the evolution of processors has enabled computational photography not only to enhance images as never before but to create whole new media forms. In doing so, it has opened up new opportunities, as well as some new challenges, for professional photographers.

Smartphones have provided the most robust evidence of these profound changes. These omnipresent devices have notable disadvantages when it comes to lens choice and sensor size. However, like animals evolving (at warp speed) to compensate for shortcomings, vendors have been able to tap the rapid advances in processing power to enable remarkable imaging capabilities. For example, Google’s Pixel 2 smartphone begins taking pictures before users tap the shutter button and uses software operating behind the scenes to composite multiple photos for superior low-light clarity.
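Google's actual low-light pipeline is proprietary, but the core idea behind compositing a burst is simple: sensor noise is random from frame to frame, so averaging aligned frames suppresses it while the scene itself stays put. A minimal sketch, assuming the frames are already aligned (real pipelines also align and weight them):

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of already-aligned frames to suppress sensor noise."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a dim scene photographed 16 times with noisy exposures.
rng = np.random.default_rng(seed=0)
scene = np.full((4, 4), 100.0)                                  # "true" scene brightness
burst = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(16)]
merged = merge_burst(burst)
```

Averaging 16 frames cuts the noise roughly fourfold (noise falls with the square root of the frame count), which is why a phone can start buffering frames before the shutter is tapped.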

Other smartphones now include multiple lenses that capture the depth of an image, enabling them to produce bokeh effects that blur backgrounds and even to mimic the look of professional lighting, as Apple’s latest iPhone does with its “Portrait Lighting” feature. And a variety of accessories for phones from Motorola and others are enabling 360-degree image and video capture. Consumers’ growing familiarity with these technologies is bound to raise expectations for commissioned work and for the experiences media brands can offer.

For example, over the past few years, photographers have had to master DSLR video capabilities to meet demand, as mobile video consumption has exploded. In the drive for engagement, virtual reality video provides another dimension, allowing consumers multiple paths through the media. Cameras filling this niche range from prosumer offerings from Nikon and GoPro to systems costing tens of thousands of dollars from vendors such as Lytro.

But imbuing these new media with the esthetic sensibilities of pros will not come immediately. At a panel on advanced photography at PhotoPlus Expo in New York, Jim Malcolm from HumanEyes, makers of the Vuze VR camera, noted, “When you start shooting in virtual reality, it’s actually really complex to understand what’s happening in the scene and where your subjects are in relationship to the camera.”

Some photographers may find that computational photography opens up new opportunities in specific industries. Matterport, which provides camera equipment and cloud-based processing services, has made progress in verticals such as real estate. Using the Matterport camera, photographers can create immersive images that prospective buyers can navigate around. The camera can also develop floor plans based on information it collects during the capture process.

Camera makers are also borrowing the multi-lens concept from smartphones. Take Light, producer of the L16 camera. With a front that looks a bit like a black slice of Swiss cheese, the L16 combines 16 different cameras into one body, folding the optics to conserve space. Pressing the shutter fires the 10 best-suited of these cameras at once, and the processor intelligently combines what they capture to achieve very high resolution (above 50 megapixels) or high dynamic range images. And the L16, while thicker than most smartphones, can fit into a coat pocket.
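Light's merge algorithm is proprietary, but one way to picture combining differently exposed captures into a high-dynamic-range result is exposure fusion: weight each pixel by how far it sits from clipping, then blend. A toy sketch under that assumption, with pixel values normalized to [0, 1]:

```python
import numpy as np

def fuse_exposures(frames):
    """Blend differently exposed frames, favoring well-exposed mid-tones.

    A toy exposure-fusion scheme for illustration only, not Light's
    actual multi-camera merge.
    """
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    weights = 1.0 - 2.0 * np.abs(stack - 0.5) + 1e-6   # mid-tones get high weight
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)

dark = np.full((2, 2), 0.1)     # underexposed capture: shadows intact
bright = np.full((2, 2), 0.95)  # nearly clipped capture: highlights blown
fused = fuse_exposures([dark, bright])
```

Because the nearly clipped frame gets a low weight everywhere, the fused result leans toward the capture that retained usable detail, which is the basic bargain any HDR merge makes.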

Given the novelty of its approach and its $2,000 price tag, though, Light is straddling a line between casual and advanced photographers. Rajiv Laroia, CTO and co-founder of Light, has noted that “you end up in no man’s land” trying to accommodate both market segments simultaneously. That prompted Light to consider offering two different apps to control its camera: one emphasizing ease of use and another laying out complete customizability. Even so, the camera’s reliance on a touchscreen back and easily smearable face will likely limit its appeal to pros.

The next frontier of photography may extend beyond two dimensions, because light field technology is able to capture the positional depth of different objects in an image. The technology’s original trick was enabling the change of focus after a photo was taken, a feature that has trickled down to smartphones. Today, though, companies such as Lytro and Otoy are building commercial devices that capture real-world details to prepare scenes for photorealistic mixed reality. Applications include entertainment, architecture and visualization.

As with VR imaging, fully experiencing these immersive environments will require moving beyond today’s flat screens. One company pioneering light field technology on the display side is Avegant, which has demonstrated prototypes of a mixed reality headset due out next year. The technology allows the company to address multiple focal points in augmented reality so that one can view objects from a very close distance, cupping them in one’s hands, for example.

But back in the two-dimensional world, the shorter-term impact on pros may be increasingly smart cameras that, via machine learning and neural networks, become ever more savvy about such traditionally hands-on choices as exposure and composition. Already, the most recent flagship smartphone from Huawei uses object recognition to automatically identify the optimal “scene mode” for a photo.

Longtime photo industry observer and PhotoShelter co-founder and chairman Allen Murabayashi has argued that clients, aided by AI-equipped cameras, will increasingly be able to create professional-level work all by themselves. In such a world, what would set professionals apart would be their knowledge of which technology to use when, and their ability to handle them all. “You might consider yourself an artist, but to your customers you are a service provider,” Murabayashi said at a recent PhotoPlus Expo panel.

The path to the sentient camera apocalypse, though, need not be a sad or impoverishing one. Malcolm asked rhetorically, “Why care?” He answered, “Because it’s a heck of a lot of fun. And if that doesn’t do it for you, changing a consumer’s point of view on it is a nice way to go. If you’re in business, chase the money.”


Ross Rubin, who moderated a panel on advanced photography trends at PhotoPlus Expo, is principal analyst at Reticle Research.

