Five Technologies Shaping Photography and Filmmaking Today

June 14, 2016

By Greg Scoblete

The market for VR content, including video games, is expected to hit $8.3 billion in 2020, according to Futuresource Consulting.

From immersive photography to smarter drones, intelligent software and improved image stabilization, these are the tech trends shaping photography and filmmaking.


It may be hard to speak of a technology “maturing” even before it reaches anything resembling mainstream adoption, but such is the gold rush mentality of today’s tech sector that all the dollars pouring into virtual reality and 360-degree imaging have propelled the technology forward at a ferocious pace.

Filmmakers and photographers looking to create virtual reality or 360-degree content now have a bona fide ecosystem of products to choose from. There are, for instance, lower-cost, two-dimensional cameras like Ricoh’s Theta S or Samsung’s Gear 360 that can record stills or video for sharing online or viewing on headsets like Google Cardboard. Then there are GoPro rigs that cleverly position multiple action cameras to capture a fully spherical view of the scene. Finally, and most recently, there is a new breed of “cinematic VR” camera, released last year, that not only records everything around it but does so at a higher resolution and with more dimensionality than the aforementioned products. In this latter camp are cameras from Lytro, OTOY and JauntVR.

As we reported in the December 2015 issue, it’s an open question whether virtual reality and 360-degree imaging will really catch on with the broader public, despite the proliferation of hardware to create and view this content. But there’s no question that the medium, and the technology behind it, are red hot right now.

DJI’s Phantom 4


Camera-equipped quadcopters certainly aren’t new, but despite their remarkably rapid advance into the world of photography and filmmaking, piloting them is still a bit tricky, especially if you need to choreograph a complicated scene or keep pace with a moving subject.

The next wave of flying cameras focuses less on the camera and more on the flying. These drones are a lot smarter, flying and executing photo/video missions with far less pilot interaction. Drones such as DJI’s Phantom 4 (reviewed in “Product Reviews,” on page 160) and the compact Lily come with sensors and software that enable them to fly effectively on their own while tracking a moving subject and avoiding obstacles. For action sports photographers, this means a drone that can keep pace with a rapidly moving athlete, freeing the operator to focus on composition.
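At its simplest, the “follow me” behavior these drones advertise boils down to a feedback loop: detect the subject in the frame, measure how far it has drifted from center, and steer to cancel the error. Here is a purely illustrative Python sketch of that loop; the frame size, gains and detector output are all assumptions, and this is not DJI’s or Lily’s actual flight code.

```python
# Toy sketch of vision-based subject tracking, the general idea behind
# "follow me" drone modes. Frame size and gains are made-up values; the
# subject position would come from an object detector in a real system.

FRAME_W, FRAME_H = 1280, 720   # assumed camera resolution
KP_YAW, KP_PITCH = 0.05, 0.05  # proportional gains (tuning values assumed)

def tracking_command(subject_x, subject_y):
    """Given the subject's pixel position in the frame, return
    (yaw, pitch) corrections that steer the camera so the subject
    drifts back toward frame center."""
    error_x = subject_x - FRAME_W / 2   # positive: subject right of center
    error_y = subject_y - FRAME_H / 2   # positive: subject below center
    return KP_YAW * error_x, KP_PITCH * error_y

# A subject at dead center needs no correction:
print(tracking_command(640, 360))  # -> (0.0, 0.0)
```

A real flight controller layers obstacle avoidance, altitude hold and smoothing on top of this, but the center-the-subject loop is the heart of automated tracking.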

Pond5’s AI-powered image recognition software.


One truism of the digital era is that we’re drowning in photographs. According to the research firm InfoTrends, the world snapped 1 trillion photos in 2015, up from 665 billion in 2013. We will capture another 1.14 trillion in 2016, if InfoTrends’ crystal ball is correct. Professional photographers keep elaborate, meticulous organizing regimens for their images, but even then it can be difficult to manage and find old images tucked away in digital archives. Meanwhile, stock libraries and online image services like Flickr have to organize their ballooning collections and make relevant images easily discoverable by potential buyers.

Enter artificial intelligence (AI). Thanks to advances in machine learning and image recognition, AI-powered software is increasingly playing the role of curator and archivist, poring over vast troves of photos, accurately identifying their contents and applying tags. Last year, services like Flickr, Eyefi and Lightroom (CC edition) all rolled out versions of auto-tagging that rely on sophisticated image recognition. This year, firms like Pond5 have followed suit. It’s not flawless: Flickr’s algorithm, for instance, made an unfortunate and racially tinged mistake when it launched. But the premise of machine learning is just that: the machine learns from its mistakes and improves over time.
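Conceptually, the auto-tagging step that sits on top of an image-recognition model is simple: keep only the labels whose confidence clears a threshold, strongest first. A minimal Python sketch, where the confidence scores stand in for the output of a hypothetical classifier (no real service’s model or threshold is shown here):

```python
# Sketch of the tagging step above an image classifier. The scores dict
# stands in for a model's per-label confidences; only labels that clear
# a (hypothetical) threshold become user-visible tags.

def auto_tag(scores, threshold=0.6):
    """Return labels whose confidence meets the threshold, strongest first."""
    keep = [(label, conf) for label, conf in scores.items() if conf >= threshold]
    keep.sort(key=lambda pair: pair[1], reverse=True)
    return [label for label, _ in keep]

# Hypothetical model output for one photo:
scores = {"beach": 0.97, "people": 0.81, "dog": 0.34, "car": 0.02}
print(auto_tag(scores))  # -> ['beach', 'people']
```

Raising the threshold trades recall for precision, which is exactly the knob services tune after public missteps like Flickr’s.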

In fact, Josh Haftel, senior product manager at Adobe, sees it as a step towards a future where AI-driven software apps work to alleviate some of the grunt work inherent in a creative workflow. In a conversation at the Consumer Electronics Show in January, Haftel speculated that Lightroom could eventually scan a new batch of images and suggest keepers based on both concrete qualities (is it in focus?) and possibly more subjective ones as well (is it beautiful?).

AI that passes aesthetic judgment on your photography isn’t speculative, either. In the fall of last year, EyeEm revealed that its EyeVision algorithm had been updated to suggest tags not simply by leveraging obvious visual cues (people, landscapes, etc.) but also by evaluating more abstract characteristics such as “carefree,” “exciting” or “sadness.”

Stabilization systems that leverage both sensor and lens are enabling greater degrees of shake correction, as illustrated here by Olympus.


When it comes to image stabilization, there have long been two camps among camera and lens makers: those who build it into the lens, using optical elements that move to counteract camera shake, and those who build it into the camera body, using a sensor that shifts to reduce blur. Both approaches have their virtues, but a new method, embraced by both Olympus and Panasonic, combines the two for even better results.

Olympus’ new 300mm f/4 image-stabilized lens, for instance, can work in tandem with the shifting sensor of either the OM-D E-M5 Mark II or the OM-D E-M1 to deliver up to 6 stops of image correction. Two of Panasonic’s newest mirrorless cameras (the GX8 and new GX85) also support dual image stabilization with a larger assortment of Lumix lenses. We wouldn’t be surprised to see this technology spread to other manufacturers as well.
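What 6 stops actually buys you is easy to quantify: each stop of stabilization doubles the slowest shutter speed you can reasonably hand-hold. A back-of-envelope sketch using the old 1/focal-length handholding rule (a guideline only, and ignoring crop-factor adjustments):

```python
# Back-of-envelope: what "6 stops of correction" means in practice.
# Each stop doubles the slowest hand-holdable shutter speed, starting
# from the 1/focal-length rule of thumb (a guideline, not a spec, and
# ignoring sensor crop factor for simplicity).

def slowest_handheld_shutter(focal_length_mm, stops_of_stabilization):
    unstabilized = 1.0 / focal_length_mm          # e.g. 1/300 sec. at 300mm
    return unstabilized * 2 ** stops_of_stabilization

# A 300mm lens with 6 stops of combined stabilization:
print(slowest_handheld_shutter(300, 6))  # ~0.21 sec., i.e. roughly 1/5 sec.
```

By that math, a lens that would demand roughly 1/300 sec. unstabilized can, with 6 stops of correction, be hand-held at around 1/5 sec., a 64x longer exposure.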



Cinema cameras, and many of the interchangeable-lens cameras used extensively for filmmaking, often tout their wide dynamic range. Until recently, however, all those tonal subtleties were lost as films made their way into people’s living rooms, onto TVs unable to render high dynamic range content. Now that’s changing. In late August 2015, the Consumer Technology Association (CTA) released a set of standards for high dynamic range TVs, complementing the work of other standards organizations that have sought to improve the home viewing experience beyond just stuffing TVs with more pixels.

Without diving too deeply into the minutiae: the new wave of HDR TVs hitting store shelves in 2016 brings improved color depth (10-bit), a wider color gamut (at least 90 percent of the DCI P3 color space) and better color sub-sampling (4:2:0 for compressed video) than older models, including older 4K sets. While many filmmakers have been told to invest in 4K recording gear to future-proof their work for higher-resolution TVs, it’s equally important to create master files that will translate well to HDR sets. The mantra in the HDR era, then, is not just more pixels, but better pixels.
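The jump from 8-bit to 10-bit color is easy to put in concrete terms: two extra bits per channel quadruple the number of tonal levels, which is what smooths the banding visible in 8-bit gradients and skies.

```python
# The arithmetic behind "better pixels": going from 8-bit to 10-bit
# color takes each channel from 256 to 1,024 tonal levels, and the full
# RGB space from about 16.8 million to 1.07 billion representable colors.

def tonal_levels(bits_per_channel):
    return 2 ** bits_per_channel

def total_colors(bits_per_channel, channels=3):
    return tonal_levels(bits_per_channel) ** channels

print(tonal_levels(8), total_colors(8))    # 256 16777216
print(tonal_levels(10), total_colors(10))  # 1024 1073741824
```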

For independent filmmakers, particularly those using DSLRs and mirrorless cameras, this may entail a change in workflow. Most of those cameras only record 8-bit footage internally and require external recorders to hit 10-bit. Paying attention to camera color profiles is also going to be more important. Sony’s new a7S II, for instance, has a color profile (S-Gamut3.cine) that maps well to the DCI P3 color space.

