Photography News

How Photography is Changing in the Age of Machine Learning

June 5, 2017

By Greg Scoblete

You’ve just returned from a shoot with a card full of images. You load them into your card reader and step away for some coffee. When you get back, you’re presented with your best 20. These 20 aren’t just the images that you properly exposed and had in focus—though they are properly exposed and in focus. They are your “best” images as you, subjectively, define that term. They are the pictures you would have selected, if you hadn’t been too busy wrestling with the coffeepot.

As you sit at your monitor, reviewing your 20 selects and watching the steam waft from your cup, you nod approvingly as they’re given a nice first-pass retouching. Not by you, of course; you’re too busy blowing the steam off your coffee and scrolling through your newsfeed. You may not have edited the photos yourself, but they’re your edits anyway. Your software knows how you like your skin tones, how much sharpening you apply to your landscapes, your tolerance for detail loss during noise reduction, and how to handle areas of high contrast.

While you were busy commenting on that inane thing someone posted to your newsfeed, your 20 selects were accurately keyworded and tagged. They were backed up, too. The one image most likely to trend on Instagram has been uploaded, complete with the most promising hashtags (yes, those are still around). The images that require more detailed, manual retouching have been queued up in a folder. As you finish your coffee, your phone rings. It’s a client who wants to grab lunch in ten minutes. Of course you’re free. You stand up and turn your back on a computer you’ve never actually touched.

This is one possible future of the photographic workflow, a workflow enabled by powerful artificial intelligence technologies only now beginning to flex their muscles.

Adobe Stock can automatically apply tags to images thanks to machine vision and neural networks.

A Machine, Learning

Artificial intelligence (AI) is a catch-all term that describes several interconnected software developments. There is neural networking: a computing approach that mimics the human brain’s interwoven tangle of neurons to process data. There’s machine learning: a process by which software iteratively improves its own performance over time. And there’s machine vision: the ability of software to identify the contents of an image. Together, these AI technologies have been unleashed on all sorts of industries, and photography is no exception.
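To make those three terms concrete, here is a deliberately tiny sketch in pure Python: a single artificial “neuron” (neural networking) that learns from labeled examples (machine learning) to decide whether a small image is mostly light or mostly dark (a trivial stand-in for machine vision). Everything here — the images, the learning rate, the task itself — is an invented illustration, not any vendor’s actual system.

```python
def train_neuron(images, labels, epochs=20, lr=0.1):
    """Perceptron-style training: nudge the weights whenever a prediction is wrong."""
    n = len(images[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for pixels, label in zip(images, labels):
            activation = sum(w * p for w, p in zip(weights, pixels)) + bias
            predicted = 1 if activation > 0 else 0
            error = label - predicted  # 0 if correct, +1 or -1 if wrong
            weights = [w + lr * error * p for w, p in zip(weights, pixels)]
            bias += lr * error
    return weights, bias

def classify(pixels, weights, bias):
    """Apply the trained neuron to a new image."""
    return 1 if sum(w * p for w, p in zip(weights, pixels)) + bias > 0 else 0

# Four 2x2 "images" as flat pixel lists (0 = black, 1 = white);
# label 1 means mostly light, 0 means mostly dark.
images = [[1, 1, 1, 0], [1, 1, 1, 1], [0, 0, 0, 1], [0, 0, 0, 0]]
labels = [1, 1, 0, 0]
w, b = train_neuron(images, labels)
print(classify([1, 0, 1, 1], w, b))  # a mostly light image -> 1
```

Real systems stack millions of such neurons in layers and train on millions of photos, but the core loop — predict, compare against the label, adjust — is the same.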

“We have done a lot of research into making image editing tasks more automated,” says Jon Brandt, Senior Principal Scientist at Adobe’s Imagination Lab. The company is using neural networks to power editing features such as content-aware cropping and Face-Aware Liquify in Photoshop and automatic image tagging in Lightroom. Drawing on the library of images uploaded to Adobe Stock and Behance, the company is training deep learning algorithms to recognize the contents of images. (If you actually read Adobe’s terms of service, you’d know you were part of this science experiment.)

For Adobe, AI is tasked with fixing “low-level problems” like image blur alongside “high-level photo curation tasks” like sorting through images to find the best ones, Brandt adds. Deep neural networks have helped Adobe more efficiently extract meaningful information from images. “The more we understand what’s in the image, the better we can work with it,” he says. Brandt sees AI as a way to help automate the “mundane” aspects of a photographer’s workflow. But, “if you love working in Photoshop, we won’t change that.”

The stock service-cum-photo community EyeEm is using AI to help pore over its collection of millions of images to find those that are commercially viable and esthetically pleasing, says Ramzi Rizk, EyeEm’s Chief Technology Officer. In part, that involves accurately tagging and categorizing images in its database, so that they’re easily searchable. But it also involves a more subjective judgement call as to whether an image is esthetically worthy of a high search ranking. “In the very early days, we used very simple data, like location and EXIF metadata,” alongside human curators to organize the EyeEm collection, Rizk says. “As deep learning developed, we decided we needed to focus there.”

EyeEm’s algorithm, dubbed EyeEm Vision, is trained using a combination of data generated by its photo service and by human curators. One goal, Rizk says, is to find photographs that might not be popular on the service but that, by virtue of their esthetic merits, deserve to be discovered. At the end of last year, EyeEm devoted an issue of its magazine to an intriguing experiment, pitting the algorithm’s photo choices against those of its community and expert photo curators. Some of those results are republished here. Over time, Rizk sees EyeEm Vision becoming smart enough to learn the individual preferences of a user or a brand searching for images, so that search results are even more targeted.

“People will say, ‘Well what if you miss a photo in machine curation?’” Rizk says. “But human curators are always missing things.”

Such intelligent curation isn’t limited to stock services sitting on tens of millions of images. Picturio recently introduced software that aims to free photographers from the grunt work of image culling. Using a neural network and computer vision, it scans through RAW images, analyzes their contents, groups similar photos together and recommends the keepers. The criteria by which this decision is made are a combination of objective factors such as histogram data, compositional elements and focus as well as some “secret sauce” judgement calls, says company co-founder Daniel Szollosi.
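The objective side of that scoring can be sketched in a few lines. The sketch below scores a frame on two of the factors Szollosi names, histogram (exposure) and focus, and blends them into a single keeper score. The specific formulas, weights and thresholds are invented for illustration; they are not Picturio’s actual method, and the “secret sauce” is, by definition, absent.

```python
def exposure_score(pixels):
    """Penalize histograms piled up at the extremes (blown highlights, crushed shadows)."""
    clipped = sum(1 for p in pixels if p < 5 or p > 250)
    return 1.0 - clipped / len(pixels)

def sharpness_score(pixels):
    """Crude focus proxy: mean absolute difference between neighboring pixels."""
    diffs = [abs(a - b) for a, b in zip(pixels, pixels[1:])]
    return min(1.0, sum(diffs) / len(diffs) / 50)

def keeper_score(pixels, w_exposure=0.5, w_sharpness=0.5):
    """Blend the objective factors into one ranking number."""
    return w_exposure * exposure_score(pixels) + w_sharpness * sharpness_score(pixels)

# A flat, soft gray frame vs. a well-exposed, contrasty one (toy 1-D "images",
# pixel values 0-255).
soft = [128] * 100
crisp = [40, 200] * 50
print(keeper_score(soft) < keeper_score(crisp))  # True: the crisp frame ranks higher
```

A production system would compute far richer features (face detection, rule-of-thirds composition, subject isolation) and learn the weights rather than hand-pick them, but the shape of the pipeline — per-factor scores combined into a ranking — is the same.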

Using machine learning, the Picturio system takes note of how the photographer on the other end responds to its judgements: if the photographer likes the recommended keepers, the software knows it’s on the right track. If the photographer rejects the selects, Picturio’s algorithm incorporates that feedback to make course corrections. Over time, Picturio gets better at learning which photos you’ll want to keep, and which you’ll want to discard, Szollosi says.
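That feedback loop can be illustrated with a simple online update: when the photographer keeps a photo the system scored low, nudge the feature weights toward that photo’s features so similar shots rank higher next time. The feature names and update rule below are assumptions for the sketch, not Picturio’s algorithm.

```python
def score(weights, features):
    """Rank a photo by a weighted sum of its feature values."""
    return sum(w * features.get(name, 0.0) for name, w in weights.items())

def update_weights(weights, features, kept, lr=0.05):
    """Move weights toward the features of kept photos, away from rejected ones."""
    direction = 1 if kept else -1
    return {name: w + lr * direction * features.get(name, 0.0)
            for name, w in weights.items()}

# Hypothetical feature vector: this photographer's style favors punchy,
# slightly soft images.
weights = {"sharpness": 0.5, "contrast": 0.5}
photo = {"sharpness": 0.2, "contrast": 0.9}

before = score(weights, photo)
# The photographer keeps this photo despite its modest sharpness...
weights = update_weights(weights, photo, kept=True)
# ...so similar photos now score higher than before.
print(score(weights, photo) > before)  # True
```

Run over thousands of keep/discard decisions, updates like this gradually shift the model from a generic notion of a “good” photo toward one photographer’s taste — which is the course correction Szollosi describes.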

This process of perpetual machine improvement raises an important question: what happens when the machine is smarter than the human? It’s not an academic question. The list of human talents now performed better by a machine is growing at an unprecedented pace. “Many people say this is just a paradigm shift that will actually lead to more job creation,” Rizk says. “I’m not totally sure I ascribe to that.”

A Post-Photographer Future?

You’re a creative director looking for images for your next campaign. The thought of calling a photographer briefly crosses your mind, but really, when was the last time you actually spoke to one? Instead, you open a browser window and describe your vision for the photograph to a service that produces it, pixel by pixel, from scratch. It looks like a photo, but no human ever shot it. It’s ready in a few minutes and didn’t cost you much, since your subscription to this particular photo service gives you 100 such images a month. If you’re unhappy with the photo, not to worry: A few more descriptions and tweaks nudge the final photo into shape.

This photo will be a big hit with your followers on social media, by the way. You know that, because you made sure to ask the machine to make it on-trend.

This future, too, is possible given the trend lines in the field, says Adobe’s Brandt. Indeed, AI has already proven that it can write songs and (gulp) articles from scratch. The same algorithms that can identify pixels and understand esthetics can be leveraged to build images without requiring a photographer at all. An enterprising developer recently trained Google’s open source TensorFlow AI software to create—what else?—cat photos using nothing more than extremely crude hand-drawn images.

But while Brandt sees AI as growing powerful enough to create photos without an actual photographer, he hastens to add that it’s not necessarily a trend that photographers should fear. “What we’re developing here are tools that are going to amplify and extend the expressive power of our creative community,” he says. “We’re going to see whole new areas of artistic expression unlocked by this technology.”

Rizk says that a photographer’s creativity will continue to be a differentiator even as machines grow more powerful. Machines can match trends, but can’t anticipate the next one, let alone start it, he says. “If you’re creating beautiful imagery, a machine will not replace you.” Especially in the near term, it’s still difficult for algorithms to parse emotionally descriptive phrases. “Software can create a photo of a banana on a white background, but it won’t be able to give you a sad family vacation at the beach,” he says. Besides, Rizk adds, if machines get good enough to take over the world, “Photos will be the least of our worries.”

This article is part of a larger series of trends and challenges in the photo industry. To read more articles in the series, check out The Ups/Downs of the Year Past and Year Ahead.

Related Articles:

This Is What Art Looks Like When Created by an Algorithm

How Adobe Will Improve the Selfie (with AI)