
Adobe research creates AI tool for transferring image styles between photographs

Users of Adobe's image editing software may gain a new creative option in the future, thanks to artificial intelligence research conducted by Adobe and Cornell University that can alter a photograph by transferring the style and other elements from a second source image.

The paper for Deep Photo Style Transfer describes the use of deep learning methods to analyze elements of a reference photo, acquiring information on the time of day, colors, weather conditions, and other attributes, reports The Next Web. This style can then be applied to a second image, changing its elements to resemble the first, such as turning a dusk cityscape into one that appears to have been captured in the middle of the day.

The researchers based their work on the earlier Neural Style algorithm, which performs a painterly style transfer via a neural network in a similar way. Applied to a target photograph, that algorithm transfers the reference style but introduces various distortions into the image, making it unsuitable for photorealistic style transfers.
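That painterly technique works by matching statistics of deep convolutional features: a content loss keeps the target image's overall structure, while a style loss compares Gram matrices of feature maps taken from a pretrained network. The sketch below is a minimal illustration of those two losses in Python with PyTorch, using dummy feature maps in place of a real VGG network; the shapes and function names are assumptions for demonstration, not the researchers' code.

# Minimal sketch of the content and style losses used in Gatys-style neural
# transfer (illustrative only; not the Deep Photo Style Transfer implementation).
import torch
import torch.nn.functional as F

def gram_matrix(features):
    # features: (batch, channels, height, width) feature map from a conv layer
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    # Channel-by-channel correlations capture the "style" of the image
    return flat @ flat.transpose(1, 2) / (c * h * w)

def style_loss(generated_feats, reference_feats):
    # Push the generated image's feature correlations toward the reference's
    return F.mse_loss(gram_matrix(generated_feats), gram_matrix(reference_feats))

def content_loss(generated_feats, content_feats):
    # Keep the generated image structurally close to the original photograph
    return F.mse_loss(generated_feats, content_feats)

# Dummy feature maps standing in for activations from a pretrained CNN
gen = torch.randn(1, 64, 128, 128)
ref = torch.randn(1, 64, 128, 128)
print(style_loss(gen, ref).item(), content_loss(gen, ref).item())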

The researchers fixed this by constraining the transformation to be locally affine in colorspace and applying that constraint as a custom layer that can be further adjusted. This approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, according to the paper.
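A locally affine transform in colorspace means that, within any small neighborhood of the image, output RGB values are a linear combination of the input RGB values plus an offset, which preserves edges and keeps the result looking like a photograph rather than a painting. The toy sketch below fits such a transform for a single patch with a least-squares solve; it is only an illustration of the constraint (the patch size, dummy data, and function names are invented here), not the paper's actual optimization, which enforces the property through a regularization term.

# Toy illustration of a locally affine colour mapping: for one small patch,
# fit a 3x4 affine matrix that maps input RGB values to output RGB values in
# a least-squares sense. A sketch of the constraint only, not the paper's method.
import numpy as np

def fit_affine_color_map(src_pixels, dst_pixels):
    # src_pixels, dst_pixels: (N, 3) arrays of RGB values from the same patch
    n = src_pixels.shape[0]
    src_aug = np.hstack([src_pixels, np.ones((n, 1))])             # add bias column
    affine, *_ = np.linalg.lstsq(src_aug, dst_pixels, rcond=None)  # (4, 3) matrix
    return affine

def apply_affine_color_map(src_pixels, affine):
    n = src_pixels.shape[0]
    return np.hstack([src_pixels, np.ones((n, 1))]) @ affine

# Dummy patch: 25 RGB pixels and a slightly cooler "stylized" version of them
rng = np.random.default_rng(0)
src = rng.random((25, 3))
dst = src * np.array([0.9, 1.0, 0.95]) + 0.02
affine = fit_affine_color_map(src, dst)
print(np.abs(apply_affine_color_map(src, affine) - dst).max())  # residual should be tiny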

The recently published paper does indicate Adobe is working hard to introduce artificial intelligence into its tools in the future, though the technique may not appear in Photoshop for a while, not until Adobe can perfect its capabilities. In the meantime, code from the research has been made available to download on GitHub, so interested users can try out the tool for themselves.

Adobe's existing tools already use machine learning and AI in a limited capacity, with the company announcing in November last year that it was introducing tools to automate tasks and provide extra assistance, powered by the Adobe Cloud Platform. For example, Adobe Sensei will help users with Stock Visual Search and Match Font, while the Liquify tool in Photoshop will be made face-aware.

The company has also teased the use of a voice-based assistant for basic image editing tasks. A proof of concept video showed a user cropping and flipping an image before posting it to Facebook, all by speaking to an iPad app.

Adobe believes it may be some time before such an assistant will be available for use by its customers.



12 Comments

randominternetperson 3101 comments · 8 Years

Pretty cool. Too bad they didn't use better examples though. I mean it's impressive that you can make a boring house look like it's been transported to an alien world with white grass and a brown sky, but how often is that called for? I'd rather see an example like described in the article "a dusk cityscape to one that appears to be an image captured in the middle of the day."

tmay 6456 comments · 11 Years

randominternetperson said:
Pretty cool. Too bad they didn't use better examples though. I mean it's impressive that you can make a boring house look like it's been transported to an alien world with white grass and a brown sky, but how often is that called for? I'd rather see an example like described in the article "a dusk cityscape to one that appears to be an image captured in the middle of the day."

Infrared photography is an acquired taste, so a simulation of it is too. That said, being able to create a "sky" for a photo taken on a clear day would be a useful tool. Still, you would need to be able to simulate cloud shadows to best mimic a real photo, which this tool doesn't do. You would likely need a depth map in the metadata of both the sky and landscape to be able to do that.

spice-boy 1450 comments · 8 Years

Am I the only one that thinks the "after" pictures are horrific?

polymnia 1080 comments · 15 Years

spice-boy said:
Am I the only one that thinks the "after" pictures are horrific?

I think you are. 

If you feel the Reference photos are horrific, you will likely hate the Result images. 

It's pretty safe to say that making a conventional image match a highly stylized image will work better than the other way around.

Marvin 15355 comments · 18 Years

spice-boy said:
Am I the only one that thinks the "after" pictures are horrific?

They look close to the reference. This would be an extreme example of how much they can change the source to become like a given reference. In real-world cases, people would just want to be matching composited elements into a scene, e.g. you crop a person out and paste them in somewhere and need the lighting/shadowing and color temperature to match the surroundings, or you have a series of images that you need to match as a group.

You'd be able to have the computer match portions of images to other images, something that would take hours to do manually. All these tools are just there to save time. They should apply AI to cropping too. Humans can easily see what an object is relative to a background, but computers can't, so humans have to slowly crop around the image. With object/shape recognition, the computer could do the job much more effectively.
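The kind of color matching described above, making a pasted element take on the color temperature of its new surroundings, can be roughly approximated by aligning simple per-channel statistics. The sketch below shows one generic way to do that; the function name and the mean/standard-deviation approach are only an illustration, not an Adobe feature or the paper's method.

# Rough sketch: shift a composited region's per-channel colour statistics
# toward those of its surroundings. Generic illustration only.
import numpy as np

def match_color_stats(element, surroundings, eps=1e-6):
    # element, surroundings: float arrays of shape (H, W, 3) with values in [0, 1]
    elem_mean = element.reshape(-1, 3).mean(axis=0)
    elem_std = element.reshape(-1, 3).std(axis=0)
    surr_mean = surroundings.reshape(-1, 3).mean(axis=0)
    surr_std = surroundings.reshape(-1, 3).std(axis=0)
    # Normalize the element's colours, then rescale to the surroundings' statistics
    matched = (element - elem_mean) / (elem_std + eps) * surr_std + surr_mean
    return np.clip(matched, 0.0, 1.0)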