How many of those photos, pondered Machine Perception software engineers at Google Research, could be passed off to unsuspecting viewers as the work of professional landscape photographers? And could they devise a machine-learning (ML) algorithm to automatically select, from the hundreds of thousands of Google Street View photos, landscape shots that viewers would find impressive?
To explore how ML can learn subjective concepts, they devised an experimental deep-learning system for artistic content creation that automatically analysed about 40,000 landscape panoramas from Google Street View (from places like the Alps; Banff and Jasper National Parks in Canada; Big Sur in California; and Yellowstone National Park in the USA) and searched for what it considered the best composition. It then post-processed the selected photos to create “an aesthetically pleasing image.” The results were placed before professional photographers who, in a “Turing-test”-like experiment, found some of the photos rather impressive, some even approaching professional quality. Since this is a learning algorithm, it relied on “trained aesthetic filters” that let it “learn” good levels of saturation, HDR detail and composition.
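The pipeline described above (scan a panorama for candidate framings, score each with a learned aesthetic filter, keep the best, then post-process it) can be sketched roughly as follows. This is a hypothetical illustration, not Google’s system: the `aesthetic_score` function here is a crude colour-spread proxy standing in for their trained aesthetic model, and `enhance` is a toy saturation boost standing in for their learned post-processing.

```python
# Hedged sketch of a "virtual photographer" pipeline: slide a crop window
# across a panorama, score each crop with an aesthetic filter, and
# post-process the winner. All function names and scoring heuristics
# here are assumptions for illustration only.
import numpy as np

def candidate_crops(panorama, crop_w, crop_h, stride):
    """Yield (x, y, crop) windows sliding across the panorama."""
    h, w, _ = panorama.shape
    for y in range(0, h - crop_h + 1, stride):
        for x in range(0, w - crop_w + 1, stride):
            yield x, y, panorama[y:y + crop_h, x:x + crop_w]

def aesthetic_score(crop):
    """Toy stand-in for a trained aesthetic filter: reward colour spread."""
    return float(crop.max(axis=2).mean() - crop.min(axis=2).mean())

def enhance(crop, saturation_boost=1.2):
    """Toy post-processing: push channels away from their per-pixel mean."""
    mean = crop.mean(axis=2, keepdims=True)
    return np.clip(mean + (crop - mean) * saturation_boost, 0.0, 1.0)

def best_composition(panorama, crop_w=64, crop_h=64, stride=32):
    """Pick the highest-scoring crop and return its position and enhanced image."""
    x, y, crop = max(candidate_crops(panorama, crop_w, crop_h, stride),
                     key=lambda c: aesthetic_score(c[2]))
    return (x, y), enhance(crop)

# Demo on a random "panorama" (RGB values in [0, 1]):
rng = np.random.default_rng(0)
panorama = rng.random((128, 256, 3))
position, photo = best_composition(panorama)
print(position, photo.shape)
```

The real system differs in every particular (its filters are learned from data, and its edits include HDR-style tone adjustments), but the shape of the computation, generate candidates, score, select, enhance, is the same.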
You can view a compilation of selected photos in this showcase. If you see a photo you like, click on it to bring up a nearby Street View panorama. Would you have made the same decisions (composition, light, exposure) if you had been there holding the camera at that moment? There was a person (carrying the camera in a backpack, or on a car, bike or boat) who pointed a Google Street View camera at the very scene you are looking at. Perhaps they’ll need to be told to go out during the “golden hour?” Just kidding. The results are pretty impressive, and will get even better as the machine learns. Perhaps it will eventually take better photos than our pros? Artificial Intelligence, you say?