How Google Photos uses machine learning to create customized albums

The cloud-based photo app will now automatically create albums showing the 'best' photos from a particular event, such as a vacation, while identifying famous landmarks that appear in the pictures.

A frame from a YouTube video shows the Google Photos app's automatically created albums of related photos, such as a family vacation. The app selects the "best" photos from a group and identifies landmarks using machine learning tools that "teach" a computer to recognize objects in a photo.

Google via YouTube

March 24, 2016

The Google Photos app allows users to back up their photos from multiple devices in a single location, while also collecting pictures of the same people or objects into organized groups.

Now, it will do those tasks fully automatically, collecting photos taken during a specific period, such as a vacation, into an album showing the “best” photos from the trip.

Those photos will also be tagged with well-known landmarks they depict and people who appear frequently, along with a Google map that roughly traces your journey.

“It’ll also add maps to show how far you traveled and location pins to remember where you went—because it's not always easy to recall the late-night diner you hit on your road trip, or which campsite you pitched the tent in when arriving after dark,” the company said in a blog post on Tuesday.

After that, the albums can be further customized: photos can be removed, captions added, or locations pinned on the map.

In order to identify the “best” photos, Google Photos uses machine learning, a subset of artificial intelligence in which a computer is trained to recognize patterns in images, including more subjective qualities such as whether a shot is in focus or well composed.

With Google Photos, this often means picking photos that feature a landmark as the “best” photo.
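As an illustration, the selection step can be thought of as scoring every photo and keeping the top few. The Python sketch below is a minimal stand-in, assuming hypothetical sharpness and composition scores; Google has not published the actual model Photos uses.

```python
# Hypothetical sketch: ranking photos by a learned "quality" score.
# The scoring function is a placeholder, but the selection logic is
# the same idea: score every photo, keep the top few for the album.

from dataclasses import dataclass

@dataclass
class Photo:
    filename: str
    sharpness: float      # 0..1, e.g., from a focus measure
    composition: float    # 0..1, e.g., from a learned aesthetics model

def quality_score(photo: Photo) -> float:
    # Stand-in for a trained model's output: a blend of an objective
    # signal (focus) and a subjective one (composition).
    return 0.4 * photo.sharpness + 0.6 * photo.composition

def pick_best(photos: list[Photo], n: int = 5) -> list[Photo]:
    # Keep the n highest-scoring photos.
    return sorted(photos, key=quality_score, reverse=True)[:n]

photos = [
    Photo("blurry_selfie.jpg", sharpness=0.2, composition=0.5),
    Photo("eiffel_tower.jpg", sharpness=0.9, composition=0.8),
    Photo("hotel_ceiling.jpg", sharpness=0.7, composition=0.1),
]
for p in pick_best(photos, n=2):
    print(p.filename)
```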

The company previously used its deep neural networks, computer systems designed to mimic the more abstract processes of how the human brain learns, for tasks such as automatically picking the highest-quality thumbnails from YouTube videos. More recently, Google and other companies have used these networks to translate languages, respond to emails, build detailed population maps, and even beat a world champion Go player.

The Google Photos update, which builds on options available in the photo app’s “Stories” feature, can also identify landmarks even when geotagging is disabled, making it possible to pinpoint a location that may have sat forgotten in a batch of photos.

“We can detect landmarks, we have 255,000 landmarks that we automatically recognize,” Google Photos product manager Francois de Halleux told Wired. “It’s a combination of both computer vision and geotags. Even without the geotags, we’d be able to recognize a landmark.”

The technology has become increasingly precise — it can distinguish between the mock-up Eiffel Tower in Las Vegas and the real thing, for example — though geotagging can help the system ensure a particular location is identified correctly.
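To make that combination concrete, here is a minimal Python sketch of the idea, not Google's implementation: a vision model proposes landmark candidates with confidence scores, and a geotag, when present, rules out candidates that are too far from where the photo was taken. The coordinates and scores are illustrative.

```python
# Hypothetical sketch of combining a vision model's landmark guesses
# with an optional geotag. The real system recognizes some 255,000
# landmarks; this toy catalog has two.

import math

# (latitude, longitude)
LANDMARKS = {
    "Eiffel Tower, Paris": (48.8584, 2.2945),
    "Paris Las Vegas replica": (36.1125, -115.1707),
}

def distance_km(a, b):
    # Rough great-circle distance (haversine formula).
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))

def identify(vision_scores, geotag=None, radius_km=50):
    # vision_scores: {landmark_name: model confidence}.
    # A geotag, if present, eliminates candidates too far away.
    candidates = {
        name: score for name, score in vision_scores.items()
        if geotag is None or distance_km(geotag, LANDMARKS[name]) < radius_km
    }
    return max(candidates, key=candidates.get) if candidates else None

# Vision alone leans toward the original; a geotag settles it outright.
scores = {"Eiffel Tower, Paris": 0.55, "Paris Las Vegas replica": 0.45}
print(identify(scores))                           # vision only
print(identify(scores, geotag=(36.11, -115.17)))  # geotag rules out Paris
```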

To create the albums, Google Photos also determines how long a vacation lasted, either by examining a camera’s metadata to see how long a user has been away from home, or by analyzing information such as flight receipts identified by its Google Now virtual assistant.
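One simple way to infer a trip’s span from camera data alone is to cluster photos by timestamp and treat a long gap between shots as the boundary between events. The Python sketch below illustrates that idea under assumed parameters; it is not Google’s method and omits signals like flight receipts.

```python
# Hypothetical sketch: segment photos into "trips" wherever the gaps
# between shots stay small; a long gap starts a new event.

from datetime import datetime, timedelta

def group_into_trips(timestamps, max_gap=timedelta(days=2)):
    # timestamps: sorted datetimes from camera EXIF data (at least one).
    trips, current = [], [timestamps[0]]
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > max_gap:
            trips.append(current)   # gap too long: close out this trip
            current = []
        current.append(cur)
    trips.append(current)
    return trips

shots = [datetime(2016, 3, d, h) for d, h in
         [(1, 9), (1, 14), (2, 10),        # trip one
          (20, 8), (20, 19), (21, 12)]]    # trip two, after a long gap
for trip in group_into_trips(shots):
    print(trip[0].date(), "->", trip[-1].date(), f"({len(trip)} photos)")
```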

But the company acknowledges that its photo-tagging features — which identify people based on how often they appear in a group of photos — could be considered creepy.

While the app’s Assistant will choose photos where people have their eyes open or are smiling, it doesn’t identify people by name automatically. Instead, users can tag a person by hand as “Mom” or “Grandpa,” in private tags that are used for organization purposes.

“We think it’s a way to get all the benefit of this face-grouping stuff without any of the creepiness or problems that might ensue from it,” Google Photos product lead David Lieb told Wired. “We think it’s the right place to be on that privacy spectrum.”
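The approach Mr. Lieb describes can be sketched roughly: faces are clustered by visual similarity without any names attached, and names enter only as private, user-supplied labels on a cluster. In the Python sketch below, the embeddings and similarity threshold are illustrative stand-ins for the output of a real face-recognition model.

```python
# Hypothetical sketch of face grouping with private labels: clusters
# are formed from similarity alone, and any names come only from the
# user, stored as private tags that are used just for organizing.

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return dot / norm

def group_faces(embeddings, threshold=0.9):
    # Greedy clustering: a face joins the first cluster whose
    # representative it resembles closely enough, else starts its own.
    clusters = []  # each cluster: {"rep": vector, "faces": [indices]}
    for i, emb in enumerate(embeddings):
        for cluster in clusters:
            if cosine(emb, cluster["rep"]) > threshold:
                cluster["faces"].append(i)
                break
        else:
            clusters.append({"rep": emb, "faces": [i]})
    return clusters

faces = [(0.9, 0.1), (0.88, 0.12), (0.1, 0.95)]  # two alike, one distinct
clusters = group_faces(faces)

# No cluster has a name until the user supplies one by hand, and the
# tag stays private to the account.
private_tags = {0: "Mom"}
for idx, cluster in enumerate(clusters):
    print(private_tags.get(idx, "unnamed"), cluster["faces"])
```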

Google’s Stories feature, which could create montages of related photos, is being replaced by the new Albums feature for both manually created and automatically generated collections of images.

The new updates are available in the Google Photos app for Android, Apple’s iOS, and on the web. Tapping the “Assistant” tab at the bottom of the screen brings up any new Albums the app has created.