In 2017, Google revealed an automatic camera called "Google Clips". The camera is designed to watch a scene on its own and take pictures when it recognizes a worthwhile shot.
The Clips camera is built to capture candid moments of people and pets using on-device machine learning. Google put the camera on sale over the weekend at $249, and it is already listed as unavailable on Google's product store.
How does the Google Clips camera know what makes a fantastic and memorable photograph?
In a blog post, Josh Lovejoy, a UX designer at Google, explained how his team combined a "human-centred approach" with an "artificial intelligence powered product".
Google wants the Clips camera to avoid capturing repetitive photos of the same subject or frame and to seek out good ones instead. With human-centred machine learning, the camera is able to learn to take images that are meaningful to people.
To feed examples into the device's algorithms and teach it to identify the best images, Google called in professional photographers. It employed a documentary filmmaker, a photojournalist, and a fine arts photographer to gather visual data for training the neural network powering the Clips camera.
Josh Lovejoy wrote, “Together, we began gathering footage from people on the team and trying to answer the question, what makes a memorable moment?”
Notably, Google admits that training a camera like Clips can never be bug-free, regardless of how much data it is given. It will recognize a well-framed, well-focused shot but still miss some key moments.
However, Lovejoy writes in the blog post, "But it's precisely this fuzziness that makes ML so useful. It's what helps us craft dramatically more robust and dynamic 'if' statements, where we can design something to the effect of 'when something looks sort of like x, do y.'"
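Lovejoy's "fuzzy if statement" can be sketched as a confidence threshold: a model scores each frame, and the application acts when the score is high enough. The scoring function, threshold, and action names below are illustrative placeholders, not Google's actual Clips implementation.

```python
# A minimal sketch of the "fuzzy if" idea: instead of a hard-coded rule,
# a model emits a confidence score and the app acts when something
# "looks sort of like x". The threshold value here is hypothetical.

def looks_like_good_shot(frame_score: float, threshold: float = 0.8) -> bool:
    """Return True when the model's score suggests a keep-worthy frame."""
    return frame_score >= threshold

def maybe_capture(frame_score: float) -> str:
    # "when something looks sort of like x, do y"
    if looks_like_good_shot(frame_score):
        return "capture"
    return "skip"
```

In a real pipeline the score would come from a trained image model; the point is that the branching logic stays simple while the model supplies the fuzzy judgment.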
The blog post essentially describes how the company's UX engineers have applied a new toolkit to embed human-centred design into projects like the Clips camera. In an earlier post on Medium, Lovejoy explained the seven core principles behind human-centred machine learning.
It is also interesting to note that Elon Musk, chief executive of Tesla, SpaceX, and SolarCity, took a jibe at the Google Clips camera back in October, saying, "This doesn't even seem innocent."