Google Open-Sources AI Tech Behind Pixel 2’s Portrait Mode

Arguably one of the best features of the Google Pixel 2 and 2 XL is Google's implementation of 'Portrait mode', and the technology that underpins it has now been made open source through the company's TensorFlow AI framework. Google made the announcement on Monday via its Research Blog, noting that it hopes others will use the release to improve the technology further and to find new use cases for it.

One of the reasons Google's Portrait mode has proved so popular is that it does not rely on a secondary camera to create the bokeh-like effect. Portrait mode itself is no longer uncommon, with smartphones from many different manufacturers offering some form of the feature; the difference is that those implementations typically rely on dual-camera setups. Because Google's version works at the software level, it is arguably capable of producing even better results than the more hardware-focused alternatives. As part of the announcement on the open-sourcing of "DeepLab-v3+", Google also explained in a little more detail how the technology achieves the results it does.

While machine learning is at the heart of the 'magic', Google explains that its version of 'semantic image segmentation', the part that is now open source, is the real key. Semantic segmentation in general refers to breaking something down not just into parts, but into parts grouped by some form of meaning. The term usually applies to images, and Google's implementation performs that meaningful grouping at the pixel level, with artificial intelligence (AI) doing most of the heavy lifting. Each pixel is identified by the software and assigned a meaningful label (chair, kettle, person, cat, etc.), and the image is then essentially reconstructed with each collection of semantically grouped pixels either kept in focus or blurred. In other words, the Pixel 2 and 2 XL do not just identify the subject and blur what is around it; they identify each pixel of the subject. An added benefit of this approach is that the grouping does not have to be at the individual level, as depicted in the image below, which highlights how a 'group of people' can also be treated as the singular subject when it comes to background blurring.
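The per-pixel pipeline described above can be sketched in code: given a semantic label map that assigns every pixel a class (in Google's pipeline this would come from the DeepLab-v3+ network; here the map is hand-made for illustration), keep the subject's pixels sharp and composite everything else from a blurred copy of the image. This is a minimal NumPy sketch of the idea, not Google's actual implementation, and the simple box blur stands in for a real lens-style bokeh:

```python
import numpy as np

def box_blur(img, k=3):
    """Naive box blur: average each pixel over a k x k neighborhood.

    Edge padding keeps the output the same shape as the input.
    """
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def portrait_composite(image, labels, subject_label):
    """Portrait-mode style composite from a per-pixel semantic label map.

    Pixels whose label matches `subject_label` are kept sharp; every
    other pixel is taken from the blurred copy of the image.
    """
    blurred = box_blur(image)
    mask = labels == subject_label  # per-pixel semantic mask
    return np.where(mask, image, blurred)

# Toy 5x5 grayscale "photo" with a bright subject pixel in the center,
# and a hand-made label map marking that pixel as the subject (label 1).
image = np.zeros((5, 5))
image[2, 2] = 9.0
labels = np.zeros((5, 5), dtype=int)
labels[2, 2] = 1

result = portrait_composite(image, labels, subject_label=1)
```

Because the mask covers whole semantic groups rather than a single detected face, the same compositing step works unchanged when the label map marks several people as one "subject" region, which is exactly the group-blurring behavior described above.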

Gurjit Singh is a Microsoft Certified IT Professional. He likes to write about computer networking, WordPress, blogging tips, SEO, making money online, computer tips, and creating tech tutorials.
