Search engine and software giant Google has open-sourced its artificial intelligence-based ‘Semantic Image Segmentation’ technology – the technology used in the Pixel 2 and Pixel 2 XL portrait mode to achieve a shallow depth-of-field effect without the need for a secondary camera.
“Today, we are excited to announce the open-source release of our latest and best-performing semantic image segmentation model, DeepLab-v3+, implemented in TensorFlow. This release includes DeepLab-v3+ models built on top of a powerful convolutional neural network (CNN) backbone architecture for the most accurate results, intended for server-side deployment. As part of this release, we are additionally sharing our TensorFlow model training and evaluation code, as well as models already pre-trained on the Pascal VOC 2012 and Cityscapes benchmark semantic segmentation tasks,” stated a blog post shared by Google.
In the blog post, Google explained what the technology is and how it works. According to the post, semantic image segmentation assigns a semantic label, such as road, sky, person or dog, to every pixel in an image. These labels pinpoint the outline of objects, and thus impose much stricter localisation accuracy requirements than other visual entity recognition tasks such as image-level classification or bounding box-level detection.
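To illustrate that per-pixel labelling, here is a minimal NumPy sketch (not DeepLab's actual code; the class list and score values are made up): a segmentation network outputs a score per class for every pixel, and the predicted label map is simply the highest-scoring class at each pixel.

```python
import numpy as np

# Hypothetical class list for illustration only.
CLASSES = ["background", "road", "sky", "person", "dog"]

def label_map(logits: np.ndarray) -> np.ndarray:
    """logits: (H, W, num_classes) per-pixel class scores.
    Returns an (H, W) array with one class index per pixel."""
    return np.argmax(logits, axis=-1)

# A tiny 2x2 "image": each pixel's scores favour a different class.
logits = np.array([
    [[0.1, 0.9, 0.0, 0.0, 0.0], [0.0, 0.0, 0.8, 0.1, 0.1]],
    [[0.0, 0.1, 0.0, 0.7, 0.2], [0.9, 0.0, 0.0, 0.0, 0.1]],
])
labels = label_map(logits)
print([[CLASSES[i] for i in row] for row in labels.tolist()])
# [['road', 'sky'], ['person', 'background']]
```

A real model like DeepLab-v3+ does the same thing at full image resolution, which is why the per-pixel labels trace object outlines so precisely.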
At a time when most smartphone manufacturers are moving towards dual-camera setups for enhanced imaging, Google’s flagship smartphones – the Pixel 2 and Pixel 2 XL – took the radical step of sticking with a single 12.2-megapixel sensor, powered by algorithms that utilise the company’s semantic image segmentation technology. The technology allowed Google’s flagships to perform on par with, or even better than, other premium dual-camera smartphones.
With this public release, the search engine giant hopes to make it easier for other groups in academia and industry to reproduce and improve upon state-of-the-art systems, train models on new datasets, and envision new applications for this technology.