Neural networks for selfies on Google Pixel 6: accurate alpha matting in portrait mode
Image matting is the process of extracting a precise alpha matte that separates the foreground from the background of an image. It is not only a necessity for professional designers preparing advertising photos, but has also become popular entertainment for smartphone users. Want to send friends a selfie with the Eiffel Tower in the background while standing in a room with grandma's carpet? Easy with Google Pixel 6: a convolutional neural network built from a sequence of encoder-decoder blocks progressively estimates a high-quality alpha matte and preserves fine details, including individual hairs.
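To make the role of the alpha matte concrete, here is a minimal compositing sketch in Python/NumPy. The array names and shapes are assumptions for illustration only (the post does not include code); the formula is the standard alpha-compositing rule.

```python
# Minimal illustration (assumed helper, not from the post): background
# replacement with an alpha matte. alpha is 1.0 on the person, 0.0 on the
# background, with fractional values around fine details such as hair.
import numpy as np

def composite(foreground, background, alpha):
    """foreground, background: [H, W, 3] float arrays in [0, 1];
    alpha: [H, W, 1] matte predicted by the model."""
    return alpha * foreground + (1.0 - alpha) * background

# Example: paste a selfie onto an Eiffel Tower photo of the same size
# (random arrays stand in for real images here).
selfie = np.random.rand(1024, 768, 3)
eiffel = np.random.rand(1024, 768, 3)
alpha = np.random.rand(1024, 768, 1)
result = composite(selfie, eiffel, alpha)
```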
The input RGB image is concatenated with a coarse alpha matte (generated by a low-resolution people segmenter) and passed to the network. The new Portrait Matting model first uses a MobileNetV3 backbone with a shallow decoder of only a few layers to predict a refined low-resolution alpha matte. A shallow encoder-decoder and a series of residual blocks then process the high-resolution image together with the refined alpha matte from the previous step. Unlike the MobileNetV3 backbone, this shallow encoder-decoder relies more on low-level features, focusing on high-resolution structural detail to predict the final transparency value for each pixel. In this way the model refines the initial foreground alpha matte and accurately extracts very fine details such as strands of hair. The architecture runs efficiently on Pixel 6 using TensorFlow Lite, and the model is trained on a variety of datasets covering a wide range of skin tones and hairstyles.
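The post does not publish the model, so the Keras sketch below only illustrates the two-stage idea described above: a MobileNetV3 backbone with a shallow decoder on a downscaled 4-channel input (RGB + coarse alpha), followed by a shallow encoder-decoder with residual blocks that refines the matte at full resolution. All layer counts, filter sizes, resolutions, and names are assumptions made for illustration, not the actual Portrait Matting model.

```python
from tensorflow.keras import layers, Model, applications

def residual_block(x, filters):
    """Small residual block for the high-resolution refinement stage."""
    skip = x
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    return layers.ReLU()(layers.Add()([x, skip]))

def build_portrait_matting_sketch(low_res=(256, 256), high_res=(1024, 1024)):
    # Stage 1: MobileNetV3 backbone + shallow decoder on a downscaled
    # 4-channel input (RGB + coarse segmenter alpha) -> refined low-res alpha.
    lr_in = layers.Input(shape=(*low_res, 4))
    # Simplification: project 4 channels to 3 so the stock Keras backbone can
    # be reused; the real model consumes the combined input directly.
    x = layers.Conv2D(3, 1, padding="same")(lr_in)
    backbone = applications.MobileNetV3Small(
        input_shape=(*low_res, 3), include_top=False, weights=None)
    x = backbone(x)
    for filters in (128, 64, 32, 16, 8):  # shallow decoder: 8x8 back to 256x256
        x = layers.UpSampling2D(2, interpolation="bilinear")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    lr_alpha = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)

    # Stage 2: shallow encoder-decoder + residual blocks on the full-resolution
    # image concatenated with the upsampled stage-1 alpha -> final alpha matte.
    hr_in = layers.Input(shape=(*high_res, 3))
    lr_alpha_up = layers.Resizing(*high_res, interpolation="bilinear")(lr_alpha)
    y = layers.Concatenate()([hr_in, lr_alpha_up])
    y = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(y)
    for _ in range(3):
        y = residual_block(y, 16)
    y = layers.UpSampling2D(2, interpolation="bilinear")(y)
    final_alpha = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(y)
    return Model(inputs=[lr_in, hr_in], outputs=[lr_alpha, final_alpha])

model = build_portrait_matting_sketch()
model.summary()
```

A Keras model along these lines could then be converted for on-device inference with tf.lite.TFLiteConverter.from_keras_model, which matches the post's note that the network runs on Pixel 6 via TensorFlow Lite.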
https://ai.googleblog.com/2022/01/accurate-alpha-matting-for-portrait.html