Google announced a new machine learning model that improves HDR image editing in Google Photos. The technology lets you make detailed edits to HDR images while preserving their quality.
Google's new machine learning model enables HDR editing in Google Photos by reconstructing the information needed to render standard dynamic range (SDR) image pixels as HDR pixels. On Pixel 8 and newer devices, the model is integrated into the Google Photos backend, so no matter how many adjustments are made to an HDR image, the newly generated image stays HDR.
At the heart of the method is estimating the HDR metadata values that are lost during editing, using supervised machine learning trained on a set of HDR images with all their metadata intact.
HDR burst or bracketed-exposure methods, such as the Pixel's HDR+ burst photography, retain picture detail in scenes shot across a wide range of brightness. For viewing on SDR displays, such images are compressed into the 8-bit depth of standard SDR video and image formats.
This dynamic range compression, also known as tone mapping, reduces the number of gray levels used to represent the image, and therefore reduces its contrast, understood here as the difference between the image's darkest and brightest values.
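The effect of tone mapping can be illustrated with a minimal sketch. The snippet below uses the classic Reinhard operator (`x / (1 + x)`) as a stand-in; the article does not specify which tone mapping algorithm the Pixel camera uses, so this is only an illustrative example of how a wide luminance range collapses into 8-bit codes.

```python
import numpy as np

def reinhard_tone_map(hdr: np.ndarray) -> np.ndarray:
    """Compress linear HDR luminance into [0, 1) with the Reinhard operator."""
    return hdr / (1.0 + hdr)

# Simulated HDR luminance values spanning a 10,000:1 range (linear units).
hdr = np.array([0.01, 0.5, 2.0, 16.0, 100.0])
sdr = reinhard_tone_map(hdr)
sdr_8bit = np.round(sdr * 255).astype(np.uint8)

# The huge input ratio collapses into nearby 8-bit codes, shrinking
# the gap between the darkest and brightest representable levels.
print(sdr_8bit)  # → [  3  85 170 240 252]
```

Note how the brightest two inputs, which differ by more than 6x in linear light, end up only 12 codes apart after compression.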
The Ultra HDR format embeds a second image called a Gain Map: a log-encoded image stored alongside the SDR image that specifies how much brighter each SDR pixel must be to reproduce the intended HDR version.
For instance, with Magic Eraser one can remove an unwanted object from the image by applying image inpainting to fill in the removed region. For an HDR image, if the carved-out area was originally bright, then the matching region in the Gain Map was bright, too.
Magic Eraser fills in the missing parts of the SDR image. To do the same for the Gain Map region removed during HDR image editing, Google developed another ML model. Given the original SDR image, the edited SDR image, and the original image's Gain Map, the model estimates a new Gain Map.
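How these inputs might fit together can be sketched as follows. The exact input representation of Google's model is not described in the article, so the channel layout below (original SDR, edited SDR, masked Gain Map, edit mask) is an assumption for illustration only.

```python
import numpy as np

H, W = 4, 6
rng = np.random.default_rng(0)

# Stand-ins for the real data: random values instead of actual photos.
original_sdr = rng.random((H, W, 3))
edited_sdr = original_sdr.copy()       # pixels outside the edit are unchanged
original_gain = rng.random((H, W))

# Region erased by the edit (e.g. Magic Eraser), as a boolean mask.
edit_mask = np.zeros((H, W), dtype=bool)
edit_mask[1:3, 2:5] = True

# The Gain Map in the edited region no longer matches the new pixels,
# so it is zeroed out before being fed to the reconstruction model.
masked_gain = np.where(edit_mask, 0.0, original_gain)

# One plausible way to assemble the model input: stack everything
# channel-wise (3 + 3 + 1 + 1 = 8 input channels).
model_input = np.concatenate([
    original_sdr,
    edited_sdr,
    masked_gain[..., None],
    edit_mask[..., None].astype(np.float32),
], axis=-1)
print(model_input.shape)  # → (4, 6, 8)
```

The model's job is then to predict a complete Gain Map, filling in the zeroed region consistently with the newly inpainted SDR pixels.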
Google said training the model required collecting several thousand HDR images: SDR images with Gain Maps and the image metadata needed for high-dynamic-range rendering. It captured photographs with the HDR+ burst photography pipeline and produced SDR and HDR image pairs using the dynamic range compression algorithm of the current Pixel camera.
Using this data, together with a dataset of random mask shapes, Google trained a lightweight image-to-image translation model that predicts the Gain Map from the edited SDR image and a masked version of the original Gain Map. The model is even versatile enough to generate a Gain Map from an SDR image alone, adapting existing imagery for HDR displays.
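A training pair for such a model could be built as sketched below. The mask family and the `make_training_pair` helper are hypothetical simplifications (the article only says "random mask shapes" were used, so axis-aligned rectangles here are an assumption): the input is the SDR image plus a Gain Map with a region hidden, and the target is the full original Gain Map.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_rect_mask(h: int, w: int) -> np.ndarray:
    """One simple family of random mask shapes: axis-aligned rectangles.
    (A simplification; the real mask distribution is not specified.)"""
    top, left = rng.integers(0, h // 2), rng.integers(0, w // 2)
    height = rng.integers(1, h - top + 1)
    width = rng.integers(1, w - left + 1)
    mask = np.zeros((h, w), dtype=bool)
    mask[top:top + height, left:left + width] = True
    return mask

def make_training_pair(sdr, gain_map):
    """Input: SDR image + Gain Map with a region hidden. Target: full Gain Map."""
    mask = random_rect_mask(*gain_map.shape)
    masked_gain = np.where(mask, 0.0, gain_map)
    return (sdr, masked_gain, mask), gain_map

# Stand-in data: random arrays instead of a real SDR image and Gain Map.
sdr = rng.random((8, 8, 3))
gain_map = rng.random((8, 8))
inputs, target = make_training_pair(sdr, gain_map)
```

Because the target is always the true Gain Map, the model learns to hallucinate plausible gain values inside the masked region while copying the unmasked values through, which is exactly the behavior needed after an inpainting edit.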
Google's Gain Map reconstruction model is under 1 MB and runs at interactive frame rates on mobile devices. It preserves HDR rendering during machine-learning-based image edits in Google Photos on Pixel 8 and later.
Most importantly, none of the ML editing features depend on custom Gain Map editing logic: the model works with any effect, even ones that have yet to be developed. Google shared several examples of its Gain Map reconstruction in action.