A processor dedicated to AI
On the market since 2017, Neural Processing Units (NPUs) are AI-dedicated chips that coexist on the same circuit as the processor, memory, graphics chip, wireless communication modules and, sometimes, sensors. Designed to accelerate artificial neural networks, NPUs make it possible to perform AI tasks much faster than traditional processors, with reduced energy consumption and without going through the cloud. Calculations are performed on the phone itself, which improves security and saves time.
This enables particularly interesting photography features such as semantic image segmentation, that is, the intelligent recognition of the elements of an image, each of which can then be enhanced by applying its own specific settings.
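To make the idea concrete, here is a minimal sketch (in Python, using a pretrained DeepLabV3 model from torchvision) that segments a photo and brightens only the pixels labelled as a person; the model choice, the file names and the 20 % adjustment are illustrative assumptions, not the pipeline of any particular phone.

```python
# Sketch: semantic segmentation used to apply per-region adjustments.
# Assumes a pretrained DeepLabV3 from torchvision; not any phone's actual pipeline.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()  # needs a recent torchvision

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")        # placeholder file name
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"][0]                   # (classes, H, W)
labels = logits.argmax(0)                             # per-pixel class index

# Class 15 is "person" in the PASCAL VOC labelling used by this model.
person_mask = (labels == 15).float()

# Apply a setting to that region only: brighten people by 20 %.
pixels = transforms.ToTensor()(image)
enhanced = (pixels * (1 + 0.2 * person_mask)).clamp(0, 1)
transforms.ToPILImage()(enhanced).save("enhanced.jpg")
```

The same mask-then-adjust pattern extends to skies, skin, foliage and any other region the network can recognise.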
Today, NPUs equip the most recent flagship devices from the major manufacturers.
Seeing in the dark thanks to machine learning
Introduced in 2018 with the Pixel 3 camera, Google’s Night Sight mode adapts to varying night-time lighting conditions and produces a natural result, without a flash or a tripod, thanks to machine learning. In practice, when the Pixel is held steady and there is only slight movement in the scene, Night Sight increases the exposure time so as to capture as much light as possible and limit “noise”.
If the phone or the subject moves, it shortens the exposure time and takes several shots that are dark but sharp, which it then combines into a single bright, sharp photo.
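The merging principle can be illustrated with a deliberately crude sketch: averaging several short, dark exposures suppresses random sensor noise, after which a digital gain restores brightness. Real night modes also align frames and apply tone mapping, which this Python example (with placeholder file names and an arbitrary gain) leaves out.

```python
# Very simplified illustration of multi-frame merging: averaging N short,
# dark exposures reduces random noise (roughly by sqrt(N)), then a digital
# gain raises brightness. Alignment and tone mapping are omitted.
import cv2
import numpy as np

frames = [cv2.imread(f"dark_{i}.jpg").astype(np.float32) for i in range(8)]

# Average the burst to suppress random sensor noise.
merged = np.mean(frames, axis=0)

# Compensate for the short exposures with a simple gain.
gain = 3.0
bright = np.clip(merged * gain, 0, 255).astype(np.uint8)

cv2.imwrite("night_result.jpg", bright)
```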
Rescuing bad shots: AI in post-production
If, despite all these tools, the photographer still takes a bad shot, they can turn to editing software based on machine learning. For several years, the company Skylum has been democratising AI-based photo editing. Its Luminar software simplifies complex and laborious tasks (such as skin retouching or sky replacement), while Photolemur enhances images automatically, without any manual input.
Although researchers have been working on super-resolution algorithms for several years, some solutions, such as Let’s Enhance or the ML Super Resolution feature in Pixelmator Pro, are now publicly available.
Most of these techniques rely on the same principle: a deep convolutional neural network (a type of artificial neural network designed specifically for analysing pixels and widely used in image recognition and processing) is trained on a dataset, enabling it to learn the characteristics of different types of objects.
When given a low-resolution image, it can then predict and add the extra pixels needed to produce a higher-resolution version.
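As a toy illustration of this principle, the following PyTorch sketch defines a small SRCNN-style network that upscales an image by interpolation and then learns to add the missing detail as a residual; the architecture, sizes and random stand-in data are assumptions for the example, not the model behind any of the products mentioned.

```python
# Toy SRCNN-style network: learns to map upscaled low-res images to their
# high-res originals. Purely illustrative of the training principle.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),
        )

    def forward(self, low_res, scale=2):
        # Upscale with bicubic interpolation, then let the CNN add detail.
        upscaled = F.interpolate(low_res, scale_factor=scale,
                                 mode="bicubic", align_corners=False)
        return upscaled + self.features(upscaled)   # predict a residual

model = TinySRCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Training pairs come from a dataset of high-res patches and their downscaled
# versions; random tensors stand in for them here.
high_res = torch.rand(4, 3, 64, 64)
low_res = F.interpolate(high_res, scale_factor=0.5, mode="bicubic",
                        align_corners=False)

loss = F.mse_loss(model(low_res), high_res)
loss.backward()
optimizer.step()
```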
More specialised, the Sharpen AI software improves photo sharpness by tackling three types of blur: camera shake, motion blur, and the softness caused by the lens itself.
Using a deep learning algorithm trained on millions of images, it detects blurred areas and reconstructs the missing pixels to recreate the lost information, while limiting the creation of artefacts and the increase in noise in the rest of the image.
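Networks of this kind are typically trained on pairs built by synthetically blurring sharp photos; the sketch below shows one way such a pair can be generated with a motion-blur kernel (the kernel length, angle and file names are arbitrary illustrative values, not Sharpen AI's actual procedure).

```python
# Sketch of how blurred/sharp training pairs can be generated synthetically:
# a sharp photo is convolved with a motion-blur kernel, and a network is then
# trained to recover the original from the blurred copy.
import cv2
import numpy as np

def motion_blur(image, length=15, angle=0.0):
    # Build a line-shaped kernel simulating camera or subject movement.
    kernel = np.zeros((length, length), dtype=np.float32)
    kernel[length // 2, :] = 1.0 / length
    rotation = cv2.getRotationMatrix2D((length / 2, length / 2), angle, 1.0)
    kernel = cv2.warpAffine(kernel, rotation, (length, length))
    kernel /= kernel.sum()
    return cv2.filter2D(image, -1, kernel)

sharp = cv2.imread("sharp_photo.jpg")
blurred = motion_blur(sharp, length=21, angle=30.0)
# (blurred, sharp) now forms one training pair for a deblurring network.
cv2.imwrite("blurred_photo.jpg", blurred)
```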
AI against fake content
While AI makes it possible to produce ever more realistic altered images, the question arises of whether it can also detect such modifications, at a time when fake content is multiplying.
Adobe researchers, in partnership with a research team from UC Berkeley, have used a convolutional neural network to detect alterations made with Photoshop’s Liquify filter (with a success rate of 99 %, against 53 % for the human eye) and to restore the image to its original state.
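In very simplified terms, the detection side can be thought of as a classifier trained on original and warped image crops; the toy PyTorch sketch below illustrates only that idea, whereas the published model goes further and predicts the warping field itself in order to reverse it.

```python
# Toy binary classifier for "original vs. warped" image crops, included only
# to illustrate the detection idea; this is not the researchers' model, which
# also estimates the warping field so the alteration can be undone.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(128, 1),                       # logit: > 0 means "warped"
)

crops = torch.rand(8, 3, 128, 128)           # stand-in for face crops
labels = torch.randint(0, 2, (8, 1)).float() # 1 = warped, 0 = original

loss = nn.BCEWithLogitsLoss()(detector(crops), labels)
loss.backward()
```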
On the other side of the United States, researchers at NYU Tandon have developed an experimental technique, usable by a smartphone’s camera, for authenticating a photo or video without degrading image quality. Using an artificial neural network, they introduce artefacts into the image, creating a sort of “forgery-proof” digital signature that survives post-processing.