This New Technology Aims to Eliminate Blurry Photos

Example of Prophesee’s Metavision Deblur technology: a woman wearing a red dress.

An advanced neuromorphic system designed to eliminate blur in smartphone photos is now ready for production, using a mix of AI and existing processing power to freeze action in ways current smartphones cannot.

Paris-based Prophesee partnered with Qualcomm to develop its Metavision Image Deblur technology through a sensor it co-developed with Sony, optimized for the Snapdragon 8 Gen 3, the chipset expected to be the most common in flagship Android smartphones throughout 2024. While smartphone cameras run computational photography software tackling everything from low-light shooting to noise reduction, freezing action has long been a challenge in mobile photography. Moving subjects end up entirely or partially blurred, with results only worsening in dimmer lighting conditions.

Prophesee developed Metavision to eliminate blur through hardware and software, using a dedicated sensor and AI-driven processing to synchronize the phone’s frame-based RGB sensor with the “event-based” Metavision sensor. Think of this as blur cancellation more than blur reduction. PetaPixel spoke with the company to find out how the new technology pulls it off.

Prophesee Metavision deblur example: a woman in a tan dress leaning backwards.

How the Image Deblurring Works

Certain phones have special modes for capturing motion, like “Action” or “Snapshot,” and phones have been able to shoot in burst for many years. While it’s possible to freeze a subject in place using these tools, there are no guarantees, and several factors come into play, like the subject’s depth in the scene, its movement speed, and the lighting conditions. Even in quick bursts, several frames may come out blurred, especially when shooting handheld and trying to follow a moving subject.

But to freeze a subject, especially when there isn’t a lot of light, the phone’s camera shortens the exposure time and ramps up ISO to compensate for the light lost to the faster shutter speed. It’s made even more challenging when using a telephoto lens, which typically sits in front of a smaller image sensor than the main wide camera on a phone’s camera module. You don’t have to pixel peep much with images like that to recognize how noisy they can be.
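
To put rough numbers on that tradeoff: every halving of the exposure time costs a stop of light, which the camera recovers by raising ISO, and with it, noise. Here is a quick sketch with illustrative values; the helper function and settings are assumptions for illustration, not from Prophesee or any specific phone:

```python
def equivalent_iso(base_iso: float, base_shutter: float,
                   new_shutter: float) -> float:
    """ISO needed at new_shutter to match the brightness of the base settings."""
    return base_iso * (base_shutter / new_shutter)

# Freezing action: moving from 1/60 s to 1/500 s in the same light.
iso = equivalent_iso(base_iso=100, base_shutter=1 / 60, new_shutter=1 / 500)
print(round(iso))  # ~833: roughly three extra stops of gain, hence the noise
```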

The Metavision sensor works independently of the other image sensors by focusing only on how pixels in a scene continuously change, regardless of speed. It embeds a logic core in every pixel, turning each one into a neuron that activates asynchronously whenever the light it senses changes beyond a threshold. Each activation is what Prophesee calls an “event,” and the dynamics of a scene dictate how many events the sensor records. This way, the phone’s frame-based RGB camera syncs with the Metavision event-based sensor to ward off blur, filling in any gaps in the image within microseconds.
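
Prophesee has not published the pixel circuit’s internals, but the behavior described above, per-pixel change detection firing asynchronous events, can be sketched in a few lines. The threshold and log-intensity model below are illustrative assumptions, not the actual design:

```python
import numpy as np

THRESHOLD = 0.15  # contrast change needed to trigger an event (assumed value)

def generate_events(prev_frame: np.ndarray, next_frame: np.ndarray):
    """Yield (y, x, polarity) events where log brightness changed enough."""
    eps = 1e-6  # avoid log(0) on black pixels
    delta = np.log(next_frame + eps) - np.log(prev_frame + eps)
    ys, xs = np.nonzero(np.abs(delta) > THRESHOLD)
    for y, x in zip(ys, xs):
        yield y, x, 1 if delta[y, x] > 0 else -1  # +1 brighter, -1 darker

# Static regions produce no events at all, which is why the sensor can
# track only the moving parts of the scene at microsecond timescales.
```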

Prophesee’s Metavision technology deblurring a mobile image: a woman in a teal dress next to a wireframe of her moving arms.

Prophesee sees this less as a band-aid applied after capturing an image and more as something applied as the person snaps the photo. According to the company, if smartphone manufacturers build Metavision into the camera pipeline, its sensor could identify how strong or fast the motion in the frame is, and the camera’s image sensor could take that into account when choosing what exposure, shutter speed, and ISO to apply. All of this would happen behind the scenes, and given the optimization with the Snapdragon chipset, it’s a plausible outcome.
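
As a hypothetical sketch of that pipeline integration, the event stream could be reduced to a motion score that steers the auto-exposure logic before the shot is taken. Every name and threshold here is an assumption for illustration, not Prophesee’s or Qualcomm’s actual API:

```python
def motion_score(event_count: int, pixels: int, window_s: float) -> float:
    """Events per pixel per second: a crude measure of scene motion."""
    return event_count / (pixels * window_s)

def choose_shutter(score: float) -> float:
    """Map motion strength to a shutter time in seconds (assumed thresholds)."""
    if score > 10.0:   # fast action: freeze it
        return 1 / 1000
    if score > 1.0:    # moderate movement
        return 1 / 250
    return 1 / 60      # near-static scene: favor light over speed

shutter = choose_shutter(motion_score(event_count=2_000_000,
                                      pixels=1_000_000, window_s=0.1))
print(shutter)  # 0.001 -> 1/1000 s chosen for a fast-moving subject
```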

Metavision may also account for movement from the photographer’s hands. However, Prophesee wouldn’t disclose specifics on how the phone’s image stabilization plays into this. Company reps did say they are working with “technology partners and potential customers” to find out how stabilization and deep learning techniques can combine to perform complete scene stabilization, something they hope to have locked down by the end of the year.

Setting Expectations

Since Metavision is a sensor unto itself, smartphone makers would need to build the hardware into the phone at the factory, so handsets already on the market running the Snapdragon 8 Gen 3 won’t be able to use the new technology. Considering how early in a product cycle phones are designed, it’s unclear which devices, if any, might include Prophesee’s tech. If any are on the 2024 product roadmap, no one is saying.

Phone makers adopting Metavision must also figure out how best to deploy it. It won’t be a one-size-fits-all approach across all the optics on a camera module: brands will have to pick one lens to optimize for Metavision, because both the optics and the resolution have to match for the deblurring to work properly. Prophesee could accommodate any focal length on the phone, be it the regular wide, ultra-wide, or telephoto, but as of now, it wouldn’t be possible to apply it to all lenses at once.

There must be an overlapping field of view to maximize the area seen from both sensors. If the event-based camera sees much more than the RGB camera, there could be a loss in resolution and detail. In that respect, Metavision’s sensor and camera work similarly to how some phones’ Time-of-Flight (ToF) sensors were once tied to portrait features specific to certain lenses.
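
The field-of-view constraint can be sanity-checked with the standard pinhole relation; the sensor widths and focal lengths below are illustrative, not any shipping module’s specifications:

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal field of view of a pinhole camera, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

rgb_fov = horizontal_fov_deg(sensor_width_mm=7.6, focal_length_mm=6.0)
event_fov = horizontal_fov_deg(sensor_width_mm=4.8, focal_length_mm=3.5)
print(f"RGB: {rgb_fov:.0f} deg, event: {event_fov:.0f} deg")
# If the event camera sees far more of the scene than the RGB camera,
# its events are spent on areas the photo never shows, wasting detail.
```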

As for software computation, phones often employ bracketing to capture scenes with high dynamic range, even more so in low-light conditions. Prophesee says it extensively tested Metavision handheld at light levels as low as 25 lux, and that it achieved “very good deblur” whenever there was at least enough light to illuminate part of the scene.

Prophesee Metavision deblur example: a woman in a teal dress moving her arms.

The company also tests Metavision in RAW, so the deblurring should work the same when capturing RAW images on phones with Prophesee’s sensor on board. There is a catch, though: the technology is designed around the default resolutions phones offer. Those are most commonly 12 or 12.5 megapixels, usually produced through pixel binning, so results may differ if you take the same action shots in full-resolution JPEGs. Although it’s the same image sensor in that scenario, the current Metavision sensor captures just 1 megapixel, and Prophesee explains that the best deblurring requires the gap between the two resolutions not to grow too large. For now, Metavision delivers its best results at resolution ratios up to roughly 12-16x.
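
In arithmetic form, that rule of thumb is simple: with a 1-megapixel event sensor, RGB outputs up to roughly 12-16 megapixels stay within the workable range. A minimal sketch follows; the limit constant comes from the article, while the helper itself is illustrative:

```python
MAX_RATIO = 16  # upper end of the 12-16x range Prophesee cites

def deblur_ratio_ok(rgb_megapixels: float, event_megapixels: float = 1.0) -> bool:
    """True if the RGB/event resolution ratio is within the workable range."""
    return rgb_megapixels / event_megapixels <= MAX_RATIO

print(deblur_ratio_ok(12.5))  # True: a typical pixel-binned default output
print(deblur_ratio_ok(50))    # False: a full-resolution 50 MP mode is too far off
```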

Moving Images

For now, the focus is on still images, though work is underway at Prophesee to develop video support as well. Video poses different challenges, but particularly with low-light footage, it may be possible to use the motion information captured by the RGB and event cameras to generate additional frames, “upframing” a clip from 30 to 60 frames per second.
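
Prophesee hasn’t detailed how that video support would work, but the frame-count bookkeeping of upframing is straightforward. Here is a naive sketch that blends neighboring frames; a real event-guided method would presumably warp pixels along the recorded motion rather than averaging:

```python
import numpy as np

def upframe_30_to_60(frames: list[np.ndarray]) -> list[np.ndarray]:
    """Double the frame rate by inserting one synthesized frame per pair."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        # Midpoint blend as a stand-in for motion-guided interpolation.
        out.append(((a.astype(np.float32) + b) / 2).astype(a.dtype))
    out.append(frames[-1])
    return out

clip = [np.full((4, 4), v, dtype=np.uint8) for v in (0, 100, 200)]
print(len(clip), "->", len(upframe_30_to_60(clip)))  # 3 -> 5 frames
```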

Prophesee also doesn’t have an exclusive agreement with any phone brand, so the technology is more likely to appear in several phones rather than just one. The company was mum about which brands it is talking to and whether it would pursue licensing or direct collaboration deals with phone makers. All that’s clear is that the image deblurring technology is ready to go for any phone running at least the latest Snapdragon chipset.


Image credits: Prophesee
