Tuesday, June 23, 2020

How Your Smartphone’s Video Camera Does Its Magic

We take for granted that each year new smartphones will offer improvements in video recording quality. They continue to chew through the market for standalone video cameras from the bottom up. One obvious reason for that is improved hardware, such as better, higher-resolution sensors and faster processors. But a less obvious set of improvements is in the firmware and software algorithms that drive the camera and process the captured frames.

One company you’ve probably never heard of, Imint, is behind a lot of that nearly magical process. They supply video processing technology, in the form of their Vidhance product line, to most of the world’s largest smartphone makers. It’s embedded in hundreds of millions of devices, so it’s likely you’ve already been taking advantage of it. We had a chance to spend some time with Olof Björck, one of the Vidhance developers, and to use a pair of their demo phones and apps to dig deeper into what the company does and how it works.

Turbocharging Optical Image Stabilization for Video

Optical Image Stabilization (OIS) has proven effective at minimizing blur from slight camera motion in still images. Unfortunately, it is only a piece of the solution for video. First, the camera assembly can only move a limited amount, so at some point it has to stop or reset if it needs to correct repeatedly in the same direction. This can cause visible judder in the footage. Also, since the shifts implemented in smartphone OIS are typically only side to side and up and down, the corrections themselves introduce additional lens distortion.
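To make the travel-limit problem concrete, here is a toy model of an OIS actuator saturating as the camera keeps drifting in one direction. The function name, units, and 100-micron limit are all made up for illustration:

```python
def ois_residual_motion(camera_drift_um, travel_limit_um=100.0):
    """Toy model of OIS saturation. The lens shifts to cancel camera
    drift, but its travel is mechanically limited; once the drift
    exceeds that limit, the leftover motion reaches the sensor."""
    residual = []
    for drift in camera_drift_um:
        # The ideal counter-shift is -drift, clamped to the actuator's range.
        lens_shift = max(-travel_limit_um, min(travel_limit_um, -drift))
        residual.append(drift + lens_shift)  # zero until the lens saturates
    return residual

# A slow 200-micron drift: the first 100 um are fully cancelled, after
# which the residual grows and shows up on screen as judder.
print(ois_residual_motion([0, 50, 100, 150, 200]))  # [0, 0, 0, 50.0, 100.0]
```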

Vidhance provides an Electronic Image Stabilization (EIS) system that can work both with and without OIS hardware. When coupled with OIS, it receives the shift information, along with IMU data, from the phone and can use that as input to the EIS system as well as its distortion-correcting code. Even without OIS, Vidhance has some interesting tricks up its sleeve, such as an adaptive ISO that raises sensitivity so shutter speeds can stay fast when there is more motion or less light. You can see it at work in a sample video provided by Imint.
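Imint hasn't published its algorithm, but the idea behind motion-adaptive exposure can be sketched in a few lines. Everything here, from the function name to the quarter-degree blur budget, is an assumption for illustration rather than how Vidhance actually does it:

```python
def choose_exposure(angular_velocity_dps, base_shutter_s=1 / 30,
                    base_iso=100, max_iso=3200, blur_budget_deg=0.25):
    """Pick a shutter time that keeps motion blur under a fixed angular
    budget, then raise ISO to make up for the light lost to the shorter
    exposure. All constants are illustrative assumptions."""
    # Blur per frame is roughly angular velocity * shutter time, so cap the shutter.
    shutter_s = base_shutter_s
    if angular_velocity_dps > 0:
        shutter_s = min(base_shutter_s, blur_budget_deg / angular_velocity_dps)
    # Scale ISO up by the same factor the shutter was shortened.
    iso = min(max_iso, round(base_iso * base_shutter_s / shutter_s))
    return shutter_s, iso

# Holding still vs. panning quickly: the pan gets an 8x faster shutter.
print(choose_exposure(0.0))   # (0.0333..., 100)
print(choose_exposure(60.0))  # (0.00416..., 800)
```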

Horizon Correction Stabilization

One of the fun things about having a demo rig is that I could enable or modify specific video features and see their effect, unlike with a standard phone where the OEM might have already wired them up in a particular way. For example, one of the stabilizer modes offered by Vidhance is horizon correction. It does what you'd expect: it tries to keep what it estimates to be the horizon level while you record. I could see a visible difference in how stable shots were when filming from a moving vehicle. Horizon correction would be particularly helpful for my travel videos shot from the bow of a boat, or from a drone. I fully expect that this technology is turned on in some cases by many of the phones (and maybe even drones) that I own, but it was neat to see specifically what it can do.
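Vidhance's implementation isn't public, but a bare-bones version of the idea is easy to sketch: estimate the roll angle from the accelerometer's gravity reading and counter-rotate the frame. This sketch uses OpenCV, and the sign convention is a guess that would need checking on a real device:

```python
import math

import cv2  # pip install opencv-python
import numpy as np

def level_horizon(frame: np.ndarray, accel_x: float, accel_y: float) -> np.ndarray:
    """Counter-rotate a frame by the roll angle implied by gravity.

    While the phone isn't accelerating, the accelerometer reads gravity,
    so the in-plane tilt follows from its x/y components. A production
    system would fuse gyro data and track visual features as well."""
    roll_deg = math.degrees(math.atan2(accel_x, accel_y))
    height, width = frame.shape[:2]
    rotation = cv2.getRotationMatrix2D((width / 2, height / 2), -roll_deg, 1.0)
    return cv2.warpAffine(frame, rotation, (width, height))
```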

Multiple Cameras Also Present Unique Challenges for Recording Video

Like having OIS, having multiple cameras on a smartphone is a no-brainer advantage for still photography. However, for recording video, it can be a mixed blessing. If the phone switches between wide-angle, standard, and telephoto camera modules as a video is zoomed, there are plenty of potential issues. First, the cameras aren't aligned identically, so there is an offset that needs to be corrected even when they are perfectly calibrated. Next, they have different lenses with varying optical properties, so corrections need to be applied to ensure a consistent look. Also, they use different sensors, so tonal and color response won't be identical. Vidhance's Stabilizer technology addresses the first two of these by performing an automatic calibration as it works, fixing the alignment and lens distortion issues.
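Imint doesn't describe how that auto-calibration works, but a standard textbook approach is to match features in the two modules' overlapping views and estimate a homography between them. The OpenCV sketch below shows that generic technique, not Vidhance's method:

```python
import cv2
import numpy as np

def align_to_main_camera(aux_frame: np.ndarray, main_frame: np.ndarray) -> np.ndarray:
    """Estimate how the auxiliary module's view maps onto the main
    module's view, then warp it to match. A shipping pipeline would
    refine this calibration continuously rather than per frame."""
    orb = cv2.ORB_create(1000)
    keys_aux, desc_aux = orb.detectAndCompute(
        cv2.cvtColor(aux_frame, cv2.COLOR_BGR2GRAY), None)
    keys_main, desc_main = orb.detectAndCompute(
        cv2.cvtColor(main_frame, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(desc_aux, desc_main)
    src = np.float32([keys_aux[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([keys_main[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC discards mismatched features while fitting the mapping.
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    height, width = main_frame.shape[:2]
    return cv2.warpPerspective(aux_frame, homography, (width, height))
```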

Video Benefits from Temporal Noise Reduction

There is one way in which video has a natural advantage over capturing single still frames: it is possible to do additional noise reduction by comparing the contents of successive frames. In its simplest form, if the value of a pixel in one frame is slightly off from its value in the surrounding frames, it is likely to be noise and can be smoothed out. This technique, Temporal Noise Reduction (TNR), is so powerful that it is now even used in many still image cameras by capturing multiple frames for a single image. It's found in models ranging from RED at the high end to most flagship smartphones. The drawback to TNR is that it requires alignment and pixel mapping between frames, so it has only become practical to perform in-camera as processing power has improved.
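As a rough illustration of that simplest form (not Imint's implementation, which would have to align frames and handle subject motion far more carefully), here is a sketch that blends each pixel toward a running average only where the frame-to-frame difference looks like noise:

```python
import numpy as np

def temporal_noise_reduction(frames, motion_threshold=12.0, blend=0.5):
    """Naive TNR for a static camera: pixels that barely change between
    frames are smoothed toward a running average, while pixels that
    change a lot are treated as real motion and passed through
    untouched to avoid ghosting. Constants are illustrative."""
    average = frames[0].astype(np.float32)
    cleaned = []
    for frame in frames:
        frame = frame.astype(np.float32)
        is_noise = np.abs(frame - average) < motion_threshold
        average = np.where(is_noise, (1 - blend) * average + blend * frame, frame)
        cleaned.append(average.astype(np.uint8))
    return cleaned
```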

You can see the benefits of TNR in this sample clip. For example, compare the left and right sides of the door.

So How Does a Phone OEM Incorporate These Tools?

For those of you who are developers, this is probably obvious, but for the curious, here's a rough outline of the process of incorporating this or most other technologies into a smartphone. First, the OEM licenses a developer toolset (often called an SDK) that allows it to develop and simulate how Vidhance will perform in its device. Then it builds its software (either loaded at runtime or burned into the firmware) using a set of pre-built libraries for the OS and chipset it is using. Once that's done, a cycle of testing and iterative tuning typically takes place. For example, Imint explicitly markets services to help client OEMs improve their DxOMark Video scores.

OEMs can get Vidhance tools for their Android and Linux designs, pre-built for most popular chipsets

Pros and Cons of Real-time Video Stabilization

On the surface, it seems pretty obvious that performing video stabilization with access to device metadata like sensor, lens, and IMU data is the best approach. However, there is an argument that post-processing in the cloud can in many cases do a better job. First, the cloud has far more processing power, and second, it can use "future" frames to stabilize current frames, since it has access to the entire recording. That's one reason that Google's then-breakthrough video stabilization for YouTube was so impressive.
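The "future frames" advantage is easy to see if you treat the camera's trajectory as a one-dimensional signal. An offline smoother can center its averaging window on each frame, while a real-time filter can only look backward and so lags behind sudden movements. This comparison is purely illustrative:

```python
import numpy as np

def smooth_offline(camera_path: np.ndarray, radius: int = 15) -> np.ndarray:
    """Cloud-style smoothing: average each pose with the frames both
    before AND after it, since the whole clip is already recorded."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(camera_path, kernel, mode="same")

def smooth_realtime(camera_path: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """On-device smoothing: an exponential filter that only sees past
    frames, so it inevitably trails the true path after quick moves."""
    smoothed = np.empty_like(camera_path, dtype=float)
    smoothed[0] = camera_path[0]
    for i in range(1, len(camera_path)):
        smoothed[i] = (1 - alpha) * smoothed[i - 1] + alpha * camera_path[i]
    return smoothed
```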

Overall, as phones have become more powerful, and software like Vidhance has done a better job of integrating phone metadata in real time, the advantage continues to shift toward doing stabilization on the phone. And of course, you have access to the stabilized footage instantly for streaming or sharing, instead of having to wait for the cloud to process it.

Gimbals Still Matter

To put all these in-phone options in perspective, none of them eliminate the advantages of having a real gimbal. This is especially true if you’re filming from a moving platform like a boat or car. But gimbals add cost and complexity to your recording process, so it’s great that the industry keeps innovating on built-in solutions.

Source: ExtremeTech https://ift.tt/3fWeyw9
