Pixel binning technology helps the Galaxy S20 Ultra take amazing photos in the dark

Megapixels used to be so much more straightforward: a higher number meant the camera could capture more photo detail, as long as the scene had enough light. A concept called pixel binning, now spreading through flagship smartphones, is changing the traditional rules of imaging for the better.

In brief, pixel binning technology gives you a camera that captures plenty of detail when it's bright without becoming worthless when it's dark. The necessary hardware changes carry some tradeoffs and fascinating details, though, which is why we're taking a closer look.

You may have seen pixel binning on earlier smartphones like the LG G7 ThinQ in 2018 and the new LG V60 ThinQ, but this year it's finally catching on. Samsung, the largest Android phone maker, has built pixel binning into its flagship Galaxy S20 Ultra. Other high-end phones unveiled last week, including Huawei's P40 Pro and P40 Pro+ and Xiaomi's Mi 10 and Mi 10 Pro, also offer pixel binning.

What is pixel binning technology?

Pixel binning technology is designed to make an image sensor more adaptable to varying conditions. On today's latest phones, that involves changes both to the image sensor that gathers the light and to the image processing algorithms that turn the sensor's raw data into a photo or video. Pixel binning combines the sensor data from groups of pixels to produce, in turn, a smaller number of higher-quality pixels.
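The core idea of combining groups of pixels into fewer, higher-quality ones can be illustrated with a minimal sketch. The `bin_pixels` function and the toy 4×4 "sensor" below are hypothetical illustrations, not anything from a real camera pipeline, which does this in hardware:

```python
def bin_pixels(data, n):
    """Average each n x n block of pixel readings into one virtual pixel.

    `data` is a list of rows of brightness values; its height and width
    are assumed to be multiples of n.
    """
    h, w = len(data), len(data[0])
    return [
        [
            sum(data[r + i][c + j] for i in range(n) for j in range(n)) / (n * n)
            for c in range(0, w, n)
        ]
        for r in range(0, h, n)
    ]

# A toy 4x4 "sensor" of brightness readings (arbitrary units).
sensor = [[r * 4 + c for c in range(4)] for r in range(4)]

# 2x2 binning: 16 real pixels become 4 virtual ones.
print(bin_pixels(sensor, 2))  # [[2.5, 4.5], [10.5, 12.5]]
```

Averaging the readings this way trades resolution for a cleaner signal, since random noise tends to cancel out across the group.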

When there is plenty of light, the Galaxy S20 Ultra can take images at the image sensor's full resolution of 108 megapixels. But when it's dark, pixel binning lets the phone take a fine, albeit lower-resolution, picture; for the S20 Ultra that is 12 megapixels, a resolution that has prevailed on mobile phones for a few years now and is still plenty useful. "We wanted to give the flexibility of having a large number of pixels as well as having large pixel size," says Ian Huang, an LG product manager in the United States.

Is pixel binning a marketing ploy?

Of course not. It does let phone makers tout megapixel numbers that surpass even DSLRs and other professional-quality cameras, which is a little silly. But if you want to get the most out of your smartphone's camera, pixel binning technology offers some real benefits.

How does pixel binning technology work?

To fully understand pixel binning, you need to know what a digital camera's image sensor looks like. It is a silicon chip with a grid of millions of pixels that capture the light coming through the camera lens. Each pixel records only one colour: red, green or blue. But the colours are arranged in a particular checkerboard structure, called a Bayer pattern, that lets a digital camera reconstruct all three colour values for every pixel.

Pixel binning blends data on the image sensor from several tiny pixels into one bigger virtual pixel. That's especially helpful in lower light, where large pixels are much better at keeping image noise at bay.

The technology typically combines four real pixels into one virtual binned pixel. The S20 Ultra, though, combines a 3×3 group of actual pixels into one virtual pixel, a method the Korean company calls "nona binning". When it's bright out and you don't have to worry about noise, the camera can take a picture with no binning, using the image sensor's full resolution. That's useful for printing big photos or cropping in on an area of interest.

When is the right time to use pixel binning?

Most people will be satisfied with lower-resolution shots, and that is the default many reviewers recommend after testing the new Samsung Galaxy phones. The primary reason to use pixel binning is better low-light performance, but it also avoids the giant file sizes of full-resolution photos, which can swallow storage on your phone and online services like Google Photos. For instance, a shot might take 3.6MB with pixel binning at 12 megapixels but 24MB at 108 megapixels.
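A quick back-of-envelope check of those numbers (the exact file sizes are just the example figures quoted above, and real JPEG sizes vary with scene content):

```python
# S20 Ultra: 3x3 nona binning turns 108 megapixels into 12.
full_mp = 108
binned_mp = full_mp / 9
print(binned_mp)  # 12.0

# The quoted example files: 24MB at full resolution vs 3.6MB binned.
# File size grows with pixel count, though not perfectly linearly,
# since compression efficiency differs between the two modes.
size_ratio = 24 / 3.6
print(size_ratio)  # roughly 6.7x larger at 9x the pixel count
```

So a full-resolution shot carries nine times the pixels but, in this example, "only" about seven times the bytes.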

When it's bright, photo enthusiasts are more likely to want full resolution. That could help you identify distant birds or make more dramatic crops of faraway subjects. And if you're going to print big pictures, it's all about megapixels.

Can Samsung's 108-megapixel S20 Ultra shoot better than Sony's professional 61-megapixel A7R IV camera?

Not really. Alongside other factors like lenses and image processing, the size of each pixel on the image sensor also matters. There's a reason the Sony A7R IV costs $3,500 and only takes pictures, while the S20 Ultra costs $1,400 and can also run countless apps and make phone calls.

Image sensor pixels are squares measured in millionths of a meter, or microns; a human hair is about 100 microns wide. Each pixel on the S20 Ultra is 0.8 microns wide, and with Samsung's 3×3 binning a virtual pixel is 2.4 microns wide. A pixel on the Sony A7R IV is 3.8 microns wide. That means the Sony can capture about 2.5 times more light per pixel than the S20 Ultra in its 12-megapixel binning mode, and about 22 times more than in its full-resolution 108-megapixel mode, a significant advantage in image quality.
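Those ratios follow directly from the pixel widths, since the light a square pixel gathers scales with its area (width squared). A quick check using the sizes quoted above:

```python
# Pixel widths in microns, from the figures above.
s20_native = 0.8        # S20 Ultra, full 108MP resolution
s20_binned = 0.8 * 3    # 3x3 nona binning -> 2.4 micron virtual pixel
sony_a7r_iv = 3.8       # Sony A7R IV

# Light gathered scales with pixel area, i.e. width squared.
vs_binned = (sony_a7r_iv / s20_binned) ** 2
vs_native = (sony_a7r_iv / s20_native) ** 2

print(round(vs_binned, 1))  # 2.5
print(round(vs_native, 1))  # 22.6
```

That matches the roughly 2.5x and 22x figures in the text.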

However, phones are closing the image quality gap, mainly through computational photography techniques such as combining multiple frames into one shot. And smartphone image sensors are getting bigger and bigger to improve performance.

Why is pixel binning technology becoming prominent?

Miniaturization has made ever-smaller pixels possible. "What has propelled binning is this new trend of submicron pixels," those less than 1 micron wide, said Devang Patel, a senior marketing manager at OmniVision, a top manufacturer of image sensors. Packing in plenty of those pixels lets smartphone makers, desperate to make this year's phone stand out, boast about big megapixel counts and 8K video. Binning technology lets them do that without losing low-light sensitivity.

Do smartphones need special sensors for pixel binning technology?

The underlying sensor is the same, but a change in the colour filter array attached to it alters how the sensor collects red, green and blue light. Conventional Bayer checkerboards swap colours with each adjacent pixel, but binning sensors arrange same-colour pixels in 2×2, 3×3 or 4×4 groups. Pixel binning is possible without these groups, but it requires extra processing, and Patel said image quality suffers somewhat.
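The difference between the two filter layouts is easy to visualize. This is an illustrative sketch, and the `bayer` helper is a hypothetical name; `n=1` models the conventional pattern, while `n=2` or `n=3` models the same-colour groups binning sensors use:

```python
def bayer(h, w, n=1):
    """Colour filter letter at each pixel of an h x w sensor.

    n=1 gives the conventional Bayer checkerboard (RGGB);
    n>1 tiles each colour in n x n same-colour groups instead.
    """
    grid = []
    for r in range(h):
        row = []
        for c in range(w):
            rr, cc = (r // n) % 2, (c // n) % 2
            row.append("RG"[cc] if rr == 0 else "GB"[cc])
        grid.append(row)
    return grid

for row in bayer(4, 4):       # conventional: colours swap every pixel
    print("".join(row))       # RGRG / GBGB / RGRG / GBGB
for row in bayer(4, 4, n=2):  # binning layout: 2x2 same-colour groups
    print("".join(row))       # RRGG / RRGG / GGBB / GGBB
```

With the grouped layout, the four same-colour neighbours can be summed on-chip into one virtual pixel without any colour mixing.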

The Samsung Galaxy S20 Ultra uses 3×3 pixel binning. Huawei's P40 Pro models prioritize low-light performance with 4×4 pixel binning ("SedecimPixel," named for the 16:1 ratio), with simulated pixels an incredible 4.5 microns across. With the Mi 10, Xiaomi took a different 2×2 strategy, so pictures are 108 megapixels at full resolution and 27 megapixels with pixel binning.

Grouping pixels into larger simulated pixels works well with image processing hardware and software optimized for standard Bayer pattern data. But for high-resolution shots, the camera applies another stage of image processing (called demosaicking, if you're curious) that essentially reconstructs a finer-detail Bayer pattern from the sensor's coarser same-colour groups.

Is it possible to shoot raw photos with pixel binning?

Photo enthusiasts like the versatility and image quality of raw images, the unprocessed camera sensor data, typically presented as a DNG file. Pixel binning works just fine with them. But if you want the raw data at full resolution, sorry: LG and Samsung don't expose it, and raw processing tools like Adobe Lightroom assume a conventional Bayer pattern, not 2×2 or 3×3 patches of same-colour pixel groups.

What are the drawbacks of pixel binning technology?

According to Judd Heape, a senior director at smartphone chipmaker Qualcomm, a sensor with 12 actual megapixels will do a little better than 12 binned megapixels from a same-sized sensor. It would probably be cheaper, too. And when you shoot at high resolution, you need more image processing, which drains your battery faster.

For high-resolution shots, you will get somewhat better sharpness with a standard Bayer pattern than with a binning sensor's 2×2 or 3×3 same-colour pixel groups. But the problem isn't too bad. "With our algorithm, we're able to recover anywhere from 90% to 95% of the actual Bayer image quality," Patel said. Comparing photographs from the two techniques side by side, you could only tell the difference in complicated conditions like the fine lines of lab test scenes.

If you forget to switch your camera to pixel binning mode and take high-resolution shots in the dark, picture quality will suffer. LG tries to compensate in this case by blending several shots to minimize noise and increase dynamic range, Huang said.

Regular cameras can also use pixel binning technology, judging by some full-frame sensor designs from Sony, currently the top image sensor maker.

What is the potential future of pixel binning technology?

There are some potential innovations. 4×4 pixel binning may let phone makers push sensor resolutions well beyond 108 megapixels. Lower-end smartphones could also get the binning option.

Another approach to better photography is HDR, or high dynamic range, which captures a wider span of bright and dark tones. Small camera sensors struggle to record a wide tonal range, which is why companies such as Apple and Google computationally merge several shots to produce HDR photos.

Pixel binning also means more flexibility at the pixel level. In a 2×2 group, you could commit two pixels to a regular exposure, one to a darker exposure to capture highlights such as bright skies, and one to a brighter exposure to capture shadow details.
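A rough sketch of how such split exposures within one same-colour group might be merged. The `merge_exposures` function, the gain values and the saturation level of 255 are all hypothetical assumptions for illustration, not how any specific phone implements it:

```python
def merge_exposures(readings):
    """Merge readings from one same-colour pixel group.

    `readings` is a list of (value, exposure_gain) pairs. Each reading is
    normalized by its gain, clipped (saturated) pixels are discarded, and
    the survivors are averaged into one HDR value.
    """
    MAX = 255  # assumed sensor saturation level
    usable = [value / gain for value, gain in readings if value < MAX]
    return sum(usable) / len(usable) if usable else MAX

# A 2x2 group: two regular pixels, one high-gain (shadow) pixel that
# clipped on this bright subject, and one low-gain (highlight) pixel.
group = [(200, 1.0), (210, 1.0), (255, 4.0), (60, 0.25)]
print(merge_exposures(group))  # the clipped pixel is dropped, rest averaged
```

The low-gain pixel still carries usable information where the high-gain one blew out, which is exactly the point of varying exposure within the group.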

OmniVision plans autofocus enhancements, too. Today, each pixel has its own microlens designed to gather more light. But you can also place a single microlens over an entire 2×2, 3×3 or 4×4 group. Each pixel under the shared microlens gets a marginally different view of the scene depending on its position, and that disparity helps the camera judge focus distance. This should help your camera keep photo subjects sharp.
