 

New Kodak sensor tech Q&A - Thursday, June 14, 2007



Kodak has provided the following in-depth Q&A with John Compton and John Hamilton, "technology troublemaker" and "algorithm agitator" (respectively), in which they discuss the new Kodak image sensor that was announced today. For mere mortals, this is probably way too much information; suffice it to say the before-and-after images shown here tell the story. But if you're nerdy enough, this is good readin'.

Color Filter Array 2.0

Have you ever noticed that digital cameras don't seem to work as well as you want under low light conditions? Images often come out dark, or noisy, or blurry - just because the sensor isn't sensitive enough.

Kodak has been working to improve sensor performance - so that image sensors can make better use of the light that comes through the lens. After several years of work (involving a whole team of Kodak scientists), that effort has paid off: earlier today, Kodak announced a new image sensor technology that addresses this need directly, giving image sensors a 2x to 4x increase in sensitivity - the equivalent of one to two photographic stops.
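(For reference: one photographic stop is a doubling of light, so a sensitivity gain converts to stops as log₂(gain) - a 2x gain is log₂2 = 1 stop, and a 4x gain is log₂4 = 2 stops.)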

There's a lot to cover in explaining how this really works. Thinking it through, it seemed that a good way to talk about it was to ask the inventors, John Compton and John Hamilton, a few questions. We just hope we asked some of the questions you may have wanted to ask.

[Image: Conventional Bayer Color Filter Array]

The Interview:

What did you set out to do when you began this work?

JC: We wanted to enhance the low-light performance of digital sensors, to see what it would take to get better sensitivity.

JH: For years the industry has been using the Bayer filter array, invented by Kodak researcher Bryce Bayer back in 1976 - we wanted to see if we could improve on that.

The Bayer filter is found in most of today's sensors. What's been keeping other designs out?

JH: The simplest answer is that the Bayer pattern works so well - it provides excellent color reproduction from a single image sensor. And while the overall pattern used has stayed the same, there have been some minor tweaks along the way. The green channel has been opened up a bit to get more photographic speed, and the processing software continues to be optimized for computational speed and color accuracy.

JC: Another thing that has helped has been the broad acceptance of this pattern in the industry. To support video, for example, most consumer cameras today use a hardware-accelerated path to process this data and retain fast frame rates. Because this hardware solution is based on the Bayer pattern, there is a lot of inertia not to displace this pattern gratuitously.

Do you have more than one pattern that you can use in this approach?

JH: Yes, we've developed several patterns, allowing us to adapt this approach to different sensor architectures and applications. For example, different patterns might have different levels of image processing associated with them - you might use a pattern with lower image processing requirements in an application where you don't have access to as powerful a processor as you might need for a different pattern.

[Image: Different patterns that can be used]

These new designs use both panchromatic¹ ("clear") and color filters. What is the advantage of adding panchromatic pixels to the sensor?

JC: The real advantage is that the panchromatic pixels are more sensitive, since they detect all wavelengths of visible light (rather than filtering light to detect color information).

JH: One helpful way to think about this is in terms of luminance and chrominance. In the original Bayer design, the green pixels are used to recover most of the luminance information from the image. Now, we are using panchromatic pixels - which are more sensitive than green pixels, because none of the photons get filtered out or wasted - to carry the luminance. This gives us a more sensitive luminance channel in the final image, which raises the sensitivity of the entire sensor.
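To make the sensitivity claim concrete, here is a rough Python sketch using illustrative numbers of our own (Kodak doesn't quote filter transmissions here): photon arrival is Poisson-distributed, so a pixel that detects more photons doesn't just get a brighter signal, it gets a better signal-to-noise ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

photons = 300        # photons reaching each pixel during the exposure
green_pass = 1 / 3   # assumed fraction of visible light a green filter transmits
pan_pass = 0.95      # assumed fraction a clear (panchromatic) filter transmits

# Photon arrival is Poisson, so a pixel's signal-to-noise ratio is
# roughly the square root of the photons it actually detects.
green = rng.poisson(photons * green_pass, 100_000)
pan = rng.poisson(photons * pan_pass, 100_000)

snr = lambda s: s.mean() / s.std()
print(f"green SNR ~ {snr(green):.1f}")  # ~10
print(f"pan   SNR ~ {snr(pan):.1f}")    # ~17: the pan pixel detects ~2.85x
                                        # the photons, about 1.5 stops more
```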

Composing an Image

You could say that a sensor system is composed of roughly three parts: a pattern of color filters, the hardwired photoreceptors, and software to interpolate the data and reconstruct the image. Which of these are modified in this new technology?

JH: Clearly the color filter pattern and the software interpolation are different with this approach. What's more, the arrangement of the photoreceptors can be changed, but that's not a requirement. Depending on the application and light levels, you may want to combine neighboring color pixels (of the same color) to match the more sensitive panchromatic pixels. While you can always do this after the entire sensor has been read out, you might want to do this by "binning²" pixels directly on the sensor - which sometimes requires different wiring of the photoreceptors.
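As a toy illustration of binning (done here in software on read-out values, not in the charge domain as on-sensor binning would be), a short numpy sketch that sums 2x2 blocks into single super-pixels:

```python
import numpy as np

def bin2x2(pixels: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block of same-color pixels into one super-pixel.

    Mimics the effect of binning: a ~4x larger signal, at half the
    resolution in each dimension. Assumes an even-sized, single-color plane.
    """
    h, w = pixels.shape
    return pixels.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

raw = np.arange(16, dtype=np.int64).reshape(4, 4)
print(bin2x2(raw))  # 2x2 output; each value is the sum of one 2x2 block
```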

Does this new technology work for both CCD and CMOS image sensors?

JC: Yes, it does - for the most part, we are only changing the color filter layer of the sensor, so we can use this broadly across both sensor technologies. CMOS does have some advantages, however, including some different ways to apply binning as well as the opportunity to include the new image processing algorithms directly on the image sensor itself.

Do sensors need to be completely redesigned to incorporate this new technology?

JC: The filter sets certainly need to be changed, and this has some implications for the underlying process. But it is pretty straightforward to implement a change from Bayer pattern sensors to these new designs.

It sounds like the software used to reconstruct these images is complex. What are some of the problems you needed to work through?

JH: The Bayer filter pattern has a very tight 2x2 repeat. So, for a red pixel, you're never more than two pixels away from another red pixel. One of the new patterns uses a panchromatic checkerboard, and on the complement of that checkerboard there is a pair of reds, a pair of greens, another pair of greens, and a pair of blues.
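Kodak doesn't publish the exact layouts in this Q&A, so the 4x4 tile below is only a hypothetical illustration of the idea: a panchromatic checkerboard with the paired colors John describes on its complement, next to the familiar 2x2 Bayer repeat.

```python
import numpy as np

# Conventional Bayer tile: a tight 2x2 repeat, so any color is at most
# two pixels from its nearest same-color neighbor.
BAYER = np.array([["G", "R"],
                  ["B", "G"]])

# Hypothetical pan-checkerboard tile (not necessarily Kodak's layout):
# "P" pixels form a checkerboard, and the complement carries a pair of
# reds, two pairs of greens, and a pair of blues over a 4x4 repeat.
PAN_CFA = np.array([["P", "G", "P", "R"],
                    ["G", "P", "R", "P"],
                    ["P", "B", "P", "G"],
                    ["B", "P", "G", "P"]])

def mosaic(tile: np.ndarray, h: int, w: int) -> np.ndarray:
    """Tile a CFA pattern out to a full h x w sensor."""
    th, tw = tile.shape
    return np.tile(tile, (h // th + 1, w // tw + 1))[:h, :w]

print(mosaic(BAYER, 4, 8))
print(mosaic(PAN_CFA, 8, 8))
```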

Finding the right color edge of something can be a challenge. You've got to tie the color edge to the pan image, which gives you a good idea of where that edge is. What you'd like to do is bring the color out to the edge, but keep it from going any further. If you hold to these edges, it's hard to do the noise cleaning, because that is done by averaging pixels that you expect to have about the same value. If you're not careful, you'll be averaging pixels on either side of the edge and you'll get what we call "color bleed." For instance, if you have skin next to blue jeans, you'll see a cyan halo on the hand.

And if you overclean the image, it looks like plastic, because it is just too smooth. So, it's hard to get the right amount of cleaning - reducing the noise while, at the same time, keeping the edge definition reasonably good. There's been a lot of work done on finding the best way to do that.
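Here is a deliberately simplified Python sketch of the idea (the production algorithms are certainly more sophisticated): smooth a chroma plane by averaging each pixel only with neighbors that the pan image says are on the same side of an edge, so the cleaning stops where the edge is.

```python
import numpy as np

def guided_clean(chroma: np.ndarray, pan: np.ndarray,
                 radius: int = 2, edge_thresh: float = 0.1) -> np.ndarray:
    """Toy pan-guided chroma denoiser.

    Averages each chroma pixel with neighbors whose pan values are close
    to its own, so smoothing never crosses an edge visible in the pan
    image - avoiding the "color bleed" halos described above.
    """
    h, w = chroma.shape
    out = np.empty_like(chroma, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            window = chroma[y0:y1, x0:x1]
            # keep only neighbors on the same side of any pan edge
            same_side = np.abs(pan[y0:y1, x0:x1] - pan[y, x]) < edge_thresh
            out[y, x] = window[same_side].mean()
    return out

# Example: a hard vertical edge in pan; chroma noise is smoothed per side.
pan = np.concatenate([np.zeros((8, 4)), np.ones((8, 4))], axis=1)
chroma = pan * 0.5 + np.random.default_rng(1).normal(0, 0.05, (8, 8))
print(guided_clean(chroma, pan).round(2))
```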

Do you get a more detailed image by using panchromatic pixels?

JH: Not really. Image detail comes primarily from the luminance channel of the image. In a Bayer pattern sensor, half of the total pixels are arranged in a green checkerboard and are used for luminance. In these new designs, half of the total pixels are arranged in a panchromatic checkerboard and used for luminance. We still have the same amount of luminance data - we're just getting it with higher sensitivity than before.

The color information comes along in a similar way. In a Bayer pattern, you have red and blue to help put the rest of the color together with the green record. In the new design, the red, blue and green help to put the color back together for the pan record. Instead of two chrominance channels (red minus green and blue minus green), we really have three (red minus pan, green minus pan and blue minus pan).
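In numpy terms - assuming, purely for illustration, that the four records have already been interpolated to full-resolution planes (the demosaicking step is omitted):

```python
import numpy as np

# Illustrative full-resolution planes; a real pipeline would first
# interpolate these from the sparse CFA samples.
rng = np.random.default_rng(2)
pan = rng.uniform(0.2, 0.9, (4, 4))
red, green, blue = (np.clip(pan + rng.normal(0, 0.05, pan.shape), 0, 1)
                    for _ in range(3))

# Three pan-referenced chrominance channels instead of Bayer's two:
cr, cg, cb = red - pan, green - pan, blue - pan

# Chrominance is low-frequency and can be cleaned aggressively; the
# fine detail rides on the high-SNR pan record and is added back here.
assert np.allclose(cr + pan, red)
assert np.allclose(cb + pan, blue)
```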

In what situations does this design offer the biggest improvements?

JH: In situations where you want more sensitivity to light. In a low-light situation, these new patterns will produce a lot less color noise than a Bayer pattern sensor. You can run the shutter faster, which gets rid of a lot of motion artifacts: it will cut down on camera shake and, if you're taking a picture of a moving object, there will be less blur. Both situations are illustrated in the following image pairs.

JC: Another way to think of this is that you have the same number of photons coming into the new sensor as you would with the Bayer pattern. It's just that the new filter arrays waste fewer of the photons since fewer of them end up absorbed in a color filter.

This new technology seems to work more like human vision - a combination of color pixels and panchromatic pixels, just like rods and cones in the eye.

JC: Actually, the human retina has the best color perception in the fovea, the small, tightly packed area in the center of the retina. That is where most of the cones are. Elsewhere in the eye you find mostly rods, which let you see in black & white - so your peripheral vision relies more on black and white. These sensor patterns are really different in that the color and the panchromatic pixels are distributed uniformly across the sensor.

JH: But in terms of luminance, the human visual system has better resolution than color acuity. We take advantage of that in JPEG, and we take advantage of that in NTSC. And now we're taking advantage of that in the design of image sensors as well. We see more shades of gray than we see different colors. We're also very adaptive to light - it's a logarithmic system, and that's how we can operate across so many different ranges of brightness.
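The JPEG trick John refers to is chroma subsampling - storing color at lower resolution than luminance. A minimal sketch of the 4:2:0-style idea:

```python
import numpy as np

def subsample_chroma(chroma: np.ndarray) -> np.ndarray:
    """Average each 2x2 block down to one sample (JPEG 4:2:0-style)."""
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample_chroma(small: np.ndarray) -> np.ndarray:
    """Nearest-neighbor upsample back to full resolution."""
    return small.repeat(2, axis=0).repeat(2, axis=1)

chroma = np.random.default_rng(3).uniform(size=(4, 4))
restored = upsample_chroma(subsample_chroma(chroma))
# Three quarters of the chroma samples are thrown away, yet images still
# look right because the eye resolves luminance far better than color.
print(np.abs(chroma - restored).mean())
```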

This sounds pretty cool - when will cameras be available that use sensors with this new technology?

JC: Samples of the first sensor with this technology should be available in the first quarter of 2008. Once that is available, some additional time will be needed by camera manufacturers to design, develop and manufacture a camera using this sensor. So we're hoping it's not too much longer after that.

So - what are you guys working on next?

JH & JC: Sorry - we can't answer just yet, but we've got plenty of ideas in the hopper. So stay tuned.

Footnotes:

1. Panchromatic - Simply refers to light with all colors, which is another way of saying white light.

2. Binning - Pixels work by converting incoming photons into electrons, which are accumulated in each pixel during the exposure (like holding a bucket under a hose for a period of time). After that, the electrons provide a signal for each pixel that can be read out (like measuring the depth of water in the bucket). Alternatively, the electrons from two or more pixels can be combined, or binned, to provide a larger signal to read out (like dumping the water from several buckets into a single bucket and then measuring the depth of water in that bucket). This binning provides a more robust signal, but obviously the contributions of the individual pixels (like the water in the individual buckets) cannot be determined after binning, so there is a loss of resolution.

