Some time ago I posited that as technology continues to cram more and more pixels onto DSLR sensors, that growing capacity could be used not merely to make higher-resolution images, but to gather more dynamic range with each exposure. Instead of bracketing a given shot twice, a 24-megapixel sensor could do a virtual, simultaneous over-under bracket, rendering an 8-megapixel image with that added dynamic range delivered in a single click. I lack the technical know-how to say for sure whether this is possible, but to me it's an intriguing idea.
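To make the idea concrete, here's a toy numpy sketch of how paired photosites might be merged. Everything here is hypothetical: the function name, the assumption of linear sensor values in [0, 1], and the two-stop spacing. The gist is that the camera would keep the longer "over" reading where it isn't clipped and fall back to the brightened "under" reading where it is:

```python
import numpy as np

def merge_virtual_bracket(under, over, stops=2.0, clip=0.98):
    """Merge a simultaneous under/over 'bracket' into one HDR frame.

    `under` and `over` are linear sensor values in [0, 1] from
    neighboring photosites, with `over` exposed `stops` stops longer.
    Where the over-exposed reading is clipped, fall back to the
    brightened under-exposed reading.
    """
    gain = 2.0 ** stops  # 2 stops -> 4x
    return np.where(over < clip, over, under * gain)

# Toy values: the brightest site clips in the long exposure,
# so its value comes from the scaled short exposure instead.
under = np.array([0.01, 0.10, 0.25])  # short exposure
over = np.array([0.04, 0.40, 1.00])   # long exposure, 2 stops more
print(merge_virtual_bracket(under, over, stops=2.0))
```

Real HDR merging weights and blends readings rather than switching hard between them, but this is the basic arithmetic.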
Looking at graduated ND filters this morning, I was then wondering if something might be done to simulate this as sensor/CPU technology continues to grow more sophisticated. As you may know, a graduated neutral density filter allows more light to pass through one side of the filter than the other, and is commonly used on landscapes that contain a sky which is much brighter (or darker) than the foreground. With the darker portion of the ND filter rotated to the top of the image, less light from the sky passes through that area of the filter, allowing the lower portion to be properly exposed and avoiding the sky being blown out or the foreground being too dark. Without such a filter, we sometimes do two or more exposures to get multiple portions of the scene exposed properly, then combine those portions in Photoshop.
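For reference, the Photoshop version of that combine step boils down to mixing the two exposures through a vertical gradient mask. Here's a rough numpy sketch of the idea; the function and parameter names are my own invention, and the images are just flat gray stand-ins:

```python
import numpy as np

def blend_with_grad_mask(sky_exposure, ground_exposure,
                         transition=0.5, softness=0.2):
    """Blend two exposures of the same scene with a vertical
    gradient mask, like the Photoshop workflow described above.

    Rows near the top take the (darker) sky exposure; rows near
    the bottom take the (brighter) ground exposure; the band in
    between feathers linearly. Inputs are HxW float arrays.
    """
    h = sky_exposure.shape[0]
    rows = np.linspace(0.0, 1.0, h)
    # weight 0 -> sky exposure, weight 1 -> ground exposure
    weight = np.clip((rows - (transition - softness)) / (2 * softness),
                     0.0, 1.0)
    weight = weight[:, None]  # broadcast across columns
    return (1 - weight) * sky_exposure + weight * ground_exposure

sky = np.full((4, 2), 0.2)     # frame metered for the sky
ground = np.full((4, 2), 0.8)  # frame metered for the foreground
print(blend_with_grad_mask(sky, ground))
```

The top row comes out entirely from the sky frame, the bottom row entirely from the ground frame, with a smooth ramp between.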
But if a camera had a new kind of control over the behavior of its sensor, it might be possible to simulate a graduated ND filter via the LCD screen. One could, I suggest, pick a certain area of the sensor to become less sensitive to light. This might be done via ISO settings, so that if you were in a situation where an ND grad would be handy, you could set the camera to expose the bottom half of the sensor at ISO 800 and the top half at ISO 200, then line up your shot with an overlay on the LCD showing where that delineation would fall. Bam: a digital, in-camera ND grad filter.
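Since ISO is (roughly) linear in sensor gain, ISO 800 versus ISO 200 works out to two stops, or 4x the amplification. Here's a toy sketch of what such a split gain map might do to a scene with a bright sky and dark foreground; all names and numbers are hypothetical, and real ISO involves more than a simple multiplier:

```python
import numpy as np

def iso_gain_map(height, width, iso_top, iso_bottom, split_row):
    """Per-row gain map for the hypothetical split-ISO sensor.

    ISO is treated as linear in gain relative to a base of ISO 100,
    so ISO 800 means 8x and ISO 200 means 2x. Rows above `split_row`
    get `iso_top`'s gain; the rest get `iso_bottom`'s.
    """
    gain = np.where(np.arange(height) < split_row,
                    iso_top / 100.0, iso_bottom / 100.0)
    return np.repeat(gain[:, None], width, axis=1)

# Bright sky on top, dark foreground below (linear brightness).
scene = np.vstack([np.full((2, 3), 0.8),    # sky
                   np.full((2, 3), 0.1)])   # foreground
gains = iso_gain_map(4, 3, iso_top=200, iso_bottom=800, split_row=2)
# Sky amplified 2x, foreground 8x: the two halves end up
# much closer in brightness, like a grad ND would deliver.
print(scene * gains)
```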
Of course this would require the camera's CPU to be able to control banks of pixels independently of other banks. And if it could do this with ISO, why not with shutter speed, too? So you might have the ability to expose part of the sensor at 1/500 sec and another part at 1/150 sec, giving further control over the exposure. The shutter would open long enough to accommodate the longest exposure, but each section of the sensor would only be active for its assigned duration.
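The arithmetic behind that is simple: a photosite's raw signal is roughly proportional to scene brightness times its integration time. So a per-region time map could give the bright sky 1/500 sec while the darker ground integrates for 1/150 sec. A toy sketch, again with invented names and numbers:

```python
import numpy as np

def regional_exposure(radiance, time_map):
    """Simulate per-region shutter timing: each photosite
    integrates the scene radiance for its own assigned duration.

    `radiance` holds the scene's linear brightness; `time_map`
    holds each pixel's exposure time in seconds. The raw signal
    is radiance * time, clipped at a full-well value of 1.0.
    """
    return np.clip(radiance * time_map, 0.0, 1.0)

radiance = np.vstack([np.full((2, 3), 300.0),   # bright sky
                      np.full((2, 3), 60.0)])   # darker ground
times = np.vstack([np.full((2, 3), 1 / 500),    # sky: 1/500 sec
                   np.full((2, 3), 1 / 150)])   # ground: 1/150 sec
print(regional_exposure(radiance, times))
```

With those made-up radiance values, both halves land comfortably inside the sensor's range instead of the sky clipping.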
The Nikon D700 already has something that is at least similar to this concept for focusing; I can set the camera to take the entire image into account during auto focus, a single one of the 51 focus points, or a point surrounded by a given area. So the CPU CAN think of the image area as consisting of adjustable portions to be evaluated with that type of flexibility. For now it just doesn’t have this type of low-level control over the sensor itself.
But the way camera technology is going, it seems only a matter of time before we might have this or some similar capability. Personally, I can't wait!