
HDR & Plenoptic Cameras: The Future of Digital Photography?


Plenoptic Example
A Stanford grad student, Ren Ng, recently developed a “plenoptic camera” that allows photos to be focused after they are taken. The camera, whose workings are documented here and here, allows a single snapshot to later be focused at any distance. This struck me as an amazing advancement for designers, as refocusing an image is something I always wish I could do when making a composition. This announcement made me stop and think for a moment about how to further improve the lives of designers. I was eventually reminded of another such process that improves life for designers: HDR photography.

You may have used HDR photography before, or at least heard of it, but for the benefit of those not familiar with the process, I’ll offer a little primer. HDR stands for High Dynamic Range, a term that refers to images containing large variations in light. For example, a picture of a block of matte black plastic taken on a cloudy day with undirected interior light is Low Dynamic Range, whereas a picture of sunrise over the Grand Canyon on a clear day is High Dynamic Range.

Most photographers agree that images with a High Dynamic Range make more interesting subjects. Here we run into a problem. Cameras have a limited range of brightness that their CCDs can handle. Anything above the maximum brightness is burned out to white, and anything below the minimum is pure black. This is bad for both photographers and designers, because any detail in an area above or below that range of brightness is lost forever. Enter HDR photography.
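To make the clipping concrete, here is a minimal sketch of a limited-range sensor. `sensor_response` and its 8-bit ceiling are illustrative assumptions on my part, not a model of any particular CCD:

```python
def sensor_response(radiance, exposure, max_level=255):
    """Toy model of a sensor with limited dynamic range: scale scene
    radiance by exposure time, then clip to the representable range."""
    level = round(radiance * exposure)
    return max(0, min(max_level, level))

# At one fixed exposure, a bright highlight burns out to white (255)
# and a deep shadow goes to pure black (0); detail in both is lost.
bright, shadow = sensor_response(1000, 1), sensor_response(0.2, 1)
```

Whatever detail distinguished a radiance of 1000 from one of 2000 is gone once both clip to the same 255.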

HDR Example

HDR photography involves using a tripod to capture multiple images of a subject at different exposures. You aim for a spread such that the longest exposure shows detail in the darkest areas and the shortest shows detail in the brightest areas; no pixel in the scene should be clipped in every shot of the series. These photos are then processed, and each pixel is assigned a floating-point brightness value, giving an almost limitless dynamic range. The resulting picture looks similar to one taken at a set exposure, except that you can then “choose your exposure” after the fact, making it as short or as long as you like. You can even change the exposure on a single portion of the image. Do you see the resemblance to the “plenoptic camera” yet?
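The merging step can be sketched as follows. This is a deliberately simplified stand-in for real HDR assembly (which typically recovers the camera’s response curve, as in Debevec-style merging); `merge_hdr` and its clipping thresholds are my own assumptions:

```python
def merge_hdr(exposures):
    """Merge bracketed shots into one floating-point radiance map.
    `exposures` maps exposure time (s) -> list of 8-bit pixel values.
    Each pixel's radiance is estimated as value / time, averaged over
    the shots in which that pixel is neither clipped dark nor bright."""
    times = sorted(exposures)
    width = len(exposures[times[0]])
    radiance = []
    for i in range(width):
        # Ignore values near the clipping points; they carry no detail.
        usable = [exposures[t][i] / t for t in times
                  if 5 < exposures[t][i] < 250]
        radiance.append(sum(usable) / len(usable) if usable else 0.0)
    return radiance

# Three bracketed shots (0.25 s, 1 s, 4 s) of the same three pixels:
shots = {0.25: [25, 200, 1], 1: [100, 255, 3], 4: [255, 255, 12]}
hdr = merge_hdr(shots)  # radiances far beyond a single 0-255 shot
```

Each pixel ends up with a radiance estimate well outside what any single 0–255 exposure could record, which is exactly what lets you “choose your exposure” later.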

Photoshop CS2 already supports HDR images, and with a little tweaking, I’m sure it could support plenoptic images as well. Which brings me to my idea.

I’d like to see a camera with HDR implemented onboard, where you can have it automatically take a sequence and convert it to HDR. I’d like said camera to also be plenoptic. The immediate improvement for designers is obvious. Take one shot and be able to refocus and change the exposure all you want, after your trip, in the safety of your editing suite. Immediately, shots of fleeting lighting conditions can be perfected after the fact, and taking multiple shots of the same scene focused differently is a thing of the past.

I realize that this is computationally complex, and that the resulting photos would be huge, but if this technology isn’t in production by 2010, camera makers are dragging their feet.

About gschoppe

  • Wow! Great story Gregory, thanks for posting it. Can’t wait to get my hands on an HDR Plenoptic camera.

  • Very interesting wish, Greg. The mix of HDR and Plenoptic ranks right up there with a beautiful woman who is also humble and a fulfilling career that pays well. 🙂

    But seriously, I suspect the two formats have yet to be merged, because they negate each other. The ability to refocus probably needs a set exposure, and vice versa.

  • From what I understand of plenoptic cameras (which is limited by the little the press knows), a combination should be possible. A plenoptic camera takes one shot and records the angle and intensity of the light hitting each pixel. What this really means (I think) is that each “micro lens” covers several (16 or so) actual pixels in the captured image. These pixels show the image as seen at that “meta pixel” with different focusing, because the lens presents different focal lengths from different angles. By selectively choosing which pixel to show, plus a bit of interpolation, the camera can give you your choice of focus. This method shouldn’t affect exposure in the least. An HDR plenoptic camera would simply take a spread of plenoptic exposures, then convert the plenoptic images to HDR, so the resulting focused image is also HDR. Remember, most of this is educated guesswork, as there isn’t much in the way of documentation for plenoptic cameras, but here’s hoping.

  • Either way, both technologies seem promising. I guess I was trying to think of it in literal terms, i.e. when I shoot a cityscape at night, hypothetically each single shot would have four possible focuses and four possible exposures, which would be like your camera actually taking 16 shots that you can mix and match back into the optimal single shot.

    Nothing like having options, eh? Good post and follow up btw.

  • thanks
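For what it’s worth, the “each microlens covers a block of pixels” guess from the comments above can be sketched in code. This is purely illustrative; `subaperture_image` and the 4×4 tile size are assumptions based on that guess, not documented behavior of any real plenoptic camera:

```python
def subaperture_image(lightfield, u, v, k=4):
    """Extract one sub-aperture view from a raw plenoptic capture.
    `lightfield` is a 2-D grid of sensor values in which each k x k
    tile sits under one microlens; picking the same position (u, v)
    inside every tile selects the rays arriving from one viewing
    angle, which is the raw material for refocusing in software."""
    rows, cols = len(lightfield), len(lightfield[0])
    return [[lightfield[r * k + u][c * k + v]
             for c in range(cols // k)]
            for r in range(rows // k)]

# An 8x8 raw capture yields a 2x2 view for each of the 16 (u, v)
# choices; blending shifted views would simulate different focus.
raw = [[r * 100 + c for c in range(8)] for r in range(8)]
view = subaperture_image(raw, 1, 2)
```

Since the sub-aperture selection never touches pixel values, only pixel positions, it is consistent with the comment’s point that refocusing shouldn’t interfere with exposure bracketing.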