High dynamic range imaging

From Wikipedia, the free encyclopedia

An example of a rendering of an HDRI image into an 8-bit JPEG. This image is of the Tower Bridge in Sacramento, California.

In computer graphics and photography, high dynamic range imaging (HDRI) is a set of techniques that allow a far greater dynamic range of exposures (i.e. a large range of values between light and dark areas) than normal digital imaging techniques. The intention of HDRI is to accurately represent the wide range of intensity levels found in real scenes ranging from direct sunlight to the deepest shadows.

HDRI was originally developed for use with purely computer-generated images. Later, methods were developed to produce an HDR image from a set of photographs taken with a range of exposures. With the rising popularity of digital cameras and easy-to-use desktop software, the term "HDR" is now popularly used[1] to refer to the process of tone mapping together bracketed exposures of ordinary digital images, giving the end result a high, often exaggerated, dynamic range; in this case, however, neither the input nor the output qualifies as "true" HDRI.

Recently, CMOS image sensor manufacturers have begun to release sensors with dynamic ranges of up to 110 dB for security cameras.[2]
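
For context, sensor dynamic range in decibels is conventionally computed as 20·log₁₀ of the ratio between the largest recordable signal and the noise floor, so 110 dB corresponds to a contrast ratio of roughly 10^(110/20) ≈ 316,000:1.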

History

An example of a rendering of an HDRI image into an 8-bit PNG, taken in Victoria, British Columbia.

The use of high dynamic range imaging in computer graphics was pioneered by Paul Debevec, who is thought to be the first person to create computer graphics images using HDRI maps to realistically light and animate computer-generated objects.[citation needed] Gregory Ward created the Radiance RGBE image file format in 1985; it was the first file format for high dynamic range imaging and remains the most commonly used one today.

Comparison with traditional digital images

Information stored in high dynamic range images usually corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors that should appear on a monitor or a paper print. Therefore, HDR image formats are often called "scene-referred", in contrast to traditional digital images, which are "device-referred" or "output-referred". Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called "gamma encoding" or "gamma correction". The values stored for HDR images are often linear, which means that they represent relative or absolute values of radiance or luminance (gamma 1.0).
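
As a minimal sketch in Python of the difference (assuming a simple power-law gamma of 2.2 rather than the exact sRGB curve; the function name is illustrative), a gamma-encoded 8-bit pixel stores a perceptually spaced code, while a scene-referred pixel stores the linear value itself:

    import numpy as np

    def encode_for_display(linear, gamma=2.2):
        """Gamma-encode a linear value into an 8-bit output-referred code."""
        clipped = np.clip(linear, 0.0, 1.0)               # a display shows only [0, 1]
        return np.round(255 * clipped ** (1.0 / gamma)).astype(np.uint8)

    scene_value = np.float32(0.18)                        # mid-grey, stored linearly
    print(encode_for_display(scene_value))                # 117, not 0.18 * 255 = 46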

HDR images require a higher number of bits per color channel than traditional images, both because of the linear encoding and because they need to represent values from 10⁻⁴ to 10⁸ (the range of visible luminance values) or more. 16-bit ("half precision") or 32-bit floating-point numbers are often used to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with as few as 10–12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.[3][4]
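
To see why 10–12 bits can suffice once a suitable transfer function is applied, consider a minimal Python sketch of a purely logarithmic luminance encoding (an illustration of the general idea only; the encodings analysed in the cited papers use perceptually tuned curves):

    import numpy as np

    L_MIN, L_MAX, BITS = 1e-4, 1e8, 12        # twelve decades of luminance, 12 bits
    STEPS = 2 ** BITS - 1
    SPAN = np.log10(L_MAX) - np.log10(L_MIN)

    def encode(lum):
        """Map luminance to a 12-bit integer code on a log scale."""
        t = (np.log10(lum) - np.log10(L_MIN)) / SPAN
        return np.round(t * STEPS).astype(np.uint16)

    def decode(code):
        """Invert the log encoding back to luminance."""
        return 10 ** ((code / STEPS) * SPAN + np.log10(L_MIN))

    print(decode(1) / decode(0))              # ~1.0068: adjacent codes differ by ~0.7%

Spreading 4,096 code values over twelve decades gives a constant relative step of about 0.7% between adjacent codes, below the contrast step of roughly 1% at which banding typically becomes visible.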

Sources

New York City nighttime tone-mapped image.

HDR images were first produced with various renderers, notably Radiance. This allowed for more realistic renditions of modelled scenes because the units used were based on actual physical quantities, e.g. watts per steradian per square metre (W·sr⁻¹·m⁻²). It made it possible for the lighting of a real scene to be simulated and the output used to make lighting choices (assuming the geometry, lighting, and materials were an accurate representation of the real scene).

At the 1997 SIGGRAPH conference, Paul Debevec presented his paper "Recovering High Dynamic Range Radiance Maps from Photographs".[5] It described photographing the same scene many times with a wide range of exposure settings and combining those separate exposures into one HDR image, which captured a higher dynamic range of the viewed scene, from the dark shadows all the way up to bright lights and reflected highlights.
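
The merging step can be sketched in a few lines of Python. This is a simplified variant (with an illustrative function name): it assumes the camera response has already been linearized, whereas a central contribution of Debevec's paper is recovering that response curve from the exposures themselves:

    import numpy as np

    def merge_exposures(images, exposure_times):
        """Combine linearized LDR frames (values in [0, 1]) into a radiance map."""
        radiance = np.zeros_like(images[0], dtype=np.float64)
        weight_sum = np.zeros_like(images[0], dtype=np.float64)
        for img, t in zip(images, exposure_times):
            w = 1.0 - np.abs(2.0 * img - 1.0)   # trust mid-tones over clipped ends
            radiance += w * (img / t)           # each frame's estimate of radiance
            weight_sum += w
        return radiance / np.maximum(weight_sum, 1e-8)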

A year later at SIGGRAPH '98, Debevec presented "Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-Based Graphics with Global Illumination and High Dynamic Range Photography".[6] In this paper he used his previous technique to photograph a shiny chrome ball, producing what he called a "light probe", essentially an HDR environment map. This light probe could then be used in the rendering of a synthetic scene. Unlike a normal environment map that simply provides something to show in reflections or refractions, the light probe also provided the light for the scene; in fact, it was the only light source. This added an unprecedented level of realism, supplying real-world lighting data to the whole lighting model.
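
The geometric heart of the light-probe idea is recovering, for each pixel of the chrome-ball photograph, the world direction whose light it reflects. A minimal sketch, assuming an idealized orthographic view of a perfect mirror sphere (real probes must also account for the camera's perspective):

    import numpy as np

    def mirror_ball_direction(u, v):
        """World direction seen at normalized ball coordinates, u**2 + v**2 <= 1."""
        c = np.sqrt(1.0 - u**2 - v**2)          # z component of the surface normal
        # Reflect the viewing ray (0, 0, -1) about the normal (u, v, c)
        return np.array([2 * u * c, 2 * v * c, 2 * c**2 - 1.0])

    print(mirror_ball_direction(0.0, 0.0))      # [0 0 1]: the centre reflects the camera
    print(mirror_ball_direction(1.0, 0.0))      # [0 0 -1]: the rim sees behind the ball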

HDRI lighting plays a major role in film-making whenever computer-generated 3D objects must be integrated into real-life scenes.

Tone mapping

One problem with HDR has always been viewing the images. CRTs, LCDs, prints, and other methods of displaying images have only a limited dynamic range, so various methods of "converting" HDR images into a viewable format have been developed, generally called "tone mapping".

Early tone-mapping algorithms were simple: they showed a "window" of the entire dynamic range, clipping values above and below set maximum and minimum levels. More recent methods attempt to show more of the dynamic range, the more complex of them drawing on research into how the human eye and visual cortex perceive a scene in order to retain realistic colour and contrast across the whole range.
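
As an example of a simple global method, here is a minimal Python sketch of one well-known operator, in the style of Reinhard et al.'s 2002 photographic technique (the function name and sample values are illustrative):

    import numpy as np

    def tonemap_global(lum, key=0.18):
        """Compress unbounded scene luminance into [0, 1) for display."""
        log_avg = np.exp(np.mean(np.log(lum + 1e-8)))   # log-average scene luminance
        scaled = (key / log_avg) * lum                  # expose the scene for mid-grey
        return scaled / (1.0 + scaled)                  # roll off highlights smoothly

    hdr = np.array([0.001, 0.18, 5.0, 2000.0])          # hypothetical luminances
    print(tonemap_global(hdr))                          # every value now in [0, 1)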

Examples

Some example images help to demonstrate the usefulness of high dynamic range imaging. The following examples use an image rendered with Radiance using Paul Debevec's well-known light probe of the Uffizi gallery.

Exposure

Three exposures of the same image

Here the dynamic range of the image is demonstrated by adjusting the "exposure" when tone-mapping the HDR image into an LDR one for display. The middle exposure is the desired exposure and is likely how this scene would normally be presented. The exposure to the left is 4 f-stops darker, showing some detail in the bright clouds in the sky. The exposure to the right is 3 f-stops lighter, showing some detail in the darker parts of the scene.
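
Because the underlying HDR data are linear, re-exposing is just a multiplication: each f-stop is a factor of two. A short Python sketch with hypothetical radiance values:

    import numpy as np

    hdr = np.array([0.02, 0.5, 8.0, 120.0])   # hypothetical linear radiances

    def apply_stops(image, stops):
        """Simulate an exposure change on linear HDR data."""
        return image * 2.0 ** stops

    print(apply_stops(hdr, -4))   # 4 stops darker: scale by 1/16 to reveal highlights
    print(apply_stops(hdr, 3))    # 3 stops lighter: scale by 8 to reveal shadows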

Blur

LDR and HDR Gaussian blur of the same image

Here a Gaussian blur operation demonstrates how the out-of-range values of an HDR image can still be useful, even though they are normally clipped when the image is converted to an LDR one. The left image has been blurred in the GIMP using a tone-mapped LDR version of the image. The one on the right has been blurred with the pgblur tool in Radiance using the original HDR image, and then tone-mapped for display.

Although the two images are very similar, the obvious difference is in the highlight on the shiny chrome sphere. In the original HDR image these pixels have very large values. When the image is blurred, the surrounding pixels have their values "pulled up" and are clipped to maximum when tone-mapped. Of course the highlight pixels also have their values "pulled down" by the surrounding pixels, but their values are so high that they remain above the upper clipping value when tone-mapped. The effect is that a larger area is now maxed out to white.

With the LDR blur however, the pixels in the highlight have already had their values clipped to maximum before the blur is performed. This has reduced their value a great deal. The result is that after the blur, the pixels around the highlight do not have high values and even pixels within the highlight have had their values pulled down by the darker pixels around the highlight. The highlight is now just a mid-tone smudge, not looking very bright at all.
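
The order of operations accounts for the whole difference, as a short Python sketch shows (a hypothetical one-dimensional "scanline" with a single specular highlight, blurred with SciPy's Gaussian filter):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    hdr = np.full(21, 0.2)                    # dim, uniform background
    hdr[10] = 50.0                            # highlight far above display white (1.0)

    ldr_blur = gaussian_filter(np.clip(hdr, 0, 1), sigma=2)   # clip, then blur (LDR)
    hdr_blur = np.clip(gaussian_filter(hdr, sigma=2), 0, 1)   # blur, then clip (HDR)

    print(ldr_blur.max())                     # ~0.36: the highlight is a mid-tone smudge
    print((hdr_blur == 1.0).sum())            # ~9 pixels remain clipped to pure white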

LDR and HDR motion blur of the same image

The same is true of simulated motion blur, a common special effect.

References

  1. ^ "Flickr: HDR". Retrieved 2007-01-29.
  2. ^ "High dynamic range CameraChip sensor".
  3. ^ Greg Ward. "High Dynamic Range Image Encodings". Anyhere Software.
  4. ^ "Perception-motivated High Dynamic Range Video Encoding". Max Planck Institute for Computer Science.
  5. ^ Debevec, Paul (1997). "Recovering High Dynamic Range Radiance Maps from Photographs".
  6. ^ Debevec, Paul (1998). "Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-Based Graphics with Global Illumination and High Dynamic Range Photography".