Professor Kirt Witte: Selfportrait with Mirror Ball

Professor Kirt Witte teaches courses on Visual Effects and HDR Imaging at the Savannah College of Art and Design (SCAD).
He is a certified Maya instructor, a passionate panoramic photographer, a close affiliate of HDRLabs, and overall a really cool guy. His students learn everything about shooting HDRIs, applying them as lighting in Maya, and rendering with mental ray. In short, his courses really get you ready for production, and his students regularly get hired at prestigious VFX houses like ILM.

This page summarizes the most frequently asked questions Prof. Witte encountered in his HDR courses.
  • General
  • What kind of camera will I need?
    A digital SLR camera with interchangeable lenses is the ideal solution, but almost any camera will work if you can lock the aperture, lock the focus, and lock the white balance. Ultimately, you get what you pay for. Digital SLRs generally have more options for white balance, mirror lockup, etc… the more choices, the better!

    If you want a camera with the most convenient HDR shooting setup, choose one that has an auto-bracketing (AEB) option. Click here for a good list of cameras that have auto-bracketing. (3 shots are common, but 5 or 7 is better.)

    It also depends on how many HDRs you will be shooting, and for what purpose.

    If you are just a hobbyist, then shooting JPGs with a point-and-shoot camera will do just fine, provided you have the ability to lock focus, white balance, and aperture. Then you just bracket your shutter speed, take your pictures, and make your HDRs! I do advise a camera that will allow you to shoot in RAW as well as JPEG. The Canon G10, for example, is a compact camera that will also shoot RAW. Shooting RAW has many advantages that are listed below.

    A few other considerations: How many Frames Per Second (FPS) can your camera capture? Speed is often a factor in capturing high quality HDRs. Similarly, the memory buffer (onboard RAM) is also potentially a major factor. If the camera can shoot 10 to 15 RAW shots without a slowdown, that will greatly increase your speed when shooting. (Sidenote – the write speed of your memory cards affects this as well. Buy the faster, more expensive ones; it is worth it!)

  • Do I always need an HDR?
    Definitely not. High Dynamic Range (or even Medium Dynamic Range) implies you have a high contrast lighting situation, which is obviously not always the case. If you are shooting in a low contrast lighting situation (e.g.: a foggy morning or an overcast day), then you have very little need for an HDR file.
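    To put a number on "high contrast": dynamic range is usually expressed in stops, where each stop is a doubling of light. A minimal sketch of that arithmetic (the luminance readings below are hypothetical spot-meter values, not figures from this FAQ):

```python
import math

def dynamic_range_stops(brightest, darkest):
    """Contrast ratio of a scene expressed in photographic stops (EV).

    brightest/darkest are relative luminance readings in any consistent
    unit (e.g. spot-meter cd/m^2). Each stop is a doubling of light.
    """
    return math.log2(brightest / darkest)

# A sunlit scene with deep shadows might span a 1000:1 ratio...
print(round(dynamic_range_stops(1000, 1), 1))   # ~10 stops -> HDR territory
# ...while a foggy, overcast morning might only span 16:1.
print(round(dynamic_range_stops(16, 1), 1))     # 4 stops -> one shot may do
```

    Roughly speaking, once the scene spans more stops than a single capture can hold, bracketing starts to pay off.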


  • Will HDRs make my lighting (in my 3D scene) “perfect” every time?
    Actually, no. Many people think that HDRI is the holy grail of CG-lighting. It is a huge advancement, but is not perfect for all situations. Often, HDRI will get you about 90% of the way, but you may need to add an additional light or tweak a shadow in order to get it to perfectly match your scene. However, quite often, just using a single HDR file will get you where you need to be. Your mileage will vary depending on the strictness and time constraints of your project.

    Similarly, some people/companies just use HDRs for realistic reflections only. They may have a really good CG Lighter on staff that will do the primary lighting of a shot or maybe they use a gray/diffuse ball as their primary lighting reference.

  • Is there another name for High Dynamic Range Imaging?
    Yes, some argue that the proper name should be High Fidelity Imaging.

  • What are MDRs? LDRs? and SDRs?
    These are abbreviations for Medium Dynamic Range, Low Dynamic Range, and Standard Dynamic Range. They all refer to an average scene that you might shoot in normal fashion, i.e. non-HDR.

  • What are the differences between a light probe, spherical image, LatLong, vertical cross, cubic (etc) versions of HDRs?
    Depending on what application you are using, the names of the shapes of the HDR will change.

    Most 3D programs want/need an image that looks like an unwrapped world map. This file can be called “Equirectangular”, “Spherical”, “LatLong”, and also a “Latitude Longitude” file. These are called different names, but they are all exactly the same!

    Vertical Cross, Horizontal Cross, and Cubic HDRs… these are all just six 90-degree views of a scene or panorama. They just happen to be placed together in a certain shape (vertical or horizontal cross) or saved out as six separate cube faces in six different files. Retouching is pretty easy with these as long as you do not try to retouch any of the edge pixels. If you need to do some cloning or color correction across the seams, then convert the image to a spherical image and work on that. Then convert back to a cubic format if needed.

    “Light Probes” are full 360x180-degree spherical (or LatLong) HDR files that have been converted into a ball shape. Visually, it appears that they are chrome balls, but in reality they have a mathematically much simpler distortion, and they may or may not have been created and captured that way. A few 3D programs directly support Light Probe (ball-looking) images, but almost all support LatLong shaped HDR images. When you make an HDR from a series of pictures of a chrome ball, this too is considered a “light probe”. In fact, any fully immersive HDR panorama can be considered a light probe, no matter what unwrapping format it is in. They are also sometimes called “Light Maps”.
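    For readers curious what the LatLong unwrapping actually encodes, here is a minimal sketch of the mapping from normalized image coordinates to a 3D view direction. The axis convention below is an assumption for illustration; every 3D package picks its own:

```python
import math

def latlong_to_direction(u, v):
    """Map normalized LatLong (equirectangular) coords to a 3D direction.

    u in [0,1) spans longitude -180..180 degrees; v in [0,1] spans
    latitude +90 (top row) .. -90 (bottom row).
    """
    lon = (u - 0.5) * 2.0 * math.pi
    lat = (0.5 - v) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = -math.cos(lat) * math.cos(lon)
    return (x, y, z)

# Center of the image looks straight ahead; the top row looks straight up.
print(latlong_to_direction(0.5, 0.5))  # (0.0, 0.0, -1.0)
print(latlong_to_direction(0.5, 0.0))  # ~(0, 1, 0)
```

    Conversions between LatLong, cube faces, and light-probe (angular) shapes are all just different ways of indexing these same directions, which is why panoramic conversion tools can move between them losslessly apart from resampling.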


    NOTE – be sure to read the section below on panoramic conversions!

  • Can my clothing/color choices cause problems?
    Definitely! If you are shooting interiors with mirrors or very reflective surfaces, the clothing you wear can definitely cause you headaches and extra post-processing time later. Do not wear brightly colored clothes or clothes with busy patterns. Just wear dark, plain clothes to minimize your reflections. (I learned this one the hard way!) ;)

    Another easy trick that can save you some post-processing later is to position your tripod so that the shadows of two of the legs line up exactly. Later, you will only have to retouch two shadows at the nadir, not three.


    Shooting with a fisheye? Watch your feet! At 180-degrees, it is much easier to get your feet in the shot than with any other lens. I have been surprised many, many times!


  • Shooting Method
  • 4 Rules of shooting HDRs
    • Lock f-stop (aperture – which controls your depth of field)
    • Lock focus
    • Lock white balance
    • Turn off any in camera "automatic" image enhancing (i.e.: auto-contrast or auto-saturation, including sharpening)

    You will be bracketing the exposure time for your various exposures.
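    A minimal sketch of how such a bracketing sequence can be computed once everything else is locked (the 5-frame count and 2-EV spacing are just illustrative choices, not rules):

```python
def bracket_shutter_speeds(base, frames=5, step_ev=2.0):
    """Shutter speeds (in seconds) for an HDR bracket around a base exposure.

    frames must be odd so the base exposure sits in the middle;
    step_ev is the bracket spacing in stops.
    """
    assert frames % 2 == 1, "use an odd frame count"
    half = frames // 2
    return [base * 2.0 ** (step_ev * i) for i in range(-half, half + 1)]

# 5 frames around 1/60 s at 2-EV spacing: 1/960 s up to 16/60 s.
for t in bracket_shutter_speeds(1 / 60, frames=5, step_ev=2.0):
    print(f"{t:.5f} s")
```

    Cameras only offer discrete shutter speeds, so in practice you would snap each computed value to the nearest available setting.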

  • Other Rules…
    Shoot fast – people move, cars move, lighting can change quickly.
    (But be aware that with long exposures you may need to use your camera’s Mirror Lockup function to minimize shaking the camera. In that case, shooting fast is not advisable.)

    If your camera can shoot in burst mode, then shoot all three (or more) images in a row as quickly as you can. Sometimes you will need to wait a few minutes for just the right opportunity.

    If you shoot in sand or loose dirt, then tripod spikes can help stabilize your tripod. Another trick is to slice open three tennis balls (one per leg) and put them on the end of your tripod. This will help to distribute the weight of the camera and tripod. This is also helpful for not scratching up hardwood and marble floors.

    Also – “Shooting Fast” does not imply that you should be in a hurry when you are getting set up. Take your time and do it correctly; if you rush too much, you may overlook something, and then when you’re stitching your final HDR pano you will realize you missed a few shots. Many times, you cannot do a reshoot, so be careful when getting set up. Don’t kick or bump your tripod or you will have to start over.

  • Mirror Lockup
    This is a much-needed professional feature of many digital SLRs. If you are going to be shooting in low-light situations and taking long exposures to create your HDRs, then using the Mirror Lockup feature is highly advisable in order to minimize internal camera shake caused by the mirror opening and closing. If you are in bright light shooting fast shots, then this is really not an issue, but if you are shooting anything slower than 1/15th of a second, then you should probably be shooting with the mirror lockup feature.

    NOTE – This will slow down your ability to shoot with the “speed” that I listed above as a good thing. Speed will probably do more harm than good in low light situations, so adjust accordingly.

  • Cameras
  • Film or Digital capture?
    I recommend digital cameras.

    Film cameras can be very problematic in that they have very limited exposure latitude (ability to handle over/underexposure). This is especially true for slide/chrome films, which have an even smaller exposure latitude. Film is also becoming increasingly difficult to find. Film is either “daylight” or “tungsten” balanced, and only two choices can be very limiting and frustrating. Similarly, film has to be processed and scanned (and often printed too), and there is the added dilemma of “human error” in how frames are placed in the film holder, the accuracy and consistency of the color, etc… Finally, there is no metadata! You have to be a master at writing down all (or at least most) of your exposure information. As you shoot hundreds of shots (and have to switch film often because of the 36-exposure limit), it gets harder and harder to keep accurate records of your shots.

    Digital photos: Faster, with instant gratification and the knowledge that you got it right, plus the ability to instantly view the histogram (on most DSLRs) to truly know whether or not you have a good exposure. Image alignment is rarely a problem, and all exposure information (f-stops, shutter speeds, etc.) is written to each file as metadata (on most digital cameras). GPS data can also be recorded with special equipment and cameras. No prints are needed. I just can’t think of a reason why you would want to shoot on film!

  • Megapixels
    How many is enough?

    It depends on what your final output is going to be and how accurate you really need to be. If you are going to HD, then you would need a minimum of 4K horizontal resolution for your final spherical image. I say this based on using a wide angle virtual lens (ie: 90-degree field of view) and rendering to NTSC video or HD 720p. In order to have enough resolution to hold up properly, you will need “full res” pixels in the background. If your Lat/Long file is about 4000 pixels, then you will have about 1000 pixels in your background if you are using a 90-degree FOV lens (ie: 18mm) to render with. That also means if you use something like a 50mm lens, you will ALSO need about 1000 pixels in the background. However a 50mm lens gives you about 40-degrees FOV, so in that case, you’d need about an 8K Lat/Long file in the background.
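    The resolution argument above boils down to one ratio: a LatLong image spans 360 degrees, so it must carry (360 / lens FOV) times the render width to keep ~1:1 pixels in the background. A quick sketch of that arithmetic:

```python
import math

def latlong_width_for_render(render_width_px, lens_fov_deg):
    """Minimum LatLong pano width so a virtual lens with the given
    horizontal field of view still sees ~1:1 pixels in the background."""
    return math.ceil(render_width_px * 360.0 / lens_fov_deg)

# ~1000 background pixels behind a 90-degree (18mm-style) virtual lens:
print(latlong_width_for_render(1000, 90))   # 4000 -> a "4K" LatLong
# The same 1000 pixels behind a ~40-degree (50mm-style) virtual lens:
print(latlong_width_for_render(1000, 40))   # 9000 -> roughly an "8K" LatLong
```

    The narrower the virtual lens, the more panorama resolution you need, which is exactly why the 50mm example above pushes you toward an 8K file.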

    If you are only using the panorama as a reflection map, then you can get away with a lot less (about half?) resolution.
    Are you going to do a full 360-degree pan within your shot? If not, then you can get by with less overall resolution.
    Is my 4-megapixel camera good enough? If you take twice as many shots, you will get the same resolution that is produced by an 8-megapixel camera. Time versus money is the answer there.

  • Full frame sensor or smaller sensors?
    SLRs with full frame sensors are definitely more expensive. However, the larger the sensor and the more pixels you have, the better overall quality, color depth, and sharpness you will get. I didn’t realize this until I started shooting with both. Images captured with a full frame sensor can generally be manipulated and even resized better than images captured on smaller sensors.

    One major advantage of smaller sensors (besides lower cost) is that they create a magnification effect for lenses (Canon’s factor is 1.6x and Nikon’s is 1.5x). In other words, put on your lens and the effective focal length increases by around 50%. A 100mm lens becomes a 160mm. A 300mm becomes an incredible 480mm telephoto! So this is a great advantage if you do a lot of telephoto work.
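    The crop-factor arithmetic is simple enough to sketch (the 1.6x Canon factor and the focal lengths are the figures from the text):

```python
def effective_focal_length(focal_mm, crop_factor):
    """Field-of-view-equivalent focal length on a cropped sensor."""
    return focal_mm * crop_factor

# Canon APS-C examples (1.6x crop factor):
print(effective_focal_length(100, 1.6))  # 160.0
print(effective_focal_length(300, 1.6))  # 480.0
# The wide-angle downside: a 20-35mm zoom behaves like 32-56mm.
print(effective_focal_length(20, 1.6), effective_focal_length(35, 1.6))  # 32.0 56.0
```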

    However, consider you have an older “non-digital” lens that is wide-angle like a 20-35mm lens. With the multiplication factor, this lens then turns into a 32-56mm! In other words, your really nice wide-angle lens just became a normal lens! This is one of the major reasons they started creating what are called “Digital” lenses. They have adjusted the lens elements/optics to work correctly on the smaller size sensors. They use less glass and therefore are generally less expensive as well.

    NOTE – Most “digital” lenses will NOT work on camera bodies with full frame sensors! Right now, this may not be an issue for most people, but in the coming years as hardware, software, and camera capture technologies advance… eventually most or all digital SLRs will have full frame sensors in them. This may be 5 to 10 years from now, but eventually it will happen. My point is do NOT invest too heavily in “digital” lenses as they will be obsolete in a few years.

  • Canon vs Nikon?
    Well, I am going out on a limb here… that is like asking if you prefer a Mac or a PC!? But here are my thoughts:
    NOTE: these are generalizations based on my personal experience.

    As for the end results, the quality of the optics/images is probably about the same.
    Either one will do a great job!

    However, if price is a concern, Canons are generally about 20% cheaper than a similarly equipped Nikon. If the speed of autofocus is a concern, you probably want to go with Canon, which builds the autofocus into its lenses. Nikon chose (arguably a good or bad decision) to make their cameras compatible with ALL of the older lenses from the 60s, 70s, and 80s, and therefore built the autofocus into the bodies instead of the lenses. This slows things down just enough for many sports/wildlife photographers to go with Canon.

    Canon has more full-frame sensor cameras available. Canon currently has more choices and higher HD Video resolution than similar Nikons. (1080p for Canon and 720p for Nikons. NOTE – I do expect this to change in the near future.)

    Nikon seems to be catering much more to HDR photographers. Generally they offer the ability to auto-bracket 5, 7 or 9 shots at a time. Most of Canon’s cameras only allow 3 shots for Auto-bracketing, which is very frustrating and why companies are coming out with various hardware workarounds using iPhones and Nintendo DS remote controllers. (more about that later). Some of the newer Nikons now have the option of saving your RAW files in either 12-bits or 14bits! (What a great feature!) I will also (grudgingly) admit that some of Nikon’s in-camera controls and options are more convenient and flexible for shooting HDRs. Most of the latest Canon cameras (even the prosumer Digital Rebel series) now capture in 14-bits!

    However, as a professional, I like more choices with my tools.
    One thing about Nikons I do not care for… most of them will only go as low as 200 ASA, while most Canons can capture at 100 ASA! Nikon's version of 100 ASA is called Lo1 and is derived by digitally changing the gain on a 200 ASA capture. The Canon 5D Mark II (and a few others) can go as low as 50 ASA! The lower and slower, the better the quality of the images. However, the advantage is marginal because Nikon sensors have less dark current noise to begin with.

    Another point to mention is that most Nikon cameras can only adjust their exposures (f-stops and ASA/ISO) in ½ stop increments. Canon cameras, on the other hand, allow you to fine tune your exposures by ⅓ of an f-stop. Initially, I thought this was a big advantage for Canon because you have the ability/choice of really fine tuning your exposures. That makes sense, right? However, what I later realized is that there is not much difference in exposure between a ⅓ and a ½ f-stop, especially considering the wide exposure latitudes that 12-bit and 14-bit RAW files offer.

    MORE IMPORTANTLY in HDR terms, if you go with ⅓ stop increments on a Canon camera, that just means you have to physically touch and change the dial on your camera much more than if it was set to ½ stop increments. (Canon allows the user to CHOOSE between ⅓ and ½ stops.) The more you touch your camera while shooting HDRs, the more likely you will have alignment problems later when you are trying to create your Radiance (.hdr) files. So, Canon users, just leave it at ½ and you will be better off for creating HDRs.

  • Lenses
  • Lens choices
    Buy the best glass/lens you can afford. A cheap lens will always give you cheap images: they may be soft, vignette, and show chromatic aberrations. I recommend Japanese or German manufactured glass/lenses. Image stabilization is not needed if you’re shooting on a tripod. Remember: “You get what you pay for!”

    Canon makes a set of professional “L-Series” lenses that are of much higher quality than their normal lenses. These L-Series “Red Striped” lenses generally have very little chromatic aberration and much less light falloff in the corners of your image. They are much more expensive, but well worth it for professional purposes. (Speaking of Canon… read the next segment.)

    Another issue/solution I want to mention here is the “Fix it in post!” adage. Software applications like Adobe Lightroom, Apple’s Aperture, and DXO’s Optics Pro applications can easily adjust for chromatic aberrations, light falloff, barrel distortion, fine tuning of your color balance, and much more. (Note - This is why I generally go from RAW to 16-bit TIFFs to HDR. I really like to be able to tweak my images!)

  • Wider or longer focal length?
    Generally, the wider the better, as it allows you to shoot faster, save faster, and have fewer images to process and retouch. The drawback is less overall resolution in the end: a horizontal row shot as 6 images yields less than the same row shot as 12. Wide angle lenses generally have more chromatic aberration as well. That is not a huge problem, but something to be aware of.

    However, if you will be shooting Gigapixel panoramas, then you will want to go with increasingly longer focal length lenses. (Stock up on more memory cards and hard drive space too!)

  • f-stops
    Generally f8 or f11 is the sharpest aperture of a lens. Please notice I did not say that you would get more in focus! Yes, f22 or f32 will give you more depth of field, but the “middle” f-stops are generally the sharpest overall. I use f8 or f11 for almost all of my HDRs.

    However, the larger the aperture, the larger the lens flare(s) you will have as well (assuming you have any in your shot). Therefore, closing down to a smaller f-stop (i.e.: f22) can help minimize this problem. (Lens flares are addressed in more detail later in this document.)

  • Fisheye or Rectilinear?
    Personally, I advise using a really wide angle rectilinear lens (i.e.: Canon’s 10-22 EF-S lens). Rectilinear (also known as “normal”) lenses are easier to retouch and are supported by most stitching and photographic applications. Fisheye allows you to shoot fewer images, but you are a little more limited in how many applications support fisheye stitching. Retouching fisheye images can also be very challenging. (But they sure are fun to look at!)

    Some panographers argue that fisheye is much better because it allows you to be much more efficient with your time. (i.e.: fewer images = smaller file sizes = less retouching/post-processing)

    Another factor is that fisheye lenses (ie: Nikkor 10.5mm fisheye) are generally much sharper in the corners than extremely wide rectilinear lenses like the Canon 10-22 EF-S or Canon “L” series EOS 16-35mm lens.

    Another issue is how most lenses deal with lens flares. Most fisheye lenses have an additional special coating on them to minimize lens flares. When shooting a lens that covers 180-degrees, it is often nearly impossible to not capture the sun or other light sources. (I will have more discussion on lens flares in another section of this document.) Most rectilinear lenses, on the other hand, do NOT have any special coating, so lens flares are generally much more pronounced on “normal” rectilinear lenses.
    Side note - The Sigma 8mm and Nikkor 10.5mm lens have a gold ring around them that indicates the nodal point of the lens! I sure wish all lenses had that marked.

    Ultimately, it is a personal choice on which type of lens to use and/or it depends on what the project needs (and the resolution needed for the final image as well.)

  • Tripods & Panoheads
  • Leveling your tripod/camera
    To make a long story short, leveling your camera is extremely important when shooting cylindrical (horizontal) panoramas. When shooting spherical panoramas, it is also important, but most spherical stitching applications have a way to easily correct/set your horizon lines. So this is easy to “fix it in post”.

    If you often (or only) shoot cylindrical panoramas, then I highly recommend purchasing a leveling base plate. Without this accessory, you can spend up to 5 minutes trying to level your camera. With a leveling plate, three adjustable (brass) screws make leveling fast and easy, and a built-in bubble level aids the process.
    NOTE – A leveling plate adds about 1 pound of weight to your tripod, so when traveling/hiking, it can definitely weigh you down and it may not be worth the extra weight. It just depends on your shooting/traveling conditions.

  • Carrying your tripod “on location”, on vacation, or hiking
    There are many different ways to carry your tripod: full size bags, shoulder straps, on-tripod handles, tripod slots that are integrated into photo backpacks, or tripod “bags” that may clip on to your photo backpack. I have tried most of them, and it usually just depends on where I am shooting and how long I will be out.


  • Have too much stuff to hold when you are shooting a pano?
    Then you may need the Manfrotto 166 Utility Apron.


    Concerned about the apron getting in the way? Don’t be!


  • VR Heads
    There are many heads available, some obviously better than others. Be sure to buy one that will allow for SPHERICAL panoramas, and not just cylindrical panoramas only. Below is a comparison of two of the heads I commonly use. They both have advantages and disadvantages.

    Bogen / Manfrotto 303 SPH panohead

    Advantages: VERY solid and sturdy and stable. Can even support medium format cameras.
    Very rugged and durable. Clamps and locks are very tight and do not easily come loose.

    Disadvantages: Large and heavy. Fairly expensive. Weighs about 2x as much as my Nodal Ninja 5. Has fairly sharp corners that can damage furniture or even take out a tourist if you are not careful! ;) It is larger in size and fills more of the frame (footprint) of the image (See below), which means more retouching in the post-process stage. Also has many parts and screws and I have lost a few of them over the years.

    Nodal Ninja 5 Spherical Panoramic Head

    Advantages: Small, lightweight, very portable (has optional foam case), smaller image footprint, less expensive, simple to use and calibrate.

    Disadvantages: Not as stable, must really tighten things up to remain in place.
    Cannot handle as much weight, locking clamps/guards are made of plastic and tend to move with a lot of usage, especially if you are traveling or hiking.

  • VR Head Comparison
    Bogen/Manfrotto 303 SPH (left) vs. Nodal Ninja 5 (right):


    Footprint of Manfrotto 303 SPH compared to Nodal Ninja 5:



  • Misc. Gear
  • Do I need to buy an expensive, professional light meter? Or colorimeter?
    Light meters made by companies like Sekonic are really nice to have and used to be required for most professional photographers. However, today you can take a digital photo and instantly view a preview of the image on your view screen. More importantly, you can also view the histogram of that image (on most mid to higher end digital cameras) and easily tell whether you have a proper exposure for that scene. You can trust the histogram; you cannot always judge proper exposure off your view screen, for example when viewing it in bright sunlight.

    Another “trick” is to put your camera on aperture priority (I generally shoot f8 or f11 on all HDRs and panoramas because that is the sharpest area of most lenses) and just take a picture. View the histogram, and if the exposure is good, note what the shutter speed and f-stop were for that image. Then switch the camera to fully manual, dial in that shutter speed and f-stop, and start shooting!

  • Cable Releases
    Any of them will do, but use them! The less you physically touch your camera, the better your odds of getting a good HDR and/or panorama. 12” to 24” is good for length.

  • Camera controllers: iPhone / Promote / Nintendo DS
    Which is better? The Nintendo DS camera controller from Panocamera or the Promote Control or the iPhone App from OnOne software?

    First, as cool and fun as the iPhone App sounds, keep in mind that it does not work wirelessly with your camera! It works wirelessly with your TETHERED laptop, which is connected (hard-wired) to your camera. So it is not as great as we would all want at this point. But it sure is a good excuse to buy an iPhone! ;)

    The Promote Control is well designed and built, and works with Canon and Nikon cameras. They did a smart thing by using two cables to connect to the camera; somehow this allows you to use faster shutter speeds (though not all) than the Nintendo DS Panocamera controller. It is plain and seems fairly rugged, but the interface is a bit clunky and just does not have enough options for my needs. (Assuming you already have an iPhone or a Nintendo DS, this device is also MUCH more expensive than the others.)

    My current favorite is the Panocamera DSLR controller for the Nintendo DS. It has been through many upgrades recently and is now a practical tool for shooting HDRs. It is very inexpensive, has a lot of features (including the ability to do astrophotography, sound-based photography, and much more), and works with the touch screen, so you can interactively adjust your primary exposure, turn mirror lockup on/off, take safety shots, and more. My biggest complaint (against Canon really) is that because this software uses a backdoor to control the camera (Canon and Nikon are generally not fond of sharing their proprietary software controls with the public), you are limited to shooting ALL your shots at 1/20 of a second or slower. Another minor point is that the glare on the Nintendo DS screens when shooting outdoors can sometimes make it difficult to see. (BTW – Steve Chapman also has another new and improved version of this software that works with a Viliv S5 “Walking PC”. Click here to view his blog post for that.)

    Finally, as cool and awesome as all these little hacks and nerd devices are… I would really just prefer that Canon and Nikon step up to the plate and give us a few more HDR-related options, like AEB with 5 or 7 shots! How hard can it be? Especially when they already have a few cameras with these options. Don’t they realize almost no one is going to spend $3000 on a bigger, fancier camera JUST to get AEB of 5 or 7 shots? That is just absurd. (Sorry Canon, I love you, but this is ridiculous.)

  • Where to get these camera controllers?
    Steve Chapman's Nintendo DS Camera Controller

    Promote Controller (for Nikon and Canon cameras)

    OnOne software: DSLR Camera Remote for iPhone and iPod Touch

    Do-it-yourself camera controllers:

    The Canon EOS Timelapse Remote

    Intervaluino: Canon intervalometer

    HDR photography with Arduino

  • Software
  • Where can I find out more about HDRI Encoding methods?
    Greg Ward has a great technical document about this. Please click HERE to download his PDF file.

  • Does the color space that I have my camera set to affect how my RAW files are saved?
    Unfortunately, it depends on who you ask and what camera manufacturer you are talking to. (I did a lot of research and talking with many people about this and was not able to get a consistent answer on this topic.)

    Certainly if you are shooting and saving your files as JPGs, using the Adobe RGB color space is much better than using sRGB. (Side note – if your camera will allow you to work in the ProPhoto RGB color space, then that is even better… more colors!) Some people I spoke with said “RAW is RAW” and that the color space does not matter, because when you save your file out to a TIFF or JPEG (for example) you can choose which color space to embed in it. However, other tech support people told me that the color space you use DOES HAVE AN EFFECT on how your colors are saved in a RAW file. This may vary between camera manufacturers. My advice is to be safe and set your camera to Adobe RGB by default… or even ProPhoto RGB, which is generally only an option on higher end cameras.

  • JPEG versus RAW:
    This is a big one to tackle… but here goes.

    JPEGs are:

    • Faster to shoot
    • Faster to Save
    • Smaller in file size, consequently you can save many more panos/images on a drive.
    • Can be opened/edited in just about any graphics program
    • Cross-platform
    • Little or no learning curve
    • Anyone or almost any camera can shoot JPGs
    • 8-bit files (only 256 colors per channel)
    • Limited ability to color correct
    • Generally, you need to bracket in ONE f-stop increments.
    • Very limited in overall detail, but especially in highlight and shadow details
    • (in other words darks go black quickly and highlights go completely white quickly)
    • Better for shooting "journalistic" or fast moving/changing scenes, especially with people in them.

    RAW files are:

    • Slower to shoot
    • Slower to save (More bits = bigger file size = more hard drive space needed = less images can be saved to a hard drive)
    • Larger in file size (14-bit RAWs are about 25% larger than 12-bit RAWs)
    • 12-bit files have 4,096 levels per channel and 14-bit files have 16,384!
    • Are "proprietary" per camera manufacturer and even within a single manufacturer
    • (i.e.: Canon) they have multiple RAW formats (i.e.: .CR2 and .CRW)
    • Generally, you need to bracket in TWO f-stop increments.
    • Much more exposure latitude, consequently fewer photos to cover a large exposure range. (i.e. 3 RAW files can cover a range that 5 or 6 jpegs would be needed to cover) - This also means less bracketing!
    • Much more ability to change color temperature after it's been shot
    • Have to be "processed" in a digital darkroom (i.e.: Adobe Lightroom) and then saved as a secondary file such as TIFF, JPG, EXR, etc.
    • Non-destructive! At this point, RAW files cannot be overwritten, so you can always go back to the original and make additional corrections.
    • Are naturally a bit "soft" looking. RAW files are INTENDED to be sharpened, so some photographers have a hard time switching for that reason alone.

    • Personally, I always shoot RAW. Once you go RAW, you never want to go back to the “old” way. (Assuming you know how to properly process them)

      Christian Bloch disagrees with this statement. Here is his comment:
      “Shooting 9x9 = 81 images for a full panorama is significantly easier in JPEG. Filesize in RAW would go through the roof, I had the extra step of development involved (doubling filesize on disk again), and my camera would also slow down to a crawl as soon as the burst buffer is full. With JPEG I can shoot a full spherical with at least 5 frames a second, so the whole thing takes only about a minute.”
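To put numbers like these in perspective, here is a rough back-of-the-envelope estimate in Python. The per-file sizes are assumptions (typical for a mid-range DSLR), not measurements from any specific camera:

```python
# Rough disk-space estimate for an 81-frame spherical pano shoot.
# Per-file sizes below are assumptions, not measurements.
FRAMES = 9 * 9             # 81 positions, as in the quote above
JPEG_MB = 5                # assumed size of one fine-quality JPEG
RAW_MB = 25                # assumed size of one 14-bit RAW

jpeg_total = FRAMES * JPEG_MB
raw_total = FRAMES * RAW_MB
developed = raw_total * 2  # development roughly doubles disk use again

print(f"JPEG shoot: {jpeg_total} MB")
print(f"RAW shoot:  {raw_total} MB (~{developed} MB after development)")
```

Even with generous assumptions, the RAW shoot lands in multi-gigabyte territory per panorama, which is exactly the trade-off described above.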

  • RAW vs. Adobe’s Digital Negative (.dng) format
    There are currently about 50 “flavors” of the RAW file format, which makes it very difficult for software companies to support them all. Eventually, only a few of the big players (i.e.: Canon, Nikon, Fuji) may be left… then older files may become obsolete and impossible to open in future applications. Adobe wants to prevent this, or at least minimize it, so they have created an openly documented file format called the Digital Negative (.dng). The idea is that if you save or convert your RAW file (or any other file, for that matter) into a DNG file, then 50 years from now you will still be able to open and work with it. Overall, it is a good, logical idea. However, because camera manufacturers do not currently allow photographers to save directly into the DNG format (as opposed to RAW or JPG), converting all your RAW files to DNG adds a few steps to the overall workflow. Moreover, if you tell the DNG file to preserve all original RAW data, your DNG file is basically 2X the original RAW file size. At this point, that is impractical for many reasons. Eventually it may be ideal for us all to be working, shooting, and saving our files in Adobe’s DNG format, but I do not believe the time is right (yet!) to convert all your files to DNG. I do hope that day will come eventually.
  • Why are there so many versions of the RAW file format?
    Most RAW files are basically the same in that they capture a lot of data per channel, but their headers, compression algorithms, etc. differ and are “trade secrets”. This data is considered proprietary and makes each manufacturer’s product “unique”. Visually, they all look about the same, but some may be bigger or smaller in file size due to how the data is saved and compressed within each file.
  • Bit Depth
  • In terms of colors, what is the difference between 8-bit, 12-bit, 14-bit, and 16-bit files?

    • 8-bit files: 256 colors (shades of gray) per RGB channel
    • 12-bit files: 4,096 colors (shades of gray) per RGB channel
    • 14-bit files: 16,384 colors (shades of gray) per RGB channel
    • 16-bit files: 65,536 colors (shades of gray) per RGB channel

    In terms of practical application, having more bits per channel basically means smoother, more subtle color variations and more highlight and shadow detail are available.
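The counts above follow directly from powers of two; a one-line check in Python:

```python
# Tonal levels per channel double with every extra bit: 2 ** bits.
levels = {bits: 2 ** bits for bits in (8, 12, 14, 16)}
print(levels)  # {8: 256, 12: 4096, 14: 16384, 16: 65536}
```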

  • Is there really much difference between 12-bit RAW images and 14-bit RAW images?
    YES, YES, YES! The more colors per RGB channel, the more lighting, shadow, and texture information will be recorded. Colors have smoother tonal transitions (i.e.: in a bright blue sky) and more highlight and shadow details will be recorded. You cannot see it or use it if it is not there! More and more camera manufacturers are switching to 14-bit sensors every day. Basically it boils down to this: the more information you have, the more creative freedom you will have on the tail end to do what you want or need to do. More is definitely better in this case! Technically, 12-bit files have 4,096 shades of gray per RGB channel, and 14-bit files have 16,384!

    Another thing you should know about 14-bit files though, is that they are generally about 25% larger than 12-bit files. However, since the price of hard drive space is dropping quickly, this is not really much of an issue.


  • Are there any cameras that shoot in 16-bits?
    YES! But they are very expensive. Depending on your needs, it may be worth it. Hasselblad makes several medium format digital cameras that capture in 16-bits; their newest one is the H4D, which captures at an amazing 60 megapixels!
    (I do accept donations!) ;)
  • HDR Shooting
  • Are f-stops and EV’s the same?
    Well, it depends on your point of view, but technically they are different.

    “f-stops” refer specifically to the moveable blades that open and close within a lens. When you open or close one f-stop, you halve or double the amount of light hitting the inside of the camera. The same is true when adjusting shutter speeds: going from 1/250 to 1/125 or 1/500 doubles or halves the amount of light. Therefore, people also call these changes “f-stops”, but technically that is not correct. The proper term is “Exposure Values” or “EVs”. EVs can describe changes in either your aperture or your shutter speed. Most digital still cameras (and even video cameras) have a setting to adjust or bracket “EVs” and (generally) say nothing about bracketing f-stops.
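The standard formula ties the two together: EV = log2(N²/t), where N is the f-number and t is the shutter time in seconds. A small sketch (the specific aperture and shutter values are just illustrative):

```python
import math

def exposure_value(f_number, shutter_s):
    # EV = log2(N^2 / t); one EV step halves or doubles the light.
    return math.log2(f_number ** 2 / shutter_s)

# Doubling the shutter time at a fixed aperture changes the EV by exactly 1:
delta = exposure_value(8, 1/250) - exposure_value(8, 1/125)
print(round(delta, 6))  # 1.0
```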

  • Are ISO and ASA the same?
    Actually, no. However, over time they have come to be used interchangeably.
    “ISO” stands for the International Organization for Standardization and “ASA” stands for the American Standards Association. In the USA, film speeds range from 50 ASA to 3200 ASA, with 100 and 400 ASA being the most commonly used film speeds. The equivalent numbers on the older logarithmic DIN scale are 18° to 36°, and full ISO notation actually combines both (i.e.: ISO 400/27°). In practice, though, most digital cameras will simply ask you to enter an ISO rating such as 400. Just be clear that the higher the number, the more sensitive the film (or sensor) will be. However, this always comes at the cost of higher film grain or digital noise (gain).

    Side note – ASA changed names to ANSI – American National Standards Institute – But film still goes by ASA.
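For reference, the arithmetic (ASA) and logarithmic (DIN) scales are related by a simple formula; a quick sketch:

```python
import math

def asa_to_din(asa):
    # DIN degrees = 10 * log10(ASA) + 1, rounded to the nearest degree
    return round(10 * math.log10(asa) + 1)

for asa in (50, 100, 400, 3200):
    print(f"{asa} ASA ~ {asa_to_din(asa)} DIN")
```

So the 50–3200 ASA range mentioned above maps to 18°–36° DIN.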

  • Why do they say that you should shoot your images at the slowest (lowest) ASA possible?
    (Canon cameras generally start at 100 ASA and Nikon cameras at 200 ASA)

    As soon as you increase the sensitivity of your sensor (by going from 100 ASA to even 125 ASA), you immediately start getting more noise in your images!

    Do some tests yourself… just change your ASA from 100 to 200 to 400 to 1000 and take the same photo of the same location (a tripod is probably needed if you want to be exact). Properly expose each of the scenes. Then, in your image editing program of choice, zoom way in and look closely at the edges of various objects. You will see increasing levels of noise along the edges of the objects in your scene.

  • How important is it to get the color balance right before shooting?
    If shooting JPEGs, it is pretty much critical. If shooting in RAW, then you have more latitude to “fix it in post”. At the same time, if you have a perfect color balance (because you used the custom white balance feature in your camera), it makes life much easier in the post-processing stages. The fewer steps the better in a real production environment. Time is money, right? If you spend a few extra minutes up front, you can save a lot of time in the post-production side of creating HDRI files.

  • How many f-stops / EVs apart do I need to shoot for creating HDRs?
    Commonly, most people shoot in 2 EV increments when shooting RAW and in 1 EV increments when shooting JPEG. If you shoot 1 EV brackets, you will probably have less noise overall since the shots are closer together. However, this also means you will need to shoot many more photos, taking up more disk space and time, and the more shots you take, the higher the odds that you end up with ghosting artifacts when shooting in changing environments. See the next question for further related information.

    SIDENOTE – I asked Paul Debevec about this and he said they always shoot in 2 EV brackets using RAW. If that is good enough for him, then it is certainly good enough for me!
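For planning a bracket, the shutter sequence is easy to generate; here is a minimal sketch (the metered speed, step, and count are just example values):

```python
def bracket_shutters(metered_s, step_ev=2, count=3):
    # Shutter times centered on the metered exposure, step_ev apart.
    half = count // 2
    return [metered_s * 2 ** (step_ev * i) for i in range(-half, half + 1)]

# Three RAW frames, 2 EV apart, metered at 1/125 s:
for t in bracket_shutters(1/125):
    print(f"1/{1/t:.0f} s")
```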

  • How many shots do I need to make an HDR (Radiance) file?
    Technically just one, but it will obviously not provide much lighting information. You can shoot a JPEG, then (in Photoshop, for example) do a “Save As” to an .hdr Radiance file. However, if your original file is only 8 bits, then saving it as a 16-bit TIFF or 32-bit .hdr file isn’t going to do you much good. Now, having said that, if your single photo is a 12-bit or 14-bit RAW file, then you may be onto something. Some tonemapping applications (i.e.: Photomatix Pro) will allow you to open a single image as a “faux” HDR file… then you can tonemap this single image and recover highlight and shadow detail that was previously blown out in the 8-bit version of the file. Tonemapping a single frame can add drama and detail to an otherwise boring photograph. Try it out for yourself. This is a great technique for locations that have fast-changing events or movements taking place, especially with people.

    However, I generally advise that the more shots you use to create the .hdr file, the better and more accurate your results will be. Sometimes 3 shots are plenty for a usable .hdr file. Other times, you may need 6 or even 9 shots to get a fully dynamic .hdr file. Ultimately, it depends on your project needs.

    Typically, if you shoot JPGs, you will need to bracket in one f-stop increments. If you shoot RAW, you will generally shoot in two f-stop increments.

  • Are there better or worse times to shoot HDRs?
    Yes, definitely. Shooting at noon on a bright, sunny, cloudless day is not ideal. If you capture an HDR under these circumstances, you will probably get overly blue lighting information onto your 3D objects. In this case, you may be better off just lighting your scene manually.

    Sometimes you may want the sun directly in the shot, but if not, you can often block the direct light from the sun by standing in the shadow of a tree branch, light post, or building. This also helps minimize lens flares.

    If you do shoot a bright sun directly, position the camera so that the sun hits directly in the center of your viewfinder. This will help minimize lens flares and partial suns in multiple frames, which can be challenging to blend or correct.

    Another potential problem arises when shooting in mixed lighting conditions. Often you may shoot an interior office or home that is primarily lit with tungsten light, but many homes and offices also have a window that lets in natural daylight. If you color balance for tungsten, your daylit areas turn very blue. If you adjust your color temperature for the daylight, the areas lit by tungsten turn extremely yellow.

    The (fairly) simple solution is to make sure you shoot in the RAW format. Later you can process each shot/panorama for each of the two color temperatures and then just composite them in an image-editing application like Adobe Photoshop.

  • Can I take my RAW file, process it at 3 different “virtual” exposures, export 3 Tiff files, then generate an HDR file from those three TIFFs?
    That is a fairly common misconception. When you export your data from 12 bits down to 8 bits, you are throwing out information, so each of the three TIFFs would carry considerably less information than the original RAW file. If you are really pressed to make an HDR, you will get better results from just converting your RAW file directly into a .hdr file. Once you have that, you can then tonemap it and visually pull “more” information from your original image.
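A toy demonstration of why this fails (not a real RAW pipeline, just integer quantization):

```python
# 17 adjacent 12-bit tones collapse to only 2 tones after an 8-bit export;
# no number of 8-bit "virtual exposures" can bring the lost steps back.
raw_tones = list(range(1000, 1017))               # distinct 12-bit values
eight_bit = [v * 255 // 4095 for v in raw_tones]  # naive 12->8-bit mapping
print(len(set(raw_tones)), "tones in RAW ->",
      len(set(eight_bit)), "tones in 8-bit")
```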

  • Flares and Glares
  • Lens Flares
    Lens flares are affected by many things, but after a little testing, I found that lens flares DO get smaller in size if you shoot with a smaller aperture (i.e.: f/22), which is the good news. The bad news is that this also causes the sun (assuming it is in the shot) to have very hard, distinct lines radiating from it. Certainly this may be artistically pleasing in some cases, but in others it creates much more work in the post-processing stage to digitally remove them in Photoshop, for example.

    Similarly, the focal length of the lens has a significant impact on the size of the lens flare. Wide-angle and fisheye lenses have smaller lens flares than telephoto lenses.



    Note – based on these images, you might be inclined to use f2.8, but remember that will give you a VERY shallow depth of field, which is generally not ideal for most panoramas.

  • Veiling Glare
    Veiling glare is stray or scattered light that bounces around inside your lens and even inside your eye. It generally appears as an evenly distributed whiteness, or a light haze or fog, that reduces the contrast of your images. Every image has some amount of veiling glare; it is a fundamental optics problem. The HDRI process makes it worse by combining multiple images, each with its own level of veiling glare.

    One of the major issues of most tonemappers is that they tend to reduce the contrast so much that they become unrealistic in appearance. Currently, only one “next gen” HDR application will correct for veiling glare. Unified Color’s HDR Expose easily corrects for veiling glare and you can even add more VG if you wish! This is a feature that we have never had before, but now I cannot imagine NOT using it every time! They have some example images and even a video tutorial on how to correct for veiling glare.

    For too many years, tonemapping applications have only given us a few controls (except for maybe Photomatix?). HDR Expose, as well as Max HDR, are trying to break the mold and add many new features and workflows for professional artists and photographers to use. Yeah!

    Sidenote – Currently, Greg Ward’s Photosphere is the only app that has an option for removing lens flares.

  • Is there anything I can do to reduce veiling glare?
    This is determined by the quality of your lenses and optics. Higher quality optics should minimize this glare. Another thing you can easily do is remove your filter(s) from your camera lens. The more pieces of glass that light has to pass through, the more likely you are to have veiling glare issues.

  • Don’t forget to clean your lens before shooting
    Any dust on the lens will generally be accentuated when creating an HDR file. Please see the image below:


  • Shooting Panos
  • Can you successfully handhold and shoot a spherical HDR?
    Yes, but I don’t advise it, especially for newer panoramic photographers. You need a very steady hand, a fisheye lens, a fast burst rate, and the ability to maintain your nodal point while positioning your body all around the scene. It can be difficult to NOT get your toes in the shot when shooting with a fisheye lens. It will also end up costing you more work for stitching and image clean up, but it is definitely possible.

    Shoot as fast as you can if you want to even attempt this.

  • How do I get good and fast at shooting panoramic HDRs?
    Practice! It takes a lot of practice to get fast and efficient at shooting and stitching them. Another bit of advice: once you start shooting your own panos and HDRs, be consistent and methodical with your approach. Maybe you always start shooting north? Maybe you always start with the “beauty shot” of the scene? I also suggest always shooting clockwise or counter-clockwise (it doesn’t matter which, but choose one). I generally take the up and down shots last because the important parts of the photos (content-wise) are usually in the horizontal areas of the panoramas.

    However, I should mention that some panographers always start by pointing straight at the sun. This minimizes lens flares, and after completing the circle you can easily remember where to stop. Others start with the down shots, before they stomp all over the grass, vegetation, or sand, to minimize retouching later in post.

  • How many shots do I need to shoot a full spherical panorama?
    It depends, primarily on what focal length lens you use: the wider the lens, the fewer shots it will take overall to complete a sphere. However, it also depends (somewhat) on what stitching software you will be using. In the early days of panoramic stitching, a 50% overlap of images was often needed. Today, you generally only need 10% to 20% overlap to get a good stitch.

    If you need to be able to visualize how many shots will make up a full sphere, you can set up a series of cards (with the same aspect ratio as your CCD) in Maya, 3DS Max, Lightwave, etc… then just duplicate and rotate your card until all the cards fully cover your sphere with a little bit (about 10%?) of overlap. (A good place to see how this done is on Greg Downing’s Gnomon DVD.)

    Or you can use the Panoramic Calculator from the tools section on this website.
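If you prefer doing the math directly, the column count per row follows from the lens’s field of view. A rough sketch for rectilinear lenses (fisheyes need their own projection math); the sensor size, focal length, and 20% overlap are example assumptions:

```python
import math

def shots_per_row(focal_mm, sensor_side_mm, overlap=0.2):
    # Field of view across one sensor side for a rectilinear lens,
    # then how many frames cover 360 degrees with the given overlap.
    fov = math.degrees(2 * math.atan(sensor_side_mm / (2 * focal_mm)))
    return math.ceil(360 / (fov * (1 - overlap)))

# 10 mm lens, camera in portrait orientation on an APS-C body
# (~15 mm across the short side) -- treat this only as a first estimate:
print(shots_per_row(10, 15))
```

Real shoots tolerate varying overlap, so always confirm the count with a test stitch before relying on it.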

    Example 1:

    If you use a Canon 30D with the Canon EF-S 10-22mm lens (shooting at 10mm), you will need to take 15 photographs for a panorama. (For HDR, you will need to take each of these 15 shots at three or more exposures each.) You will take ONE photograph up at the zenith, TWO photographs down towards the nadir (rotate 90 degrees between your two down shots), and then two “rows” of images, with each row made up of 6 vertical photographs. The lower row of images will be tilted down (about -15 degrees from the horizon) so that you just barely miss getting any of the tripod legs in any of the six shots of that row. For the other row of images, you will need to tilt the camera up 45 degrees above the horizon. This will give you plenty of overlap at the zenith (so you may or may not need the zenith shot, but it’s better and safer to go ahead and shoot an extra straight-up zenith shot just to be safe).

    Example 2:

    If you shoot with a Nikon D200 and a 10.5mm fisheye lens, you can just take 6 images horizontally and 1 up for the zenith, and 1 down for the nadir.
    Only 8 shots for a full panorama!

    When shooting panoramas, you will often hear the terms “rows” and “columns”. Columns, like in a book or newspaper article, represent how many shots you need to take to make a complete circle horizontally. Rows are like the horizontal slices of a sliced orange, for example. In the above example, there were two rows plus the zenith and nadir images. However, with a slightly less wide-angle lens, a Nikkor 12mm for example, you will need to take 8 shots to get a complete row. Other lenses may not have as much vertical field of view, so you may need to shoot 3 rows and 8 columns (+ nadir + zenith) to get a complete spherical panorama.

    I should also mention that you can (sometimes, depending on overall light levels) handhold a nadir shot that does not have the tripod in it. Shoot it, then composite it back into the other images to remove the tripod and head. This method is much more difficult, or even impossible, in low light situations. Another alternative is to move the camera over slightly from where you shot, tilt the tripod up on two legs, and reposition the lens (by sliding the VR head plates off center as far as they go horizontally) as close as you can over the nadir, then shoot. Some tripods have a center column that can pull up and swing out so you can more easily take a downward shot.

  • Do I really need to shoot directly up or down at the Nadir or Zenith?
    Not always, but it is a good habit to get into when shooting, especially if you have the time and hard drive space. Depending on what lens you are shooting with and how many shots, you may or may not need a shot that is directly pointed straight up. In a sky for example, it is easy to fix and blend blue sky or clouds, so if there is a small hole at the top of your sphere, then it’s easy to correct. However, if you are shooting an architectural interior and you are shooting directly underneath a chandelier (for example) then it is MUCH better to shoot the up shot and just force the stitcher (via masking methods) to use the single image of the chandelier instead of 6 or 8 partial images of the same chandelier.

    The same is true for the down shot(s). If you are shooting on top of a patch of concrete, grass, or carpet that is all relatively the same, then that is easy to fix and retouch in Photoshop or a similar program. But if you are shooting on top of a Persian rug with intricate detail, then I strongly advise you to take the down shot(s).


    (Christian Bloch shooting a nadir shot.)

    NOTE – Using the handheld tilting method to capture a Nadir shot is only practical if you are shooting with a lot of available light, which equates to very fast exposures.

    However, if you are shooting in mid to low-level lighting situations, then (unless you are a robot) you will not be able to hold the camera still long enough to avoid blurring/moving your shots while you take a 30 second exposure.

    Then the answer is to use a tripod like the one below that has an adjustable center column. As you can see my Nodal Ninja 5 head is still attached to the tripod’s center column. After rotating the center column, you then rotate the head so that it is pointing directly downwards before taking your various exposures.

    (Bogen / Manfrotto 055 CXPRO 4 Tripod with tilting center column)

    Why do I say down shot or down shots?

    If you shoot one downshot, that is good. But if you shoot TWO downshots, each shot 90-degrees to each other (North/South and then East/West) then you will be able to remove even more of your tripod and plates from the bottom of your spherical image. Most spherical stitching programs like Realviz Stitcher and PTgui Pro offer the ability to do masking. Then you can basically use a Boolean function to remove the differences between your two down shots. Then you only have to retouch a very, very small portion of your down view in post.

    Some spherical stitchers allow you to “stitch-in” a photograph (PTgui Pro) that has been shot off-camera and off center. That is good if you have plenty of available light, but can be near impossible in low light situations where a tripod is required to get a proper exposure of the downward information. If you are shooting 9 exposures per view, then it will also be nearly impossible for you to handhold that many exposures and keep the camera perfectly still and in the same position.

  • Common mistakes / Practical shooting advice:
    When shooting tons of HDR images, especially panoramas, it is easy to get confused on whether or not you have shot something. If you ever find yourself unsure, then SHOOT IT AGAIN! As they used to say “Film is cheap”. Having a few extra shots is not really a big deal, but if you are missing a few shots in a spherical panorama, it can easily end your grand creation.
    NOTE – There is one potential drawback to this. Later, when you create your HDR files, you can unintentionally import two of the same shots or the same exposures. This can produce errors in most HDR applications.


    I have another habit that I recommend: often you will shoot many panos and shots of similar locations. When you look at the files in an image browser like Adobe Bridge, it can be very challenging to figure out which images belong to which panorama. My easy solution is to take a quick picture of my hand (or a gray card) between panoramas, so that I can easily organize, group, and then rename the files correctly.

    Another important aspect of shooting panos is simply being consistent with your shooting methodology. Always shoot clockwise or counter-clockwise. Always start by shooting towards the sun, OR always start with the “beauty shot” of the pano. I define the “beauty shot” as your favorite portion of the panorama you are shooting; usually something in particular drew your attention to that location.

  • Any other practical tips for panoshoots?
    Shooting with a fisheye? Watch your feet! At 180-degrees, it is much easier to get your feet in the shot than with any other lens. I have been surprised many, many times!


    Have a friend or assistant with you while shooting a panorama? Need to ask someone to move so you can complete your pano? Short of asking them to leave completely (rude!?), the easy solution is to ask them to always stand behind you as you shoot. Then they will always be out of the shot and you don’t have to keep asking them to move.

    Another thing you can/should do for shooting panoramas is to TURN OFF the auto-rotate feature in your camera’s software controls. When shooting spherical panoramas, the software often gets confused about which way is up. Even more importantly, when you are viewing/analyzing your shots on the LCD screen on the back of your camera, if the shot is rotated to a horizontal position you are reducing your ability to view the file by about 50%, since it is no longer “full frame” as it is when you view a normal horizontal image. Seeing MORE of your image is better than seeing less of it!

    When shooting many HDRs or panoramas of various locations, be sure to keep your eye on the exposures/histogram on the LCD screen of your camera. When you are moving around a lot with your gear, it is very easy to accidentally turn a dial or knob. You also might forget to turn your auto-bracketing options on or off. If you get lazy and don’t look, you may find some unexpected surprises later.

    Hardware breaks over time… or at least it is common for screws to come loose on your tripod or head. Keeping travel restrictions in mind, I highly advise carrying a Leatherman, Swiss Army knife, or at least a screwdriver when out shooting. Your gear will generally break down at the worst possible time, at a location you cannot reshoot later. Be prepared!


    This seems like something very obvious, but if you are busy concentrating on your camera or the location, it can be easily overlooked. Always pay attention to the sun’s (or other main light in your scene) position. Position yourself to minimize your shadow. This will save you a lot of post-production retouching later.


  • Parallax – How important is it really?
    VERY! (Well, most of the time it is very important.) If you have parallax, most stitching programs will be unable to stitch your images. Calibrating and testing your camera/head/lens setup is worth your time. Moreover, once you find the sweet spot (the proper nodal point), you never have to find it again, so it is a one-time setup.

    However, it also depends on how close the subjects you are photographing are in relation to where you are shooting from. If you are shooting in the middle of a flat field in Kansas (USA), then parallax may not be as important. However, if you are photographing a small room filled with bookshelves (i.e.: a lot of vertical and horizontal straight lines) then it will be absolutely critical to have the correct parallax/nodal point.
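You can estimate how much a given nodal offset matters at a given subject distance; a quick sketch (the offset and distances are illustrative values, not measurements):

```python
import math

def parallax_error_deg(offset_mm, subject_m):
    # Angular shift of a subject between two views when the camera
    # pivots about a point offset_mm away from the no-parallax point.
    return math.degrees(math.atan((offset_mm / 1000) / subject_m))

print(round(parallax_error_deg(20, 100), 3), "deg for a field 100 m away")
print(round(parallax_error_deg(20, 0.5), 2), "deg for a bookshelf 0.5 m away")
```

The same 20 mm offset that is invisible on a distant landscape becomes a stitch-breaking shift of a couple of degrees at arm's length.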

    Side note - Below, you can see that I have successfully stitched a handheld multi-image panorama shot from the Seattle Space Needle, where the nodal point was off by nearly 40 feet!


    The wider the lens, the more challenging finding the correct nodal point can be. This is especially true of fisheye lenses. (The Nikkor 10.5mm fisheye lens has the nodal point marked with a nice gold ring around the lens, so setup with that lens is easy!) It would be nice if all lens manufacturers did that!

  • Where can I find out more about Gigapixel panoramas?
  • Stitching Applications
  • What are some of the applications that will stitch panoramas?
    Autodesk Stitcher
    Adobe Photoshop (Cylinders only)
    EasyPano (Flash)
    AutoPano & Giga
    VR Worx (Cylinders only)

  • Can you really use Photoshop to stitch panoramas?
    Think you have to use a high-end stitching program for all your panos? I used to think the “Photomerge” tool in Photoshop was just a toy for amateurs. Recently I tried to stitch this handheld panorama in various stitching programs, and the only one that worked was Photoshop!

    NOTE – It cannot currently stitch spherical panoramas, but for cylindrical panos it works pretty well.


  • Are there high-end HDR methods/systems for easily creating all this HDR stuff?
    Of course! If you want an automated system and your production time is critical or very short, then please check out the amazing HDR cameras at Spheron, Panoscan and Civetta!

    The one that tops them all is the HDR-CAM, which can only be rented.

  • Shooting Chrome Balls/Panos/Shooting HDRs
  • Shooting a Chrome Ball vs. Shooting a multi-image spherical panoramic HDR
    When is it better to shoot a chrome ball? When is it better to shoot a full blown multi-image panoramic HDR?

    Chrome Ball Method:

    • Easy to do – Anyone can do it!
    • Short learning curve
    • Do not need to be very photographically literate
    • Inexpensive (balls are $10 to $25 each)
    • Easier to process and convert to HDR files because it is just a single file
    • Overall low quality* image (*This can be improved if you shoot 2 or 3 additional angles of the same ball)
    • Overall, captures accurate color/light information
    • Not ideal for background plates or reflection maps (unless they will be very out of focus or blurred on purpose)
    • Often have scratches and dents
    • Portable? (Depends on your point of view)

    Multi-image/multi-segmental/QuickTime VR panoramic method:

    • Potentially challenging learning curve if you are not photographically literate
    • Gear is much more expensive and heavy
    • Exponentially more files are needed, which means more hard drive space, more file management, more retouching, etc….
    • Bigger time/manpower investment
    • High quality results can be used for lighting, reflection maps, and background plates.
      (Side note – You generally only need a very low resolution panoramic HDR file for the actual lighting information. You might need a higher or full res version for the reflections or background plates)
    • Much more "flexible" in that you have a high quality image that you can always make smaller if needed.

    SIDENOTE - There are other methods for capturing/creating spherical HDR images, I have just chosen to discuss the most popular methods. The various methods of shooting, processing, and stitching are covered very extensively in books.

  • Glass Ball vs. Stainless Steel Ball

    Glass balls are:

    • Cheaper
    • Easier to find around town in your local garden center
    • Brighter in reflectivity
    • Generally have a few bumps and imperfections, but this is not critical if you just need light and color information.
    • Come pre-made with a large hole at the bottom, which is perfect for a C-stand, tripod head, etc. to support it.
    • Breakable (so be very careful when handling them)
    • More prone to scratching.
    • Slightly lighter in weight.
    • Tend to have a secondary reflection near the rim due to the Fresnel effect.
    • (Personally, I prefer glass because of the brighter and sharper image it produces)

    Stainless Steel balls are:

    • More durable, almost indestructible. (However, they can be dented and scratched also)
    • Less bright/reflective than glass.
    • Generally give slightly softer reflections due to the tiny scratches on the surface.
    • Can be very difficult to work with if it does not have a pre-cut hole in the bottom.
    • (You can easily ruin a $40 drill bit on a single hole while trying to drill stainless steel.)
    • More expensive than glass.

  • What is the best lens (focal length) to shoot a chrome ball?
    The farther away you are physically, the better, because distance helps reduce your own reflection in the ball. A zoom lens (i.e.: 70-300mm) is very helpful for these situations. However, there are times where a long telephoto will not help you because you are in a small room. In this case, use the longest focal length you can without cropping into the image of the ball in your viewfinder or viewing screen.

  • Where do I put my chrome ball? How do I set it up properly?
    Ideally, it needs to go exactly where your 3D/CG object will be in your final shot(s). However, sometimes it cannot go exactly there, so get it as close as you can or in a very similar lighting situation. You should shoot level with the ball: if your ball is 5 feet off the ground, then your camera/tripod should also be 5 feet off the ground. (There are a few exceptions to this rule, but generally always shoot it level if possible.) You also need to be as far away from the ball as you can to minimize your reflection in it. Telephoto lenses are very handy in this situation (i.e.: 70-300mm). You can always retouch yourself out, but a smaller reflection is easier and faster to correct.

    If you only have time to photograph the ball from one angle, then photograph it from the same angle the video or film camera is shooting the plate with. (For example, if the film camera is facing due North, then you’d be shooting the South side of the character or object that would later go into the shot. Therefore, when shooting the ball for HDR purposes, you should also shoot the South side of the ball.)

    Another note – If part of your ball is severely scratched (or dented) then try to place that opposite of the direction you will be shooting from.

  • Can I shoot a colored (i.e.: red or blue) chrome ball?
    No. You will get severely distorted color and lighting information. Just stick to plain old silver balls. (Yes, chrome Christmas ornaments can also be used in a pinch.)

    Also see Pano Conversions

  • What do I do with my Chrome Ball HDR file once I have shot, cropped, and combined all the images into a single HDR file?
    Generally, you will then want to retouch the unwrapped image to remove yourself, the tripod, and maybe some “pinching” that can occur from the content on the backside of the ball. There are tutorials online about how to combine two or three different chrome ball HDRs into one clearer and better one. (But not as good as a full spherical multi-image panorama!)

    Once you have cropped the ball literally right to the edges, resize it to a perfect square (1:1 aspect ratio) and save the file. Then you can use a panoramic transformation to turn it into either a vertical cross or a LatLong (spherical) image for retouching. HDRshop, Autodesk Stitcher, and Flexify 2 can easily do this step for you.
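    If you are curious about the math those tools perform, here is a minimal numpy sketch of the mirror-ball-to-LatLong transform. It assumes an orthographic view of the ball (i.e. shot with a long lens, as recommended above), a square image cropped exactly to the ball's edges, and nearest-neighbor sampling for brevity; the function name and defaults are illustrative, not from any particular program:

    ```python
    import numpy as np

    def mirrorball_to_latlong(ball, width=1024):
        """Unwrap a square mirror-ball photo (cropped exactly to the ball's
        edges) into a 2:1 LatLong/equirectangular panorama.
        Assumes an orthographic view of the ball along -Z."""
        h = width // 2
        H = ball.shape[0]
        # One world direction per output pixel: lon in [-pi, pi], lat in [-pi/2, pi/2]
        lon = (np.arange(width) + 0.5) / width * 2.0 * np.pi - np.pi
        lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
        lon, lat = np.meshgrid(lon, lat)
        rx = np.cos(lat) * np.sin(lon)
        ry = np.sin(lat)
        rz = np.cos(lat) * np.cos(lon)          # +Z points back at the camera
        # The ball normal that reflects the view ray (0,0,-1) into (rx,ry,rz)
        # is proportional to (rx, ry, rz + 1); its x,y give the spot on the disc.
        norm = np.maximum(np.sqrt(rx**2 + ry**2 + (rz + 1.0)**2), 1e-9)
        u, v = rx / norm, ry / norm
        x = np.clip(((u + 1.0) * 0.5 * (H - 1)).round().astype(int), 0, H - 1)
        y = np.clip(((1.0 - v) * 0.5 * (H - 1)).round().astype(int), 0, H - 1)
        return ball[y, x]
    ```

    Real converters add anti-aliased sampling and let you rotate the result, but the mapping itself is the whole trick.
    
    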

  • What do I need to support my chrome ball while shooting?
    Certainly there are times where you can just place your ball directly on a tabletop, for example. But generally, you will need some sort of stand to hold it up. A “C-stand” will work fine, but I recommend getting a photography monopod (i.e.: Bogen/Manfrotto) that has small, internal, pull-out tripod legs, so that it can be self-standing. Yes, you can use a regular tripod, but then you will have the reflections of 3 legs in the bottom of the ball. With a monopod, you only have one major reflection directly on the downside. This is easy to retouch, and generally not much light is coming from directly below anyway. Windy days can be challenging, so make sure your ball is weighted down/taped down effectively so it will not move. If it does move, you can use the de-ghosting features of HDRshop, Photomatix, etc. to help align and correct this problem. It really works well! So use that feature if you are unsure whether things have moved.

  • Can I make a better/sharper HDR by shooting my chrome ball from multiple directions?
    Yes! The main drawback to shooting a chrome ball (if only shooting it from one direction) is that the content on the back of the ball can often be “pinched” and very distorted. If you only need lighting information from the ball, then that is not always a problem, but if you want to also use the ball for a background plate or for reflections, then you may need a better version. The solution is to shoot the same ball from multiple directions. Generally only one more direction is needed (i.e.: 90-degrees from the angle of your first set of images), but if you also shoot 180-degrees and even 270-degrees as well, you will ensure a sharper, crisper image once they have all been processed and combined into one panorama.

    A few things to note about shooting the ball from multiple directions. It is very important that you shoot the ball (for the 2nd, 3rd, and 4th versions) at the exact same height as you shot the first one. Otherwise, the unwrapped spherical images will not line up properly for retouching. If possible, also shoot the extra ball shots at the exact same distance as you shot the first one… or at least as close as you can. Also, if you shot 5 shots to create the first HDR of the ball, then you also need to shoot 5 for the next ones. Similarly, use the exact same exposures for all of them.

  • Can I really take a picture of one side of a chrome ball and see content and lighting information on the opposite side?
    YES! Seriously. It's one of those strange physics things… but suffice it to say that the content on the backside of the ball ends up as tiny reflections on the outer edges of the ball itself. (Hence the reason it is so important to properly crop your chrome ball photographs.)

    When you shoot a chrome ball, you are NOT just capturing 180-degrees of information. You are also capturing the content on the OPPOSITE side of the ball! At first glance, this doesn’t seem possible, but it is. Try taking a photo of a chrome ball while holding out your hand on the opposite side of the ball (opposite of the camera). Once you unwrap the ball, you WILL see content on the side of the ball that was facing away from the camera! It will not be very sharp and clear, and it may appear “pinched”, but it’s there. More importantly, the lighting information is there, which is what you really want anyway. Technical explanation

    If you don’t believe me, try it yourself! I didn’t believe it until I did it!
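    The physics is just the mirror-reflection formula r = d − 2(d·n)n. A tiny numpy sketch (a hypothetical helper for illustration, assuming an orthographic camera looking along -Z) shows that the center of the ball reflects straight back at the camera, while the extreme rim reflects what is directly behind the ball:

    ```python
    import numpy as np

    def reflected_direction(u, v):
        """Direction reflected toward an orthographic camera (looking along -Z)
        by the point (u, v) on a unit mirror ball, with u**2 + v**2 <= 1."""
        n = np.array([u, v, np.sqrt(max(0.0, 1.0 - u * u - v * v))])  # surface normal
        d = np.array([0.0, 0.0, -1.0])                                # incoming view ray
        return d - 2.0 * np.dot(d, n) * n                             # mirror reflection

    print(reflected_direction(0.0, 0.0))   # center: points back toward the camera (+Z)
    print(reflected_direction(1.0, 0.0))   # rim: points directly away from the camera (-Z)
    ```

    Everything in between covers the rest of the sphere, which is why a single photo of the ball contains (nearly) the full 360 degrees.
    
    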

    Click HERE to download a tutorial on shooting/unwrapping a chrome ball for creating quick and easy spherical panoramic HDRs.

  • What is the use of the “gray ball” that I sometimes see on those “behind the scenes” videos?
    Generally, the gray (diffuse) ball is used for general lighting information (think of it as merely a visual guide): Where are the main lights? What is the overall key-to-fill lighting ratio of the scene? For non-HDR lighting, you can take the captured image of the ball, go into Photoshop, and measure (with the color picker) the overall color and saturation of the light(s). You can then use this information as a general plan for traditional CG lighting. It is useful for telling where the key light is in the scene. Another “trick” some companies use is to put a CG ball into the scene. If they can get the CG ball to match the real ball, then they know they are pretty close to matching the overall lighting for that particular scene.

    This is definitely a good thing, but it is not very accurate and will not supply you with any reflections on your CG objects. Reflections can be equally or even more important than the lighting information generated by the HDR.


    Sidenote – You will also sometimes see a white ball used instead of a gray ball. The white ball is generally used/swapped out when the overall light levels are lower in the scene being shot.

  • 3D Panoramas
  • How can I easily make my own 3D spherical panoramic HDRI files?
    Visit the E-on Software Vue9 website and download a demo version.

    Set up your Vue scenes as desired, go to your render settings, and turn on the tick boxes for 360-degree render and spherical render (180-degrees vertical). Make sure “keep camera level” is turned on, and after rendering (but before saving) be SURE TO TURN OFF the tick box for “Natural Film Response”. Otherwise, your highlights and shadows (in the extreme dark and light areas) will get clipped when you save. Images are all rendered out at 32-bits, so once you do a render, just do a “Save as…”, choose the HDR format instead of JPG or TIFF, and that is all you need to do! Even if you don’t need a full environment, this program will easily create thousands of sky domes with every color, cloud, or detail you might need for your CG projects. It’s easy!

    A 3D panorama created with Vue – by Kirt Witte

    You could also render your own panoramic HDR in your main 3D app. If you're working on a long sequence, it can be a useful shortcut to turn complex environments into matte paintings and map them on a background ball instead. In LightWave, for example, you would just use the Advanced Camera, set to cylindrical distortion on both axes, crank the horizontal FOV up to 360, and set the frame width to double the height (like 4096 x 2048). Save as EXR, and off you go!

  • Post-Processing
  • When should I use DXO Optics Pro instead of Adobe’s Lightroom? (to process my RAW files)
    I use Lightroom about 98% of the time to process my RAW files. It is fast, it is easy, and it is cross-platform! However, there are times where you just cannot get your images to stitch properly to generate your spherical images and spherical HDRs. DXO has scientifically analyzed the lens defects of the most common camera and lens combinations and can reverse/correct problems with your lenses that you weren’t even aware you had! For example, most lenses have a certain amount of barrel or pincushion distortion. DXO Optics Pro can go in and correct your images (including other things like chromatic aberration), and once you have processed these files (to TIFFs, for example), you can stitch them much more easily and accurately! The drawback is that DXO is a bit clunky and slow to process your images, but the results are definitely worth it! If accuracy is more important than time, then DXO is definitely the way to go.

  • Is it better to create your Radiance (.hdr) files directly from RAW files? Or better to convert them to Tiffs (or something else), then import those to make your radiance files?
    Well, unfortunately, the answer is “… it depends.” From a high level, it would make the most sense to import your 12-bit or 14-bit files and make your .hdrs directly from those. (Some of the “next gen” HDR applications like HDR Photo Studio have their own proprietary working file formats. Therefore, they generally prefer/need those to be created from the original RAW files.) The idea is that you are preserving as much original data as possible, giving you greater tonal range and the ability to tweak and adjust your HDR files afterwards. In general, that is true, and in most situations that is the best way to do it.

    However, in a situation where your color balance is critical, you may need to use a different method. For example, if the color balance of your camera was set incorrectly at capture time, this could cause problems later on when you really need to fine tune your color balance. Yes, you do have the ability to adjust your color balance somewhat after making your .hdr files. You may also be working with files with an extremely broad contrast range (i.e.: an architectural interior with mixed lighting, plus bright windows letting in daylight); this can make it difficult to preserve a specific range of highlights or shadows. Personally, I usually process my RAW shots in Lightroom, then save them out as a series of 16-bit TIFFs. I then create my HDR files directly from those. This works best for my situation because I have very critical color needs. Lightroom allows me to really fine tune my color temperatures, adjust for vignetting, adjust for chromatic aberrations, and much more. I like and need more control over my images to get the results I want and need.

    For more information on this topic, please see the sections on workflow methodologies in the following books:

    “The HDRI Handbook” by Christian Bloch
    "Mastering HDR Photography" by Michael Freeman

    Solution: Process your RAW files with a program like DXO Optics Pro, Adobe Lightroom, or Apple’s Aperture first. These programs allow you to really fine tune your RAW image processing. For example, you can choose to preserve your highlight details and lighten up your shadows a little. So adjust your most “normal” exposure (out of the set of exposures that you will use to create your HDR file) and save the settings for how you specifically processed that image. Copy and paste those particular settings onto ALL of the exposures you will be generating your HDR from. IT IS CRITICAL that you apply the exact same settings to all of the images so that you will get reliable, predictable results in your final HDR file. I recommend exporting these files out to 16-bit TIFF files (the idea being to preserve as many tonal values as possible in each channel). Then take your series of “processed” 16-bit TIFF files and generate your final HDR file from those (instead of using the original RAW files only).
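    The merge step itself is conceptually simple. Here is a sketch of a Debevec-style weighted average in numpy, assuming the identically processed, aligned input images have been linearized to float values in [0, 1]; the hat-shaped weight is one common choice, not what any particular program uses:

    ```python
    import numpy as np

    def merge_to_hdr(images, shutter_times):
        """Merge a bracketed exposure series into one linear HDR radiance map.
        `images`: list of float arrays in [0, 1], identically processed/aligned.
        `shutter_times`: matching exposure times in seconds.
        A simple hat weight makes blown highlights and noisy shadows
        contribute the least to the final estimate."""
        num = np.zeros_like(images[0], dtype=np.float64)
        den = np.zeros_like(images[0], dtype=np.float64)
        for img, t in zip(images, shutter_times):
            w = 1.0 - np.abs(2.0 * img - 1.0)   # peak weight at mid-gray, 0 at clip
            num += w * (img / t)                # radiance estimate from this frame
            den += w
        return num / np.maximum(den, 1e-9)
    ```

    Note how a pixel that clips to 1.0 in one frame gets zero weight there, so the shorter exposures supply that radiance instead.
    
    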

  • Can I import my RAW files directly into a spherical stitching program and just make my HDRs from those original files?
    Technically, that’s possible, but it isn’t necessarily the best idea. PTGui Pro and Autopano Pro support stitching directly from RAW files. Due to the large number of “RAW flavors” out there, they support RAW by using dcraw, David Coffin’s public domain RAW engine. The same engine that also drives RAW support in Photomatix, Picturenaut, and a million other programs.

    However, the quality of the conversion will always lag behind dedicated RAW programs. Lightroom, Aperture, DxO Optics Pro, and Bibble are best suited for this; batch-processing massive amounts of images with a rich correction toolset (and the workflow that goes with it) is what sells them. Looking at purely technical quality, the software that comes with your camera is the only one that delivers a perfect conversion, because the manufacturer is the only one who really knows how much can be gotten out of a RAW file; everyone else has just reverse-engineered it. But Canon’s and Nikon’s software is usually a disaster workflow-wise, so you’d better stay with Lightroom et al.

    Try to see this first round of processing as a chance to clean up the input images. If you can get rid of vignetting and chromatic aberrations at this early stage, you won’t have to worry about them at the stitching stage.
    (also read the article above)

  • Is there a way I can find out what the dynamic range of my HDR really is?
    Photomatix Pro has an option where you can view a 32-bit histogram of your HDR file. One of the numeric results it lists is an (estimated!) dynamic range of your file. It’s not perfect, but gives a rough approximation of the brightness/contrast range of your HDR file.

    In Picturenaut, you'll see the estimated EV span in the lower left corner of the image.
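    For a rough estimate outside those programs, you can compute the EV span yourself. A numpy sketch, assuming a linear float HDR image and Rec. 709 luma weights (this mirrors the kind of estimate those programs display, not their exact algorithms):

    ```python
    import numpy as np

    def estimated_ev_span(hdr):
        """Rough dynamic-range estimate of a linear HDR image, in EV (f-stops):
        log2 of the ratio between the brightest and darkest positive luminance."""
        lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
        lum = lum[lum > 0]                  # ignore pure-black pixels
        return float(np.log2(lum.max() / lum.min()))
    ```

    In practice you would also want to ignore a percentile of outliers at each end, since a single hot pixel can inflate the number considerably.
    
    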


  • Tonemapping
  • Is there another name for tonemapping?
    Yes! Some argue that Dynamic Range Mapping is a more appropriate term.

  • What is a “baked” HDR image?
    This is usually a derogatory name for an overly-processed/toned HDR image. Many examples of baked HDRs can be seen on sites like Flickr.com.

  • Tonemappers? Are they all the same?
    Definitely NOT! Some are very simple and crude. Some are very complex and offer many, many variations and choices. There is no perfect tonemapper for every situation or image. Different methods are needed for different lighting situations. Ultimately, tonemapping is a matter of personal preference: whether or not the viewer/photographer thinks it looks correct. Be aware that many tonemappers can “suck the life” out of your images; by that I mean they can easily reduce the contrast so much that the result looks very unnatural. An easy way to correct for this: after you tonemap the image, go into Photoshop (for example) and reset the black and white points of the image. This will create a more realistic, natural-looking image.

    Another thing to really watch out for is dark or light “halos” around areas of high contrast that have been compressed during the tonemapping process. Generally this creates a very unrealistic image, but some artists like it as it seems “painterly” to them. As I said, tonemapping is very subjective to personal taste.

  • What does tonemapping really do?
    Tonemapping is really about compressing a 32-bit file down to a believable, generally realistic 8-bit image. With traditional 8-bit photography, highlights will often be blown out and subtle shadow information will just go black, with very few details in the shadow areas. Tonemapping allows you to basically pick and choose which highlights, midtones, and shadows you want to use. You take those preferred areas and compress them down into an 8-bit file (it could be compressed to 16 bits also), into an image that looks “real” to the viewer but wouldn’t be possible with traditional digital photography. The human eye adjusts exposure on the fly in our brains, and the old traditional painters also painted scenes and exposures the way they “saw” them in real life. Tonemapping allows us to more accurately reproduce how our brain views a scene.
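    As an illustration of that compression, here is a sketch of a simple global operator in the spirit of Reinhard's L/(1+L) curve, assuming a linear float HDR input; real tonemappers add far more color handling and user controls:

    ```python
    import numpy as np

    def tonemap_global(hdr, key=0.18):
        """Global Reinhard-style operator: scale the scene to a target key
        (average exposure), then compress luminance with L/(1+L) so no
        highlight ever clips. A sketch, not any specific program's method."""
        lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
        log_avg = np.exp(np.mean(np.log(np.maximum(lum, 1e-9))))  # scene "key"
        scaled = key / log_avg * lum
        ldr_lum = scaled / (1.0 + scaled)                 # compress into [0, 1)
        ratio = ldr_lum / np.maximum(lum, 1e-9)
        return np.clip(hdr * ratio[..., None], 0.0, 1.0)  # 8-bit-ready image
    ```

    The same curve is applied to every pixel regardless of its neighbors, which is exactly what makes this a *global* operator.
    
    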

    Again, let me clarify that tonemapping is very subjective. Many artists/photographers create very stylized and hyper-real images that are “artistically” acceptable, but not necessarily believable. Often times they end up looking like poorly composited/blended Photoshop projects. I think the “challenge” is to make tonemapped images look believable.
  • Local vs. Global tonemapping? What are the differences and why should I care?
    Global tonemapping applies the same set of color changes, compression, etc. equally across the image. This method is generally faster and often works well with HDRs that do not have an extreme dynamic range within the image.

    Local tonemappers initially apply a global tonemapping algorithm to the file, but then analyze the differences (delta); if there are still areas of extreme difference in values, they go to each of those local (smaller) areas and further reduce/compress the tonal values within an increasingly smaller area until a “normal” range of differences is reached. Local tonemappers are generally slower, but offer more ability to really tweak difficult or very high contrast HDRI images.
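    A minimal numpy sketch of the local idea, assuming a linear luminance image: split the log luminance into a blurred base layer and a detail layer, compress only the base, and recombine so local contrast survives the range reduction. A plain box blur stands in for the smarter edge-aware filters real local operators use, which is exactly why this naive version can produce the halo artifacts mentioned earlier:

    ```python
    import numpy as np

    def box_blur(img, radius):
        """Separable box blur with edge padding (a crude stand-in for the
        edge-aware filters used by real local tonemappers)."""
        k = 2 * radius + 1
        kern = np.ones(k) / k
        pad = np.pad(img, radius, mode='edge')
        tmp = np.apply_along_axis(lambda r: np.convolve(r, kern, mode='valid'), 1, pad)
        return np.apply_along_axis(lambda c: np.convolve(c, kern, mode='valid'), 0, tmp)

    def tonemap_local(lum, compression=0.5, radius=8):
        """Local operator sketch: compress the large-scale brightness (base)
        while keeping the fine local contrast (detail) untouched."""
        log_l = np.log2(np.maximum(lum, 1e-9))
        base = box_blur(log_l, radius)          # large-scale brightness
        detail = log_l - base                   # local contrast, kept as-is
        out = 2.0 ** (compression * base + detail)
        return out / out.max()                  # normalize into [0, 1]
    ```

    Run it on a hard luminance edge and you can watch the overall range shrink while overshoot (halo) appears right at the boundary.
    
    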

    Some programs, like Photoshop for example, allow you to manually tonemap with a custom curve function. This can be acceptable for an advanced user, but is not very user friendly for artists who are new to HDRI and tonemapping.

    Some freeware and shareware tonemappers offer 2 or 3 methods for tonemapping. Other programs like Photomatix Pro offer many more choices than that.

  • Are tonemapped photos and HDRs the same thing?
    Definitely not! Tonemapping is really just a small subset of what HDR files are all about and how they are used. Unfortunately, because of some of the bad tonemapping examples posted on popular internet websites like Flickr.com, many people are being misled about the benefits of high dynamic range imaging. They may see many of the poorly tonemapped images and think, “Why would I want to use or learn HDR if that is what the result will be?”
  • Can I tonemap a single image?
    With a few applications (i.e.: Photomatix Pro) you can convert your single RAW file into a “faux” HDR file. (14-bits would be better than 12-bits in this case, but 12-bits will certainly work.) Once you have that, you can tonemap it as if it were created from multiple images. This can be very useful for pulling/saving/showing the highlights in clouds, or to show more shadow details in a journalistic photograph of a person or a fast moving scene where shooting with a tripod is not possible or practical.
  • What programs or utilities will do tonemapping?
  • Panoramic Conversions: how, when, why?
    Many HDR images on the internet are in the Light Probe format (They look like a picture of a chrome ball). Most 3D programs cannot use HDR files in this particular shape. In general, most 3D programs need what is called a LatLong File (also called equirectangular… think of a map of the earth that has been flattened out into a 2:1 aspect ratio. Why 2:1? 360-degrees horizontal by 180-degrees vertically gives you 2:1 )
    Autodesk Stitcher has a few built in transformation options.
    HDRshop and Photomatix have a few built in transformation options.
    The “Flexify 2” Photoshop plug-in has more than 100!

  • What are some applications that do panoramic conversions?

  • Where can I find panoramic photographers?
    International VR Photography Association

    International Association of Panoramic Photographers

    You may find some on the official Adobe approved photographers list as well.

    Jook Leung is one of my favorite panoramic photographers and he is very familiar with the needs of HDRi projects.

    There are many panoramic photographers located via Hans Nyberg’s website: www.panoramas.dk

  • Stock HDRs
    There are more and more stock HDRs on the internet, but for the last few years many people have kept re-using the same ten or so free HDRs they could find.

    To make your own CG HDR domes, try Vue.

    For photographic HDRs and background plates: (especially automotive)
  • Where can I learn to do my own panoramic photography?
    See Greg Downing’s DVD tutorials on Gnomon

    Read Mastering Digital Panoramic Photography by Harald Woeste
    This is a great book that covers many different methods and various stitching applications.

    Better yet, come take my “High Dynamic Range Imaging” class at the Savannah College of Art and Design, in Savannah, GA, USA

  • Where can I find a thorough list of panoramic and HDR related tools, hardware, and utilities?
  • Where can I purchase a chrome ball?
    Generally, you can find a chrome ball at your local mega-shopping store in the garden center, but it may depend on the time of year as well. However, here are two locations that keep them in stock year round: Fountains N Such and OutdoorDecor.com

  • Is it better to purchase a SpyderCube or a WhiBal Digital Gray Card?
    First, let me say that it depends on how CRITICAL it is that you have perfect color.

    I have found the WhiBal card to be perfectly neutral in color, and because it is flat, it is much more portable and less likely to be damaged or broken. At the same time, if your color needs are not in the “Perfect” arena and the color just needs to be close, then the cube works just fine for that. It is really nice also that it has a “black hole” for the deepest blacks; has white, gray, and black areas; and has a nice small chrome ball for catching specular highlights and miniature sized reflections. It also has a nice tripod socket for mounting on 4 inch tripods, or any tripod (1/4” threads) bigger than that. It has a string as well, to be able to hang it from objects.

    The SpyderCube certainly has a “cool” factor to it as well. But it is a small plastic cube that is liable to be crushed fairly easily. I am also disappointed that the gray colors are NOT perfectly neutral; they are off by just a few points on the one I own. At the same time, most people probably don’t need to be as exact with their colors as my clients do. I find myself using both on most shoots now, just for the flexibility and choices that having both under the same lighting conditions gives me.


  • Where can I get help with HDRI in mental ray?

    Zap’s mental ray tips:

    Los Angeles mental ray Users Group

    My mental ray

    mental images

    djx Blog

  • Resources

    HDR Labs - (Probably the best single HDR-related resource currently on the internet)

    “The HDRI Handbook” by Christian Bloch

    “Practical HDRI” by Jack Howard

    “Mastering HDR Photography” by Michael Freeman

    Photographic Multishot Techniques http://rockynook.com/books/105.html

    “Color Imaging – Fundamentals and Applications” by Reinhard, Johnson, Khan, and Akyuz

    “High Dynamic Range Imaging” by Reinhard, Ward, Pattanaik, and Debevec

    Paul Debevec’s website:

    Greg Ward’s website:

    Erik Reinhard’s website:

    Kirt Witte’s websites:

  • Misc
  • What about the future of HDRI?
    The future of HDR is looking great!

    Spheron has a new HDR-V (video) camera soon to be released!
    More and more camera manufacturers are going 14-bits.
    “Next gen” HDR programs like HDR PhotoStudio are now available, with many more never-before-seen tools and utilities coming.
    Gigapixel panoramas are becoming more common.

    A great way to keep up with the latest HDR news is to visit the HDR Labs news site.

  • Who started all this?
    Paul Debevec and Greg Ward.

    Dr. Paul Debevec is generally credited with bringing HDRI to the forefront. He is often considered the “Father of HDRI”. However, Greg Ward invented the Radiance file format, so he should be considered the “Grandfather of HDR”, and they both are still very active in the development of current HDRI related technologies. Go Paul! Go Greg! (and Thank you!)

  • What is the most commonly used HDR on the planet?
    From Paul Debevec’s website, his “Uffizi Gallery-Florence” HDR is probably the single most used light probe. It is really great, but keep in mind that it was one of the first free HDRs on the internet, so it has been used over and over. Do you really want your work to look like everyone else’s?

  • HDR files are great for Black and White conversions…
    (and so are RAW files)
    Not sure whether you might want black and white for your image? Shoot color; you can always remove the color later. Today, using 12-bit and 14-bit RAW files (or even 32-bit HDRs), you have much more creative freedom to really tweak your B&W conversions (much more than just desaturating 100%). Try adjusting the red, green, and blue channels while previewing the black and white image. (Side note - Adobe Photoshop Lightroom is really great for this!)
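    The channel-mix idea can be sketched in a couple of lines of numpy; the weights below are arbitrary examples to tweak to taste, not a standard:

    ```python
    import numpy as np

    def to_grayscale(img, r=0.5, g=0.3, b=0.2):
        """Channel-mix black & white conversion: weight the red, green, and
        blue channels to taste instead of a flat desaturation.
        Works the same on 8-bit, 16-bit, or 32-bit float image data."""
        w = np.array([r, g, b]) / (r + g + b)   # normalize so exposure holds
        return img @ w                          # per-pixel weighted sum
    ```

    Boosting the red weight, for instance, darkens blue skies for the classic dramatic-sky look that a red filter gives on film.
    
    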

  • Future Requests
    • 16-bits for all!
      • Tonemapper that removes shadows…
      • Canon – 5 and 7 frame AEB
      • Canon – Choice of 12 or 14 bits
      • GPS for all…
      • OpenSource access for apps to cameras, TMOs, etc….
      • Standardized RAW format
      • Ability to shoot/save directly in-camera into an HDR (Radiance) file format
      • (or at least the .DNG format until HDR CCDs are invented)
      • Adobe LIGHTROOM to have 32-bit, HDRI tools and utilities
      • Accurate 32-bit histograms
      • Nodal points marked on all lenses/cameras
      • Swiveling viewfinders on high-end SLRs
      • Gigapixel stitching support for all stitching applications
      • MORE options for controlling tonemapping! 40 or 50?!
      • A wireless camera controller that does not require the camera to be hardwired to a laptop.
      • All Pano stitching apps to allow off-camera nadir shots to be added!
      • Standard, built in HDR viewers for all computer platforms!

  • How can I properly view my 32-bit HDR file on my 8-bit monitor?
    Unfortunately, you cannot. Viewing 12-bit, 16-bit, or 32-bit files on your normal computer screen is just a rough approximation of the data in the file. Ultimately, you will need to view your rendered images/film to see if everything is working properly and you are getting the effect/believability that you are looking for. However, there are a few things you can do to help get to the point where what you see on screen is what you get as a final result.

    Monitor calibration is very important for normal photography and image editing anyway, but I think it is even more crucial when dealing with 16- or 32-bit files.
    I recommend the Spyder 3 calibrator, as it's relatively inexpensive and works on CRTs, LCDs, and digital projectors. There are many other calibration systems, but many are too expensive for the average consumer.

    Eventually, we will have access to the “next generation” of monitors: 16-bit monitors. As a longtime digital artist, I have to say that viewing HDR images on a 16-bit monitor is like looking through a window at reality. It is gorgeous. The blacks are very, very deep, and if you see a bright light (or sun) you may have to squint your eyes because it is so bright on the screen. It is truly amazing. Dolby Canada (formerly Brightside Technologies) has been working on them for a while now and has shown them at previous Siggraph conferences. Sony also has a small one, the Sony XEL-1, with an amazing 1,000,000:1 contrast ratio. Wow!

    Speaking of which… If you are going to be working with HDR files and you’re in the market for a new monitor, then I suggest purchasing one(s) with the highest contrast ratios you can find. Typical retail stores are now starting to carry them with 3,000:1, 8,000:1, and even 10,000:1 contrast ratios! The higher the ratio, the more closely it can display your high dynamic range images.

    Dolby has licensed their “Black” technologies to various manufacturers like LG, which is how they are now able to offer monitors with 10,000:1 contrast ratios. Yeah!

  • So you made it to the end of this document, eh!?
    Looking for more?

    Well then, here are some vocabulary words I picked up at the HDR Symposium at Stanford University.

    • FPN - Fixed Pattern Noise
      • FWC - Full Well Capacity
      • SNR - Signal to Noise Ratio
      • WDR - Wide Dynamic Range
      • SDR - Standard Dynamic Range
      • TSP - Time to Saturation Pixel
      • SAP - Synthetic Aperture Photography
      • HVS - Human Visual System
      • VGI - Veiling Glare Index
      • CLBs - Cluster Lumpy Backgrounds
      • MDS - Multi-Dimensional Scaling
      • PSF - Point Spread Function
      • Global Sigmoid TMO (Tone Mapping Operator)

    Transconductance, Thermal equilibrium, Subthreshold, Monotonically, Taxonomy, RePhotography, Multi-bucket Pixels, Metameric-matching, Noise Floor, Microlens Array,
    Tone-Corrected, Quantization, DICOM-calibrated, Rician Noise, Tone Scale Construction,
    Non-linear tonescale, Spatial Segmentation, Dichromatic Color Plane, Morphological Operator, Erosion Dilation, Contour Artifacts, Gelb Effect, De-correlated, Veiling Luminance, Deconvolution, High Fidelity Imaging, Psychophysical Glass Model, Thurstonian Scaling,
    Visibility Threshold, Spatially Varying Sigmoids, Salience Map, Spatio Temporal
    Chromatic Adaptation, & Advanced Gamut Mapping!
