Students of the Graduate Diploma of Advanced 3D Production course at the Media Design School in Auckland, New Zealand, have built a robot that allows a DSLR camera to take 360-degree panoramic HDRIs. In 3D packages these images are used to simulate realistic settings and lighting effects.

Automated Panoramic Image Acquisition using the LEGO NXT

The design and construction of a robotized Panoramic Head, for use with Image-Based Lighting (IBL).

Research and Development Project
of the Media Design School, New Zealand
Advanced 3D Production course - September 2008

Nagaraj Thandu, Christopher Medley-Pole, Hsulynn Pang, Mark Pearce,
Yoshihiro Harimoto, Leo Hutson, Cameron Smith.

special thanks to:
Clint Pearce, Mathew Pearce

Course Leader/Project Coordinator
Emil Polyak


First of all I'd like to thank Christian Bloch for hosting this project and for his book, which was a great resource and inspiration to my students and myself.
I'd like to point out that this R&D project was not created by computer science students but by artists studying advanced 3D. It is driven by the desire to understand light on a deeper level in order to capture it more accurately. It is a very practically oriented project: the goal was to fabricate a robot while learning a great deal about pixels, bit depth, dynamic range, lenses, exposure, tone mapping and all other related subjects, including image-based lighting in 3D and the linear workflow.
We hope this project provides a good basis for further testing and development.

Emil Polyak
Course Leader, Media Design School


Currently, commercial solutions for fully integrated panorama capture are priced out of reach of the smaller 3D studio's budget. Slightly more cost-effective are the commercial robotized panoramic heads.
The only other out-of-the-box solution is the manual optical-axis tripod mount, but this requires much more time and skill to operate and is error-prone. Our solution, made from widely available parts, can be built for a small fraction of this cost. It is highly customizable, and it is camera-, lens- and platform-independent.



The Brief.

  • To design and build a panoramic image acquisition system utilizing a laptop computer, a DSLR camera, a tripod and some DIY hardware.
  • To design a computationally efficient pipeline for stitching, HDR compiling and finally utilizing this high dynamic range data for image-based lighting, reflections and backdrops.


Panoramic photography is a popular pursuit even for those unfamiliar with IBL, or CGI in general, so many home-brew tutorials are available for inspiration. Few if any are directly suitable for our purposes, however, so we took elements from several designs and incorporated them into our final Maya specification.
Many designs were ruled out by the following issues:
  • Horizontal panorama only
  • Compact camera only
  • No HDR Support


After some basic research was conducted, it was decided that the panoramic head needed these basic features:
  • Pivots around the optical axis
  • Is structurally sound
  • Rotates precisely
  • Is adaptable to different lenses, settings, and cameras
  • Is lightweight and portable
The optical axis of the lens is a critical factor when considering the function of the head, as it is the location the lens should ideally pivot on, lest it cause "parallax": the change in angular position of objects, especially foreground ones, caused by a translation of the viewer.
The structure of the head is complicated by the fact that the optical axis is located in the barrel of the lens while the tripod mount is at the base of the camera body, hence the difference in design between the panoramic head and the standard tripod head.
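To put a rough number on that parallax error, here is a minimal sketch (the offset and distances are hypothetical figures, not measurements from this rig): the apparent angular shift of a point grows with the pivot offset divided by the object's distance.

```python
import math

def parallax_error_deg(pivot_offset_m, object_distance_m):
    """Apparent angular shift of a point when the camera's entrance pupil
    translates by the pivot offset between adjacent shots."""
    return math.degrees(math.atan(pivot_offset_m / object_distance_m))

near = parallax_error_deg(0.05, 1.0)    # 5 cm offset, 1 m foreground object
far = parallax_error_deg(0.05, 100.0)   # same offset, distant background
```

A few centimetres of offset are invisible on a distant background but smear a one-metre foreground object by almost three degrees, which is why the head must pivot on the optical axis rather than on the tripod socket.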
Previz & blueprint as 3D Maya model.

The head also needs to be constructed in such a manner that it will not warp as the weight of the camera/lens shifts; the optical axis is positioned well away from the camera's center of gravity.
Having researched what was required for our needs and investigated some other designs, we moved on to planning our own.

Maya Mock-Up

Rough plans were sketched out and then modelled in Autodesk Maya, where the rough concept was elaborated on. Specific sections of metal, types of fixatives, grades of bearings, gears and other components were readily available, so these were researched and created to scale in Maya units. This plan could be revised easily, and orthographic plans were printed and taken to the workshop, both as visual communication of the basic idea and as a guide to take measurements from.


Camera and motors mounted.
There are two conventional structures for the head: U-shaped and L-shaped. Each has its own advantages: the U-shape is stronger, more stable and easier to align; the L-shape is simpler, lighter and requires less material to construct.
Following that, suitable materials for the structure needed to be decided upon. Aluminium is an obvious choice, because it is more readily available and easier to work than exotic composites like carbon fiber that are also often used in the construction of tripods. Steel is another option, but its high rigidity transfers environmental vibrations to the camera more easily and, although it has a high tensile strength, it has a lower strength-to-weight ratio. Fiberglass could be used, but exposure to sunlight can eventually cause the resins to become brittle, and it is difficult to mount the various parts in it.
Because of the choice of materials, a machine shop was needed to lathe and mill certain parts, like the bearing housings and the worm-drive spindle.



The LEGO NXT was chosen over other micro-controllers mainly because of its ease of use. On the software side it requires only basic knowledge of programming concepts thanks to its graphical node-based paradigm (based on the LabVIEW language), but its flexibility can be expanded via a firmware upgrade that allows the use of RobotC, a C-based language. It also provides numerous benefits in terms of hardware: a speed controller, a power source, and servos with rotational feedback, which is important for accuracy.


A C# program was created to provide output to the LEGO NXT from the laptop. It sends rotations in the form of bytecode to the NXT's mailbox, which in turn positions the motors accordingly.
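The project's C# sender isn't reproduced here, but the framing it would use is defined by LEGO's published direct-command protocol: a mailbox write is opcode 0x09, and the Bluetooth transport adds a two-byte little-endian length prefix. A minimal sketch in Python (the "90" message content is a hypothetical rotation value, not taken from the project):

```python
def nxt_message_write(mailbox, text):
    """Frame a LEGO NXT 'MessageWrite' direct command for the Bluetooth link:
    a 2-byte little-endian length prefix, then the command body."""
    payload = text.encode("ascii") + b"\x00"   # mailbox messages are null-terminated
    body = bytes([
        0x80,          # telegram type: direct command, no response requested
        0x09,          # MessageWrite opcode
        mailbox,       # inbox number on the brick (0-9)
        len(payload),  # message size, including the terminator
    ]) + payload
    return len(body).to_bytes(2, "little") + body

# e.g. queue a rotation value into inbox 0 of the running NXT program
packet = nxt_message_write(0, "90")
```

The NXT-G program on the brick then reads the mailbox and drives the servos to the requested position.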


Tcl scripting was used to get the camera settings from the user via a GUI (namely film back, focal length, aperture and exposure time), calculate the AOV and FOV, and then send these as command-line arguments when running the macro.
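The angle-of-view arithmetic the script performs boils down to the standard pinhole formula, and from it the number of pan positions follows. A sketch (the 25% frame overlap is an assumption for the stitcher's benefit, not a figure from the project):

```python
import math

def angle_of_view(film_back_mm, focal_length_mm):
    """Pinhole-camera angle of view (degrees) along one film-back dimension."""
    return math.degrees(2 * math.atan(film_back_mm / (2 * focal_length_mm)))

def rotation_steps(aov_deg, sweep_deg=360.0, overlap=0.25):
    """Camera positions needed to cover the sweep, keeping `overlap` of each
    frame shared with its neighbour so the stitcher can match features."""
    return math.ceil(sweep_deg / (aov_deg * (1.0 - overlap)))

aov = angle_of_view(36.0, 18.0)   # 36 mm film back with an 18 mm lens
steps = rotation_steps(aov)       # pan positions for a full 360-degree row
```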


Due to the lack of an API in the Pentax Remote Assistant software, we were forced to create a macro. The macro takes the output from the Tcl script in the form of rotation angles X & Y, number of rotations X & Y, and exposure time. With these variables the macro launches Remote Assistant and Sender.exe, hits capture in Remote Assistant, ALT-TABs to Sender.exe, enters some values, hits send, and ALT-TABs back to Remote Assistant. It repeats this process by looping in both X and Y for the number of rotations in X and Y respectively.
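The macro's nested loop amounts to enumerating a grid of (pan, tilt) positions. A minimal sketch, assuming the step angles and counts arrive from the Tcl script (the 6×3 grid below is a hypothetical example):

```python
def shot_list(step_x_deg, step_y_deg, n_x, n_y):
    """Enumerate (pan, tilt) angles row by row, mirroring the macro's
    nested loop: for each tilt row, sweep every pan position."""
    return [(ix * step_x_deg, iy * step_y_deg)
            for iy in range(n_y)
            for ix in range(n_x)]

shots = shot_list(60, 45, 6, 3)   # hypothetical 6 pan x 3 tilt grid
```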




The camera is screwed onto the slider at the optical axis, as determined by the focal length. The laptop and the LEGO NXT are booted up, and the pano program is loaded on the NXT. The Pentax remote software is opened (all other windows should be closed), and the camera settings are entered. The user launches the compiled Tcl script (an executable), enters some values into the corresponding GUI and clicks Run; the software does the rest. The whole panorama should take around ten minutes, depending on what exposures are needed.


As the panorama takes time to capture, it is impossible to photograph any moving objects without getting ghosting between frames, or in some cases without being able to stitch them at all. This is a problem because many natural phenomena that would be desirable to capture, such as water, clouds or foliage affected by wind, are prone to these effects. Another related problem is the change of light between frames; this could be caused by clouds moving to shadow the sun, lights nearby being turned off or on, or even slower light changes such as sunset or sunrise.


The captured images are stitched in Autopano Pro software, where a minimal amount of user interaction is required to ensure correct position of each "patch". This is especially important in areas such as a cloudless blue sky or a uniformly lit wall, where Autopano may not have enough data to compute alignments.


The RAW (.DNG) files taken by the camera have a limited dynamic range, but the multiple exposures captured by auto-bracketing cover the whole light range by taking 2 EV steps either side of the metered nominal EV. These files are then merged in HDR Shop into a 32-bit output.
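The bracketing arithmetic is simple: each EV doubles or halves the shutter time. A sketch, assuming two brackets of 2 EV on either side of the metered exposure (the exact bracket count per position isn't stated in the text):

```python
def bracket_exposures(nominal_s, ev_step=2, stops_each_side=2):
    """Shutter times for an auto-bracketed HDR set: the metered exposure
    plus `stops_each_side` brackets of `ev_step` EV on either side
    (one EV doubles or halves the exposure time)."""
    return [nominal_s * 2.0 ** (ev_step * i)
            for i in range(-stops_each_side, stops_each_side + 1)]

times = bracket_exposures(1 / 60)   # metered at 1/60 s
```

The resulting five exposures span 8 EV around the metered value, which HDR Shop then merges into the 32-bit image.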


Chrome Sphere.

The chrome sphere technique is fairly effective at capturing the lighting of a scene, but it suffers severe stretching at its periphery, rendering it unsuitable for use in detailed reflections.

Manual Panorama.

Panoramas can be taken manually with a standard tripod, or optionally with a specialist optical-axis head where the camera is offset backwards so it pivots correctly. The issue with manual creation is that it is a fairly laborious exercise where one must, depending on focal length and tripod equipment, carefully rotate the camera to a given angle, take multiple exposures at that point, and rinse and repeat for the number of images needed. The risk of error should not be overlooked either, as a single mistake might render an image set useless.

Commercial Solutions.

Professional solutions like the SpheroCam HDR are prohibitively expensive for all but the largest studios. They do boast impressive features, like full HDR images in one capture and spherical 360° × 180° images without any need for stitching. One feature that is a bit of a red herring, though, is resolution: since our system captures multiple images, and these can be captured at high resolution, the resulting stitched image has a higher resolution than would be needed for any CGI application.

You'll find a very detailed hands-on comparison in Paschá Kuliev's Bachelor Thesis, available in the Additional Downloads section. He basically comes to the same conclusion, hinting at a panoramic robot like this as the optimal solution.


The "Bath and Forck" team's rendering results of using captured light probes in Maya:



It would be fair to say that our prototype was successful; it provides a good basis for further testing and development. You are invited to take this project further with the package below. Please report back with any improvements you manage to make, either by email or in the forum.

This project with all images and support material is the intellectual property of the Media Design School and may be shared only if left unaltered and in its entirety with credit given and only for personal and non-commercial use.

Copyright 2008 Media Design School, Auckland, New Zealand
242 Queen St, Auckland, NZ +64 9 3030 402