Monday, October 29, 2007

In-camera effects (Special effects)

An in-camera effect is any special effect in a video or movie that is created solely by using techniques in and on the camera and/or its parts. An in-camera effect is defined by the fact that it exists on the original camera negative or video recording before it is sent to a lab or otherwise modified. Effects that modify the original negative at the lab, such as skip bleach or flashing, are therefore not included. Likewise, effects that work with props, such as squibs, fire, and dustball guns, are not included. Some examples of in-camera effects include:
Matte painting
Matte paintings are used to create "virtual sets" and "digital backlots". They can be used to create entire new sets, or to extend portions of an existing set. Traditional matte painting is done optically, by painting on top of a piece of glass to be composited with the original footage. The trick is nearly as old as film itself, dating to around 1911.
Originally, matte paintings were painted on glass plates, a technique also known as the glass shot. Two plates of glass are set up parallel to each other at a certain distance, with the camera in front of them. On the rear plate, a relatively rough background landscape is painted, for example a jungle. On the foreground plate, detail-rich elements such as small plants and stones are painted. Between the glass plates, a puppet can then be animated in stop motion. Nowadays, matte painting is done on computers with the use of a tablet as a drawing device. In a digital environment, matte paintings can also be done in 3-D, allowing for 3-D camera movements. The first film to use a digital matte painting was Die Hard 2: Die Harder; it was used during the last scene, which took place on an airport runway.
Schüfftan process
The Schüfftan process is a movie special effect named after its inventor, Eugen Schüfftan (1893–1977). It was widely used in the first half of the 20th century before it was replaced by the travelling matte and bluescreen effects.
The process was designed by the German cinematographer Eugen Schüfftan while working on the movie Metropolis (1927). The movie's director, Fritz Lang, wanted to insert the actors into miniatures of skyscrapers and other buildings, so Schüfftan used a specially made mirror to create the illusion of actors interacting with huge, realistic-looking sets. Schüfftan placed a plate of glass at a forty-five-degree angle between the camera and the miniature buildings. He used the camera's viewfinder to trace onto the glass an outline of the area into which the actors would later be inserted. This outline was transferred onto a mirror, and all the reflective surface that fell outside the outline was removed, leaving transparent glass. When the mirror was placed in the same position as the original plate of glass, the reflective part blocked a portion of the miniature building behind it and also reflected the stage behind the camera. The actors were placed several metres away from the mirror so that, when reflected, they would appear at the right size. In the same movie, Schüfftan used a variation of this process in which the miniature set (or a drawing) was shown on the reflective part of the mirror while the actors were filmed through the transparent part.
Over the following years, the Schüfftan process was used by many other film-makers, including Alfred Hitchcock, in his film Blackmail (1929), and as recently as The Return of the King. The Schüfftan process was later replaced by matte shots, which were easier and more efficient to create. The Schüfftan process's use of mirrors is very similar to the 19th century stage technique known as Pepper's ghost.
Forced perspective
Forced perspective is a technique that employs optical illusion to make an object appear farther, closer, larger or smaller than it actually is. It is used primarily in photography, filmmaking and architecture. It manipulates human visual perception through the use of scaled objects and the correlation between them and the vantage point of the spectator or camera.
Forced perspective in filmmaking
Examples of forced perspective:
A scene in an action/adventure movie in which dinosaurs are threatening the heroes. By placing a miniature model of a dinosaur close to the camera, the dinosaur may look monstrously tall to the viewer, even though it is simply much closer to the camera.
A scene in which two characters are supposed to be interacting in the foreground of a vast cathedral. Instead of actually filming in a cathedral, the director mounts a large painting of a cathedral's interior in a studio and films the actors talking in front of the painting. This gives the effect on film that the characters are in the foreground of a large room, when in reality they are standing next to a flat surface.
Movies (especially B-movies) of the 1950s and 1960s produced on limited budgets sometimes feature forced perspective shots executed without proper regard for the physics of light in cinematography, so foreground models can appear blurred or incorrectly exposed.
Forced perspective can be made more believable when environmental conditions obscure the difference in perspective. For example, the final scene of the famous movie Casablanca takes place at an airport in the middle of a storm, although the entire scene was shot in a studio. This was accomplished by using a painted backdrop of an aircraft, which was "serviced" by midgets standing next to the backdrop. A downpour (created in-studio) draws much of the viewer's attention away from the backdrop and extras, making the simulated perspective less noticeable.
One notable instance occurs in Andrei Tarkovsky's Nostalghia. The shot begins as a closeup of a man and his dog, with a small house in the distance. A continuous slow pullback then reveals the man, dog, and the entire farmhouse setting to be enclosed within the nave of the San Galgano church. The shot was accomplished by building the farmhouse setting in miniature and placing it closely behind the man and dog, shooting with lenses chosen to make the house appear distant at first.
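The geometry underlying all of these examples is that apparent (angular) size scales with real size divided by distance, so a scaled-down object at a proportionally reduced distance reads as full size. A minimal sketch, with purely illustrative numbers:

```python
def apparent_size(real_size_m, distance_m):
    """Angular size in radians (small-angle approximation)."""
    return real_size_m / distance_m

# A 1/10-scale miniature placed at 1/10 the distance subtends
# exactly the same angle as the full-size object would:
full_size = apparent_size(10.0, 50.0)  # 10 m farmhouse, 50 m away
miniature = apparent_size(1.0, 5.0)    # 1 m model, 5 m away
assert full_size == miniature          # both subtend 0.2 rad
```

This is why the miniature farmhouse in Nostalghia, or a model dinosaur near the lens, can pass for the real thing as long as the scale and placement keep the ratio constant.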
Role of light
Early instances of forced perspective used in low-budget motion pictures showed objects that were clearly different from their surroundings: often blurred or at a different light level. The principal cause of this was geometric. Light from a point source travels in a spherical wave, decreasing in intensity as the inverse square of the distance travelled. This means that a light source must be four times as bright to produce the same illuminance at an object twice as far away. Thus to create the illusion of a distant object being at the same distance as a near object and scaled accordingly, much more light is required.
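The inverse-square relationship above can be checked directly; assuming a simple point-source model:

```python
def illuminance(intensity, distance_m):
    """Illuminance from a point source falls off as 1/d^2."""
    return intensity / distance_m ** 2

# To light an object at twice the distance to the same level,
# the source must be four times as intense (illustrative units):
near = illuminance(100.0, 1.0)  # source of intensity 100 at 1 m
far = illuminance(400.0, 2.0)   # source of intensity 400 at 2 m
assert near == far              # identical illuminance on both objects
```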
Opening the camera's iris lets more light into the camera, allowing both near and far objects to be seen at a more similar light level, but this has the secondary effect of decreasing depth of field, making either the near or the far objects appear blurry. By increasing the amount of light hitting the distant objects, the iris opening can be restricted and depth of field increased, portraying both near and far objects in focus and, if well scaled, as existing in a similar lateral plane.

Since miniature models must be subjected to far greater lighting than the main area of action, it is important to ensure that they can withstand the significant heat generated by the incandescent light sources typically used in film and TV production, as they may be prone to combustion.
Nodal point: forced perspective in motion
Peter Jackson's film adaptations of The Lord of the Rings employ almost constant forced perspective. Characters apparently standing next to each other would be displaced by several feet in depth from the camera; in a still shot, this makes some characters appear unnaturally small in relation to others. A new technique developed for The Fellowship of the Ring extended this principle to moving shots: portions of sets were mounted on movable platforms which moved precisely in step with the camera, so that the optical illusion was preserved for the duration of the shot. The same techniques were used in the Harry Potter movies to make the character Hagrid look like a giant: props around Harry and his friends are of normal size, while seemingly identical props placed around Hagrid are in fact smaller. The technique centres on the lens's nodal point, so that the camera pans about the point between the lens and the aperture ring where the light travelling through the camera crosses its axis, rather than about the conventional panning axis at the point where light strikes the film (or the CCD in a TV camera). Jackson extended this known effect by adding moving jigs so that the pan remained effective outside the camera during motion, something that cannot be shown in a still photograph. The position of the nodal point differs for every lens, though on wide-angle lenses it is often found between the midpoint of the lens and the aperture ring.
Digital effects
Another method is to film the actions of the "smaller" character on a set with normal-sized props, film the matching actions of the "large" character on an identical but smaller set, then combine the footage digitally. This is the most straightforward modern technique, and is most likely to be used with bluescreen filming in TV production due to its lower cost and quality requirements.
Comedic effects
As with many film genres and effects, forced perspective can be used for comedic visual effect. Typically, an object or character is portrayed in a scene, with its size defined by its surroundings. Another character then interacts with the object or character, in the process revealing that the viewer has been fooled and that forced perspective is in use.
The 1930 Laurel and Hardy movie, Brats, used forced perspective to depict Stan and Ollie simultaneously as adults and as their own sons.
An example used for comic effect can be found in the slapstick comedy Top Secret!, in a scene which appears to begin as a close-up of a ringing phone with the characters in the distance. However, when a character walks up to the phone (towards the camera) and picks it up, it becomes apparent that the phone is simply oversized rather than close to the camera. Another scene in the same movie begins with a closeup of a wristwatch; the next cut shows that the character actually has a gargantuan wristwatch.
The same technique is also used in the Dennis Waterman sketch in the British BBC sketch show Little Britain. In the television version, oversized props are used to make the caricatured Waterman look just three feet tall, or even smaller in some cases, such as a series two episode in which he appears inside a set designed as a shoebox, at the same scale as the objects within it. In real life, Waterman is of average height. In the Channel 4 comedy Father Ted, the idea of forced perspective causes confusion: Father Ted attempts to explain to Father Dougal that the small plastic cows he is holding look larger than the real cows Dougal can see in the field because the real cows are 'far away', but is unsuccessful, as Father Dougal cannot grasp the concept of perspective. Perhaps one of the most famous uses of forced perspective involves the Leaning Tower of Pisa, where one person stands in the foreground with the tower in the background and appears to be holding it up. One of the recurring Kids in the Hall sketches featured Mr. Tyzik, "The Headcrusher", who used forced perspective (from his own point of view) to "crush" other people's heads between his fingers.

Forced perspective in architecture
New York-New York Hotel & Casino, Las Vegas: a reduced replica of Manhattan's Chrysler Building employs forced perspective to appear taller when viewed from below, with elements of the skyscraper progressively reduced in scale towards the top.
In architecture, a structure can be made to seem larger, taller, farther away or otherwise by adjusting the scale of objects in relation to the spectator, increasing or decreasing perceived depth.
For example, when forced perspective is used to make an object appear farther away, the scale of objects is steadily decreased, relative to expectancy and convention, toward the point farthest from the spectator, creating the illusion that the objects are shrinking because of their distance.

The Statue of Liberty is built with a slight forced perspective so that it appears more correctly proportioned when viewed from its base. When the statue was designed in the late 1800s, before easy air flight, there were few other angles from which to view it. This became an issue for the special effects technicians working on Ghostbusters II, who had to reduce the amount of forced perspective when replicating the statue so that their model, which was photographed head-on, would not look top-heavy. Another instance of deliberately disproportionate figures is Michelangelo's David: the sculpture has oversized hands and feet because it was meant to be placed up high and look proportional when viewed from below.

Forced perspective is extensively employed at theme parks and in other such (postmodern) architecture, as found in Las Vegas, often to make structures seem larger than they are where physically larger structures would not be feasible or desirable, or to provide an optical illusion for entertainment value.
Dolly zoom
The dolly zoom is an unsettling in-camera special effect that appears to undermine normal visual perception in film. The effect is achieved by using the setting of a zoom lens to adjust the angle of view (often referred to as field of view) while the camera dollies (moves) towards or away from the subject in such a way as to keep the subject the same size in the frame throughout. In its classic form, the camera is pulled away from a subject while the lens zooms in, or vice versa. During the zoom there is thus a continuous perspective distortion, the most directly noticeable feature being that the background appears to change size relative to the subject. Because the human visual system uses both size and perspective cues to judge the relative sizes of objects, seeing a perspective change without a size change is highly unsettling, and the emotional impact of the effect is greater than this description can suggest. Depending on which way the dolly zoom is executed, the viewer sees either the background suddenly growing in size and detail and overwhelming the foreground, or the foreground becoming immense and dominating its previous setting. The effect was first developed by Irmin Roberts, a Paramount second-unit cameraman, and was famously used by Alfred Hitchcock in his film Vertigo.
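The geometry can be sketched with the thin-lens approximation (image size ≈ focal length × object size ÷ distance): scaling the focal length in proportion to subject distance holds the subject's size in frame constant, while the background, lying farther back, changes. All figures below are illustrative, not from any production:

```python
def image_size_mm(focal_mm, object_m, distance_m):
    """Approximate size of the object's image on the sensor (thin lens)."""
    return focal_mm * object_m / distance_m

subject_m = 1.8      # actor height
background_m = 10.0  # building behind the actor
bg_offset_m = 20.0   # how far the building sits behind the actor

# Doubling the distance while doubling the focal length keeps the
# subject the same size in frame, but the background grows:
for dist, focal in [(10.0, 50.0), (20.0, 100.0), (40.0, 200.0)]:
    subj = image_size_mm(focal, subject_m, dist)
    bg = image_size_mm(focal, background_m, dist + bg_offset_m)
    print(f"d={dist:4.0f} m  f={focal:3.0f} mm  "
          f"subject={subj:.1f} mm  background={bg:.1f} mm")
# The subject stays at 9.0 mm throughout, while the background
# image grows from about 16.7 mm to about 33.3 mm.
```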
Meaning of the effect
The dolly zoom is commonly used by filmmakers to represent the sensation of vertigo, a "falling away from oneself feeling", feeling of unreality, or to suggest that a character is undergoing a realization that causes him to reassess everything he had previously believed. After Hitchcock popularized the effect (he used it again for a climactic revelation in Marnie), the technique was used by many other filmmakers, and eventually became regarded as a gimmick or cliché. This was especially true after director Steven Spielberg repopularized the effect in his highly regarded film Jaws, in a memorable shot of a dolly zoom into Police Chief Brody's (Roy Scheider) stunned reaction at the climax of a shark attack on a beach (after a suspenseful build-up). Spielberg used the technique again in E.T. the Extra-Terrestrial and Indiana Jones and the Last Crusade.
Lens flare
Lens flare is the light scattered in lens systems through generally unwanted image formation mechanisms, such as internal reflection and scattering from material inhomogeneities in the lens. These mechanisms differ from the intended image formation mechanism that depends on refraction of the image rays. For good optical systems and most images, flare is a secondary effect that is widely distributed across the image and thus not visible. But when an image includes a very bright light source, flare generated by a bright image region can have enough intensity to become very visible. The light produced by flare mechanisms superimposes broadly across the image, adding light to dark image regions and reducing image contrast.
Lenses with large numbers of elements, such as zooms, tend to exhibit greater lens flare, as they contain multiple surfaces at which unwanted internal scattering occurs. The spatial distribution of the lens flare typically manifests as several starbursts, rings, or circles in a row across the image or view. Lens flare patterns typically spread widely across the scene and change location with the camera's movement relative to light sources, tracking with the light position and fading as the camera points away from the bright light until it causes no flare at all. The specific spatial distribution of the flare depends on the shape of the aperture of the image formation elements; for example, if the lens has a 6-bladed aperture, the flare may have a hexagonal pattern. Such internal scattering is also present in the human eye, manifesting as an unwanted veiling glare that is apparent when viewing very bright lights or highly reflective (e.g. specular) surfaces.

When a bright light source is shining on the lens but not in its field of view, lens flare appears as a haze that washes out the image and reduces contrast. This can be avoided by shading the lens (the purpose for which lens hoods are designed). In a studio, a gobo or set of barn doors can be attached to the lighting to keep it from shining on the camera. Modern lenses use lens coatings to reduce the amount of reflection and minimize flare.
Deliberate use
A lens flare is often deliberately used to invoke a sense of drama. It is also useful when added to an artificial or modified image composition, because it adds a sense of realism, implying that the image is an unedited original photograph of a "real life" scene. For both of these reasons (implying realism and/or drama), artificial lens flare is a common effect in various graphics editing programs, although its use can be a point of contention among professional graphic designers. Lens flare was one of the first special effects developed for computer graphics because it results from relatively simple optical principles. During the mid- to late 1990s it was a popular graphical effect for computer and video games, and it is now accompanied by other, more complex atmospheric effects that add a greater sense of realism.
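The "relatively simple optical principles" reduce, in the classic real-time approximation, to placing flare "ghost" sprites along the line from the light's screen position through the screen centre. The parameter values below are illustrative, not from any particular engine:

```python
def flare_positions(light_xy, ts=(0.0, 0.4, 0.8, 1.2, 1.6, 2.0)):
    """Screen positions for flare sprites; (0, 0) is the screen centre.

    Each sprite sits at light * (1 - t): t = 0 is the light itself,
    t = 1 the screen centre, t = 2 the mirrored position.
    """
    lx, ly = light_xy
    return [(lx * (1 - t), ly * (1 - t)) for t in ts]

for x, y in flare_positions((0.6, 0.4)):
    print(f"sprite at ({x:+.2f}, {y:+.2f})")
```

In practice each sprite would be drawn with its own texture, size, and tint (rings, hexagons, starbursts), and the whole chain faded out as the light leaves the frame, matching the behaviour described above.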
Diffraction artifact in digital cameras
Diffraction artifact on a digital picture; the sun is just outside the frame. One form of flare is specific to digital cameras. With the sun shining on an unprotected lens, a group of small rainbows appears. This artifact is formed by internal diffraction on the image sensor, which acts like a diffraction grating. Unlike true lens flare, this artifact is not visible in the eyepiece of a digital SLR camera, making it more difficult to avoid.
Photographic filter
In photography, a filter is a camera accessory consisting of an optical filter that can be inserted in the optical path. The filter can be a square or rectangle shape mounted in a holder accessory, or, more commonly, a glass or plastic disk with a metal or plastic ring frame, which can be screwed in front of the lens.
Filters allow the photographer added control over the images being produced. Sometimes they are used to make only subtle changes to images; at other times the image would simply not be possible without them.

The negative aspects of using filters, though often negligible, include the possibility of lost image definition when using dirty or scratched filters, and the increased exposure required by the reduction in transmitted light. The former is best avoided by careful use and maintenance of filters, while the latter is a matter of technique; it usually will not be a problem if planned out properly, but in some situations it does make filter use impractical.
Many filters are identified by their Wratten number.
Uses of filters in photography
Filters in photography can be classified according to their use:
Clear and ultraviolet
Color correction, also called "color conversion" or "white balance correction"
Color separation, also called Color Subtraction
Contrast enhancement
Infrared
Neutral Density, including the Graduated ND filter and Solar filter
Polarizing
Special Effects of various kinds, including
Graduated color, called color grads
Cross screen and Star diffractors
Diffusion and contrast reduction
Sepia tone
Spot
Close-up or macro diopters, and split diopters or split focus
Clear and ultraviolet
Clear filters, also known as window glass filters or optical flats, are completely transparent, and (ideally) perform no filtering of incoming light at all. The only use of a clear filter is to protect the front of a lens.
UV filters are used to reduce haziness created by ultraviolet light. A UV filter is mostly transparent to visible light, and can be left on the lens for nearly all shots. UV filters are often used for lens protection, much like clear filters. A strong UV filter, such as a Haze-2A or UV17, cuts off some visible light in the violet part of the spectrum, and so has a pale yellow color; these strong filters are more effective at cutting haze, and can reduce purple fringing in digital cameras. Strong UV filters are also sometimes used to warm the colors of photos taken in shade with daylight-type film.
While in certain cases (such as harsh environments) a protection filter may be necessary, there are also downsides to this practice. Arguments for and against the use of protection filters include:
For:
If the lens is dropped, the filter may well suffer scratches or breakage instead of the front lens element.

One can clean the filter frequently without having to worry about damaging the lens coatings; a filter scratched by cleaning is much less expensive to replace than a lens.
Against:
Adding another element degrades image quality due to aberration and flare.
It may interfere with the use of lens hoods, since threading a hood on top of the clear filter can cause vignetting on some lenses, and not all clear filters have threads allowing a hood to be attached.
Additionally, users of UV filters must be careful about the quality of such filters. There is a wide variance in the performance of these filters with respect to their ability to block UV light. Also in lower quality filters, problems with autofocus and image degradation have been noted.
Color correction
A major use is to compensate for lighting that is not balanced for the film stock's rated color temperature (usually 3200 K for professional tungsten lights and 5500 K for daylight): e.g., the 80A blue filter used with daylight film corrects the orange/reddish cast of household tungsten lighting, while the 85B used with tungsten film corrects the bluish cast of daylight. Color correction filters are identified by numbers which sometimes vary from manufacturer to manufacturer. The use of these filters has been greatly reduced by the widespread adoption of digital photography, since color balance problems are now often addressed in software after the image is captured. Although the 80A filter is mainly used to correct for the excessive redness of tungsten lighting, it can also be used to oversaturate scenes that already contain blue.
Color subtraction
Color subtraction filters work by absorbing certain colors of light, letting the remaining colors through. They can be used to demonstrate the primary colors that make up an image. They are perhaps most frequently used in the printing industry for color separations, and again, use has diminished as digital solutions have proliferated.
Contrast enhancement
Filters are commonly used in black and white photography to manipulate contrast. For example, a yellow filter will enhance the contrast between clouds and sky by darkening the latter; orange and red filters have a stronger effect. A deep green filter will also darken the sky, but will lighten green foliage and make it stand out against the sky. See also diffusion filters, which are used to reduce contrast.
Polarizer
A polarizing filter, used both in color and black and white photography, can be used to darken overly light skies. Because the clouds are relatively unchanged, the contrast between the clouds and the sky is increased. Atmospheric haze and reflected sunlight are also reduced, and in color photographs overall color saturation is increased. Polarizers are often used to deal with situations involving reflections, such as those involving water or glass, including pictures taken through glass windows (this exploits the phenomenon of Brewster's angle).
Polarizers are the type of filter whose use is least affected by digital photography; while effects that may visually resemble the results of a polarizing filter can be simulated with software post-processing, many of the optical properties of polarization control at the time of capture simply cannot be replicated, particularly those involving reflections. There are two types of polarizing filters. A linear polarizer filter transmits one of two states of linearly polarized light. A circular polarizer (sometimes called a CPL filter) similarly selects a linear state but then converts it to circularly polarized light, by adding a birefringent layer (typically a quarter-wave plate) to the filter after the linear polarizer. The metering and auto-focus sensors in certain cameras, including virtually all SLRs, will not work properly with linear polarizers, both because of the mirror and because of the beam-splitters used to split off the light for focusing and metering. Circular polarizers will work with all types of cameras.
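The reflection-control use rests on Brewster's angle: light reflecting off a non-metallic surface at arctan(n2/n1) is fully polarized, so a polarizer oriented across that polarization can suppress the reflection almost entirely. A quick sketch:

```python
import math

def brewster_deg(n2, n1=1.0):
    """Brewster's angle in degrees; n1 is the incident medium (air ~ 1.0)."""
    return math.degrees(math.atan2(n2, n1))

# Reflections are fully polarized at roughly these incidence angles:
print(f"water (n = 1.33): {brewster_deg(1.33):.1f} deg")  # ~53.1
print(f"glass (n = 1.50): {brewster_deg(1.50):.1f} deg")  # ~56.3
```

This is why shooting a window or water surface from roughly these angles lets a polarizer remove the reflection most completely, an effect no post-processing can replicate from a single exposure.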
Neutral Density
A Neutral Density (ND) filter reduces the light reaching the film or sensor evenly across the frame and without shifting color. This filter is often used to allow for longer exposure times that would otherwise cause overexposure in the camera.
A Graduated Neutral Density (GND) filter is a neutral density filter that varies the effect with a gradient so it can be used to compress dynamic range across the entire scene. This can be beneficial when the difference between highlights and shadows of a scene are too great to allow for proper exposure for both.
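ND strength is commonly quoted either as an optical density or in stops; a small sketch (with illustrative values) of how the two relate and how the exposure time scales:

```python
import math

def nd_exposure(base_shutter_s, density):
    """New shutter time and stop count for an ND filter of a given density.

    Transmission = 10 ** -density; each stop halves the light, so
    stops = density / log10(2), and shutter time scales by 2 ** stops.
    """
    stops = density / math.log10(2)
    return base_shutter_s * 2 ** stops, stops

new_time, stops = nd_exposure(1 / 60, 0.9)  # a common ND 0.9 filter
print(f"ND 0.9 ~ {stops:.1f} stops; 1/60 s becomes about {new_time:.2f} s")
```

So an ND 0.9 costs roughly three stops, stretching a 1/60 s exposure to about 1/8 s, which is exactly the kind of trade used for motion-blur effects in bright light.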
Cross screen
A cross screen filter, also known as a star filter, creates a star pattern, in which lines radiate outward from bright objects. The star pattern is generated by a very fine diffraction grating embedded in the filter, or sometimes by the use of prisms in the filter. The number of stars varies by the construction of the filter, as does the number of points each star has.
Diffusion
A diffusion filter (also called a softening filter) softens subjects and generates a dreamy haze; this is most often used for portraits. However, it also has the effect of reducing contrast, and such filters are designed, labeled, sold, and used for that purpose too. There are many ways of accomplishing this effect, so filters from different manufacturers vary significantly. The two primary approaches are to use some form of grid or netting in the filter, or to use something which is transparent but not optically sharp.
Both effects can be achieved in software, which can provide very precise control over the level of the effect; however, the "look" may be noticeably different. Additionally, if there is too much contrast in a scene, the dynamic range of the digital image sensor or film may be exceeded, which post-processing cannot compensate for, so contrast reduction at the time of image capture may be called for.
Transparent diffusion
Zeiss manufactures a widely noted Softar diffusion filter which is made of many tiny globs of acrylic deposited on one surface which act as microlenses to diffuse the light. In some versions the globs are on the inside of the filter (facing the photographer) while on others they face outwards (towards the subject). In various versions the globs vary in number and diameter, from approximately 97 to 150 globs each 1 mm to 3 mm wide.
Homebrew approaches to transparent diffusion filters are generally based on modifying a clear or UV filter by placing various materials on it; the most popular choices are petroleum jelly, optical cement, and nail polish. Transparent filters are more commonly used for the "dreamy" or "misty" effect than for contrast reduction.
Grid or Netting
Various widths, colors (often black or white), and grid shapes (typically diamonds or squares) and spacings of netting, usually made from nylon, are used to provide diffusion effects. These are used both for the "dreamy" look and for contrast reduction. The homebrew approach to this sort of effect is generally to stretch a piece of pantyhose material in front of the lens.
Diopters and split diopters
Some argument could be made as to whether these are technically filters at all rather than accessory lenses; however, they are sold by filter manufacturers as part of their product lines, using the same holders and attachment systems. Diopters are simple single- or two-element lenses used to assist in close-up and macro photography. They provide some number of positive optical diopters, which magnify the subject and allow objects very close to the lens to be brought into focus. They are sometimes sold singly, and sometimes sold in kits of +1, +2, and +4 diopters, which allows them to be combined to produce a range from +1 to +7.
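The arithmetic of stacking is straightforward: diopter powers add, and with the main lens focused at infinity the farthest point that stays in focus sits about 1/power metres away. A small illustrative sketch:

```python
def stacked(*diopters):
    """Total power of stacked close-up diopters and the resulting
    focus distance (metres) with the main lens set to infinity."""
    power = sum(diopters)
    return power, 1.0 / power

power, dist = stacked(1, 2, 4)  # the common +1 / +2 / +4 kit, all stacked
print(f"+{power} diopters -> focus at about {dist * 100:.1f} cm")  # ~14.3 cm
```

This is why the +1/+2/+4 kit covers the whole +1 to +7 range: each subset of the three filters sums to a different total power.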
A split diopter is a diopter in which only half of the camera's lens area is covered by the filter. A round split diopter has a usual filter ring, but is filled with only a semicircle of glass (or plastic). This allows the photographer to photograph an object which is very close against a background much further away, effectively extending depth of field. Careful composition is required to make effective use of this device.
Materials and construction
Photo filters are commonly made from glass, resin plastics similar to those used for eyeglasses (such as CR-39), polyester and polycarbonate; sometimes acetate is used. Historically, filters were often made from gelatin, and color gels, also called gelatin or simply gel filters, are still used, but these are no longer actually made from gelatin, generally being made instead from one of the plastics mentioned above. Sometimes a color is blended throughout the filter material; in other cases the filter is a sandwich composed of a thin sheet of material surrounded and supported by two pieces of clear glass or plastic. Certain kinds of filters use other materials inside a glass sandwich; for example, polarizers often use various special films, netting filters have nylon netting, and so forth. The rings on screw-on filters are most often made of aluminum, though in more expensive filters brass is used. Aluminum filter rings are much lighter in weight, but can "bind" to the aluminum lens threads they are screwed into, requiring a filter wrench to get the filter off the lens. Aluminum also dents or deforms more easily.
High quality filters have multiple layers of optical coating to reduce reflections and allow more light to pass through the filter. Uncoated filters can block up to 9% of the light, while multi-coated filters can allow up to 99.7% of the light to pass through. Manufacturers brand their high-end multi-coated filters with different labels, for example:
Hoya: HMC (Hoya Multi Coating)
B+W: MRC (Multi Resistant Coating)

Reflections can lead to flare and reduced contrast. Multi-layer coatings, which reduce this effect, are highly desirable in any filter. Exceptions to this rule are infrared and ultraviolet photography, where uncoated filters are usually used; multi-coated filters have a tendency to reflect more wavelengths outside the visible spectrum, making them unsuitable for such purposes.
Filter sizes and mountings
Manufacturers of lenses and filters have "standardized" on several different sets of sizes over the years.
Threaded round filters
The most common standard filter sizes for circular filters include 30 mm, 37 mm, 40.5 mm, 43 mm, 46 mm, 49 mm, 52 mm, 55 mm, 58 mm, 62 mm, 67 mm, 72 mm, 77 mm, 82 mm, 86 mm, 95 mm, 112 mm and 127 mm. Other filter sizes within this range may be hard to find since the filter size may be non-standard or may be rarely used on camera lenses. The specified diameter of the filter in millimeters indicates the diameter of the male threads on the filter housing. The thread pitch is 0.5 mm, 0.75 mm or 1.0 mm, depending on the ring size.
Filter diameter for a particular lens is commonly identified on the lens face by the ligature "ø". For example, a lens marking may indicate "ø 55mm."
Square filters
For square filters, 2" x 2", 3" x 3" and 4" x 4" were historically very common and are still made by some manufacturers. 100 mm x 100 mm is very close to 4"x4", allowing use of many of the same holders, and is one of the more popular sizes currently (2006) in use; it is virtually a standard in the motion picture industry. 75 mm x 75 mm is very close to 3" x 3" and while less common today, was much in vogue in the 1990s.
A French manufacturer called Cokin makes a wide range of filters and holders in three sizes, collectively known as the Cokin System. "A" (amateur) size is 67 mm wide, "P" (professional) size is 84 mm wide, and "X Pro" is 130 mm wide. Many other manufacturers make filters to fit Cokin holders. Cokin also makes a filter holder for 100 mm filters, which they call the "Z" size. Most of Cokin's filters are made of optical resins such as CR-39. A few round filter elements may be attached to the square/rectangular filter holders, usually polarizers and gradient filters, which both need to be rotated and are more expensive to manufacture.
Cokin formerly (1980s through mid-1990s) had competition from Hoya's Hoyarex system (75 mm x 75 mm filters mostly made from resin) and also a range made by Ambico, but both have withdrawn from the market. A small "system" range is still made (as of 2005) by Hitech. In general, square (and sometimes rectangular) filters from one system could be used in another system's holders if the size was correct, but each maker's filter holders were incompatible with the others'. Lee, Tiffen and Singh-Ray also make square/rectangular filters in the 100 mm x 100 mm and Cokin "P" sizes. Gel filters are very common in square form, and rarely used in circular form. These are thin flexible sheets of plastic which must be held in rigid frames to prevent them from sagging. Gels are made not only for use as photo filters, but also in a wide range of colors for lighting applications, particularly theatrical lighting. Gel holders are available from all of the square "system" makers, and are additionally offered by many camera manufacturers, by manufacturers of gel filters, and by makers of expensive professional camera accessories (particularly those targeting the movie and television camera markets). Square filter systems often have lens shades available to attach to the filter holders.
Rectangular filters
Graduated filters of a given width (100 mm, 67 mm, 84 mm, etc.) are often made rectangular, rather than square, in order to allow the position of the gradation to be moved up or down in the picture. This allows, for example, the red part of a sunset filter to be placed at the horizon. These are used with the "system" holders described above.
Bayonet round filters
Certain manufacturers, most notably Rollei and Hasselblad, have created their own systems of bayonet mount for filters. Each design comes in several sizes, such as Bay I through Bay VI for Rollei, and Bay 50 through Bay 93 for Hasselblad.
Series filters
From the 1930s through the late 1970s, filters were also made in a sizing system known as a series mount. The filters themselves were round pieces of glass (or occasionally other materials) with no threads or rings attached. Instead, the filter was placed between two rings; the mounting ring either screwed into the lens threads or slipped over the lens barrel, and the retaining ring screwed into the mounting ring to hold the filter in place. The series designations are generally written as Roman numerals, I through IX, with the interesting exception of the series 4.5 filter. Retaining ring sizes include:
Series VII - 54.346 mm diameter, 36 tpi thread pitch
Series filter number to mm conversion:
IV = 20.6 mm
4.5 = 25.5 mm
V = 30.2 mm
5.5 = 35.9 mm
VI = 41.3 mm
VII = 50.8 mm
7.5L = 57 mm
VIII = 63.5 mm
8.5/5.5L = 74.8 x 5.6 mm
8.5/8mm = 74.8 x 8 mm
IX = 82.6 mm
93 = 93 mm
103 = 103 mm
107 = 107 mm
119 = 119 mm
125 = 125 mm
138 = 138 mm
Stuck filter removal
Filter rings are generally made from either aluminum or brass. Lens barrels, particularly the threads to which filters attach, are usually made from aluminum. Filter rings, particularly aluminum ones, can sometimes "bind" to the aluminum lens threads and be difficult to remove. Aluminum is a relatively soft metal; attempting to remove a stuck filter by squeezing with the hand generally puts a lot of inward pressure on just the two areas being gripped; this can bend and deform both the filter ring and the lens threads, permanently weakening or damaging both and making the filter even more difficult to remove. Methods should be employed that apply pressure evenly around the filter ring. Typically this is achieved either by use of a filter wrench or by cupping the filter ring and front of the lens with a piece of fabric to protect them and provide friction, then pressing the combination against a hard surface and twisting the lens barrel. Other aids to stuck filter removal include using either a tightened rubber band or shoelace around the rim of the filter to improve grip.
Shutter (photography)
In photography, a shutter is a device that allows light to pass for a determined period of time, for the purpose of exposing photographic film or a light-sensitive electronic sensor to light to capture a permanent image of a scene. A shutter can also be used to allow pulses of light to pass outwards, as in a movie projector or signal lamp.
Camera shutters
Camera shutters can be fitted in two positions:
Central shutters are mounted within a lens assembly, or more rarely behind or even in front of a lens and shut off the beam of light where it is narrow. A leaf mechanism is usually used.
Focal plane shutters are mounted near the focal plane and move to uncover the film or sensor.
Shutters immediately behind the lens were used in some cameras with limited lens interchangeability. Shutters in front of the lens were used in the early days of photography.
Focal-plane shutters are usually implemented as a pair of cloth, metal, or plastic curtains which shield the film from light. For shutter speeds slower than a certain point (known as the X-sync speed of the shutter), which depends on the camera, one curtain of the shutter opens, and the other closes after the correct exposure time. At shutter speeds faster than the X-sync speed, the first curtain of the shutter travels across the focal plane with the second curtain following behind, so that each section of the film or sensor is exposed for the correct amount of time. The effective exposure time can be much shorter than for central shutters.
Focal plane shutters have the advantages of enabling much shorter exposures, and allowing the use of interchangeable lenses without requiring the expense of a separate shutter for each lens. They have the disadvantage of distorting the images of fast-moving objects: although no part of the film is exposed for longer than the time set on the dial, one edge of the film is exposed an appreciable time after the other, so that a horizontally moving shutter will, for example, elongate or shorten the image of a car speeding in the same or the opposite direction to the shutter movement. Other mechanisms than the dilating aperture and the sliding curtains have been used; anything which exposes the film to light for a specified time will suffice.
The time for which a shutter remains open, the exposure time, is determined by a timing mechanism. These were originally mechanical, but since the late twentieth century are mostly electronic. The exposure time and the effective aperture of the lens must together be such as to allow the right amount of light to reach the film or sensor. Additionally, the exposure time must be suitable to handle any motion of the subject. Usually it must be fast enough to "freeze" rapid motion; sometimes a controlled degree of blur is desired, to give a sensation of movement.
Most shutters generate a signal to trigger a flash, if connected. This was quite a complicated matter with mechanical shutters and flashbulbs which took an appreciable time to reach full brightness, but is simple with electronic timers and electronic flash units which fire virtually instantaneously. When using a focal-plane shutter with a flash, a photographer will typically operate the shutter at its X-sync speed or slower; however, some electronic flashes can produce a steady pulse compatible with a focal-plane shutter operated at much faster shutter speeds.
Cinematography uses a rotary disc shutter in movie cameras, a continuously spinning disc which conceals the image with a reflex mirror during the intermittent motion between frame exposure. The disc then spins to an open section that exposes the next frame of film while it is held by the registration pin.
Shutter lag
Shutter lag is the time between pressing the shutter release and the camera responding by taking the picture. While this delay was insignificant on most film cameras, some digital cameras have shutter lag times on the order of hundreds of milliseconds, which may be a minor annoyance to the user.
Projector shutters
In movie projection, the shutter admits light from the lamphouse to illuminate the film and project the image onto the screen. To avoid flicker, a double-bladed rotary disc shutter admits light twice per frame of film. There are also some triple-bladed models, which admit light three times per frame.
Time-lapse
Time-lapse photography is a cinematography technique whereby each film frame is captured at a rate much slower than it will be played back. When replayed at normal speed, time appears to be moving faster and thus lapsing. Time-lapse photography can be considered to be the opposite of high speed photography.
Processes that would normally appear subtle to the human eye, such as motion in the sky, become very pronounced. Time-lapse is the extreme version of the cinematography technique of undercranking, and can be considered a borderline form of stop motion animation.
History
Some classic subjects of time-lapse photography include:
cloudscapes and celestial motion
plants growing and flowers opening
fruit rotting
evolution of a construction project
people in the city
The technique has also been used to photograph crowds, traffic, and even television. The effect of photographing a subject that changes imperceptibly slowly is to create a smooth impression of motion. A subject that is already changing quickly is transformed into an onslaught of activity.
The first use of time-lapse photography in a feature film was in Georges Méliès' motion picture Carrefour De L'Opera (1897). Time-lapse photography of biological phenomena was pioneered in part by F. Percy Smith in 1910 and Roman Vishniac from 1915 to 1918. It was further pioneered in the 1920s in a series of feature films by Arnold Fanck known as Bergfilms (mountain films), including The Holy Mountain (1926).
But no filmmaker can be credited for popularizing time-lapse more than Dr. John Ott.
Ott's initial "day-job" career was that of a banker, with time-lapse movie photography, mostly of plants, initially just a hobby. Starting in the 1930s, Ott bought and built more and more time-lapse equipment, eventually building a large greenhouse full of plants, cameras, and even self-built automated electric motion control systems for moving the cameras to follow the growth of plants as they developed. He even time-lapsed his entire greenhouse of plants and cameras as they all worked, a virtual symphony of time-lapse movement. His work was featured on an episode of the viewer-request TV show You Asked For It in the late 1950s.
Ott also discovered that the movement of plants could be manipulated by varying the amount of water plants were given, and varying the color-temperature of the lights in the studio, with some colors causing the plants to flower and other colors causing the plants to bear fruit. Ott even discovered ways to change the gender of plants merely by varying the light source color-temperature.
By using these techniques, Ott time-lapse animated plants "dancing" up and down in synch to pre-recorded music tracks. His cinematography of flowers blooming in such classic documentaries as Walt Disney's Secrets of Life (1956), pioneered the modern use of time-lapse on film and television. Ott wrote a book on the history of his time-lapse adventures, My Ivory Cellar (1958).
Ott's experiments with different colored lighting systems and their effects on the health of plants led to experiments with colored lights on the health of animals, then humans, then on individual cells, using time-lapse micro-photography. Ott discovered that only a full spectrum of natural light (including natural amount of infra-red AND ultra-violet) worked to entirely promote full physical and mental health in plants, animals and humans. A second book detailing these experiments followed, Exploring the Spectrum (1973). Ott-Lights are sold at lighting stores worldwide.
A major refiner and developer of time-lapse is the Oxford Scientific Film Institute in Oxford, England. The Institute specializes in time-lapse and slow-motion systems, and has also developed camera systems that can go into (and move through) impossibly small places. Most people have seen at least some of its footage, which has appeared in TV documentaries and movies for decades.
PBS's NOVA series aired a full episode on time-lapse (and slow motion) photography and systems in 1981 titled Moving Still. Highlights of Oxford's work include slow-motion shots of a dog shaking water off itself, close-ups of drops knocking a bee off a flower, and time-lapse of the decay of a dead mouse.
The first major usage of time-lapse in a feature film was Koyaanisqatsi (1983). The non-narrative film, directed by Godfrey Reggio, contained much time-lapse of clouds, crowds, and cities lensed by cinematographer Ron Fricke.
Countless other films, commercials, TV shows and presentations have included time-lapse. For example, Peter Greenaway's film A Zed & Two Noughts featured a sub-plot involving time-lapse photography of decomposing animals and included a composition called "Time-lapse" written for the film by Michael Nyman. More recently, Adam Zoghlin's time-lapse cinematography was featured in the CBS television series Early Edition, depicting the adventures of a character who receives tomorrow's newspaper today. David Attenborough's 1995 series, The Private Life of Plants, also utilised the technique extensively.
Terminology
The term "time-lapse" can also apply to how long the shutter of the camera is open during the exposure of EACH frame of film (or video), and has also been applied to the use of long shutter openings in still photography in some older photography circles. In movies, both kinds of time-lapse can be used together, depending on the sophistication of the camera system being used. A night shot of stars moving as the Earth rotates requires both forms: a long exposure of each frame is necessary to allow the dim light of the stars to register on film, with lapses in time between frames providing the actual movement when viewed at normal speed.
As the frame rate of time-lapse approaches normal frame rates, these "mild" forms of time-lapse are sometimes referred to simply as fast motion or (in video) fast forward. This type of borderline time-lapse resembles a VCR in a fast forward ("scan") mode. A man riding a bicycle will display legs pumping furiously while he flashes through city streets at the speed of a racing car. Longer exposure rates for each frame can also produce blurs in the man's leg movements, heightening the illusion of speed.
Two examples of both techniques are the running sequence in Terry Gilliam's The Adventures of Baron Munchausen (1989) in which Eric Idle outraces a speeding bullet, and Los Angeles animator Mike Jittlov's impressive 1980 short and feature-length film, both titled The Wizard of Speed and Time, released to theaters in 1987 and to video in 1989.
When used in motion pictures and on television, fast motion can serve one of several purposes. One popular usage is for comic effect: a slapstick-style comic scene might be played in fast motion with accompanying music. (This form of special effect was often used in silent film comedies in the early days of the cinema; see also liquid electricity.) Another use of fast motion is to speed up slow segments of a TV program that would otherwise take up too much of the time allotted to a TV show. This allows, for example, a slow scene in a house-redecorating show of furniture being moved around (or replaced with other furniture) to be compressed into a smaller allotment of time while still allowing the viewer to see what took place.
The opposite of fast motion is slow motion. Cinematographers refer to fast motion as undercranking since it was originally achieved by cranking a handcranked camera slower than normal. Overcranking produces slow motion effects.
How time-lapse works
Film is often projected at 24 frame/s, meaning that 24 images appear on the screen every second. Under normal circumstances a film camera will record images at 24 frame/s. Since the projection speed and the recording speed are the same, the images onscreen appear to move normally. If the camera records at a lower rate, playback at the normal 24 frame/s compresses the action: a film recorded at 12 frames per second will appear to move twice as fast. Shooting at camera speeds between 8 and 22 frames per second usually falls into the undercranked fast-motion category, with images shot at slower speeds falling more squarely into the realm of time-lapse, although these distinctions of terminology have not been entirely established in all movie production circles.
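The arithmetic above can be sketched as a small calculation (a minimal illustration, assuming the standard 24 frame/s projection rate):

```python
PROJECTION_FPS = 24  # standard film projection rate

def apparent_speed_factor(camera_fps, projection_fps=PROJECTION_FPS):
    """How many times faster than real time the action appears on screen."""
    return projection_fps / camera_fps

print(apparent_speed_factor(12))      # 2.0: recorded at 12 frame/s, plays twice as fast
print(apparent_speed_factor(1 / 30))  # ~720: one frame every 30 s is true time-lapse
```

The same function covers the whole continuum from mild undercranking (a factor just above 1) to extreme time-lapse (factors in the hundreds or thousands).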
The same principles apply to video and other digital photography techniques; however, until very recently, video cameras have not been capable of recording at variable frame rates.
Time-lapse can be achieved with some normal movie cameras by simply exposing individual frames manually, but greater accuracy in time increments and consistency in the exposure of successive frames are better achieved through a device that connects to the camera's shutter system (camera design permitting) called an intervalometer. The intervalometer regulates the motion of the camera according to a specific interval of time between frames. Some intervalometers can also be connected to motion control systems that move the camera on any number of axes as the time-lapse photography is achieved, creating tilts, pans, tracks, and trucking shots as the speeded-up motion is viewed. Ron Fricke is the primary developer of such systems, which can be seen in his short film Chronos (1992) and his feature film Baraka (1992, released to video in 2001).
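A minimal software sketch of what an intervalometer does (hypothetical names; the `trigger` callable stands in for whatever actually releases the shutter):

```python
import time

def intervalometer(trigger, interval_s, frames):
    """Fire `trigger` once per frame, spacing shots `interval_s` seconds apart."""
    for _ in range(frames):
        start = time.monotonic()
        trigger()  # release the shutter (placeholder)
        # Sleep only the remainder of the interval so that shutter and
        # processing delays do not accumulate into timing drift.
        time.sleep(max(0.0, interval_s - (time.monotonic() - start)))

# A real shoot might run interval_s=30 for hours; a tiny demo run:
shots = []
intervalometer(lambda: shots.append(time.monotonic()), interval_s=0.05, frames=4)
print(len(shots))  # 4
```

The key design point is measuring elapsed time each cycle rather than sleeping a fixed amount, which keeps the frame spacing consistent even if the trigger itself takes time.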
Short Exposure vs. Long Exposure Time-lapse
As first mentioned above, in addition to modifying the speed of the camera, it is also important to consider the relationship between the frame interval and the exposure time. This relationship essentially controls the amount of motion blur present in each frame and is, in principle, exactly the same as adjusting the shutter angle on a movie camera. A film camera normally records film at twenty-four frames per second. During each 24th of a second the film is actually exposed to light for roughly half the time; the rest of the time it is hidden behind the shutter. Thus exposure time for motion picture film is normally calculated to be one 48th of a second (1/48 second, often rounded to 1/50 second). Adjusting the shutter angle on a film camera (if its design allows) can add or reduce the amount of motion blur by changing the amount of time that the film frame is actually exposed to light. In time-lapse photography the camera records images at a specific slow interval, such as one frame every thirty seconds (1/30 frame/s). The shutter will be open for some portion of that time. In short exposure time-lapse the film is exposed to light for a normal exposure time over an abnormal frame interval. So, for example, the camera will be set up to expose a frame for 1/50th of a second every 30 seconds. Such a setup will create the effect of an extremely tight shutter angle, giving the resulting film a stop-motion or claymation quality.
In long exposure time-lapse the exposure time will approximate the effects of a normal shutter angle. Normally this means that the exposure time should be half of the frame interval. Thus a 30 second frame interval should be accompanied by a 15 second exposure time to simulate a normal shutter. The resulting film will appear smooth. The exposure time can be calculated from the desired shutter angle effect and the frame interval with the equation: exposure time = frame interval × (shutter angle ÷ 360°).
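A minimal sketch of that calculation, using the standard shutter-angle relationship (exposure equals the frame interval scaled by the fraction of 360° the shutter is open; a 180° shutter gives half the interval):

```python
def exposure_time(frame_interval_s, shutter_angle_deg=180.0):
    """Per-frame exposure for a given frame interval and effective shutter angle."""
    return frame_interval_s * shutter_angle_deg / 360.0

print(exposure_time(30))      # 15.0 s: a 30 s interval with a normal 180-degree shutter
print(exposure_time(1 / 24))  # ~0.0208 s, i.e. the usual 1/48 s at 24 frame/s
```

Plugging in a normal camera's 1/24 s interval recovers the conventional 1/48 s exposure mentioned above, which is a quick sanity check on the formula.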
Long exposure time-lapse is less common because it is often difficult to properly expose film at such a long period, especially in daylight situations. A film frame that is exposed for 15 seconds will receive 750 times more light than its 1/50th of a second counterpart. (Thus it will be more than 9 stops over normal exposure.) A scientific grade neutral density filter can be used to alleviate this problem.
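The light-ratio arithmetic in the paragraph above checks out as follows (a sketch; "stops" are base-2 logarithms of the exposure ratio):

```python
import math

normal_exposure = 1 / 50  # conventional per-frame exposure (seconds)
long_exposure = 15.0      # long-exposure time-lapse frame (seconds)

ratio = long_exposure / normal_exposure
stops_over = math.log2(ratio)  # each stop doubles the light
print(round(ratio))          # 750 times more light per frame
print(round(stops_over, 2))  # about 9.55, i.e. more than 9 stops over
```

This is why the text suggests a strong (scientific grade) neutral density filter: ordinary photographic ND filters rarely reach nearly ten stops of attenuation.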
Time-lapse camera movement
As also mentioned earlier, some of the most stunning time-lapse images are created by moving the camera during the shot. A time-lapse camera can be mounted to a moving car, for example, to create a notion of extreme speed. However, to achieve the effect of a simple tracking shot it is necessary to use motion control to move the camera. A motion control rig can be set to dolly or pan the camera at a glacially slow pace. When the image is projected, it can appear that the camera is moving at a normal speed while the world around it is in time-lapse. This juxtaposition can greatly heighten the time-lapse illusion.
The speed at which the camera must move to create a perceived normal camera motion can be calculated by inverting the time-lapse equation: the required physical speed is the desired on-screen speed divided by the overall speed-up factor (the frame interval multiplied by the projection rate).
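As a sketch of that inversion (assuming 24 frame/s projection; the numbers are illustrative):

```python
def rig_speed(onscreen_speed, frame_interval_s, projection_fps=24):
    """Physical speed a motion-control rig must move so its motion looks
    normal on screen. The time-lapse speeds everything up by a factor of
    frame_interval * projection_fps, so the rig moves that much slower."""
    return onscreen_speed / (frame_interval_s * projection_fps)

# A dolly meant to appear to glide at 0.5 m/s on screen, shot at one
# frame every 30 s, must physically creep at well under 1 mm per second:
print(rig_speed(0.5, 30))  # about 0.0007 m/s
```

The 720x speed-up factor (30 s interval times 24 frame/s) explains why such rigs need stepper motors geared for extremely slow, steady motion.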
Baraka was one of the first films to use this effect to its extreme. Director and cinematographer Ron Fricke designed his own motion control rigs that utilized stepper motors to pan, tilt and dolly the camera.
A panning time-lapse can also be achieved easily and inexpensively by using a widely available telescope equatorial mount with a right ascension motor. Two-axis pans can be achieved as well with contemporary motorized telescope mounts. A variation of these are rigs that move the camera DURING exposures of each frame of film, blurring the entire image. Under controlled conditions, usually with computers carefully making the movements during and between each frame, some exciting blurred artistic and visual effects can be achieved, especially when the camera is also mounted onto a tracking system of its own that allows for its own movement through space.
HDR Time-lapse
The most recent development in time-lapse cinematography is a convergence of high dynamic range imaging (the photographic technique) and multi-frame time-lapse. One of the first experiments was an 11-second series completed in un-automated form by Nicholas Phillips on July 8, 2006. Modern time-lapse enthusiasts have started to follow suit as of May 2007. Ollie Larkin and Jay Burlage have both successfully shot and processed HDR time-lapse footage in high definition, with motion control, using DSLR cameras.
Slow motion
Slow motion is a technique in filmmaking whereby time appears to be slowed down. It was invented by Austrian August Musger. Typically this is achieved when each film frame is captured at a rate much faster than it will be played back. When replayed at normal speed, time appears to be moving slower. The technical term for slow motion is overcranking, referring to the concept of cranking a handcranked camera faster than normal (i.e. faster than 24 frames per second). High-speed photography is a more sophisticated technique that uses specialized equipment to record fast phenomena, usually for scientific applications.
Slow motion is ubiquitous in modern filmmaking. It is used by diverse directors to achieve diverse effects. Some classic subjects of slow motion include:
Athletic activities of all kinds, to demonstrate skill and style.
To recapture a key moment in an athletic game, typically shown as a replay.
Natural phenomena, such as a drop of water hitting a glass.
Slow motion can also be used for artistic effect, to create a romantic or suspenseful aura or to stress a moment in time. Vsevolod Pudovkin, for instance, used slow motion in a suicide scene in The Deserter, in which a man jumping into a river seems sucked down by the slowly splashing waves. Another example is Face/Off, in which John Woo used the same technique in the movements of a flock of flying pigeons. The Matrix applied the effect to action scenes with distinct success through the use of multiple cameras, as well as mixing slow motion with live action in other scenes. Japanese director Akira Kurosawa was a pioneer of the technique in his 1954 movie Seven Samurai. American Sam Peckinpah was another classic lover of slow motion. The technique is especially associated with explosion effect shots and underwater footage. Slow motion was also used extensively in the film Final Fantasy VII: Advent Children.
The opposite of slow motion is fast motion. Cinematographers refer to fast motion as undercranking since it was originally achieved by cranking a handcranked camera slower than normal. It is often used for comic effect, time lapse or occasional stylistic effect.
The concept of slow motion may have existed before the invention of the motion picture: the Japanese theatrical form Noh employs very slow movements.
How slow motion works
There are two ways in which slow motion can be achieved in modern cinematography. Both involve a camera and a projector. Here "projector" refers to a classical film projector in a movie theatre, but the same basic rules apply to a television screen and any other device that displays consecutive images at a constant frame rate.
Overcranking
Overcranking is a technique that achieves slow motion by photographing images at a faster rate than they will be projected. Normally great care is taken to ensure that the camera will record sequential images at the same rate that they will eventually be projected. When a faster camera speed is selected, the projection rate remains the same. The result is that photographed movement will appear to be slowed down. The change in speed of the onscreen image can be calculated by simply dividing the projection speed by the camera speed.
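A minimal sketch of that division (assuming the standard 24 frame/s projection rate):

```python
def onscreen_speed_factor(camera_fps, projection_fps=24):
    """Fraction of real speed at which overcranked footage plays back:
    projection speed divided by camera speed."""
    return projection_fps / camera_fps

print(onscreen_speed_factor(48))   # 0.5: shot at 48 frame/s, plays at half speed
print(onscreen_speed_factor(120))  # 0.2: one-fifth speed
```

Note this is the same formula as for fast motion; overcranking simply makes the camera rate larger than the projection rate, so the factor drops below 1.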
Most video cameras do not allow the operator to select a frame speed faster than the projection speed. For this reason, overcranking is sometimes referred to as film slow motion because it is most often achieved with film cameras. Digital overcranking is currently rare.
Time stretching
The second type of slow motion is achieved during post production. This is known as time-stretching or digital slow motion. This type of slow motion is achieved by inserting new frames in between frames that have actually been photographed. The effect is similar to overcranking as the actual motion occurs over a longer time.
Since the necessary frames were never photographed, new frames must be fabricated. Sometimes the new frames are simply repeats of the preceding frames, but more often they are created by interpolating between frames. (Often this interpolation is effectively a short dissolve between still frames.) Many complicated algorithms exist that can track motion between frames and generate intermediate frames that appear natural and smooth. However, these methods can never achieve the clarity or smoothness of true overcranking.
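The "short dissolve" form of interpolation can be sketched as a linear blend between two photographed frames. This is a toy illustration using flat lists of pixel values; real interpolators work on full images and often track motion rather than blending in place:

```python
def crossfade_frames(frame_a, frame_b, steps):
    """Fabricate `steps` in-between frames by linearly blending two
    photographed frames (here, flat lists of 0-255 pixel values)."""
    fabricated = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)  # blend weight moves from frame_a toward frame_b
        fabricated.append([round(a * (1 - t) + b * t)
                           for a, b in zip(frame_a, frame_b)])
    return fabricated

print(crossfade_frames([0, 100, 200], [90, 160, 230], steps=2))
# [[30, 120, 210], [60, 140, 220]]
```

Because blending averages the two source frames, moving edges ghost rather than travel, which is exactly why dissolve-based slow motion looks softer than footage that was genuinely overcranked.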
Traditionally, frames were duplicated on an optical printer. True frame interpolation can only be done digitally.
Simple replication of the same frame twice is also sometimes called half-speed. This relatively primitive technique (as opposed to digital interpolation) is often visually detectable by the casual viewer. It was used in certain scenes in Tarzan, the Ape Man, and critics pointed it out. Sometimes lighting limitations or editorial decisions can require it. A wide-angle shot of Roy Hobbs swinging the bat, in the climactic moments of The Natural, was printed at half-speed in order to simulate slow-motion, and the closeup that immediately followed it was true overcranked slow-motion.
A VCR may have the option of slow motion playback, sometimes at various speeds; this can be applied to any normally recorded scene. It is similar to half-speed, and is not true slow-motion, but merely longer display of each frame.
Bipack
In cinematography, bipacking, or a bipack, is the process of loading two reels of film into a camera, so that they both pass through the camera gate together. It was used both for in-camera effects (effects that are nowadays mainly achieved via optical printing) and as an early subtractive colour process.
Use as a colour process
Eastman, Agfa, Gevaert, and DuPont all manufactured bipack film stock for use as a colour process from the 1920s onwards. Two strips of film, each sensitized to a primary colour (generally the combination of red and blue or red and green), would be exposed with the emulsion layers in contact with each other, resulting in one of the two negatives being reversed. As these negatives were frequently contact printed onto duplitized stock for toning processes such as those commencing with Prizma, however, this worked to the advantage of the laboratory. The most famous version of Technicolor, the full-colour three-strip Technicolor Process 4 used from 1934 to 1954, exposed two of the three strips—the blue and red images—in bipack.
Use as an in-camera effect
To achieve the in-camera effect, a reel would be made up of pre-exposed and developed film, and unexposed raw film, which would then be loaded into the camera. The exposed film would sit in front of the unexposed film, with the emulsions of both films touching each other, causing the images on the exposed film to be contact-printed onto the unexposed stock, along with the image from the camera lens. This method, in conjunction with a static matte placed in front of the camera, could be used to print angry storm clouds into a background on a studio set. The process differs from optical printing in that no optical elements (lenses, field lenses, etc.) separate the two films; both films are sandwiched together in the same camera and make use of a phenomenon known as contact printing.
The process had its beginnings in providing a repeatable method of compositing live action and matte paintings, allowing the painted section of the final image to be completed later, and not tying up the set/sound-stage whilst the artist matched the painting to the set. It also alleviated the considerable difficulties caused by matching shadows on the painting to the set on an open-air set. The process worked equally well for matting-in real water to a model, or a model skyline to live action. The process was also referred to as the Held Take process. Perhaps the most famous example of a held take is the long shot of astronauts clambering down into a lunar excavation in 2001: A Space Odyssey.
The technique, if used with a camera not specially designed for contact printing, runs the risk of jamming the camera, due to the double thickness of film in the gate, and damaging both the exposed and unexposed stock. On the other hand, because both strips of film are in contact and are handled by the same film transport mechanism at the same time, registration is kept very precise. Special cameras designed for the process were manufactured by Acme and Oxberry, amongst others, and these usually featured an extremely precise registration mechanism specially designed for the process. These process cameras are usually recognisable by their special film magazines, which look like two standard film magazines on top of each other. The magazines allow the separate loading of exposed and unexposed stock, as opposed to winding the two films onto the same reel.
The bipack process, a competitor to optical printing, was used until digital methods of compositing became predominant in the industry. Industrial Light and Magic used a rig specially built for The Empire Strikes Back that utilised the method to create matte painting composites.
The Dunning Process
Various improvements and extensions of the process followed, the most famous being Carroll D. Dunning's process, an early method built on the bipacking technique and used to create travelling mattes. It is described thus:
The foreground action is lighted with yellow light only in front of a uniform, strongly lighted blue backing. Panchromatic negative film is used in the camera as the rear component of a bipack in which the front film is a positive yellow dye image of the background scene. This yellow dye image is exposed on the negative by the blue light from the backing areas, but the yellow light from the foreground passes through it and records an image of the foreground at the same time.
The Dunning Process, often referred to in shorthand simply as "process," was used in many black-and-white films, most notably King Kong. Its chief limitation was that it could not be used for colour photography, and the process died out with the industry's increasing move toward colour production.
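The colour separation at the heart of the Dunning process resembles modern chroma keying: pixels lit only by the blue backing record the background plate, while yellow-lit foreground pixels pass through and record the foreground. A rough sketch of that selection logic, with a hypothetical threshold and hypothetical RGB values:

```python
# Minimal chroma-separation sketch of the Dunning idea: the strongly blue
# backing is detected per pixel and replaced by the background plate, while
# yellow-lit foreground pixels pass through unchanged.
def is_backing(pixel, blue_threshold=200):
    """Detect the strongly blue backing; the threshold is an illustrative value."""
    r, g, b = pixel
    return b >= blue_threshold and b > r and b > g

def composite(foreground_rgb, background_rgb):
    # Backing pixels take the background plate; foreground pixels pass through.
    return [bg if is_backing(fg) else fg
            for fg, bg in zip(foreground_rgb, background_rgb)]

fg = [(230, 210, 40), (10, 20, 250), (200, 180, 60)]  # yellow, blue backing, yellow
bg = [(0, 0, 0), (90, 90, 90), (0, 0, 0)]
print(composite(fg, bg))  # [(230, 210, 40), (90, 90, 90), (200, 180, 60)]
```

In the optical process this "threshold" was physical, not computed: the yellow dye positive simply did not pass the blue backing light, so the separation happened in the emulsion itself.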
Slit-scan photography
The slit-scan photography technique is a photographic and cinematographic process where a moveable slide, into which a slit has been cut, is inserted between the camera and the subject to be photographed.
Slit Scan is also the name of a media business in William Gibson's Idoru.
Use in cinematography
Originally used in static photography to achieve blurriness or deformity, the slit-scan technique was perfected for the creation of spectacular animations. It enables the cinematographer to create a psychedelic flow of colors. This type of effect is now usually created through computer animation; slit-scan, however, is a mechanical technique, adapted for film by Douglas Trumbull during the production of Stanley Kubrick's 2001: A Space Odyssey for the "stargate" sequence. It requires an imposing machine capable of moving the camera and its support. The effect was later revived in other productions, for film and television alike. For instance, slit-scan was used in Star Trek: The Next Generation to create the "stretching" of the starship Enterprise-D when it engaged warp drive. Due to the expense and difficulty of the technique, the same three warp-entry shots, all created by Industrial Light and Magic for the series pilot, were reused throughout the series virtually every time the ship went into warp. Slit-scan was also used, by Bernard Lodge, to create the Doctor Who title sequences from December 1973 to 1980.
Description
Slit-scan is an animation created image by image. Its principle is based upon the camera’s relative movement in relation to a light source, combined with a long exposure time. The process is as follows:

An abstract colored design is painted on a transparent support.
This support is set down on the glass of a backlighting table and covered with an opaque mask into which one or more slits have been carved.
The camera (placed high on top of a vertical ramp and decentered in relation to the light slits) takes a single photograph while moving down the ramp. The result: at the top of the ramp, when it is far away, the camera records a fairly precise image of the light slit. As the camera descends, this image grows progressively larger and eventually shifts out of the frame, producing a light trail that meets the edge of the screen.
These steps are repeated for each image, slightly shifting the mask each time, which varies both the colors and the position of the light stream, thus creating the animation.
Naturally, this effect is very time-consuming, and thus expensive, to create. A 10-second sequence requires a minimum of 240 adjustments.
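The procedure above can be sketched as a toy one-dimensional simulation: a single long exposure accumulates the growing, drifting image of the slit into one light trail, and each film frame then requires its own exposure and mask adjustment. All geometry and values here are illustrative:

```python
# Toy 1-D slit-scan exposure: as the camera travels down the ramp the slit's
# image grows, and the single long exposure accumulates every intermediate
# position into one light trail.
WIDTH = 21

def slit_scan_exposure(slit_centre=10, steps=10):
    frame = [0.0] * WIDTH
    for step in range(steps):
        half_width = 1 + step          # slit image grows as the camera nears
        lo = max(0, slit_centre - half_width)
        hi = min(WIDTH, slit_centre + half_width + 1)
        for x in range(lo, hi):        # light accumulates over the exposure
            frame[x] += 1.0
    return frame

frame = slit_scan_exposure()
# The centre, lit at every step, is brightest; brightness falls off toward the
# frame edges, producing the characteristic streak.
print(frame[10], frame[0])  # 10.0 1.0

# Each film frame needs its own exposure and mask adjustment:
print(24 * 10)  # a 10-second sequence at 24 fps -> 240 adjustments
```

The final arithmetic is where the article's "minimum of 240 adjustments" figure comes from: one adjustment per frame at the standard sound-film rate of 24 frames per second.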
Infrared photography
In infrared photography, the film or image sensor used is sensitive to infrared light. The part of the spectrum used is referred to as near-infrared to distinguish it from far-infrared, which is the domain of thermal imaging. Wavelengths used for photography range from about 700 nm to about 900 nm. Usually an "infrared filter" is used; this lets infrared (IR) light pass through to the camera but blocks all or most of the visible light spectrum (and thus looks black or deep red).
When these filters are used together with infrared-sensitive film or sensors, striking "in-camera effects" can be obtained: false-color or black-and-white images with a dreamlike or sometimes lurid appearance known as the "Wood effect."
The effect is mainly caused by foliage (such as tree leaves and grass) strongly reflecting near-infrared light in the same way that visible light is reflected from snow. Chlorophyll is transparent at these wavelengths and so does not block this reflectance (see red edge). There is a small contribution from chlorophyll fluorescence, but it is extremely small and is not the real cause of the brightness seen in infrared photographs.
The other attributes of infrared photographs include very dark skies and penetration of atmospheric haze, caused by reduced Rayleigh scattering and Mie scattering (respectively) in the atmosphere compared to visible light. The dark skies, in turn, result in less infrared light in shadows and dark reflections of those skies from water, and clouds will stand out strongly. These wavelengths also penetrate a few millimeters into skin and give a milky look to portraits, although eyes often look black.
History
Until the early 1900s, infrared photography was not possible because silver halide emulsions are not sensitive to infrared radiation without the addition of a dye to act as a color sensitizer. The first infrared photograph was published in 1910 by Robert W. Wood, who discovered the unusual color effects that now bear his name. Wood's photographs were taken on experimental film that required very long exposures; thus, most of his work focused on landscapes.
Infrared-sensitive photographic plates were developed in the United States during World War I for improved aerial photography.
False-color infrared photography became widely practiced with the introduction of Kodak Ektachrome Infrared Aero Film, Type 8443, in the 1960s. Infrared photography became popular with a number of 1960s recording artists because of the unusual results; Jimi Hendrix, Donovan, Frank Zappa and the Grateful Dead all issued albums with infrared cover photos. The unexpected colors and effects that infrared film can produce fit well with the psychedelic aesthetic that emerged in the late 1960s. Infrared photography can easily look gimmicky, but photographers such as Elio Ciol have made subtle use of black-and-white infrared-sensitive film.
Focusing infrared
Most manual-focus 35mm SLR and medium-format SLR lenses have a red dot, line or diamond, often with a red "R", called the infrared index mark, that can be used to achieve proper infrared focus (some autofocus lenses no longer have the mark). When a single-lens reflex (SLR) camera is fitted with a filter that is opaque to visible light, the reflex system becomes useless for both framing and focusing, so a tripod is necessary: the shot is composed without the filter before the exposure is made with the filter attached. A sharp infrared photograph can be made from a tripod with a narrow aperture (such as f/22) and a slow shutter speed without focus compensation; however, wider apertures like f/2.0 can produce sharp photos only if the lens is meticulously refocused to the infrared index mark.
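The reason a narrow aperture forgives the infrared focus shift can be seen from the standard depth-of-field approximation, DoF ≈ 2*N*c*s^2 / f^2, valid for subject distances well inside the hyperfocal distance. The numbers below (50mm lens, 3m subject, 0.03mm circle of confusion) are illustrative assumptions, not values from the article:

```python
# Approximate total depth of field for subject distance s, aperture number N,
# focal length f and circle of confusion c (all converted to metres):
# DoF ~= 2*N*c*s^2 / f^2, for s well inside the hyperfocal distance.
def depth_of_field_m(N, focal_mm=50.0, subject_m=3.0, coc_mm=0.03):
    f_m, c_m = focal_mm / 1000.0, coc_mm / 1000.0
    return 2 * N * c_m * subject_m ** 2 / f_m ** 2

print(round(depth_of_field_m(22), 2))   # 4.75 m of acceptable focus at f/22
print(round(depth_of_field_m(2.0), 2))  # 0.43 m at f/2.0 -> refocusing matters
```

At f/22 the zone of acceptable focus is metres deep and easily swallows the small infrared focus shift; at f/2.0 it is well under half a metre, so the lens must be refocused to the index mark.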
Most apochromatic ('APO') lenses do not have an infrared index mark and do not need to be refocused for the infrared spectrum, because they are already optically corrected into the near-infrared. Catadioptric lenses do not require this adjustment because their mirrors do not suffer from chromatic aberration.
Zoom lenses may scatter more light through their more complicated optical systems than prime lenses; for example, an infrared photo taken with a 50mm prime lens may look more contrasty and sharper than the same image taken at 50mm with a 28–80mm zoom.
Film cameras
Infrared negatives fogged by the frame counter of a Minolta Maxxum 4.
Many conventional cameras can be used for infrared photography, where infrared is taken to mean light of a wavelength only slightly longer than that of visible light. Photography of rather longer wavelengths is normally termed thermography and requires special equipment.
With some patience and ingenuity, most film cameras can be used. However, some cameras of the 1990s that used 35mm film have infrared sprocket-hole sensors that can fog infrared film (their manuals may warn against the use of infrared film for this reason). Other film cameras are not completely opaque to infrared light.
Black-and-white infrared film
Black-and-white infrared negative films are sensitive to wavelengths in the 700 to 900 nm near-infrared spectrum, and most also have some sensitivity to blue wavelengths. The notable halation effect, or glow, often seen in the highlights of infrared photographs is an artifact of Kodak High Speed Infrared (HIE) black-and-white negative film, not of infrared light itself. The glow, or blooming, is caused by the absence of an anti-halation layer on the back side of Kodak HIE film; this results in a scattering or blooming around the highlights that would usually be absorbed by the anti-halation layer in conventional films. The majority of black-and-white infrared art, landscape, and wedding photography is done using orange (15 or 21), red (23, 25, or 29) or visually opaque (72) filters over the lens to block blue visible light from the exposure. The intent of filters in black-and-white infrared photography is to block blue wavelengths and allow infrared to pass through. Without filters, infrared negative films look much like conventional negative films, because the blue sensitivity lowers the contrast and effectively counteracts the infrared look of the film. Some photographers use orange or red filters to allow slight amounts of blue wavelengths to reach the film and thus lower the contrast. Very dark-red (29) filters block out almost all blue, while visually opaque (70, 89b, 87c, 72) filters block out all blue as well as visible red wavelengths, resulting in a more purely infrared photo that usually looks more contrasty.
Certain infrared-sensitive films like Kodak HIE must only be loaded and unloaded in total darkness. Infrared black-and-white films require special development times but development is usually achieved with standard black-and-white film developers and chemicals (like D-76). Kodak HIE film has a polyester film base that is very stable but extremely easy to scratch, therefore special care must be used in the handling of Kodak HIE throughout the development and printing/scanning process to avoid damage to the film.
Arguably the greatest obstacle to infrared film photography is the increasing difficulty of obtaining infrared-sensitive film; where it is still sold the price can be twice as much as it was before the digital camera era.
Color infrared film
Color infrared transparency films have three sensitized layers that, because of the way the dyes are coupled to these layers, reproduce infrared as red, red as green and green as blue. All three layers are sensitive to blue, so the film must be used with a minus-blue (i.e., yellow) filter. The health of foliage can be determined from the relative strengths of green and infrared light reflected; this shows in color infrared as a shift from red (healthy) towards magenta (unhealthy). Early color infrared films were developed in the older E-4 process, but Kodak later manufactured a color transparency film that could be developed in standard E-6 chemistry, although more accurate results were obtained by developing with the AR-5 process. In general, color infrared does not need to be loaded in total darkness (despite what it said on the can), or refocused to the infrared index mark on the lens.
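The false-colour mapping can be sketched as a simple channel shift: the film's (infrared, red, green) exposure records end up as the (red, green, blue) channels of the final transparency. The pixel values below are hypothetical:

```python
# Channel shift of colour infrared film: the three sensitized layers record
# (infrared, red, green), which the coupled dyes reproduce as (red, green,
# blue) in the final image.
def false_colour(ir, red, green):
    """Map an (IR, R, G) exposure triple to the displayed (R, G, B) triple."""
    return (ir, red, green)

# Healthy foliage reflects infrared strongly, so it renders bright red:
print(false_colour(ir=240, red=90, green=60))  # (240, 90, 60)
```

This is also why the two-exposure digital workaround mentioned below works: an infrared exposure supplies the would-be red channel, and the full-colour exposure supplies the rest.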
In 2007 Kodak announced that production of the 35mm version of their color infrared film (Ektachrome Professional Infrared/EIR) would cease as there was insufficient demand. It is assumed that the 70mm Aerographic format will continue.
There is no currently available digital camera that will produce the same results as Kodak color infrared film, although equivalent images can be produced by taking two exposures, one infrared and the other full-color, and combining them in post-production.
Digital cameras
Digital camera sensors are sensitive to infrared light, which can interfere with normal photography by confusing the autofocus calculations, softening the image (because infrared light is focused differently from visible light), or oversaturating the red channel. Also, some clothing is transparent in the infrared, leading to unintended (at least by the manufacturer) uses of video cameras. Thus, to improve image quality and protect privacy, many digital cameras employ infrared blockers. Depending on the subject matter, infrared photography may not be practical with these cameras, because the exposure times become overly long, often in the range of 30 seconds, creating noise and motion blur in the final image. However, for some subject matter the long exposure does not matter, or the motion blur effects actually add to the image. Some lenses will also show a 'hot spot' in the centre of the image, as their coatings are optimised for visible light and not for IR.
An alternative method of digital SLR infrared photography is to remove the infrared blocker in front of the CCD and replace it with a filter that removes visible light. This filter sits behind the mirror, so the camera can be used normally: handheld shooting, normal shutter speeds, normal composition through the viewfinder, and focusing all work as on an unmodified camera. Metering works but is not always accurate, because of the difference between visible and infrared reflection. When the IR blocker is removed, many lenses that previously displayed a hot spot cease to do so and become perfectly usable for infrared photography.
Since the Bayer filters in most digital cameras absorb a significant fraction of the infrared light, these cameras are sometimes not very sensitive as infrared cameras and can sometimes produce false colors in the images. An alternative approach is to use a Foveon X3 sensor, which does not have absorptive filters on it; the Sigma SD10 DSLR has a removable IR-blocking filter and dust protector, which can simply be omitted or replaced by a deep-red or complete visible-light-blocking filter. The Sigma SD14 has an IR/UV blocking filter that can be removed or installed without tools. The result is a very sensitive digital IR camera. Several Sony cameras have the so-called Night Shot facility, which physically moves the blocking filter away from the light path, making the cameras very sensitive to infrared light. Soon after its development, this facility was 'restricted' by Sony to make it difficult for people to take photos that saw through clothing: the iris is opened fully and the exposure duration is limited to long times of 1/30 second or more. It is still possible to shoot infrared, but neutral density filters must be used to reduce the camera's sensitivity, and the long exposure times mean that care must be taken to avoid camera-shake artefacts.
Fuji have produced digital cameras for use in forensic criminology and medicine which have no infrared blocking filter. The first camera, designated the S3 PRO UVIR, also had extended ultraviolet sensitivity (digital sensors are usually less sensitive to UV than to IR). Optimum UV sensitivity requires special lenses, but ordinary lenses usually work well for IR. In 2007, FujiFilm introduced a new version of this camera, based on the Nikon D200/FujiFilm S5, called the IS Pro, which can also take Nikon lenses. Fuji had earlier introduced a non-SLR infrared camera, the IS-1, a modified version of the FujiFilm FinePix S9100. Unlike the S3 PRO UVIR, the IS-1 does not offer UV sensitivity.
Satellite sensors and thermographic cameras are sensitive to longer wavelengths of infrared, and use a variety of technologies which may not resemble common camera or filter designs. In particular, they often require cooling, since at these wavelengths, and room temperature, all objects (including the camera body, the optics, and the detector itself) are glowing all the time.
Cellphone
A Japanese company, Yamada Denshi, offers an IR camera attachment for some cellphones. With this attachment, however, the cellphone camera can see through some types of clothing. For this reason, online retailers do not sell the attachment; it is available only from a few stores.
To protect people's privacy, given the potential availability of "see-through-clothing" cellphone cameras, one Japanese clothing company is producing underwear made with nylon and polyurethane that blocks infrared radiation.

Friday, October 26, 2007

Visual Effects

Internal effects in Visual Effects
Miniature effect (Models: miniature sets and models, animatronics )
In the field of special effects a miniature effect is a special effect generated by the use of scale models. Scale models are often combined with high speed photography to make gravitational and other effects scale properly.
Where a miniature appears in the foreground of a shot, it is often placed very close to the camera lens, for example when matte-painted backgrounds are used. Since the exposure is set for the main subject so that the actors appear well lit, the miniature must be over-lit in order to balance the exposure and eliminate any depth-of-field differences that would otherwise be visible. This foreground-miniature usage is referred to as forced perspective. Another form of miniature effect uses stop-motion animation.
Use of scale models in the creation of visual effects by the entertainment industry dates back to the earliest days of cinema. Models and miniatures are copies of people, animals, buildings, settings and objects. Miniatures or models are used to represent things that do not really exist, or that are too expensive or difficult to film in reality, such as explosions, floods or fires.
Early History (1900-1976)
French director Georges Méliès incorporated special effects in his 1902 film "Le Voyage dans la Lune" (A Trip to the Moon) — including double-exposure, split screens, miniatures and stop-action.
Some of the most influential visual effects films of these early years such as Metropolis, The Ten Commandments, Citizen Kane, and 2001: A Space Odyssey utilized miniatures.
In the early 1970s, miniatures were often used to depict disasters in such films as The Poseidon Adventure, Earthquake and The Towering Inferno.
The Golden Years (1977 - 1993)
In the days before widespread use of computer generated imagery was practical, miniatures were a common tool in the visual effects artist's arsenal.
Iconic film sequences such as the tanker truck explosion from The Terminator and the bridge destruction in True Lies were achieved through the use of large-scale miniatures.
1993 saw the release of Jurassic Park which for many marked the turning point in the use of computers to create illusions, for which models and miniatures would have previously been employed.
Modern Use
While the use of computer generated imagery has largely overtaken physical models and miniatures in recent years, they are still often employed, especially for projects requiring physical interaction with fire, explosions or water.
Techniques
Kit-Bashing
Carpentry
Vacuum Forming
Molding and Casting
Fiberglass
Welding
Rapid Prototyping
Motion Control Photography
Slurpasaur
Slurpasaur (or Slurposaur) is a nickname given to optically enlarged lizards that are presented as dinosaurs in motion pictures.
In spite of the pioneering work of Willis O'Brien and others in making stop-motion animated dinosaurs since the early days of cinema, producers have used optically enlarged lizards (often with horns and fins glued on) to represent dinosaurs to cut costs, since, it was felt, the public saw dinosaurs as simply giant lizards. The first major use of the 'slurpasaur' was in One Million B.C. (1940), and indeed the special effects in this film were re-used often (in, for example, the 1955 movie King Dinosaur). Other notable films with 'slurpasaurs' include Journey to the Center of the Earth and The Lost World (1960). The latter is notable for a 'dinosaur battle' in which a monitor lizard and a young crocodile fight each other for real. The former is a rare example of lizards actually being convincing in their role: the lizards are supposed to be Dimetrodons and do look superficially similar to those creatures. The public eventually became too sophisticated to accept 'slurpasaurs' as convincing dinosaurs, a factor which (together with the obvious animal-cruelty concerns) led to their disappearance from the special-effects arsenal.
Audio-Animatronics
Audio-Animatronics is the registered trademark for a form of robotics created by Walt Disney Imagineering for shows and attractions at Disney theme parks, and subsequently expanded on and used by other companies. The robots move and make noise, generally speech or song. An Audio-Animatronic is different from android-type robots in that it works off prerecorded moves and sounds, rather than processing external stimuli and responding to them. Animatronics has become a generic name for similar robots created by firms other than Disney.
Creation and early development
Audio-Animatronics were originally a creation of Lee Adams, who started his career with Disney as an electrician at the Burbank studio and was one of Disney's original Imagineers. The first Disney Audio-Animatronic, created by Adams, was the giant squid in the movie 20,000 Leagues Under the Sea, based on the Jules Verne novel of the same title. It had pumps connected to the tentacles: when a pump was activated, air filled a tentacle, making it rise; when the air left, the tentacle coiled up. The term "Audio-Animatronics" was first used commercially by Disney in 1961, was filed as a trademark in 1964, and was registered in 1967.
Perhaps the most impressive of the early Audio-Animatronics efforts was The Enchanted Tiki Room, which opened in 1963 at Disneyland, where a room full of tropical creatures synchronizes eye and facial action with a musical score entirely by electro-mechanical means. For the "cast" of the musical revue, tones recorded on tape vibrated a metal reed, which closed a circuit to trigger a relay, which in turn sent a pulse of electricity to a mechanism that caused a pneumatic valve to move a part of the figure's body. The movements of the attraction's birds, flowers and tiki idols were triggered by sound, hence the "audio" prefix. Each figure's movements had a neutral "natural resting position" that the limb or part would return to when there was no electric pulse. The animation was all on/off moves, such as an open/closed eye or beak; this on/off movement was called a digital system. Another early example was the Lincoln exhibit presented at the State of Illinois Pavilion at the 1964 New York World's Fair. Also at the fair were three other pavilions featuring Audio-Animatronics: Pepsi/UNICEF's "it's a small world", General Electric's Carousel of Progress, and Ford's Magic Skyway.
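The on/off "digital system" described above can be sketched as a simple mapping from the recorded tone track to valve states, with the part falling back to its neutral resting position whenever no tone (and hence no pulse) is present. The tone track and state names below are hypothetical:

```python
# On/off ("digital") control, Tiki Room style: a tone on the tape closes a
# relay and pulses a pneumatic valve; with no pulse, the part returns to its
# neutral resting position.
NEUTRAL = "closed"  # the figure's natural resting position

def drive_beak(tone_track):
    """One valve state per timestep: tone present -> open, absent -> neutral."""
    return ["open" if tone else NEUTRAL for tone in tone_track]

print(drive_beak([1, 1, 0, 1, 0]))  # ['open', 'open', 'closed', 'open', 'closed']
```

Note there are only two states per part; the in-between motion of the later "analog" hydraulic system, described in the next section, is exactly what this scheme cannot express.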
Inner workings
Pneumatic muscles were not powerful enough to move larger objects, like an artificial human arm, so hydraulics were used for large figures. On/off movement would cause an arm to be either up over the artificial man's head (on switch), or down (off switch), but no movement in between. To create realistic in-between movement in large figures, an analog system was used. This gave the figure's limbs/parts a full range of in-between motion, rather than only two positions. The digital system was used with small pneumatic moving limbs (eyelids, beaks, fingers), and the analog system was used for large hydraulic human or animal moving limbs (arms, heads). To permit a high degree of freedom, the control cylinders resemble typical miniature pneumatic or hydraulic cylinders, but mount the back of the cylinder on a ball joint and threaded rod. This ball joint permits the cylinders to float freely inside the frame, such as when the wrist joint rotates and flexes. Disney's technology is not infallible, however; the oil-filled cylinders do occasionally drip or leak. It is sometimes necessary to do makeup touch-up work, or to strip the clothing off a figure due to leaking fluids inside. The Tiki Room remains a pneumatic theatrical set, primarily due to the leakage concerns -- Disney does not want hydraulic fluids dripping down onto the audience during a show.
Because each individual cylinder requires its own control/data channel, the original audio-animatronic figures were relatively simple in design to reduce the number of necessary channels. The first human designs (referred to internally by Disney as series A-1) for example included all four fingers of the hand as one actuator. With modern digital computers and vast data storage, the number of channels is virtually unlimited. The current versions (series A-100) for example now have individual actuators for each finger.
Compliance is a new technology that gives the animatronic figures faster, more realistic motion. In the older figures, a fast limb movement would cause the figure to shake in a weird unnatural way. So, the movements had to be slowed. Speed was sacrificed to gain control. This was frustrating for animators who wanted some faster movements. The new compliance tech allows fast movements with control too. It works by allowing a limb to pass the point where it is commanded to stop, and slow to a stop, instead of an immediate stop. This absorbs shock, much like the shock absorbers on a car or the natural shock absorption in a living body.
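The compliance behaviour described here resembles a damped spring: the limb is allowed to pass the commanded stop and settle onto it rather than halting abruptly. A toy simulation, with gains, time step and units that are illustrative rather than Disney's actual parameters:

```python
# Sketch of "compliance": instead of commanding a hard stop, the limb behaves
# like an underdamped spring-damper, overshooting slightly and then settling.
def damped_move(target, x=0.0, v=0.0, k=40.0, damping=8.0, dt=0.01, steps=300):
    """Semi-implicit Euler integration of a spring-damper toward `target`;
    returns the position trajectory."""
    path = []
    for _ in range(steps):
        a = k * (target - x) - damping * v   # restoring force minus damping
        v += a * dt
        x += v * dt
        path.append(x)
    return path

path = damped_move(target=1.0)
print(max(path) > 1.0)             # True: the limb passes the commanded stop...
print(abs(path[-1] - 1.0) < 0.01)  # True: ...then settles smoothly onto it
```

The overshoot-and-settle profile is what absorbs the shock of a fast move, much like the car-suspension analogy in the text; a hard stop would correspond to clamping `x` at the target the instant it arrives.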
The skin of an AA is made from silicone rubber. Because the neck is so much narrower than the rest of the skull, the skull skin cover has a zipper up the back to permit easy removal. The facial appearance is painted onto the rubber, and standard cosmetic makeup is also used. Over time the flexing causes the paint to loosen and fall off, so occasional makeup work and repainting is required. Generally as the rubber skin flexes, the stress causes it to dry and begin to crack. Figures that do not have a high-degree of motion flexibility (such as the older A-1 series Lincoln) may only need the skin to be replaced every 10 years. The most recent A-100 series human AA's (such as for Bill Clinton) also include flexion actuators that move the cheeks and eyebrows to permit more realistic expressions, but the skin wears out more quickly and needs replacement at least every five years. The wig on each human AA is made from natural human hair for the highest degree of realism, but using real hair creates its own problems since the changing humidity and constant rapid motions of the moving AA carriage hardware throughout the day cause the hair to slowly lose its styling, requiring touch-ups before each day's showing.
Variations of Audio-Animatronics
The technology of the AAs at the theme parks around the world varies in sophistication. It ranges from the blinking and mouth movements at Walt Disney's Enchanted Tiki Room to full body movement, from the mouth to the tips of the fingers, at Stitch's Great Escape! at the Magic Kingdom. Current technologies have paved the way for more elaborate AA figures, such as the 'Ursula head' at Mermaid Lagoon Theater at Tokyo DisneySea, the Indiana Jones figures inside the Indy attractions at both Disneyland & Tokyo DisneySea, the 'swordfighting' pirates inside Disneyland Paris's version of Pirates of the Caribbean, the "lava/rock monster" inside Journey to the Center of the Earth at Tokyo DisneySea, the "Yeti" inside Expedition Everest at Disney's Animal Kingdom, and the Roz figure in the Disney's California Adventure attraction "Monsters, Inc. Mike & Sulley to the Rescue!". In the case of the Roz figure, Disney makes the figure seemingly 'interact' with guests with help from an unseen ride operator who chooses pre-recorded messages for Roz to 'speak', thereby seeming to 'react' to individual guests' unique appearances and clothing. One of the newest figures comes with changes to the classic attraction "Pirates of the Caribbean" at the two American resorts, both now featuring characters from the Pirates of the Caribbean film series. The Jack Sparrow figure is based on his portrayer, Johnny Depp, even featuring his voice and facial mold.
Disney attractions that have utilized Audio-Animatronics
Disneyland Resort
Disneyland
Main Street, U.S.A.
Grand Canyon/Primeval World dioramas (part of the Disneyland Railroad)
Great Moments with Mr. Lincoln (on 'hiatus', with a planned return in 2007)
Adventureland
Walt Disney's Enchanted Tiki Room
Indiana Jones Adventure: Temple of the Forbidden Eye
Jungle Cruise
New Orleans Square
The Haunted Mansion
Pirates of the Caribbean
Club 33 (Inactive as of 2006)
Frontierland
Mark Twain Riverboat
Big Thunder Mountain Railroad
Mine Train Through Nature's Wonderland (since removed)
Critter Country
Splash Mountain
Country Bear Jamboree (since removed)
Fantasyland
"it's a small world"
Matterhorn Bobsleds
Tomorrowland
Star Tours
Buzz Lightyear Astro Blasters
Finding Nemo Submarine Voyage
Submarine Voyage thru Liquid Space (since removed)
Innoventions
America Sings (since removed)
Flight to the Moon (since removed)
Mission to Mars(since removed)
Carousel of Progress (since moved to Walt Disney World's Magic Kingdom)
Parades
Walt Disney's Parade of Dreams
Disney's California Adventure
a bug's land
It's Tough to be a Bug!
Hollywood Pictures Backlot
Jim Henson's Muppet*Vision 3D
Monsters, Inc. Mike & Sulley to the Rescue!
Lucky the Dinosaur
Walt Disney World Resort
The Magic Kingdom
Adventureland

The Enchanted Tiki Room (Under New Management)
Jungle Cruise
Pirates of the Caribbean
Liberty Square
The Hall of Presidents
The Haunted Mansion
Frontierland
Big Thunder Mountain Railroad
Splash Mountain
Country Bear Jamboree
Fantasyland
It's a Small World
Mickey's PhilharMagic
Mickey Mouse Revue (since moved to Tokyo Disneyland)
20,000 Leagues Under the Sea: Submarine Voyage (since removed)
Tomorrowland
The Carousel of Progress
Space Mountain
Stitch's Great Escape!
Buzz Lightyear's Space Ranger Spin
Flight to the Moon (since removed)
Mission to Mars (since removed)
The Timekeeper (since removed)
ExtraTERRORestrial Alien Encounter (since removed)
Epcot
Future World
Spaceship Earth
Innoventions
Universe of Energy
Journey Into Imagination
The Land
Living with the Land
Food Rocks (since removed)
Kitchen Kabaret (since removed)
Horizons (since removed)
Communicore (since removed)
World of Motion (since removed)
World Showcase
The American Adventure
Maelstrom
Gran Fiesta Tour Starring The Three Caballeros
Disney-MGM Studios
Streets of America Area
Star Tours
Jim Henson's Muppet*Vision 3D
Hollywood Blvd. Area
The Great Movie Ride
Disney's Animal Kingdom
Discovery Island
It's Tough to be a Bug!
DinoLand U.S.A.
DINOSAUR (formerly Countdown to Extinction)
Lucky the Dinosaur (since removed)
Asia
Expedition Everest
Tokyo Disney Resort
Tokyo Disneyland
Adventureland
Primeval World diorama (as part of Western River Railroad)
Jungle Cruise
The Enchanted Tiki Room: Get The Fever!
Pirates of the Caribbean
Critter Country
Splash Mountain
Westernland
Country Bear Jamboree
Mark Twain Riverboat
Big Thunder Mountain Railroad
Fantasyland
Cinderella Castle Mystery Tour (since removed)
It's a Small World
The Haunted Mansion / Haunted Mansion Holiday Nightmare
Pooh's Hunny Hunt
The Mickey Mouse Revue
Tomorrowland
Star Tours
Buzz Lightyear's Astro Blasters
Visionarium (since removed)
Meet the World (since removed)
Tokyo DisneySea
Arabian Coast
Sinbad's Storybook Voyage (formerly Sinbad's Seven Voyages)
Magic Lamp Theater
Port Discovery
StormRider
Mermaid Lagoon
Mermaid Lagoon Theater
New York Harbor
Tower of Terror
Lost River Delta
Indiana Jones Adventure: The Temple of the Crystal Skull
Mysterious Island
Journey to the Center of the Earth
20,000 Leagues Under the Sea
Disneyland Resort Paris
Disneyland Park
Frontierland

Phantom Manor
Big Thunder Mountain
Adventureland
Pirates of the Caribbean
Colonel Hathi's Pizza Outpost restaurant (currently semi-operative)
Fantasyland
It's a Small World
La Tanière du Dragon
Discoveryland
Buzz Lightyear Laser Blast
Le Visionarium (since removed)
Les Mystères du Nautilus
Star Tours
Hong Kong Disneyland Resort
Hong Kong Disneyland
Adventureland
Jungle Cruise
Lucky the Dinosaur
Fantasyland
Mickey's PhilharMagic
It's A Small World (Opening 2008)
Tomorrowland
Buzz Lightyear Astro Blasters
Other uses of animatronic figures
Animatronics also gained popularity in the 1980s through use at family entertainment centers such as Showbiz Pizza Place and Chuck E. Cheese's Pizza Time Theatre. They are also used in film and TV special effects. Several passengers and crew of a Pioneer Zephyr are represented in a display of this historic train at Chicago's Museum of Science and Industry. The figures are neatly dressed in the proper style of first-class passengers of their era, and one remarks upon the casual dress of the visitors.
Keying (graphics)
In graphics and visual effects, keying is an informal term for compositing two full frame images together, by discriminating the visual information into values of color and light.
Chroma key
A chroma key is the removal of a color from one image to reveal another "behind" it.
Luma key
A luma key similarly applies transparency (or Alpha channel) to regions (pixels) in an image which fall into a particular range of brightness. This technique is less controllable, but can be used on graphic elements. It is particularly useful for realistic fire keying, and was also used for on-screen text, such as programme titles and credits, before the advent of digital compositing.
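The idea can be sketched in a few lines of pure Python (the 8-bit RGB pixel tuples, threshold values, and function names here are illustrative, not taken from any real compositing package):

```python
def luma(pixel):
    """Approximate brightness of an 8-bit (R, G, B) tuple, using Rec. 601 weights."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_key(image, lo=0, hi=40):
    """Per-pixel alpha: 0 (transparent) where luma falls in [lo, hi], else 255 (opaque)."""
    return [0 if lo <= luma(p) <= hi else 255 for p in image]

# A dark background pixel is keyed out; a bright title pixel is kept.
frame = [(10, 10, 10), (250, 250, 250)]
print(luma_key(frame))  # [0, 255]
```

Keying on a low-luma range like this is how white-on-black title cards could be dropped over a picture: everything darker than the threshold becomes transparent and the bright lettering stays.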
Difference key
A difference key uses a clean background plate of the scene that the foreground object is being keyed out of. The software compares the source video against this plate: pixels that match the plate are keyed out, and pixels that differ are kept as foreground. For example, if your subject is standing in front of a wall, a photo of the same wall is taken with the same camera, from the same angle, focus and distance. The software then compares the video to be keyed with the original photo and generates a mask based upon the difference, hence the name.
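As a rough illustration, a per-channel tolerance comparison against the clean plate is enough to sketch the idea in Python (the pixel format, tolerance, and names are illustrative assumptions):

```python
def difference_key(frame, clean_plate, tol=10):
    """Per-pixel alpha: 0 (keyed out) where the frame matches the clean plate
    within tol on every channel, else 255 (kept as foreground)."""
    mask = []
    for (r1, g1, b1), (r2, g2, b2) in zip(frame, clean_plate):
        same = abs(r1 - r2) <= tol and abs(g1 - g2) <= tol and abs(b1 - b2) <= tol
        mask.append(0 if same else 255)
    return mask

plate = [(120, 120, 120), (120, 120, 120)]  # empty wall, photographed first
frame = [(120, 121, 119), (200, 30, 30)]    # same wall, now with a red subject
print(difference_key(frame, plate))  # [0, 255]
```

Real implementations have to be far more forgiving about noise, grain and small lighting shifts, which is why the camera must not move between the clean plate and the take.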
Matte key
A matte key uses three images: the two images that will be composited, and a black-and-white third image (called "mask") that dictates the blending of the two, with white revealing one image, black the other, and grey revealing a blend of the two together.
Generally, the "bottom" image is called the beauty, the image that appears on top is the fill and the discriminating element (chroma, luma or matte) is called the key or matte.
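A matte key reduces to a per-pixel weighted blend, which can be sketched in Python (the 8-bit pixel format and names are illustrative assumptions):

```python
def matte_key(beauty, fill, matte):
    """Blend two images with a grayscale matte: 255 (white) reveals the fill,
    0 (black) reveals the beauty, and values in between mix the two."""
    out = []
    for b, f, m in zip(beauty, fill, matte):
        a = m / 255.0
        out.append(tuple(round(fc * a + bc * (1 - a)) for fc, bc in zip(f, b)))
    return out

beauty = [(0, 0, 255), (0, 0, 255)]  # bottom image (blue)
fill   = [(255, 0, 0), (255, 0, 0)]  # top image (red)
matte  = [255, 0]                    # white reveals the fill, black the beauty
print(matte_key(beauty, fill, matte))  # [(255, 0, 0), (0, 0, 255)]
```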
Downstream key
A downstream key (or DSK) is a method of matte keying, so it uses three image or video signals: a base signal, a fill signal that is keyed onto it, and a key signal that controls the opacity of the fill. The result is a new signal that can in turn be used as the input for another DSK. This technique is used in television production, where the station name is shown in a corner (DSK 1), a name title is needed for a guest (DSK 2), and subtitles translate his or her speech (DSK 3).
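Because each stage produces an ordinary video signal, DSKs chain naturally. A minimal sketch in Python, treating a "frame" as a flat list of 8-bit RGB tuples (all names and values are illustrative):

```python
def key_over(base, fill, key):
    """One DSK stage: composite fill over base, using key (0-255) as per-pixel opacity."""
    return [tuple(round(f * k / 255 + b * (1 - k / 255)) for f, b in zip(fp, bp))
            for fp, bp, k in zip(fill, base, key)]

program = [(50, 50, 50)] * 3     # base program feed
bug     = [(255, 255, 255)] * 3  # station logo fill
title   = [(255, 255, 0)] * 3    # guest name-title fill

# DSK 1 keys the logo into the first pixel; its output feeds DSK 2,
# which keys the title into the last pixel.
stage1 = key_over(program, bug, [255, 0, 0])
stage2 = key_over(stage1, title, [0, 0, 255])
print(stage2)  # [(255, 255, 255), (50, 50, 50), (255, 255, 0)]
```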
Rotoscoping
Rotoscoping is an animation technique in which animators trace over live-action film movement, frame by frame, for use in animated films. Originally, pre-recorded live-action film images were projected onto a frosted glass panel and re-drawn by an animator. This projection equipment is called a rotoscope, although the device has been replaced by computers in recent years. More recently, the rotoscoping technique has been referred to as interpolated rotoscoping.
History
The technique was invented by Max Fleischer, who used it in his series Out of the Inkwell starting around 1915, with his brother Dave Fleischer dressed in a clown outfit as the live-film reference for the character Koko the Clown.
Fleischer used the rotoscope in a number of his later cartoons as well, most notably the Cab Calloway dance routines in three Betty Boop cartoons from the early 1930s, and the animation of Gulliver in Gulliver's Travels (1939). The Fleischer studio's most effective use of rotoscoping was in its series of action-oriented Superman cartoons, in which Superman and the other animated figures displayed very realistic movement. The Leon Schlesinger animation unit at Warner Brothers, producing cartoons geared more towards exaggerated comedy, used rotoscoping only occasionally.
Walt Disney and his animators employed it carefully and very effectively in Snow White and the Seven Dwarfs in 1937. Rotoscoping was also used in many of Disney's subsequent animated feature films with human characters, such as Cinderella in 1950. Later, when Disney animation became more stylized (e.g. One Hundred and One Dalmatians, 1961), the rotoscope was used mainly for studying human and animal motion, rather than for actual tracing.
Rotoscoping was used extensively in China's first animated feature film, Princess Iron Fan (1941), which was released under very difficult conditions during the Second Sino-Japanese War and World War II. It was used extensively in the Soviet Union, where it was known as "Éclair", from the late 1930s to the 1950s; its use was enforced as a realization of Socialist Realism. Most of the films produced with it were adaptations of folk tales or poems - for example, The Night Before Christmas or The Tale of the Fisherman and the Fish. Only in the early 1960s, after the Khrushchev Thaw, did animators start to explore very different aesthetics.
Ralph Bakshi used the technique quite extensively in his animated movies Wizards (1977), The Lord of the Rings (1978), American Pop (1981), and Fire and Ice (1983).
Bakshi first turned to rotoscoping because 20th Century Fox refused him a $50,000 budget increase to finish Wizards, and he thus had to resort to the rotoscope technique to complete the battle sequences. (This was the same meeting at which George Lucas was denied a $3 million budget increase to finish Star Wars.) Rotoscoping was also used in Heavy Metal (1981), the a-ha music video "Take on Me" (1985), and Don Bluth's Titan A.E. (2000). While rotoscoping is generally known for bringing a sense of realism to larger-budget animated films, the American animation company Filmation, known for its budget-cutting limited TV animation, was also notable for its heavy use of rotoscope to good effect in series such as Flash Gordon, Blackstar and He-Man and the Masters of the Universe.
Smoking Car Productions invented a digital rotoscoping process in 1994 for the creation of its critically acclaimed adventure video game, The Last Express. The process was awarded U.S. Patent 6061462: Digital Cartoon and Animation Process. In the mid-1990s, Bob Sabiston, an animator and computer scientist veteran of the MIT Media Lab, developed a computer-assisted "interpolated rotoscoping" process which the director Richard Linklater later employed in the full-length feature films Waking Life (2001) and A Scanner Darkly (2006). Linklater licensed the same proprietary rotoscoping process for the look of both films, and is the first director to use digital rotoscoping to create an entire feature film.
Additionally, a 2005-06 advertising campaign by Charles Schwab used rotoscoping for a series of television spots, under the tagline "Talk to Chuck." This distinctive look is also the work of Bob Sabiston.
Technique
Rotoscoping is decried by some animation purists but has often been used to good effect. When used as an animator's reference tool, it can be a valuable time-saver.
Rotoscope output can have slight deviations from the true line that differ from frame to frame, which when animated cause the animated line to shake unnaturally, or "boil". Avoiding boiling requires considerable skill from the person performing the tracing, though causing the "boil" intentionally is a stylistic technique sometimes used to emphasize the surreal quality of rotoscoping, as in the music video for Take on Me.
Rotoscoping has often been used as a tool for special effects in live-action movies. By tracing an object, a silhouette (called a matte) can be created that can be used to cut an empty space into a background scene, allowing the object to be placed in the scene. However, this technique has been largely superseded by bluescreen techniques. Rotoscoping has also been used to allow a special visual effect (such as a glow, for example) to be guided by the matte or rotoscoped line. One classic use of traditional rotoscoping was in the original three Star Wars films, where it was used to create the glowing lightsaber effect by creating a matte based on sticks held by the actors.
The term "rotoscoping" (typically abbreviated as "roto") is now generally used for the corresponding all-digital process of tracing outlines over digital film images to produce digital mattes. This technique is still in wide use for special cases where techniques such as bluescreen will not pull an accurate enough matte. Rotoscoping in the digital domain is often aided by motion tracking and onion-skinning software, and is often used in the preparation of garbage mattes for other matte-pulling processes.
Examples of rotoscoping
in animated films:
Snow White and the Seven Dwarfs
The 1940s Superman cartoons
A Scanner Darkly
Waking Life
The Lord of the Rings
in live action films:
A Fistful of Dollars (title sequence)
Star Wars (lightsaber effects)
Forrest Gump (television sequences)
Godzilla Shōwa and Heisei series (rays)
in video games:
Prince of Persia (among the first applications of rotoscoping in video gaming)
Another World
Commander Blood
Flashback: The Quest for Identity
The Last Express
Hotel Dusk: Room 215
in music videos:
Take on Me by a-ha
Money for Nothing by Dire Straits
Money for Nothing/Beverly Hillbillies by "Weird Al" Yankovic
Breaking the Habit by Linkin Park
in television shows:
Delta State
Flash Gordon (Filmation 1978)
American Idol: Idol Gives Back live duet between Celine Dion and Elvis Presley (together on stage with use of rotoscoping)
in commercials:
Charles Schwab commercials
Apple iPod commercials
Chroma key (Blue Screen)
Chroma key is a technique for blending two images, in which a color (or a small color range) from one image is removed (or made transparent), revealing another image behind it. This technique is also referred to as color keying, colour-separation overlay (CSO; primarily by the BBC), greenscreen, and bluescreen. It is commonly used for weather forecast broadcasts, wherein the presenter appears to be standing in front of a large map, but in the studio it is actually a large blue or green background.
History
Prior to the introduction of digital compositing, the process was a complex and time-consuming one known as the "traveling matte". The blue screen and traveling matte method were developed in the 1930s and were used to create special effects for "The Thief of Bagdad," a film produced by Alexander Korda; the director of special effects was Larry Butler. Butler's father, William Butler, appeared in more than 260 silent films, beginning his career in 1908 at the age of 48 as one of the pioneers of early filmmaking. The first Academy Award for Best Picture was awarded in 1928 for a film called "Wings," a movie about World War I flying aces. William Butler was 68 at the time and had become interested in special effects. He asked his son, Larry, to drop out of Burbank High School in order to help him work on his contract to do special effects for the sequel to "Wings," called "Dirigible," which appeared in theaters in 1931.
The credit for the development of the blue screen is given to Larry Butler, nominated for five Academy Awards, winner of two, and a longtime executive at Columbia Pictures. An inventor widely believed to be a mechanical genius, Butler won the Academy Award for Special Effects for "The Thief of Bagdad" in 1940. He had invented the blue screen and traveling matte technique in order to achieve visual effects that were unprecedented in 1940. He was also the first special effects man to create these effects in Technicolor, which was in its infancy at the time.
In 1950, Warner Bros. employee and ex-Kodak researcher Arthur Widmer began working on an ultraviolet traveling matte process. He also began developing bluescreen techniques: one of the first films to use them was the 1958 adaptation of the Ernest Hemingway novella The Old Man and the Sea, starring Spencer Tracy. The background footage was shot first, and the actor or model was filmed against a bluescreen carrying out their actions. Simply placing the foreground shot over the background shot would create a ghostly image over a blue-tinged background; the actor or model must be separated from the background and placed into a specially made "hole" in the background footage.
The bluescreen shot was first rephotographed through a blue filter so that only the background was exposed. A special film was used that creates a black-and-white negative image: a black background with a subject-shaped hole in the middle. This is called a 'female matte'. The bluescreen shot was then rephotographed again, this time through a red and green filter, so that only the foreground image was cast on film, creating a black silhouette on an unexposed (clear) background. This is called a 'male matte'. The background image was then rephotographed through the male matte, and the shot rephotographed through the female matte. An optical printer with two projectors, a film camera and a 'beam splitter' combines the images together one frame at a time. This part of the process must be very carefully controlled to ensure the absence of 'black lines'.
During the 1980s, minicomputers were used to control the optical printer. For The Empire Strikes Back, Richard Edlund created a 'quad optical printer' that accelerated the process considerably and saved money. He received a special Academy Award for his innovation.
One drawback to the traditional traveling matte is that the cameras shooting the images to be composited cannot easily be synchronized. For decades, such matte shots had to be done "locked-down" so that neither the matted subject nor the background would move at all. Later, computer-timed motion control cameras alleviated this problem, as both the foreground and background could be filmed with exactly the same camera moves.
Petro Vlahos was awarded an Academy Award for his development of these techniques. His technique exploits the fact that most objects in real-world scenes have a color whose blue component is similar in intensity to their green component. Zbig Rybczynski also contributed to bluescreen technology. Some films make heavy use of chroma key to add backgrounds that are constructed entirely using computer-generated imagery (CGI).
Performances from different takes can even be composited together, which allows actors to be filmed separately and then placed together in the same scene. Chroma key allows performers to appear to be in any location without even leaving the studio.
Computer development also made it easier to incorporate motion into composited shots, even when using handheld cameras. Reference points can now be placed onto the colored background (usually as a painted grid, X's marked with tape, or equally spaced tennis balls attached to the wall). In post-production, a computer can use the references to adjust the position of the background, making it match the movement of the foreground perfectly.
In the past decade, the use of green has become dominant in film special effects. The main reasons are that green has a higher luminance value than blue, and that in early digital formats the green channel was sampled twice as often as the blue, making it easier to work with. The choice of color is up to the effects artists and the needs of the specific shot. Red is usually avoided due to its prevalence in normal human skin pigments, but can often be used for objects and scenes which do not involve people. Weathermen often use a field monitor to the side of the screen to see where they are putting their hands. A newer technique is to project a faint image onto the screen.
The process
The principal subject is filmed or photographed against a background consisting of a single color or a relatively narrow range of colors, usually blue or green because these colors are considered to be the furthest away from skin tone. The portions of the video which match the preselected color are replaced by the alternate background video. This process is commonly known as "keying", "keying out" or simply a "key".
In analog color TV, color is represented by the phase of the chroma subcarrier relative to a reference oscillator. Chroma key is achieved by comparing the phase of the video to the phase corresponding to the preselected color. In-phase portions of the video are replaced by the alternate background video.
In digital color TV, color is represented by a triple of numbers (red, green, blue). Chroma key is achieved by a simple numerical comparison between the video and the preselected color. If the color at a particular point on the screen matches (either exactly, or in a range), then the video at that point is replaced by the alternate background video.
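That numerical comparison can be sketched in a few lines of Python (8-bit RGB tuples and a simple per-channel tolerance; real keyers use far more sophisticated color-distance and edge handling, and all names here are illustrative):

```python
def chroma_key(fg, bg, key_color, tol=30):
    """Replace each fg pixel within tol of key_color (on every channel) with the
    corresponding bg pixel; all other fg pixels pass through unchanged."""
    out = []
    for f, b in zip(fg, bg):
        if all(abs(fc - kc) <= tol for fc, kc in zip(f, key_color)):
            out.append(b)
        else:
            out.append(f)
    return out

GREEN = (0, 255, 0)
fg = [(5, 250, 10), (200, 120, 90)]  # greenscreen pixel, skin-tone pixel
bg = [(0, 0, 128), (0, 0, 128)]      # weather map
print(chroma_key(fg, bg, GREEN))  # [(0, 0, 128), (200, 120, 90)]
```

The tolerance is what turns an exact match into a range match: widening it swallows more of the slightly uneven screen, but risks eating similarly colored parts of the subject.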
Clothing
A chroma key subject must not wear clothing similar in color to the chroma key color(s) (unless intentional), because the clothing may be replaced with the background video. An example of intentional use of this is when an actor wears a blue covering over a part of their body to make it invisible in the final shot. This technique is used in the Harry Potter films, to make Harry's cloak appear to be invisible.
Background
Blue is generally used for both weather maps and special effects because it is complementary to human skin tone. However, in many instances, green has become the favored color because digital cameras retain more detail in the green channel and it requires less light than blue. Although green and blue are the most common, any color can be used. Occasionally, a magenta background is used. With better imaging and hardware, many companies are avoiding the confusion often experienced by weather presenters, who must otherwise watch themselves on a monitor to see the image shown behind them, by lightly projecting a copy of the background image onto the blue/green screen. This allows the presenter to accurately point and look at the map without referring to monitors.
Even lighting
The most difficult part of setting up a bluescreen or greenscreen is even lighting and the avoidance of shadow, because it is ideal to have as narrow a color range as possible being replaced. A shadow would present itself as a darker color to the camera and might not register for replacement. This can sometimes be seen in low-budget or live broadcasts where the errors cannot be manually repaired. The material being used affects the quality and ease of having it evenly lit. Materials which reflect light will be far less successful than those that do not. A plastic sheet will reflect light and have a hotspot in the center which will come out as a pale area, while the edges may be darkened. A cotton sheet will absorb more light and have a more even color range.
Recently a much simpler and easier way to create an evenly lit background has been developed. By using screens made from a retroreflective fabric illuminated by a ring of LEDs around the camera lens, it is possible to produce very even, bright blue or green backgrounds whilst only consuming around five watts. Products such as Reflecmedia's Chromatte and LiteRing systems enable chroma key backgrounds to be created very simply and quickly, freeing the user to concentrate on lighting the foreground creatively. The systems are extremely energy efficient and enable users to create virtual studios in areas where space and energy are at a premium.
Srivenkat Bulemoni