Due to the electrical constraints that existed during the invention of video, television has relied on interlaced fields since the 1930s. Due to the chemical developments that guided the invention of photo-projection, film hasn’t. Due to the technological advances of recent years, computer displays are completely different. Up until a few years ago, using three (or more) totally separate display systems was fine, but today we’re trying to merge them all together so we can watch movies on our cellphones and check email on our televisions.
For now, I’m going to ignore LCD panels, plasma screens, and DLP projectors. How a CRT actually displays video is complicated in the details but reasonably simple in outline: NTSC video is 30 frames per second (technically, only 29.97fps), and usually 480 lines (pixels) high. However, each of these frames contains two fields, which means the video is in effect 60 fields per second, each only 240 lines high. Played at speed we get a very fluid moving image that resembles a full-resolution picture. Unfortunately, the full frames look something like this:
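To make that field structure concrete, here’s a minimal Python sketch (the function name is my own) that splits a full frame, represented as a list of scanlines, into its two interlaced fields:

```python
def split_fields(frame):
    """Split a full frame (a list of scanlines) into its two fields.

    Interlacing alternates scanlines: one field carries the
    even-numbered lines, the other the odd-numbered lines, so a
    480-line frame becomes two 240-line fields shown in sequence.
    """
    upper = frame[0::2]  # even-numbered scanlines
    lower = frame[1::2]  # odd-numbered scanlines
    return upper, lower

frame = [f"scanline {n}" for n in range(480)]
upper, lower = split_fields(frame)
# Each field carries only half the vertical resolution.
assert len(upper) == len(lower) == 240
```

On a CRT the two fields are never on screen at once, which is why a paused full frame of motion video shows the comb-tooth artifacts described here.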
This is not a problem unless the video is being shown on a non-interlaced display, such as a film projector. Film is shot and projected at 24 fps, and each frame is a full-resolution exposure just like any other. When movies are broadcast on television, they must be converted to the video framerate by doubling up certain frames. Every 1/6th of a second, four film frames must be turned into five video frames or ten video fields, spaced as evenly as possible. The best way to do this is called a 3:2 pull-down.
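The cadence itself is simple enough to sketch in a few lines of Python (names here are my own, for illustration): frames alternately contribute three fields and two fields, so every four film frames become ten video fields.

```python
def pulldown_32(film_frames):
    """Map film frames to interlaced video fields via 3:2 pull-down.

    Frames alternately contribute 3 fields and 2 fields, so every
    4 film frames (1/6 s at 24fps) become 10 fields (5 video frames).
    """
    fields = []
    for i, frame in enumerate(film_frames):
        copies = 3 if i % 2 == 0 else 2
        fields.extend([frame] * copies)
    return fields

fields = pulldown_32(["A", "B", "C", "D"])
# 4 film frames -> 10 fields: A A A B B C C C D D
assert fields == ["A"] * 3 + ["B"] * 2 + ["C"] * 3 + ["D"] * 2
```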
As you can see, it spreads the frames across the fields 3-2-3-2 at a time. It’s not as smooth and even as true 24fps playback, but it’s the least stuttery option for NTSC video. It’s also a readily available option; nearly every video camera now has some sort of 24fps shooting mode. This has led to plenty of confusion: first in how we label different formats, then in how many different formats there actually are (60i, 59.94, 50i, 30p, 29.97, 25p, 24, 23.97, etc, etc…), and finally in when we use each one.
Too many filmmakers think that the 24p mode on their new camcorder will magically make everything look better, so regardless of what they are shooting or how they plan to edit, they shoot 24fps. Yes, film looks better than video, but the strobing flicker of 24p video that has been improperly converted to NTSC (or played natively on the wrong hardware) looks nothing like the smooth but slightly slower rate of real film. In my opinion, 24p should be reserved for productions that will be printed to film, made into a 23.976fps DVD, or matched to real telecine’d film.
For projects shot on video and displayed on video, a true video format can give better results; particularly if your project is a documentary, industrial video, news report, or anything that is traditionally in a broadcast format. Furthermore, unless your camera has a true progressive CCD array, you might get a better image (in addition to more flexibility) by shooting an interlaced image and deinterlacing in post. RE:Vision’s FieldsKit is the best tool for pull-downs, pull-ups, and any other field management processes.
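To show what “deinterlacing in post” means at its crudest, here’s a sketch of a naive “bob” deinterlace in Python (my own toy example, not how FieldsKit works): one 240-line field is stretched back to full height by interpolating the missing lines.

```python
def bob_deinterlace(field):
    """Naive 'bob' deinterlace of one field (a list of numeric scanlines).

    Each field line is kept, and the missing line below it is filled
    by averaging with the next field line, doubling the line count.
    Dedicated tools use far more sophisticated interpolation.
    """
    frame = []
    for i, line in enumerate(field):
        frame.append(line)
        nxt = field[i + 1] if i + 1 < len(field) else line
        frame.append((line + nxt) / 2)
    return frame

field = [0.0, 10.0, 20.0]  # three scanlines of one field
frame = bob_deinterlace(field)
# -> [0.0, 5.0, 10.0, 15.0, 20.0, 20.0]: six lines, gaps interpolated
assert frame == [0.0, 5.0, 10.0, 15.0, 20.0, 20.0]
```

Simple interpolation like this trades vertical detail for the absence of comb artifacts, which is exactly why purpose-built deinterlacers are worth the money.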
Of course, this process is greatly simplified if you happen to be using PAL gear. You can shoot at 50i, ready for non-American broadcast or deinterlaced playback at 25p. Going from 25fps to 24fps is a slowdown of about 4%, which means that a 90 minute feature on PAL would only be nearly 4 minutes longer on film. Since nearly all NLEs and video tools support PAL, and proprietary 24p support can still be a little sketchy, it is a simpler option. Carefully research all the options available for specific projects.