The final touch to the project, the one that pulls everything together and does the most to communicate the mood of the scene, is the musical score. The music needs to match the scene both visually and emotionally, and also fit in with the sound effects and dialog. Our score was (unfortunately) created without any real instruments whatsoever; it was composed, performed, and mixed by three people on a single computer running Cubase.
Rather than recording the analog audio feed from our electric piano, the computer recorded the MIDI signals created by each keypress. This enabled us to go back into a performance and adjust each note’s position, length, velocity, and expression, or move them up and down across the scale, or change their tempo. This is particularly important for adjusting each track to better match the instrument that is playing it.
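To show why recording MIDI instead of audio matters, here is a rough sketch of the kinds of edits it makes possible. This is not Cubase's internal format, just a simplified note structure I'm inventing for illustration: each note is a start time, a length, a pitch, and a velocity, and "editing the take" means editing numbers.

```python
# A simplified MIDI-style performance: each keypress is just data.
notes = [
    {"start": 0.0, "length": 0.5, "pitch": 60, "velocity": 80},  # middle C
    {"start": 0.5, "length": 0.5, "pitch": 64, "velocity": 70},
    {"start": 1.0, "length": 1.0, "pitch": 67, "velocity": 90},
]

def transpose(notes, semitones):
    """Move every note up or down the scale without re-recording."""
    return [{**n, "pitch": n["pitch"] + semitones} for n in notes]

def stretch_tempo(notes, factor):
    """Slow the performance down (factor > 1) or speed it up (factor < 1)."""
    return [{**n, "start": n["start"] * factor, "length": n["length"] * factor}
            for n in notes]

def soften(notes, amount):
    """Lower velocities so the part sits better under the melody."""
    return [{**n, "velocity": max(1, n["velocity"] - amount)} for n in notes]

# Drop the part an octave, slow it by half again, and play it more gently -
# the kind of adjustment that matches a track to its assigned instrument.
edited = soften(transpose(stretch_tempo(notes, 1.5), -12), 20)
```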
Each of these performances sits in a track, and can also be manipulated on the timeline, as well as copied and pasted, extended, shortened, and affected by filters without destroying the MIDI data. At any time, any note can still be changed. We now have all the parts for the orchestra laid out and assigned, and it’s time to attach instrument sounds to them.
So far, I’ve only talked about creating the images for our film, but sound and music are extremely important, and can be quite time consuming. One of the reasons that the animatic was so important for our tight-deadline project was that it gave us something to write music to before any animation was done. As soon as I’d finished the animation edit, we could start adding the sound effects. I’ve probably posted enough charts over the past month, but here’s one more:
That is the timeline for the whole project. Ideally, the scripting period should have been longer, but you can see that by creating an animatic at the beginning of the project, the composers were able to use the entire month. To a lesser extent, so was the sound effects department. The storyboards showed the action pretty well, and from them we could see what sounds we would need: swords clashing, cannons firing, and creaking ships. Most of these sounds we found on the internet, but many we recorded ourselves. All of them needed to be adjusted and sweetened so that they could work together.
We used the many powerful filters of Adobe Audition to turn fuzzy, static-filled, mono internet clips into cleaner, more expansive stereo tracks. Our own recorded sound effects needed tweaking as well, and we added ambience, subtle reverb, and stereo offsets to simulate a more “open sea” type of environment. We built this library out until the first edit was finished, and then were able to begin matching audio cues to visual actions and create a more realistic mix. This is extremely important for animated projects, since there isn’t any live sound captured on set.
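Audition's filters aren't scriptable from here, but one of the tricks mentioned above, turning a mono clip into a wider-feeling stereo track with a small channel offset, is easy to sketch by hand. This is my own illustration of the general technique (sometimes called the Haas effect), not what Audition does internally; the test tone stands in for a downloaded cannon blast.

```python
import numpy as np

def widen_mono(mono, sample_rate=44100, offset_ms=12.0):
    """Fake a wider stereo image from a mono clip by delaying one channel
    by a few milliseconds: the ear hears width, not a distinct echo."""
    offset = int(sample_rate * offset_ms / 1000.0)
    left = np.concatenate([mono, np.zeros(offset)])   # left plays first
    right = np.concatenate([np.zeros(offset), mono])  # right lags slightly
    return np.stack([left, right], axis=1)            # shape: (samples, 2)

# a one-second 220 Hz test tone standing in for a mono sound effect
t = np.linspace(0, 1, 44100, endpoint=False)
clip = 0.5 * np.sin(2 * np.pi * 220 * t)
stereo = widen_mono(clip)
```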
As of today, www.isaacbotkin.com is twelve months old. As of this post, there are 64 entries in the archives, which means that I’ve somehow managed to average more than one post per week… even if they haven’t always turned up regularly.
I’d like to thank all the readers who frequent this site, and everyone who has emailed in questions, comments, and news reports. I have greatly appreciated your encouragement and participation, and am looking forward to another year of posting about projects and developments in film and video.
The final stage of creating the visuals is compositing: combining the backgrounds and foreground animation, and adding any special effects and color correction. This is easier with 3D animation than live action because animated elements can be rendered separately from the background and combined according to depth, which makes it easy to slip atmospheric effects like haze and battle smoke in between the layers.
In addition to backgrounds and character animation, I created some moving mist to make up the smoke of battle, some smoky wisps for closer shots, and the smoke from the cannons. We also needed several shots of debris, for when the cannon fire hits the sides of the ships, and several different splashes for cannonballs that miss and men overboard. To add life to the island, we also needed seagulls.
Above is Shot 32. The rendered image is what Lightwave exports; it also exports a depth map, which Lightwave calls the z-buffer. The z-buffer shows how far away objects are from the camera: the captain, the object closest to the camera, is black, the pirate ship is grey, and the sky is white. Then we render out the background image, which fits together with the foreground.
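The z-buffer trick above can be sketched in a few lines. This is a toy illustration of depth-based compositing with made-up 2x2 "images", not Lightwave's actual compositor; depth follows the same convention as the z-buffer described above (0.0 = nearest/black, 1.0 = farthest/white).

```python
import numpy as np

def composite_by_depth(fg, fg_depth, bg, bg_depth):
    """Keep whichever layer is nearer the camera at each pixel."""
    nearer = fg_depth < bg_depth                 # True where foreground wins
    return np.where(nearer[..., None], fg, bg)   # broadcast over RGB channels

def add_haze(image, depth, haze_color, strength=0.6):
    """Fade distant pixels toward a haze color - the 'smoke of battle'."""
    fade = (depth * strength)[..., None]
    return image * (1.0 - fade) + np.asarray(haze_color) * fade

# tiny stand-ins: a red foreground layer over a blue sky layer
fg = np.zeros((2, 2, 3)); fg[..., 0] = 1.0
bg = np.zeros((2, 2, 3)); bg[..., 2] = 1.0
fg_depth = np.array([[0.1, 0.9], [0.1, 0.9]])   # left pixels are near
bg_depth = np.full((2, 2), 0.5)                 # sky sits at mid-depth
final = composite_by_depth(fg, fg_depth, bg, bg_depth)
hazed = add_haze(final, bg_depth, (1.0, 1.0, 1.0))
```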
Well, once the animation of the characters, ships, and foreground images was taken care of, it was time to create the backgrounds. First off, we needed an ocean. With square plastic blocks there are only a few ways to make an ocean, but in 3D I had the advantage of being able to control an infinite supply of plastic blocks.
This clip is my first 3D water test. I have 3D blocks moving up and down, controlled by an animated displacement map. If they get high enough, the top of the block turns white, so the peaks of my waves have white crests. Once I figured out how to do this, it was simpler than it looks. There were some downsides, though. While I can in theory control infinite blocks, my tight deadline meant that I probably wouldn’t have the time to render more than about 40,000 per scene.
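The block-water setup can be sketched like this. I'm standing in for Lightwave's animated displacement map with two crossed sine waves, which is my own simplification, but the idea is the same: each block's height is read from a moving pattern, and anything pushed above a threshold turns white to form a crest.

```python
import math

def wave_height(x, y, t, amplitude=1.0, wavelength=8.0, speed=2.0):
    """Displacement for the block at grid position (x, y) at time t.
    Two crossed, drifting sine waves stand in for the displacement map."""
    a = math.sin(2 * math.pi * (x + speed * t) / wavelength)
    b = math.sin(2 * math.pi * (y + speed * t * 0.7) / wavelength)
    return amplitude * 0.5 * (a + b)

def block_color(height, crest=0.6):
    """Blocks pushed above the crest threshold turn white; the rest stay blue."""
    return "white" if height > crest else "blue"

# one frame of a tiny 4x4 patch of ocean (the real thing used ~40,000 blocks)
frame = [[block_color(wave_height(x, y, t=0.0)) for x in range(4)]
         for y in range(4)]
```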
That’s a lot of blocks. Fortunately, the storyboard (coming in handy once again) showed me exactly how many scenes needed close-up shots of the 3D water. There were actually not that many. In most shots, the ocean is much further away and I could cheat it with some simple 2D water. For that I created these two textures, which tile and loop, and mapped them to a single, flat, ocean polygon. The grey image is the bump map, and the blue one is color. Here is what it looks like animated. Not as nice as the 3D water, but fine for long shots.
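The 2D cheat amounts to sliding a seamless tile across a flat polygon. A minimal sketch of that idea, assuming simple 0-1 texture coordinates: offset the UVs a little each frame, and the modulo wrap keeps the looping tile drifting forever.

```python
def scrolled_uv(u, v, t, speed_u=0.05, speed_v=0.02):
    """Offset the texture coordinates each frame; the modulo keeps the
    seamless color and bump tiles looping across the ocean polygon."""
    return ((u + speed_u * t) % 1.0, (v + speed_v * t) % 1.0)

# four frames in, a point near the tile's right edge has wrapped around
u, v = scrolled_uv(0.9, 0.5, t=4)
```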
Of course, we also need a sky. On the open sea the sky is a large part of the environment, and it would also dictate the lighting for the shots. As much as possible, I wanted the visual style of this short to mimic the great swash-buckling illustrations by Howard Pyle and N. C. Wyeth, who always had strong warm light, long cool shadows, and high puffy clouds. Of course, they also had a lot more swash and buckle than I can squeeze out of rigid little plastic men, but at least I can do puffy clouds.
With all the foreground elements created, it was time to begin animation. Now I’m not going to go into too much detail about the nuts and bolts of 3D animation, since there are many wonderful animation resources on the internet already, and I’m trying to teach more about the production process we used than the nitty-gritty technical stuff.
However, it’s similar to stop motion and even live action in that you have a 3-dimensional stage to lay out your scene in. I can load the ship into Lightwave, put the sailors on the deck, and position lights and camera just like real life. However, there is a bit more control in the computer, since I can scrub back and forth through my frames as I animate to check my progress – something that’s tricky to do with claymation.
Three days into the project, it was time to begin modeling. As soon as we had a storyboard finished, it was easy to see exactly what we needed to build: about twenty men, two ships, and one island. I started out using my favorite modeling program, Lightwave 3D.
Fortunately, the characters are very simple and were built almost entirely out of primitives; the torso is a box stretched into a trapezoid, the neck is a cylinder, and the head is two spheres and a cylinder, topped with another cylinder. As in real life, all the details are simply flat texture maps. Also, our subjects have a shiny plastic finish, making them very well suited to 3D animation. I decided to stick with the rigid joints of the figures, and so we skipped an internal skeleton for deformation, and went with a simple joint hierarchy.
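A simple joint hierarchy just means each part inherits its parent's transform, so rotating a shoulder swings the whole arm with no mesh deformation at all. Here is a toy 2D sketch of that idea (my own illustration, not Lightwave's rigging system):

```python
import math

class Joint:
    """A rigid joint: a rotation plus an offset from its parent.
    Children inherit the parent's transform, which is all a stiff
    plastic figure needs - no deformation skeleton required."""
    def __init__(self, name, offset=(0.0, 0.0), angle=0.0, parent=None):
        self.name, self.offset, self.angle, self.parent = name, offset, angle, parent

    def world_angle(self):
        return self.angle if self.parent is None else self.parent.world_angle() + self.angle

    def world_position(self):
        if self.parent is None:
            return self.offset
        px, py = self.parent.world_position()
        a = self.parent.world_angle()      # rotate our offset into parent space
        ox, oy = self.offset
        return (px + ox * math.cos(a) - oy * math.sin(a),
                py + ox * math.sin(a) + oy * math.cos(a))

# raise the arm by rotating only the shoulder; the hand follows rigidly
torso = Joint("torso")
shoulder = Joint("shoulder", offset=(1.0, 2.0), angle=math.pi / 2, parent=torso)
hand = Joint("hand", offset=(1.0, 0.0), parent=shoulder)
```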
The only complex part of the texturing process was the faces. We decided early on to have animated features and lip-syncing, so all the facial image maps were separated into three layers: mouth, eyes, and eyebrows. In this animated gif you can see that by switching from one eye or eyebrow image to another, it is easy to animate simple expressions.
The mouths were a little more complicated because more images were needed. The mouth shapes, called phonemes, need to match the phonetic sounds of the voice track to convincingly sync with the dialog. In addition to the smile and frown shapes, we had M-P-B, E, A-I, O, and U. Because our characters’ faces are so simple, we had to skip L and F-V; both of those phonemes require teeth. We also added a couple of extra “emotive” mouth shapes… and as many people have pointed out to me, “Arr” is a vital pirate phoneme.
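Swapping mouth images per phoneme boils down to a lookup table. The image names and the fallback below are hypothetical, invented for this sketch, but the structure mirrors the phoneme set described above: toothy shapes like L simply fall back to a neutral rest mouth.

```python
# Mouth image-map name for each phoneme group (names are made up here).
PHONEME_MAP = {
    "M": "mouth_mpb", "P": "mouth_mpb", "B": "mouth_mpb",
    "E": "mouth_e",
    "A": "mouth_ai", "I": "mouth_ai",
    "O": "mouth_o",
    "U": "mouth_u",
    "ARR": "mouth_arr",   # the vital pirate phoneme
}

def mouth_for(phoneme, default="mouth_rest"):
    """Pick the mouth image to swap in for this frame of dialog.
    Phonemes we skipped (L, F-V need teeth) fall back to the rest mouth."""
    return PHONEME_MAP.get(phoneme.upper(), default)

# a few frames of a voice track broken into phonemes
track = [mouth_for(p) for p in ["A", "ARR", "M", "L"]]
```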