SIGGRAPH 2009 Report

Obviously SIGGRAPH was a while ago, but I’ve been out of town working on a project that I will reveal shortly. Firstly, I’d like to report that my talk went pretty well. I was blessed to be part of a fantastic session and lots of neat material got covered. In fact, my stuff seemed somewhat redundant, because Adolph Lusinky’s presentation on Bolt described a similar aim and a superior end result. At the end of the day, though, it was probably good to present both methods: the studio way and the lone animator way.

Of course, there were many other great talks and papers presented over the week. Some of my favorites were the Pixar team explaining new feather and fur techniques for Up, further Bolt discussion covering the procedural animation and real-time collisions for the hamster ball, and Digital Domain’s breakdowns of shots from Benjamin Button. The paper I found most interesting came from the University of Wisconsin-Madison and dealt with a new approach to video stabilization.

However, I spent most of my time in the Exhibition hall. SIGGRAPH 2009 was a much smaller affair than usual, with somewhere between 16,000 and 18,000 attendees total, and there were far fewer exhibitors as well. For example, Disney, Newtek, Adobe, Weta and many others chose to represent themselves without booths. Apparently this is partly a result of the current economy, and partly a natural occurrence whenever SIGGRAPH is held anywhere but L.A. (where last year’s attendance was 28,000).

There were still a lot of great vendors, though. The busiest booth was probably Pixar’s, which was a full-sized version of the Up house, complete with Carl & Ellie mailbox and picket fence. They hosted many RenderMan talks there and made several new RenderMan announcements, including a never-before-seen educational pricing scheme.

There were a number of new product announcements, such as Spheron’s new digital cinema camera. Spheron is best known for their 360° HDRI cameras, which are usually used for shooting spherical light-probe images, but their functional HDRV prototype shoots either 32- or 48-bit images off a 35mm non-CMOS sensor at up to 30fps. Not much else is known at this point, since only the camera (with a PL-mounted Arri prime) was on display; its tethered control panel was not. Until we learn more there isn’t much else to say, but recording 20 stops of latitude in a single file, a contrast ratio of roughly 1,000,000:1, is pretty amazing.

That was the only video camera on the floor. Video displays, however, were everywhere. There were more HD screens and projectors at this event than I’d seen in my entire life up until that point. The video wall above was showing 1,250 individual video files, all of which were streaming from a single solid-state drive made by Fusion-io. It can handle an incredible amount of bandwidth. Spheron camera techs take note.
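
Just to put a number on “incredible,” here is a quick back-of-envelope calculation of what serving 1,250 simultaneous streams from one drive might require. The per-stream bitrate is purely my assumption, since the clip specs weren’t posted:

    # Rough bandwidth math for the Fusion-io video wall (Python).
    # The per-stream bitrate is an assumed figure, not a published spec.
    streams = 1250
    mbit_per_stream = 8  # guess: roughly 8 Mbit/s per compressed HD clip

    total_mbit = streams * mbit_per_stream
    print(f"{total_mbit / 1000:.1f} Gbit/s, about {total_mbit / 8000:.2f} GB/s of sustained reads")
    # prints: 10.0 Gbit/s, about 1.25 GB/s of sustained reads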

Other video walls demonstrated everything from GPU acceleration to data presentation. I got to play with Google Earth at about 6k resolution, and mess around with Maya at 4k in full 3D. Stereoscopic displays were everywhere, with at least one in every other booth. In my opinion, the best was made by Planar3D. All it requires is two off-the-shelf screens and a one-way mirror, and it offers flicker-free 3D viewing for as many people as are wearing its inexpensive polarized glasses.

Other options on display were lenticular screens, which showed a glasses-free image with limited depth and limited viewing angles, and a lot of 120Hz LCD screens that looked great in 3D when paired with nVidia’s shutter glasses, which require recharging and a sync signal and cost $150 per pair.

The Hungarian-developed Leonardo 3D modeling platform used shutter glasses combined with a motion-tracked stylus called a “Bird” to manipulate and sculpt objects in 3D space. It was really amazing to use. Within about four minutes, far less time than it took me to figure out ZBrush, I had a very recognizable hippopotamus head built from scratch.

To be honest, I have no idea how Leonardo would really work within any existing production pipeline since it only handles polygons, and it constantly re-tessellates the mesh as the object is altered… but it was just ridiculously fun to use. It was such a neat experience that I was actually grabbing passing strangers and insisting that they try sculpting with the Bird so that I could watch their faces as they picked it up. This video doesn’t do it justice, but it’s as close to communicating the experience as is possible. It shows almost exactly what the sculptor sees.

Interestingly, the Bird doesn’t use gyros or accelerometers or ultrasound to compute its position and rotation. The three sensors you see on top of the computer monitor are little IR cameras, and they track the white dots on the spines of the Bird in real time. They also track the location of the shutter glasses so you can peer around the object simply by moving your head.
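
For anyone curious what the math behind that kind of marker tracking looks like, here is a minimal sketch of two-camera triangulation, the basic idea behind recovering a 3D position from multiple 2D views. This is not Leonardo’s actual code; the camera matrices, marker position, and function names below are made-up placeholders, and a real tracker would get its projection matrices from calibration and its pixel coordinates from blob detection on the IR images. The same principle scales up to the full camera-based mo-cap systems mentioned below.

    # A minimal two-camera triangulation sketch (Python + numpy).
    # Everything here is a placeholder example, not Leonardo's actual tracker.
    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        # Linear (DLT) triangulation of one marker seen by two calibrated cameras.
        # P1, P2 are 3x4 projection matrices; uv1, uv2 are (u, v) pixel coordinates.
        u1, v1 = uv1
        u2, v2 = uv2
        # Each view contributes two linear constraints on the homogeneous point X.
        A = np.vstack([
            u1 * P1[2] - P1[0],
            v1 * P1[2] - P1[1],
            u2 * P2[2] - P2[0],
            v2 * P2[2] - P2[1],
        ])
        # The least-squares solution is the right singular vector with the
        # smallest singular value.
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]

    def project(P, X):
        # Project a 3D point through a camera matrix to pixel coordinates.
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    # Placeholder setup: identical intrinsics, second camera offset 20 cm along X.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

    marker = np.array([0.05, -0.02, 1.0])  # imaginary marker about a meter away

    print(triangulate(P1, P2, project(P1, marker), project(P2, marker)))
    # -> approximately [ 0.05 -0.02  1. ]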

There were a number of vendors with motion capture solutions for sale, and those using camera-based optical tracking have become far more affordable recently. High-speed cameras and LED illuminators have dropped in price, and even laptop computers now have the processing power to interpret spatial data from multiple cameras.

A really interesting booth was being run by Motion4U, who have a product that uses monitor-mounted cameras like Leonardo’s to put a miniature motion capture stage onto an animator’s desktop. The animator can then use handheld markers to puppeteer objects and characters around in real-time without the hassle and expense of a full-sized mo-cap stage and suited performers. It works with Maya now and will be supporting Lightwave soon.

Another technology that has come down in price lately is 3D printing. There were a number of printers being demonstrated, and for a mere $15k I could have taken one of Dimension’s finest home with me to build any 3D model out of solid ABS plastic. Each developer had its own approach and materials; Objet’s machines, for example, can actually mix different compounds in the nozzle, so hard plastics and soft rubbers can be combined in a single model.

Unfortunately, I don’t think I do quite enough fabrication to justify buying my own machine, so any 3D printing I do need done will probably go to Shapeways.com, who were demonstrating some of their new materials, like stainless steel. That’s right, you can have objects printed in plastics, nylon, and polished steel. I’m tempted to build a follow focus system for the 5DmkII with them. It would probably be cheaper than most existing products.

So, all in all, it was a fascinating week. Even though attendance was lower than at an L.A. conference, and a number of companies were under-represented, there was still a lot to see and do. In some ways it was easier to find people, and I think organizers and staff had more time to talk to me than they would have with an additional 10,000 people there.

And most of the regulars I talked to mentioned that they felt things were a little slow this year, both at the conference and in the industry. Obviously economic concerns played a large part in that, but almost everyone suggested that the Autodesk Monopoly was also to blame for a lack of competitive spirit. Even though this was my first SIGGRAPH, I definitely noticed a sense of borderline apathy everywhere.

Everywhere but the job fair, that is. Recruiters and studio booths were swamped by attendees, mostly recent graduates, who were desperate for employment. I actually got a few job offers myself, which surprised me because I was just walking by. Even though I have no formal education, a number of recruiters were very curious about my experience. I guess it doesn’t surprise me that experience trumps credentials, but I wasn’t there looking for work.

The main reasons I attended were to get a better feel for the state of the industry, to catch up with tools and techniques that I really haven’t had time to study, and to bounce some ideas off of different people. In that respect, SIGGRAPH 2009 was a huge success.

  1. The lecture on BOLT titled “Applying Painterly Concepts in a CG Film”, was that similar to yours in regards to creating a painted look with CG?

  2. Yes. Their effect is probably the best simulation of contour-driven brush-strokes that I have ever seen, but they went very subtle with it. If you look at this frame you can see that the painterly effect is really only visible on the trees and any objects in the far background. It removes a lot of high-frequency detail that would distract from foreground elements.

  3. There was one shot that I thought was a matte painting because I could very clearly see the brush strokes. It was a wide shot of the gas station at night. After you mentioned the painting effect in Bolt I’m thinking that they could have used it on that shot.

    Oh, and a random thing I thought of, why haven’t you ever mentioned VideoCopilot.net on your blog?

  4. I take that back…you mentioned him on your matte painting post. :-P
    Still haven’t written a blog post specifically about VCP.
