VFXTalk Interviews CafeFX on Spiderman 3!

Spider-Man 3 has come out hot in theaters, and VFXTalk is pleased to present this awesome interview with the visual effects wizards at CafeFX who helped make its amazing visual effects a reality! CafeFX created the vertigo-inducing crane disaster sequence for SPIDER-MAN™ 3, setting the stage for a classic Spidey rescue. The 46-shot sequence, along with 35 additional shots, was awarded to CafeFX by Sony Pictures Imageworks, lead effects facility for SPIDER-MAN 3, the latest in the multimillion-dollar franchise.

CafeFX
CafeFX is an award-winning feature film visual effects facility offering visual effects production and supervision, CG character creation, and 3D animation. Founded in 1993 by Jeff Barnes and David Ebner, CafeFX is located in a 36,000-square-foot studio on an eight-acre campus in the heart of Santa Barbara County. The company's credits include Spider-Man™ 3, Ghost Rider, Pan’s Labyrinth, The Departed, Eragon, Sin City, King Kong, Memoirs of a Geisha and The Aviator.
www.cafefx.com

Spider-Man 3
Spider-Man 3 is the third installment in the highly successful Spider-Man series, and it's a visual effects extravaganza! As Peter comes to grips with his newfound personal life with Mary Jane, he meets a powerful shape-shifting villain known as the ‘Sandman.’ At the same time, a strange black substance bonds with his Spider-suit, giving it new powers while causing inner turmoil as he contends with new villains, temptations and revenge.

Official Website: http://spiderman3.sonypictures.com/
Theatrical Trailer: http://www.ifilm.com/presents/spiderman3/

The Crane Scene
The scene opens as a steel beam, suspended from an out-of-control construction crane, spins toward a glass-encased skyscraper. From her photo shoot inside, Gwen Stacy (Bryce Dallas Howard) reacts to the impending disaster and the audience sees her dawning horror in the reflection of the windows. She dives for cover as the beam slices through the space, shattering windows and shearing off support columns. The off-balance crane then swings in a wild arc and takes out the floor below, causing the floor that Gwen is on to collapse and tilt at a perilous angle. CafeFX integrated hundreds of animated CG elements with live-action cinematography, models and miniatures, digital doubles and photographic backgrounds of New York in the hybrid production of this signature sequence, which also plays out from multiple angles and takes.

Backgrounds and Feature Effects
Among the additional shots were backgrounds for the climactic final battle between Spider-Man and Sandman and a matte painting of the city square for the key-to-the-city sequence. CafeFX also used Massive software to populate the large crowd gathered for the ceremony. Other shots crafted by CafeFX included the rivets that burst from a subway water tank; burning butter and beaten eggs in a skillet; a foggy field; eye-shield extensions for the villain Venom; and tears in Sandman's eyes to enhance emotion.

In this interview we speak with Scott Gordon, visual effects supervisor at CafeFX; VFX producer Richard Ivan Mann; CG supervisor Akira Orikasa; lead FX TD Rif Dagher; and compositing supervisor Edwardo Mendez. The company's production pipeline is configured with Autodesk Maya, cebas Thinking Particles, Sitni Sati FumeFX, eyeon Digital Fusion, Autodesk Combustion, Massive, Mental Ray, cebas finalRender Stage-2, 2d3 boujou, Adobe After Effects and Apple Shake.

The team at CafeFX must have been very excited to work on Spider-Man 3. What was the mood/vibe like at the facility before you started?

Scott: We were very excited. We have a lot of genuine Spider-Man fans amongst our crew. The first two Spider-Man films were so well received within the visual effects community that an opportunity to contribute to the third, for many of us, was a dream come true. We always want to do exceptional work, even when the project is limited by time and/or budget, and here we would have an opportunity to truly show what we are capable of.

How many shots were required for completion of your work on the film, and how long did the entire project take?

Scott: There were 81 shots, with about half in the Crane Disaster sequence. We began R&D and asset construction in late July 2006, and completed work at the end of March 2007.

How large is your visual effects team and how is it divided? Do the VFX artists do the compositing work as well?

Scott: The number fluctuates, but CafeFX typically employs about a hundred artists and has the capacity for twice that number. We usually have several projects going in-house simultaneously, which gives us tremendous flexibility with staffing. For Spider-Man 3 the team consisted of about 40 people total, although the average number on the project at any given time was closer to 25. For Spider-Man 3 all of the compositing was done by the compositors under Ed Mendez's supervision, but here at CafeFX it is not unheard of for 3D artists to composite their own shots. We have several ‘generalists’ and we really value artists who can bring more than one skill to the table.

What was the ‘pre production to final stage’ planning process you used to come up with the vfx shots for the film? What sort of freedom are you given in creating the looks for the sequences you are in charge of?

Scott: For Spider-Man 3 we were given an animatic which had most of the shots previsualized in some form. That was a solid template to start with, and while some shots were added and others were omitted, for the most part the look and feel of it never changed. Regarding creative freedom, my experience on this show, as on most shows, is that we have nearly complete freedom, but at the end of the day we have to meet the goals of our clients, in this case visual effects supervisor Scott Stokdyk and director Sam Raimi. We always began by simulating ‘reality,’ and then took license where needed to make things more interesting or exciting, always keeping in mind the storytelling purpose of the shot. Our clients guided us along the way with their requests, like ‘make the debris fly right at camera’ or ‘minimize the smoke,’ and so on. It was always collaborative, and I think that we delivered shots that exceeded their expectations; when we didn't, they told us why and we addressed it.

The Crane Sequence

Scott Gordon, visual effects supervisor for CafeFX, said ‘The crane disaster sequence challenged us on all levels. In order for the action to work, it had to play out against the ultimate choreography, integration and interaction of countless practical and CG elements.’

The Crane Sequence was awesome! Was it the biggest challenge in terms of visual effects in the film? How did you get it done? Was there a different treatment or new technique that you used?

Scott: The Crane Disaster was by far our most difficult sequence, and the hardest shots within it were the ones that combined a 1/6-scale miniature of the crane tip ripping through the side of a glassless building with full-scale shots of the building exterior, or with the interior set of actors reacting to having the floor fall out from under them. To that we added our CG elements: the surrounding buildings, breaking glass, building debris, falling furniture, office supplies, papers, dust and smoke. The miniature was particularly difficult to deal with because it needed to be shot at (high) scale speed, and the motion-control rigs couldn't reliably achieve those speeds.

The exterior shots had already been photographed at high speed with a slow-moving camera since frames could easily be dropped. In addition to the timing discrepancies there were physical differences between the (real) building exterior, the full-scale interior set and the 1/6-scale miniature that made combining them difficult. Our solution to these issues was to re-time and re-project everything onto a fresh scene which contained the desired camera move. Then it became a straightforward process to clean up the discrepancies and add all of our CG elements.
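
To make the re-projection idea concrete, here is a minimal sketch of the underlying camera math, assuming simple pinhole cameras; the function names are illustrative, not CafeFX's actual pipeline code. Each plate is sampled through the camera it was shot with and re-rendered through the single clean camera move:

```python
# A minimal sketch of plate re-projection, assuming pinhole cameras.
# All names here are illustrative, not CafeFX's pipeline code.
import numpy as np

def project(K, world_to_cam, point_world):
    """Project a world-space point to pixel coordinates.
    K: 3x3 intrinsics; world_to_cam: 4x4 extrinsics."""
    p_cam = world_to_cam @ np.append(point_world, 1.0)  # world -> camera space
    p_img = K @ p_cam[:3]                               # camera -> image plane
    return p_img[:2] / p_img[2]                         # perspective divide

def reproject_sample(K_old, E_old, K_new, E_new, surface_point):
    """For a point on the proxy geometry: where to sample the original plate
    (old camera) and where it lands under the new, clean camera move."""
    src_uv = project(K_old, E_old, surface_point)  # sample location in plate
    dst_uv = project(K_new, E_new, surface_point)  # destination in new frame
    return src_uv, dst_uv
```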

How many people worked on the shot of Gwen Stacy running from the crane, and what was the most challenging aspect of this shot?

Scott: One artist built and textured the crane. Another animated and lit it for the scene. A third created the building environment, and a fourth tracked the camera that they all used. A compositor and a roto/paint artist rounded out the crew for that specific shot. But there was also a big team supporting those artists: the supervisors (VFX, CG, comp), a producer, a coordinator, a PA, a VFX editor, render wranglers, the IT dept., etc.

In the crane sequence, were you asked to retime many plates and determine the timing of shots in post for more dramatic purposes, or was the director comfortable hitting those marks in camera?

Scott: For the most part those marks were hit in camera. In the miniature shots and in some of the shots at the end of our sequence where Gwen is falling, the cameras were overcranked to allow for some creative wiggle-room. A lot of time was spent on pre-viz though, which helped.

What tools and workflows were used to composite the actors into the scene after the crane took the floor out and the entire office was hanging out of the building? Were you forced to use a mixture of traditional techniques, such as miniatures and sideways sets, or was it all 3D and compositing?

ED: There was a series of shots for this part of the sequence, and they all used similar techniques. The interior sets with the actors were obviously full scale, but we also shot a 1/6-scale miniature of the building being destroyed. We would take these plates, matchmove them and, in one case, project them onto each other. We would then add our CG buildings, glass, debris, desks, papers, and explosions on top of the plates. All of the actors were live action, with the exception of a digital Gwen supplied by Sony on the shot where the crane cuts into the building vertically. The miniatures and actors being live action meant we had a lot of bluescreen removal and rotoscoping to do. All of this was done in Fusion. There were also a bunch of wire removals, which were painted out in Combustion.

Gordon observed, ‘We are seeing a greater trend toward the use of visual effects to heighten a dramatic moment and to provide a greater range of editorial choices.’

We were fascinated by the explosions, debris and ripped metal pieces when the crane tears through the building – whether in the outdoor shot, the indoor shot or the shot where we see the ripped pieces from a low angle. Which parts are miniature work, CG, and live action? How was this done?

Scott: The actors were shot on a stage that was rigged to have the floor drop, and in the shots where you see them up close everything that was not shot on that stage is CG. The building exteriors are a combination of live action (shot with the Spyder-Cam in New York) and CG. The miniature had no glass, since the building is so reflective that it would have been impossible to make those reflections realistic, so it appears in only a few shots where we see the crane tip physically ripping through columns or floors. Even in those shots, most of the destruction you see is CG.

For the shot where the crane rakes up the side of the building: Did you create a procedural rig which “broke” the building structure and window panes based on a Voronoi pattern driven by the crane motion (much like the recent SIGGRAPH paper by Pixar on the “road” break-up in the small town)? Or did you just cover the crane/building connection “in comp” with layers of particle glass, debris, and dust?

RIF: For the crane destruction sequence, the method was pretty simple but required some mesh preparation. Pre-breaking cement beams and glass panels and then re-attaching them as one object per breaking mesh was the delicate part. Once we had some clever methods to randomize the tessellations and the volume pre-breaking, it was easy to use variants of the same mesh for building walls, ground cement, debris and glass panels. Based on some shape-collision-detection rules, we then broke those pieces and spread the impact based on pressure transfer through the material along the “destruction path” to create the desired dynamic.
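
As a rough illustration of the two ideas Rif describes – pre-breaking into Voronoi-like fragments and spreading an impact by pressure transfer – here is a hedged Python sketch. The seed counts, decay rate and threshold are hypothetical stand-ins, not the actual Thinking Particles setup:

```python
# Illustrative sketch: Voronoi-style pre-breaking plus pressure transfer
# along a destruction path. Names and values are hypothetical.
import random
from collections import deque

def prebreak(points, n_seeds):
    """Assign each mesh point to the nearest random seed, giving
    Voronoi-like fragment ids that can later detach as pieces."""
    seeds = random.sample(points, n_seeds)
    def nearest(p):
        return min(range(n_seeds),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(p, seeds[i])))
    return [nearest(p) for p in points]

def propagate_impact(adjacency, impact_piece, pressure, decay=0.6, threshold=1.0):
    """Breadth-first transfer of impact pressure through neighboring
    fragments; pieces receiving more than `threshold` break free."""
    broken, queue = set(), deque([(impact_piece, pressure)])
    while queue:
        piece, p = queue.popleft()
        if p < threshold or piece in broken:
            continue  # too weak to break, or already broken
        broken.add(piece)
        for n in adjacency[piece]:
            queue.append((n, p * decay))  # pressure fades with distance
    return broken
```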

When your team is faced with the challenge of creating their effects what are the typical work patterns you follow? Is the final result always as you originally envisioned it, or does the process often change and adapt as new ideas or challenges arise?

Scott: There are typical work patterns, but one of the great things about this business is that no two shots are alike. There's always something new or unexpected. Our basic methodology is fairly straightforward: We build assets and perform R&D early on. When plates are ready, our matchmovers create camera tracks and also matchmove anything that will be needed for physical interaction, shadows or reflections.

Meanwhile, the paint and roto artists perform any necessary rig removals, cleanup or roto. Next we animate characters and/or effects, light and render them (usually in layers to give more control to the compositors), and composite. The cycle of animating-lighting-rendering-compositing gets repeated until we run out of time or ideas. But as mentioned earlier, every shot is different, so within the details of that main process we do whatever we need to in order to create the best results in the most efficient way possible. We are ALWAYS open to great ideas, and they can come from anywhere.

Did you have any shots you had to redo, FX-wise, because the models/shaders were revised? How much did you rely on departments other than FX?

AKIRA: Fortunately, we didn’t have many redos because of model/shader revisions, although coming up with a satisfying look for breaking glass was a continuous trial and error. When the three elements of transparency, reflection and refraction were balanced properly, our glass started to look like glass. We relied heavily on the modeling/texturing department for the CG crane. Texture maps and shading were continuously updated while we were lighting and rendering.

Sandman

How did you create the tears for Sandman, and what did it take to composite them into the scene? Did you use 3D compositing or was it all 2D?

ED: This series of shots used a combination of 3D and 2D effects. Our goal was to make Sandman’s eyes appear to be tearing up by adding specular highlights. To achieve this, our team started with a 3D matchmove of Sandman’s head. From that track, we added a couple of spheres in 3D space to represent the eyes.

To replicate the look of real specular highlights, we added multiple HDR spheres with bright specular reflections. Then, with the help of an in-house script, we were able to get the speculars into the correct position on the eyes. The script basically created a locator on the surface and another locator representing the reflection direction.

Using the second locator, you can see in 3D space exactly where the reflections are coming from. Once the reflections were in place, we kicked out a render pass of black spheres with reflections. That element was then taken into Fusion, and some additional 2D tracking was added to lock it. Color correction was used to make the element sit in the scene and match surrounding shots. Lastly, roto was applied to place the element correctly in the eye and to remove parts we didn’t need or want to see.
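
For readers curious about the reflection-locator trick, here is a small vector-math sketch of what such a script might compute, assuming a spherical eye; the actual in-house tool and its Maya locator handling are not shown, and all names are illustrative:

```python
# Sketch of the reflection-locator idea for eye highlights, assuming a
# spherical eye. Illustrative only; not CafeFX's in-house script.
import numpy as np

def reflection_source(eye_center, eye_radius, camera_pos, surface_dir):
    """Given a unit direction from the eye center to a point on the sphere,
    return that surface point (locator 1) and the direction an environment
    highlight must come from to appear there to the camera (locator 2)."""
    surface_pt = eye_center + eye_radius * surface_dir       # locator 1
    normal = surface_dir                                     # sphere normal
    view = surface_pt - camera_pos
    view /= np.linalg.norm(view)                             # camera -> surface
    reflect = view - 2.0 * np.dot(view, normal) * normal     # mirrored ray
    return surface_pt, reflect                               # locator 2 aims here
```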

There were additional shots where we had to remove tears from Sandman’s face. These were done in 2D with multiple techniques: color correction of the tears, tracking in patches over other tears, and some traditional frame by frame painting. We also had a shot where we needed to replace Sandman’s head with the head from another shot. Both heads were in similar positions, but the body’s actions were different. We needed to track, grid warp and paint fix the head in place. All of these effects were added to get the correct timing and pace of tears welling up in Sandman’s eyes to sell his emotional transformation within the scene.

Perfect Eggs

I would also like to hear more about how you got the burning butter and eggs in the skillet – sometimes the effects you don't see are the hardest and most difficult to pull off, and this is one I never could have guessed!

ED/TOM WILLIAMSON: Yeah, these shots were cool. In the original scans from Sony there were eggs in the skillet, but they were not burning. To get the desired look we had Tom – our in-house chef, DP, and fellow compositing supervisor – burn some eggs for us. We found a matching skillet and had some breakfast. We shot the burning eggs at approximately the same angles as the shots in the film. We then roto'd out the hands and the non-burning eggs in the pan, replaced them with our burning eggs and color corrected to taste. To complete the meal, we added some CG smoke rendered out of FumeFX.

How do you decide which technique to use on specific shots? Do you feel you could accomplish an easier, cleaner solution to any specific shot/fx yet didn’t change the technique due to time constraints?

Scott: Experience is the best guide, but it usually comes down to quality vs. cost. If we can shoot something, we usually do. It's often easier to manipulate reality than to create something from nothing. Within the digital realm there are many techniques too, and deciding amongst them is again based on experience. At the lowest level, we try to let the artists use the tools and techniques they feel most comfortable with, but with bigger issues we have to take a more global approach. It's a collaborative effort though, and the best solution usually rises to the surface.

Tools and Workflow

Did you use your own pipeline or the Sony Imageworks pipeline? Can you elaborate a bit on your pipeline?

AKIRA: We used our own pipeline. It was a great experience to work with Imageworks on this show. They were very helpful and provided us with models, images, references, anything that would help us get the job done. There were times we provided them with textures or rendered elements. In those cases, we did our best to follow Imageworks' naming conventions and file formats so our assets could be adopted into their pipeline smoothly.

I've read that for Ghost Rider better results were achieved by driving most of the shader work in Houdini, with a plugin written for Houdini that exported that information to Maya's fluid effects. Did you have any solutions like this on Spider-Man 3?

RIF: That was Sony's approach for their effects on Ghost Rider. CafeFX used a combination of tools including Maya, Thinking Particles and FumeFX. For my shots, I used Thinking Particles driven with FumeFX operators, positioning particles on meshes and controlling the fuel/smoke/temperature amounts and variations with some clever switch groups to propagate the desired mixture and make the smoke rise at the proper moment. I reused the same methods for the work on Spider-Man 3, but we added some important new components to the pipeline, such as “on the fly” meshes influencing the fluid solution, which was a really important part of the dynamics of the smoke and debris in our Spider-Man 3 work.
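
A loose sketch of the switch-group idea, in illustrative Python rather than actual Thinking Particles/FumeFX operators: particles seeded on a mesh are bucketed into groups, and each group's fuel/smoke/temperature channels switch on at their own frame so the smoke rises on cue. All names and values here are hypothetical:

```python
# Hypothetical sketch of "switch groups" gating fluid-source channels.
# Not Thinking Particles / FumeFX code; illustrative Python only.
import random

def make_switch_groups(particle_ids, n_groups, start_frame, frames_apart):
    """Randomly bucket particles; each bucket gets its own trigger frame."""
    trigger = {i: start_frame + i * frames_apart for i in range(n_groups)}
    assignment = {pid: random.randrange(n_groups) for pid in particle_ids}
    return trigger, assignment

def channels_at_frame(pid, frame, trigger, assignment,
                      fuel=1.0, smoke=0.4, temperature=800.0):
    """Emit source channels only once the particle's group has switched on."""
    if frame < trigger[assignment[pid]]:
        return {"fuel": 0.0, "smoke": 0.0, "temperature": 0.0}
    return {"fuel": fuel, "smoke": smoke, "temperature": temperature}
```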

How much time do you have for pre-production on a feature like this, and how large is your R&D team?

AKIRA: At CafeFX, a few key artists are usually assigned to figure out the techniques and methodologies that will be used on the show during our pre-production/R&D period, which usually runs from the day the job is awarded until the day we get plates. For Spider-Man 3, we had about six weeks to R&D our techniques for fluid sims, rigid-body dynamics, and the cloth sims for the papers.

Could you elaborate on your in-house tools and how they help to make your life easier?

AKIRA: A lot of our tools are designed to do specific things that make some tasks easier or the results better. The tools we use most often, though, are the ones that translate data between software packages. For example, all of our broken glass, small debris and dust were animated in 3ds Max, even though our primary animation package is Maya. Our translation tools would not only convert the data but also check and prepare assets specifically for simulation.
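
As an example of what such a translation tool might look like – purely illustrative, since the real formats and checks are CafeFX's own – here is a sketch that dumps per-frame vertex positions to a simple cache and sanity-checks the mesh before simulation:

```python
# Loose sketch of a package-to-package translation step: write per-frame
# vertex positions to a simple JSON cache and validate the mesh first.
# The format and checks are hypothetical, not CafeFX's actual tools.
import json

def validate_for_sim(vertices, faces):
    """Basic checks before handing a mesh to a rigid-body/cloth solver."""
    assert len(vertices) > 0, "empty mesh"
    for face in faces:
        assert len(set(face)) == len(face), "degenerate face (repeated vertex)"
        assert all(0 <= i < len(vertices) for i in face), "index out of range"

def export_cache(path, frames, faces):
    """frames: {frame_number: [(x, y, z), ...]} with constant topology."""
    counts = {len(v) for v in frames.values()}
    assert len(counts) == 1, "vertex count must not change across frames"
    with open(path, "w") as f:
        json.dump({"faces": faces,
                   "frames": {str(k): v for k, v in frames.items()}}, f)
```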

CafeSync is a tool that we developed to interactively share movie files between CafeFX and any remote location in the world. We usually run phone conference calls with clients or remote artists alongside CafeSync. You can step through a movie in real time, draw marks on still images and so on. This tool allows us to communicate with clients and artists as if they were gathered around the same table.

You mention that Fusion is your main compositing application. Which parts or scenes did you use Fusion in, and how was it used?

ED: Fusion was our main compositing package on Spider-Man 3. We used Fusion to pull bluescreens with the Primatte plugin, track plates, import 3D cameras, add backgrounds, and paint-fix plates and elements. We used it to integrate our CG and live-action plates. Fusion has an extensive set of color correction and layering tools that allowed us to blend all of the CG elements in seamlessly.

To give you a better idea of how we pushed Fusion, CafeFX Compositor Robin Graham explains how he used Fusion on one of our larger shots (where the crane arm vertically slices up the building).

“Fusion was used as the main compositing program as well as a tool to augment animated textures for the CG cityscape. The ground of the city was actually an animated 8k texture. Unwanted items were cloned out and the traffic was keyframed inside Fusion to show moving taxis and driving buses. The main building also had an augmented, animating 8k texture that was warped in Fusion because the original baked out texture was sliding slowly over the surface of the model. Fusion’s grid-warp tool was used to warp each window over time to prevent slipping and position the windows precisely over certain polygons in the model.”

You also have access to Combustion, AE, and Shake. Did you use all or only some in this film?

ED: CafeFX has several tools at hand for the artists. In production you will find that certain packages give you an effect or look that others can't, or would take too long to achieve. In general, we used Combustion for paint, AE for its vast plugins and grain match, and Shake for its tracking and smooth plate stabilization. Fusion was our main compositing package, where we put the shots all together.

How do all of those packages fit into your pipeline? For instance, is Combustion used more as a paint package and Shake or Fusion used as the main compositor?

ED: Yeah, you got it: Combustion is our main paint package, but some artists paint in Fusion. Shake and AE we mostly just used for their plug-ins, and Fusion is our main compositing package.

Which parts or scenes did you use After Effects in, and how was it used?

ED: We had a hard time matching film grain in Fusion, so we used AE's grain match to generate a grain sequence that we could then use in Fusion. AE was also used in the shots where Gwen is falling, as well as the wide shots where the crane arm takes out the second floor: for its retimer for motion blur, its grain-match feature, to create 2D explosions and shatter effects, and with the Trapcode Particular plugin to create smoke to add to our other practical and CG elements.
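
The grain-sequence trick is easy to picture: a pre-rendered, neutral mid-gray grain plate is offset to zero mean and added over the comp. A minimal numpy sketch, with hypothetical channel weights:

```python
# Minimal sketch of applying a pre-rendered grain sequence in the comp.
# Assumes float images in (h, w, 3) with grain centered at 0.5. Illustrative.
import numpy as np

def apply_grain(comp, grain_plate, strength=(1.0, 1.0, 1.0)):
    """Offset the grain plate to zero mean and add it, channel-weighted."""
    grain = grain_plate - 0.5                     # make the grain zero-mean
    return comp + grain * np.asarray(strength)    # additive, per-channel
```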

Is Shake a large part of your pipeline? How are you planning to move on now that it has been discontinued by Apple?

ED: Shake really isn’t in our pipeline as a compositing package. Shake is mostly used as a tool for stabilizing some of our plates. We do get a lot of Shake artists here though, and they tend to pick up Fusion quickly. We also use a set of Shake-like tools written for Fusion by Duiker Research. These tools help the transition a lot. As for Shake’s future, we are just sitting back and waiting for Apple’s next-gen application.

Did you have any shots you knew would look better with fluids, yet due to time constraints you used particles instead?

RIF: Not really. CFD computations are embedded in the foundations of our effects pipeline and workflow so it was really easy and fast to use them in any shot that needed it. Of course, everything can be better with more time.

Did you use any 2d particles? If so for which shots and how were they used?

ED: We added 2D particles to fill in some holes on a couple of shots where the second floor was taken out. We added some smoke and dust to the cracking floor. These were layered on top of the practical and CG smoke and destruction. They were created with the Trapcode Particular plugin in AE.

How often do your clients visit your studio to see the shots in progress and are there any tools or procedures you use to make remote collaboration a smoother process?

Scott: During Spider-Man 3 there were no client visits to our studio, although we did visit Imageworks a couple of times. For remote collaboration we usually use CafeSync, a tool we developed in-house, but Sony was accustomed to cineSync, so we used that instead.

How do you work with the graders on the picture in order to get the final comp to fit in seamlessly with the film?

ED: Sony was very specific about color. The studio either sent us color correction numbers to go with every plate or sent us color-corrected plates. In most cases we would match our CG elements to the plates and that would be sufficient. We did work with some custom LUTs and a color system developed by Duiker Research. These LUTs really helped us view the skies and highlights in our monitor space. Typically, values over 1 get clipped, which makes it hard to see detail in bright images. These LUTs allowed us to easily see that detail and ensure we were delivering the best possible product.
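
Here is a hedged sketch of why a log-style viewing LUT preserves over-1.0 detail: a simple log2 encoding maps several stops above white into monitor range instead of clipping. The exposure range here is illustrative, not the actual Duiker Research LUTs:

```python
# Sketch of a log-style viewing transform for HDR values. The stop range
# is an illustrative assumption, not the LUTs used on the show.
import numpy as np

def log_view(linear, min_stop=-8.0, max_stop=4.0):
    """Map linear HDR values into 0..1 by log2 exposure, film-LUT style."""
    stops = np.log2(np.maximum(linear, 2.0 ** min_stop))   # avoid log of zero
    return np.clip((stops - min_stop) / (max_stop - min_stop), 0.0, 1.0)

# A value of 4.0 (two stops over white) still reads as distinct detail:
# log_view(4.0) is about 0.83 on the monitor instead of clipping to 1.0.
```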

What render engine did you use for final rendering of the shots you worked on in the film? Was it Mental Ray, RenderMan or something else? Also, what's the size of your renderfarm, and what software do you use to manage your renders?

AKIRA: We used a combination of different renderers for this show.

finalRender Stage-1 for 3ds Max was used to render the glass, dust, smoke, and debris. finalRender Stage-2 for Maya was used to render the papers, office debris and miscellaneous objects. Mental Ray was used for the crane and building, and for some falling glass and debris.

We have over 1000 Intel-based nodes for our renderfarm. We use Deadline from Frantic Films to manage our renders.

General Questions

To Edwardo Mendez: as a compositor, what's more satisfying to do, the big flashy effects or making butter and eggs in a skillet so seamless that no one is the wiser?

ED: To be honest, I’m going to be greedy and say I like them both. The effects that you can integrate without the audience noticing are always a great thing. It is especially cool when critics write about how well your shot was filmed, and you know that it was completely artificial. On the other hand, having that killer huge effect shot and being able to make it work, have the director approve it, and audience love it is great. I love going to see a film and watching the reaction of people to your work, especially when it’s a good reaction. And typically it is those “big flashy effects” that get the biggest reactions.

How do you work with the guys on-set? Do you have on-set high-speed compositing artists or is all your post work done in house?

Scott: On set we're providing the expertise to ensure that the plates being shot will do the job they're intended for, effects-wise. If any high-speed compositing is needed, it's usually done with playback off disk, the camera's video tap and a switcher. We do perform a lot of pre-viz before going on set, though, and that almost always helps to make sure things go smoothly.

When starting work on a really difficult shot, do you approach it as just another shot, or do you prefer to really understand the context and emotional state of it first?

Scott: Every shot has a purpose, and we always serve that purpose far better when we understand the context and emotional state of it.

Does it ever happen that the live footage for a specific shot just doesn't match the effect you're trying to create? In that case, can you ask to shoot again?

Scott: That does happen, but it's pretty rare. We go on set specifically to prevent those types of problems from occurring, but sometimes it's unavoidable due to creative changes as the film goes through the editorial process. Re-shooting is extremely expensive, though, and we have such huge capabilities that we can often make whatever footage we have work, whether by re-timing, re-projecting or set extension.

Did you keep any wish lists of shots the director asked you to fix but that you just couldn't fit into the schedule? If so, how many of these were done and how many weren't?

Scott: There are always ‘CBBs’ (Could-Be-Betters), both from the client and internally. Our highest priorities were the client CBBs, and for Spider-Man we addressed every one of them.

How often were you in contact with the director? At which stages did you show him shots for review?

Scott: Rarely! We were a subcontractor to Imageworks, and generally presented our work to Scott Stokdyk, who presented it to Sam Raimi.

Any tips or tricks for new and upcoming visual effects artists?

Scott: Hang in there. Be patient and methodical in your approach. Learn from the people around you, especially those with years of experience. Don't be afraid to offer up your ideas, but always present whatever you were asked to do first.

RIF: Give it your best. Dedicate yourself to understanding what you are trying to achieve, and understand the “insides” of the tools you are choosing. Too many talented artists get “choked” by the possibilities of the software, when the only limitation should be the human brain. Create your own personal standards within the deadlines you are given. After delivering, especially if completion was successful, go back on your own time and come up with more elaborate setups. Always attack the shots you have with the most pipeline-driven mentality: try to build a setup that will not only reach your effects goal but will also let the next 100 shots achieve the same homogeneous quality. At first it will take more time to define rules to drive your particles versus animating keys and events, but in the long run it will let you grow your workflow and your capability to reach higher ground.

What cool, mega feature films are in the pipeline next for CafeFX now?

MARY: The Kite Runner, The Mist, and John Adams are currently in production at CafeFX. And there are several other projects that we'll be announcing in the next few months. Stay tuned!

A big thanks to everyone at CafeFX for the awesome interview!
VFXTalk.com

