A typical visual effects pipeline is usually broken down into three production stages: pre-production, production and post-production.
The pre-production stage is the initial planning and preparation of a project before filming (production) begins. Pre-production can last anywhere from a few weeks to over a year, depending on the scale of the project.
Research and Development
Research and Development, also referred to as “R & D”, is a stage of pre-production where existing tools are optimised to run more efficiently and save time, or entirely new tools are created from scratch where none exist. Sometimes these tools are written as plug-ins for existing software such as Maya or Nuke; however, they are often released as stand-alone programs. Large visual effects houses will usually have a dedicated Research and Development team of programmers producing tools for company-wide use.
Screen tests provide an opportunity to demonstrate capabilities to potential clients and usually include the following elements:
- A distinctive look / style which may be carried over to the final project
- A showcase of existing or new technology that will aid in creating actual sequences during production
- Strict adherence to concepts given by the client, giving them confidence that the project is within expectations and is achievable
Once accepted, the team would move onto the design and creation of models, as described below.
Modelling occurs throughout the production stages but often begins in pre-production, as low-resolution models are created for pre-visualisation and screen tests. The client will often supply concept art and sculptures of elements of the sequence they want re-created using 3D modelling software. These models will be produced at varying quality levels:
- Very High Quality – Used for final renders
- Medium Quality – Used for animation
- Low Quality – Used for pre-visualisation
Pre-Visualisation, often shortened to “Pre-Vis”, is the final stage of pre-production. Once story-boards have been sketched out, a team will use them as a reference and begin blocking out and rendering a scene or sequence using low-resolution models. This provides a better visual representation of how a scene will appear and work, allowing you to experiment with camera angles and layouts before committing to production.
Production is where final source content is created or recorded, usually on a set. At this stage the content is raw; however, the better your source material, the easier the following stages become. In terms of visual effects, this is an essential time to collect as much reference material as possible in preparation for post-production.
Reference Images / Photographs
From the very start of production, a representative from the responsible visual effects house will be on set to shoot as many reference photographs as possible. The resulting images will be used as reference for the entirety of the project, from modelling and texturing to lighting. These images should provide as much visual information about the scene as possible, allowing you to re-create it precisely using 3D software.
LIDAR / 3D Digital Scans
In addition to reference photography, LIDAR surveying technology will be used to provide a 3D scan of the set, including props and actors. These scans can produce highly detailed 3D models containing many millions of polygons, a number usually too dense for rendering. A 3D artist will use these scanned models to recreate them in a more render-friendly form that lines up precisely with the environment on set.
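One common way of taming a scan that is too dense is voxel-grid decimation: the space is divided into cubic cells and one representative point is kept per cell. The sketch below is purely illustrative (the function name and approach are my own, not any particular scanning package):

```python
# Illustrative sketch: voxel-grid decimation of a dense point cloud.
# Keeping one point per cubic cell reduces millions of scan points to a
# render-friendly count while staying aligned with the surveyed set.

def decimate(points, cell):
    """Keep the first point seen in each (cell x cell x cell) voxel."""
    seen = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell), int(z // cell))
        seen.setdefault(key, (x, y, z))  # later points in the cell are dropped
    return list(seen.values())

# Three near-duplicate scan points collapse to one; a distant point survives
cloud = [(0.01, 0.02, 0.0), (0.03, 0.01, 0.02), (5.0, 5.0, 5.0)]
sparse = decimate(cloud, cell=1.0)
```

Production tools use far more sophisticated remeshing, but the principle of trading point density for usability is the same.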
High Dynamic Range (HDR) Photographs
In further addition to the above, a number of HDR photographs of the set's environment will be taken. To build believable computer-generated environments, the artists need to know how to light a scene so that it is identical to the lighting on set. These photographs are often taken as two 180-degree fish-eye shots, combined to capture a complete 360-degree view of the set and its lighting. HDR photography is used because it preserves as much shadow and highlight detail as possible and, as a result, gives a better reference.
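The idea behind HDR capture can be sketched in a few lines: several bracketed exposures of the same pixel are merged, trusting well-exposed readings more than clipped shadows or blown highlights. This is a simplified, hypothetical sketch (function names and the triangle weighting are my own, not a specific camera pipeline):

```python
# Hypothetical sketch: merging bracketed exposures into one HDR radiance value.

def weight(v):
    """Triangle weight: highest for mid-grey, zero at pure black / white."""
    return min(v, 255 - v) / 127.5

def merge_hdr(samples):
    """samples: list of (pixel_value, exposure_time) pairs for one pixel.
    Each reading estimates radiance as value / exposure; well-exposed
    readings get more weight than clipped ones."""
    num = sum(weight(v) * (v / t) for v, t in samples)
    den = sum(weight(v) for v, t in samples)
    return num / den if den else 0.0

# A pixel captured at three shutter speeds gives a stable radiance estimate
radiance = merge_hdr([(64, 0.5), (128, 1.0), (240, 2.0)])
```

The detail lost in any single exposure survives in another, which is why HDR references let lighting artists recover both shadow and highlight information.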
Once the film shoot is complete, the final stage of production is for the client to scan the selected shots from film to digital. Usually each frame will need to be scanned as an HDR image (described above) to retain as much detail as possible.
Post-production is often the largest part of a project's visual effects timeline and can continue for months, or even years. It is the phase where the majority of the VFX work occurs, combining the original footage with CGI to produce a final composite.
As the majority of films today are still shot on film, blemishes such as dust and chemical patches have to be removed. Any remaining imperfections are then corrected digitally by a team of compositors.
Once in possession of the film scans, they go through a process called grading. This consists of editing the frames so that the colour, exposure and brightness are consistent throughout, giving the film an overall, recognisable look.
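At its simplest, a grade is a handful of per-channel operations applied uniformly to every frame. The sketch below is illustrative only (the parameter names follow common grading terminology; it is not any particular grading package):

```python
# Illustrative sketch of a basic per-channel grade:
# exposure (multiply), lift (offset) and gamma (curve).

def grade(pixel, exposure=1.0, lift=0.0, gamma=1.0):
    """pixel: (r, g, b) with channels in the 0.0-1.0 range."""
    def adjust(c):
        c = c * exposure + lift          # scale then offset
        c = max(0.0, min(1.0, c))        # clamp to the legal range
        return c ** (1.0 / gamma)        # apply the gamma curve
    return tuple(adjust(c) for c in pixel)

# Brighten a dark frame's pixel by one stop (exposure x2)
print(grade((0.2, 0.25, 0.3), exposure=2.0))  # (0.4, 0.5, 0.6)
```

Applying the same parameters to every frame of a shot is what keeps the look consistent across cuts.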
Once the models that have been worked on since pre-production are complete, they need to be rigged in preparation for animation. Rigging binds a model to a system of joints and control handles, allowing animators to adjust these to produce poses and fluid animations without breaking the mesh.
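What a rig buys the animator can be shown with a toy forward-kinematics chain: pose the joints by angle, and the positions down the chain follow automatically. This is a minimal 2D sketch under my own naming, not how any particular package implements rigging:

```python
import math

# Toy sketch of forward kinematics: a chain of bones where each joint
# rotates relative to its parent, as a rig lets an animator do.

def fk_chain(lengths, angles):
    """Returns the world-space end position of each bone in a 2D chain."""
    x = y = 0.0
    total = 0.0
    points = []
    for length, angle in zip(lengths, angles):
        total += angle                    # a child inherits its parent's rotation
        x += length * math.cos(total)
        y += length * math.sin(total)
        points.append((x, y))
    return points

# An "arm" with two bones: rotate the shoulder 90 degrees, keep the elbow straight
pose = fk_chain([2.0, 1.0], [math.pi / 2, 0.0])
```

The key point is that the animator only sets angles; the mesh bound to the joints deforms to follow, which is what keeps poses clean rather than breaking the geometry.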
Tracking and Match-Move
Match-moving, also known as motion tracking, is used to track the movement of a camera through a scene so that it can be recreated in 3D software. As a result, when any CGI elements are composited with the original footage they perfectly match its position and perspective, making the result more realistic and believable.
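Why the camera solve matters can be seen with a simple pinhole projection: once the virtual camera shares the real camera's position and focal length, a CG point placed at a tracked object's world position lands on the same screen position. A deliberately simplified sketch (no rotation, camera looking down +Z; names are my own):

```python
# Illustrative pinhole projection: a solved camera projects CG points to
# the same screen coordinates as the real objects they must line up with.

def project(point, cam_pos, focal):
    """Project a 3D point through a camera at cam_pos looking down +Z."""
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (focal * x / z, focal * y / z)  # perspective divide

# A CG prop placed at a tracked marker's world position projects to the
# same pixel, so the composite lines up.
screen = project((1.0, 2.0, 10.0), cam_pos=(0.0, 0.0, 0.0), focal=50.0)
```

Real solvers also recover rotation and lens distortion per frame, but the principle of matching the projection is the same.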
Animation consists of combining the created models with their newly added rigs and manipulating them to bring them to life. Animation can be applied to numerous elements within a VFX shot; in fact, anything that is required to move is animated. Animators will usually work with medium-detail models (as discussed above), as these provide plenty of detail to be positioned accurately without being so dense that they cause long rendering times.
Effects animation includes the simulation of anything falling under categories such as:
- Rigid-Body Dynamics
These animations most commonly cover elements such as fire, smoke, explosions and rain.
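At heart, effects like rain are simulations stepped forward frame by frame. The following is a toy sketch of one integration step under gravity (names and units are my own illustration, not any FX package):

```python
# Toy sketch of effects animation: particle "rain" integrated under gravity,
# the kind of per-frame simulation an FX artist would cache out.

GRAVITY = -9.8  # metres per second squared, acting on the y axis

def step(particles, dt):
    """Advance each (position, velocity) particle by one frame of dt seconds."""
    out = []
    for (x, y), (vx, vy) in particles:
        vy += GRAVITY * dt                                # gravity accelerates the drop
        out.append(((x + vx * dt, y + vy * dt), (vx, vy)))
    return out

# One raindrop simulated for a single 1-second step
drops = [((0.0, 100.0), (0.0, 0.0))]
drops = step(drops, 1.0)
```

Production simulations add collisions, wind fields and millions of particles, but every frame is still the result of stepping a state like this forward.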
Once again, when the model meshes are complete they are ready for texturing. We use texture maps to apply the detail found on the surface of an object or character; the map is applied to the model by “wrapping” it around the mesh. Numerous types of maps can be applied to a model to achieve varying detail and effects:
- Diffuse Maps – Usually the base colour of the model
- Specular Maps – Used to show how reflective a surface is
- Bump / Normal Maps – Give the appearance of higher detailed geometry and a more 3D appearance
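The “wrapping” above comes down to UV coordinates: each surface point carries a (u, v) pair in the 0–1 range that looks up its detail in the flat texture image. A minimal, hypothetical sketch of that lookup (nearest-neighbour only; names are my own):

```python
# Hypothetical sketch of a UV texture lookup: a surface point's (u, v)
# coordinates select a texel from a flat, row-major texture image.

def sample(texture, u, v):
    """Nearest-neighbour lookup; texture is a 2D list of colour values."""
    h, w = len(texture), len(texture[0])
    px = min(int(u * w), w - 1)   # clamp so u == 1.0 stays in range
    py = min(int(v * h), h - 1)
    return texture[py][px]

# A 2x2 diffuse map: the UV pair selects which quadrant of colour a
# surface point receives.
diffuse = [["red", "green"],
           ["blue", "white"]]
colour = sample(diffuse, 0.75, 0.25)
```

Diffuse, specular and normal maps all share this mechanism; they differ only in what the sampled value means to the shader.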
I talked more about texturing and texture types in an earlier post, linked below.
The look development stage is the process of developing the appearance of a 3D scene by compositing separate passes together in different variations. This is usually a combination of texture maps, shaders and lighting used to achieve the desired look of a scene or film, ensuring there is artistic consistency throughout.
Lighting and Rendering
Once animation and look development have been finalised and agreed, the lighting artists begin adding lighting to the final CGI, using the HDR images of the scene mentioned earlier as a reference. These light maps will be combined with shaders created by the look development team to recreate the conditions of the original filmed environment.
Rotoscoping, in terms of visual effects, is a technique used in preparation for placing a digital element behind an element in the original footage. An artist will draw around every character or object, on every frame, that requires a digital element behind it, allowing the compositors to re-apply the original footage on top of it.
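The shape an artist draws ultimately becomes a matte: pixels inside the drawn polygon are foreground, everything else is not. A minimal sketch of that inside test using even-odd ray casting (a standard technique; the code and names are my own illustration, not a roto tool's API):

```python
# Minimal sketch of rasterising a roto shape: an even-odd ray-casting
# test that marks pixels inside the artist-drawn polygon as foreground.

def inside(poly, x, y):
    """Is the point (x, y) inside the closed polygon poly?"""
    hit = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the point's horizontal line
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                hit = not hit     # count crossings to the right of the point
    return hit

# A square roto shape: the centre pixel is matte-white, a far pixel is not
shape = [(1.0, 1.0), (5.0, 1.0), (5.0, 5.0), (1.0, 5.0)]
```

In practice roto shapes are animated splines with feathered edges, but the output is still a per-pixel matte the compositor can hold out or re-apply footage with.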
As well as using CGI passes from the effects animation stage, it is often better to record isolated, naturally occurring effects against a green screen. Visual effects houses compile large libraries of these effects as they use them, giving compositors quick and easy access to any effect they need to finish a shot.
The final stage of the visual effects pipeline puzzle is compositing all the computer-generated elements and original footage together to, hopefully, create a seamless finished sequence. A good compositor will use a variety of techniques to combine all the passes so that the final image looks as if it had been originally filmed, not computer generated.
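The workhorse of layering passes is the premultiplied “over” operation: the foreground's alpha decides how much of the background survives underneath it. A short sketch of the formula (the function name is mine; the maths is the standard over operator):

```python
# Sketch of the core compositing operation: the premultiplied "over"
# formula, layering a foreground pass on top of a background plate.

def over(fg, bg):
    """fg, bg: (r, g, b, a) with premultiplied colour, channels 0.0-1.0."""
    fr, fg_g, fb, fa = fg
    br, bg_g, bb, ba = bg
    k = 1.0 - fa  # how much of the background shows through
    return (fr + br * k, fg_g + bg_g * k, fb + bb * k, fa + ba * k)

# A half-transparent red element over an opaque blue plate
result = over((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0))
```

A full composite is essentially a tree of operations like this, applied per pixel across every layer of the shot.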
Once complete, the final pass is sent off for approval. If accepted, it is then delivered to the client ready for release.