Sending out RED footage for VFX/CGI

One of the big advantages of shooting with a RED is how well the footage plays with visual effects. However, there’s a certain workflow to follow to give your VFX the best chance of turning out well.

Shooting for VFX

  • The best scenario is to have a “visual effects supervisor” on set to keep track of the things I’m about to list. Even if you have all of this in mind as a DP, you’ll be busy enough managing the shoot that important things can slip by.

  • Shoot at a whole-number frame rate, like 24fps. Fractional frame rates like 23.976 (often loosely called “drop frame”) can cause problems, because some 3D software, like Cinema 4D, doesn’t support them.

  • Remember the exact camera model you’re filming with. Specifically, your VFX artist may need to know the sensor size of the camera.

  • Write down the focal length you used for each VFX shot.

  • Avoid lenses with heavy distortion/fisheye. This can cause headaches with trying to match CG to your footage.

  • Write down anything that contributes to a crop factor, like some lens adapters or recording settings.

  • You might need to lower your shutter angle to reduce heavy motion blur, which makes your footage easier to track and key. Proper motion blur can be added back in post.

  • If your shot is moving, like a dolly/slider/steadicam, you need to have parallax in your shot that’s easy for software to follow. You might want to walk through planning these shots with your VFX artist/supervisor.

  • Capture as much reference of what’s lighting your shot as possible. Matching how a CG element is lit to your shot’s lighting is a major part of making it look real. At the very least, you could step back and take pictures of your lighting setup with your phone. If you wanted to go the extra mile, you’d put reference balls in your shot at the beginning or end of a take. You’d also capture an HDRI using a DSLR or 360-degree camera.
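A few of the items above (focal length, sensor size, crop factor, shutter angle) boil down to simple math your VFX artist will do anyway. Here’s a rough sketch of those calculations in Python; the sensor width and crop factor values are hypothetical examples, not specs for any particular RED body:

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm):
    """Horizontal angle of view, in degrees, for a lens/sensor pair."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def effective_sensor_width(sensor_width_mm, crop_factor):
    """Sensor width after a crop (e.g. a windowed recording mode or adapter)."""
    return sensor_width_mm / crop_factor

def shutter_speed_seconds(shutter_angle_deg, fps):
    """Exposure time implied by a shutter angle at a given frame rate."""
    return (shutter_angle_deg / 360.0) * (1.0 / fps)

# Example: a 35mm lens on a hypothetical 30mm-wide sensor,
# recorded in a windowed mode that adds a 1.2x crop.
width = effective_sensor_width(30.0, 1.2)    # 25.0 mm of sensor actually used
fov = horizontal_fov(35.0, width)            # about 39.3 degrees
speed = shutter_speed_seconds(180.0, 24)     # 1/48 of a second
```

This is why writing down the camera model, crop factor, and focal length matters: without them, the artist has to reverse-engineer these numbers from the footage.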

4k? More?

With all of the talk around 4k through 8k, you might be surprised to find that a lot of VFX-heavy productions like Avengers: Endgame and Ready Player One still follow a 2k pipeline (a pinch above 1080p) in post-production, even though they may originally film well above that (though not always: the digital portions of Ready Player One were shot at 2.8k). This is a big topic that could take up a whole article by itself, but the short version is: most projects work out fine shot between 2k and 4k. For a safe middle ground, I’d shoot at 4k. There are benefits as you push beyond 4k toward 8k, but it’s not the end of the world if you let them slide.

Generation loss

“Generation loss” is the gradual loss of video quality as a shot is converted or exported multiple times. The ideal situation is for the finished VFX shot to come back to you with minimal-to-zero generation loss, and the format you deliver your shots in has an effect on this. However, people can make a bigger deal of this than it deserves. This is clarified a bit more in the section below about ProRes & DNxHR.

Handles

If you’re providing shots pulled out of your edit, you should include the clip with “handles” — an extra 1-4 seconds of the original footage at the beginning and end of the shot. You’ll want to communicate where the clip actually fits into your edit, by pointing out frame numbers/timecodes in the clip, and/or providing a reference cut.
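The handle math is simple but easy to botch when you’re pulling a lot of shots. Here’s a minimal sketch of it; the frame numbers and 2-second handle length are just example values:

```python
def pull_range_with_handles(edit_in, edit_out, fps=24, handle_seconds=2):
    """Frame range to export for VFX: the shot as used in the edit,
    padded with handle frames on each side."""
    handle_frames = int(handle_seconds * fps)
    pull_in = max(0, edit_in - handle_frames)   # don't run past the head of the clip
    pull_out = edit_out + handle_frames
    return pull_in, pull_out, handle_frames

# A shot used from frame 1000 to 1120 in the edit, with 2-second handles at 24fps:
start, end, handles = pull_range_with_handles(1000, 1120)
# start=952, end=1168; tell the artist the shot "really" starts 48 frames in.
```

That last comment is the part people forget: the handles are useless if the artist doesn’t know where the editorial in/out points sit inside the exported clip.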

Unprocessed image

Deliver the shots in their original state, with all color correction, grading, and effects turned off. This is super important! The idea is to work the effects into your shot as if it was in the original image captured by your camera — and then all of the color adjustments you make after that help to glue the final result together.

Reference Cut

Provide an export of your full timeline for the VFX artists to reference how the shots fit in context of everything else. This doesn’t have to be top quality — H264 is fine.

Method #1: Deliver R3D’s

Delivering original R3D files is the most conservative approach in terms of generation loss. It’s worth noting that you can create shorter (“trimmed”) versions of your R3D files (with zero loss of quality) using RED’s software, REDCINE-X PRO.

Method #2: ProRes or DNxHR

A common strategy for keeping transfer sizes down is to send the shot straight out of your edit, with handles, as ProRes 4444 or DNxHR 444. This introduces one step of generation loss, but the loss is negligible in most situations. ProRes and DNxHR are called “intermediate codecs.” To drive the point home about generation loss: a lot of high-end, big-budget productions not only transfer footage around in intermediate codecs, but even film to intermediate codecs (not raw). For example, a lot of Game of Thrones used ProRes throughout the production pipeline. Bear in mind, though, that this applies almost exclusively to the “444” variations of ProRes and DNxHR, which are “virtually lossless” formats. The HQ variations, like ProRes HQ, are “fine but pushing it,” useful when you really need to conserve some space.
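If you’re doing the export with ffmpeg rather than your NLE, the flags are the tricky part. This sketch builds the command as a Python list (filenames are placeholders; it assumes ffmpeg’s `prores_ks` encoder, which is the one that offers the 4444 profile):

```python
def prores_4444_cmd(src, dst):
    """Assemble an ffmpeg command for a ProRes 4444 export."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks",         # ffmpeg's ProRes encoder
        "-profile:v", "4444",        # the "virtually lossless" 4444 profile
        "-pix_fmt", "yuv444p10le",   # full-chroma 10-bit
        dst,
    ]

cmd = prores_4444_cmd("shot_010_with_handles.mov", "shot_010_prores4444.mov")
# Run it with subprocess.run(cmd, check=True) if ffmpeg is installed.
```

Building the command as a list (rather than one big string) keeps filenames with spaces from breaking the call.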

Method #3: OpenEXR sequence

This is often an impractical path because EXR sequences yield absolutely huge file sizes. It’ll come out way bigger than your original R3D file! But some key VFX software is conditioned to play best with EXR’s, and in that scenario, your footage might end up here either way. You’d just be saving the VFX artist a step. Go with EXR if you can, as it’s the “formal” workflow and the best way to get perfect results. But understand that sometimes it just isn’t viable, and you have other choices.
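To get a feel for just how huge “huge” is, here’s a back-of-the-envelope estimate for an uncompressed 16-bit half-float RGB EXR sequence. (Real EXR sequences are usually compressed with ZIP or PIZ, so actual sizes come in lower, but they’ll still dwarf the R3D.)

```python
def exr_frame_bytes(width, height, channels=3, bytes_per_sample=2):
    """Rough uncompressed size of one half-float (16-bit) EXR frame."""
    return width * height * channels * bytes_per_sample

def sequence_gib(width, height, fps, seconds):
    """Rough uncompressed size of a whole EXR sequence, in GiB."""
    total = exr_frame_bytes(width, height) * fps * seconds
    return total / (1024 ** 3)

# A 10-second 4K shot at 24fps, before compression:
size = sequence_gib(4096, 2160, 24, 10)   # roughly 11.9 GiB
```

At around 50 MiB per frame, a few hundred frames of EXRs already outweigh most compressed camera originals, which is why this path is usually reserved for when the VFX software really wants it.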

Richard Blasco