DISCLAIMER: This was written long before the MLV format and real-time players for MagicLantern’s raw files! More on that in the future, I hope.
The most complicated step in this process is dealing with the raw footage. The anamorphic side of things is quite simple to work out, but both aspects will be properly explained in this chapter, starting with the easy one: leaving raw out of the picture.
If the footage was shot in the camera’s standard codec, we can stretch it to the correct aspect ratio directly in the editing software (Final Cut Pro, Adobe Premiere, Sony Vegas or whatever program you use) by changing the clips’ pixel aspect ratio according to the lens’ stretch factor, without rendering new files with the stretch baked in.
Pixel aspect ratio options available in Adobe Premiere CS6
Working with anamorphic still images requires two steps, though. The first is importing the raw image into Adobe Lightroom or Adobe CameraRaw (or any other raw developer, really) and playing with its exposure, contrast, highlights, shadows or any of the many other control knobs available until the image looks right in terms of light and color. The second is loading this file into Adobe Photoshop to fix the stretch, since neither Lightroom nor CameraRaw lets you change the image’s pixel aspect ratio or its height/width relationship the way the video editors do.
Now, raw: the first step is to set up the camera to shoot with the proper resolution, taking into account the lens’ stretch factor. Inside MagicLantern’s menu, this is very simple to set up.
MagicLantern’s RAW Video Menu
Some of the most common and desired ratios are shown below as a quick cheat sheet where you can set any two variables and get the third one.
Table 1: Shooting Frame × Lens Stretch = Final Aspect Ratio
Table 2: Final Aspect Ratio ÷ Lens Stretch = Shooting Frame
The shooting frames in the second table (1.8:1, 1.6:1 and 1.2:1) aren’t exactly achievable in MagicLantern’s menu, so the simplest way is to pick the closest available proportion, just a little larger than what the math calls for. For example, in Episode 01 we used 2x stretch lenses, so we should have shot with a 1.2:1 proportion, which isn’t available in the menu. The closest option is 4:3 (which reads as 1.333), just a bit larger than what we wanted, for safety. It’s always better to overshoot than the other way around; the extra side portion gets discarded during the editing/post-production stage.
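The math behind both tables is plain multiplication and division. A minimal sketch, using the article’s own numbers:

```python
def final_ratio(shooting_frame: float, lens_stretch: float) -> float:
    """Final aspect ratio after de-squeezing the anamorphic image."""
    return shooting_frame * lens_stretch

def shooting_frame(final: float, lens_stretch: float) -> float:
    """Frame proportion you need to shoot to end up at `final`."""
    return final / lens_stretch

# Episode 01: 2x lenses with a 4:3 shooting frame -> 2.667:1 final image
print(round(final_ratio(4 / 3, 2.0), 3))    # 2.667
# Targeting classic 2.39:1 scope with 2x lenses -> 1.195:1 frame
print(round(shooting_frame(2.39, 2.0), 3))  # 1.195
```

Set any two variables and the third falls out, which is exactly how the cheat sheet works.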
Because of these different window sizes, files end up with different sizes as well, and each card may store a little more (or less) footage. During Episode 02, a 64GB card allowed us to shoot ten minutes of footage. For Episode 01, since the window was smaller, each card lasted 17 minutes, which is a considerable difference considering how frantic a set can be.
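You can estimate card duration yourself from the window size. This is a rough sketch under assumptions the article doesn’t state (14-bit Bayer data, 24 fps, and a hypothetical 1920×1080 window used purely as an example):

```python
def minutes_per_card(width: int, height: int, card_gb: float,
                     fps: float = 24.0, bits_per_pixel: int = 14) -> float:
    """Approximate minutes of continuous raw recording on one card."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    bytes_per_second = bytes_per_frame * fps
    return card_gb * 1e9 / bytes_per_second / 60

# Hypothetical 1920x1080 raw window on a 64 GB card:
print(round(minutes_per_card(1920, 1080, 64), 1))  # roughly 12 minutes
```

A smaller window means fewer bytes per frame, which is why the Episode 01 cards lasted longer.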
After setting up the camera and shooting until the card is full, we get to the thickest part of the workflow. There are several options for these steps online, but here I’ll go over the one I used in both episodes, which had the goal of being as simple and straightforward as possible.
The card goes into a USB 3.0 reader and the files are copied over to the movie’s hard drive – also USB 3.0, since the files are massive and we have to go through them as quickly as possible to free the card so it can go back to the camera crew.
The files are not “playable” at this point; they’re simply huge envelopes that store the DNG frames (so far, only one program can play them natively: the Drastic Preview Professional Media Player). To keep going down the flow, we must first extract these frames and generate proxy versions for each clip (lighter files with a lighter encoding, like Apple ProRes) to be used in the edit and, later, replaced by the DNG sequences. The DNGs give us a LOT of power for color correction, adjustments, effects and post-processing. They’re VERY heavy, though.
For this extraction process we used RAWanizer, a piece of software developed by a MagicLantern user, fulfilling the requests made by several forum members. There’s plenty of software options available for this extraction and conversion process, like RAWMagic or raw2dng, but for our Windows machines, RAWanizer was quite simple and quick to use.
Installing and navigating the software is easy. After installing, just follow these steps:
Clicking on letter “A”, Select Folder, you set the folder where the raw files are. Depending on how you’re loading and organizing the files, it might be a good idea to check letter “B”, Watch Folder, so the software monitors the folder at all times, looking for new files recently copied from cards. Just be careful with this option: if you delete the watched folder, the program might crash while trying to find it.
After selecting the folder, all the shots found inside will be loaded into the grey area marked by letter “C”. With the Show Thumbnails box checked, RAWanizer grabs sample frames from the beginning, middle and end of each shot for preview.
The following steps will happen in the blue highlighted square.
On the Processing tab you set the steps the program will follow: whether it will just extract the DNGs from the raw container, whether the processing will be handled by an external converter, or whether all steps (DNG extraction and proxy rendering) must be completed for each clip before moving on to the next one.
After that, on the Video tab, you choose the codec for the proxies from the dropdown menu (which reads ProRes 444 in the image). It’s also important to specify the video’s frame rate, even though sometimes this information comes straight from the camera. With slightly more advanced knowledge, one can change the codec’s settings through code and even stretch the proxies to the correct aspect ratio.
Under File we check the boxes that must be considered after the raw file is processed. The one that worked really well in our case was Keep existing dng files: even if we tried to process the same file twice, RAWanizer would detect the duplicates and skip them altogether. The same goes for Keep existing video files, which avoids double proxies.
Delete dng files after video creation, on the other hand, would erase the very files we wanted for post, so that one stays unchecked. Delete tiff files after video creation was useful, though, because those TIFF files weren’t used for anything else down the line.
Since many cards were being formatted inside the camera, which uses the FAT32 file system, files bigger than 4GB were split into more than one chunk. The option Merge split files into original RAW file forces RAWanizer to stitch these chunks back together before processing the file. If this box isn’t checked, the software will just go through the first 4GB of the clip and skip the rest. A way to avoid split files altogether is formatting the cards directly on the computer with the exFAT file system, which doesn’t have the 4GB file size limit.
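The merge step itself is just concatenation. Here’s a minimal sketch of the idea, assuming the chunk naming MagicLantern typically uses (clip.RAW, then clip.R00, clip.R01, and so on); the `_full.RAW` output name is my own invention:

```python
from pathlib import Path

def merge_split_raw(first_chunk: Path) -> Path:
    """Stitch clip.RAW + clip.R00 + clip.R01 + ... into clip_full.RAW."""
    chunks = [first_chunk]
    i = 0
    # Collect the numbered continuation chunks in order until one is missing.
    while True:
        nxt = first_chunk.with_suffix(f".R{i:02d}")
        if not nxt.exists():
            break
        chunks.append(nxt)
        i += 1
    merged = first_chunk.with_name(first_chunk.stem + "_full.RAW")
    with open(merged, "wb") as out:
        for chunk in chunks:
            out.write(chunk.read_bytes())  # fine for a sketch; stream for huge files
    return merged
```

RAWanizer does this for you when the box is checked; the sketch just shows there’s no magic involved.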
The steps taken by RAWanizer are: first, check whether the file has more than one 4GB chunk; if so, they are all stitched together. Then all the DNG frames contained in the raw file are extracted. The slowest step comes next: to generate the proxies, every DNG frame must be debayered, an average exposure is applied, and the frames are rendered as TIFFs. These are then encoded together by FFMBC, creating the proxy as specified in the previous menus.
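The same pipeline can be scripted by hand. The tool names below (raw2dng, dcraw with its TIFF output flag, and ffmpeg’s prores_ks encoder) are stand-ins I know from similar workflows, not necessarily what RAWanizer invokes internally, and the filename patterns are illustrative:

```python
def proxy_pipeline(clip: str, fps: int = 24) -> list[list[str]]:
    """Return the commands for: raw -> DNG frames -> TIFFs -> ProRes proxy."""
    return [
        ["raw2dng", f"{clip}.raw", f"{clip}_"],     # extract the DNG frames
        ["dcraw", "-T", f"{clip}_*.dng"],           # debayer each frame to TIFF
        ["ffmpeg", "-framerate", str(fps),          # encode the TIFF sequence
         "-i", f"{clip}_%06d.tiff",
         "-c:v", "prores_ks", f"{clip}_proxy.mov"],
    ]

for cmd in proxy_pipeline("M31-1234"):
    print(" ".join(cmd))
```

Each list could be handed to `subprocess.run`; a batch tool like RAWanizer is essentially a queue of these three steps per clip.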
The last tab is Folder, which points to where each type of file created during the process should be stored, from the original raw files down to the proxies. Keeping these organized will be of great use when the edit is finished and you need to replace the proxies with the DNG sequences.
After filling all these tabs, just press Start (letter “G” on the first image) and all the items listed in “C” will be transferred to “E”, with checkboxes telling you which files have already been processed and which ones are still waiting in the queue. The “F” window acts as a log, writing out which step is currently running and whether the previous steps succeeded or failed. It’s always a good idea to check it once in a while to be sure everything is going as planned.
After ALL your files go through processing, you hand the proxies to the editor. They have terribly low quality, and the exposure usually looks very weird, but they’re enough to check whether a shot is in focus, for example. It’s very important to let the editor know the lens’ stretch factor as well, so he/she can set the proper aspect ratio for the footage in the editing software.
When the editing was done (using the proxies), we went on to color correction, which could have been done in DaVinci Resolve or in Adobe After Effects through Adobe CameraRaw. Since I was much more familiar with After Effects, that was the package I chose. Just for the record, from version 9.1.5 onward, DaVinci Resolve is fully compatible with MagicLantern’s DNGs, not only with CinemaDNG files.
After bringing my timeline into After Effects, I replaced the Apple ProRes proxies with their DNG counterparts. For every clip I replaced, a CameraRaw window popped up so I could set the image’s parameters. To keep time from getting away from me, I set a couple of defaults to be applied to clips that should match, making sure the whole episode had a homogeneous look so the real grading wouldn’t have to go all sorts of crazy compensating for drastic differences between shots.
In the composition settings I set the frame size according to the “shooting frame” × “lens stretch” math we saw before. To stretch each clip I had two possibilities: through the Interpret Footage tab in the Project window, or by selecting all the clips in the timeline and clicking Transform > Fit To Comp.
After going through all the post-processing, it’s time to get the final video out of After Effects. Once again, there are two possibilities: rendering a larger-than-HD frame, or compensating for the stretch by decreasing the height instead of increasing the width. Keep in mind that YouTube only recently started supporting resolutions higher than 1920×1080 pixels, so stretching the clips to 2880×1080px, for example, carries a greater risk of losing image quality, since the downscaling from 2880×1080px to 1920×720px would follow YouTube’s or Vimeo’s rules, not your own. Even though you can go higher, most audiences are happy with Full HD or 720p resolutions.
My suggestion is to decrease the height of the clips on the final export and have full control over this scaling instead of leaving it as someone else’s responsibility. The height percentages for 1.33x, 1.5x and 2x stretch lenses are, respectively, 75.2%, 66.7% and 50%, which keeps the width at 1920px with a variable height according to the lenses used. This scenario is particularly friendly to high-compression codecs, since there’s a smaller area to fill with information, instead of blowing up all the compression artifacts over a larger frame. Since our goal was to test the results not only online but also on 2K and 4K projectors, I ended up blowing the image up to a 3072×1152px resolution.
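Those percentages are simply 1/stretch applied to the frame height. A quick sketch, assuming a 1920×1080 source:

```python
def export_height(stretch: float, height: int = 1080) -> int:
    """Export height when compensating anamorphic stretch by squeezing."""
    return round(height / stretch)

for stretch in (1.33, 1.5, 2.0):
    print(f"{stretch}x lens -> {100 / stretch:.1f}% -> 1920x{export_height(stretch)}")
```

This prints the 75.2% / 66.7% / 50% figures along with the resulting 1920-wide frame sizes (812, 720 and 540 pixels tall, respectively).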
After all the color correction and effects were done, we decided to keep the 2.66:1 aspect ratio instead of the traditional CinemaScope, so the only step left was to export the final version of both episodes, bringing all this ridiculous amount of theory into practice!