Monthly Archives: March 2016

Anamorphic

Anamorphic Chop Shop – Fixing Mumps

March 27, 2016

Use Photoshop’s Spherize filter to fix the uneven stretch noticeable in some anamorphic adapters. Also: how to export an image sequence from Premiere, record an action in Photoshop, run it on a batch of images and re-import the results into Premiere.

USEFUL LINKS:

All the RED links on this post are part of eBay’s Partner Network, so if you purchase anything through them, you’re helping me to keep this project going.

You can support this project on Patreon. Make your contribution and help the Anamorphic Cookbook!

Welcome to the last episode in this post-processing chop shop streak. Today I’ll explain how to fix anamorphic mumps using Photoshop’s Spherize filter.

Mumps are that weird-looking stretch you get sometimes, when parts of the frame look overstretched – mainly the center – while the edges still look compressed. You don’t necessarily get both at the same time; it’s usually one or the other. Back in the early days of anamorphic in Hollywood, many actors and actresses included clauses against anamorphic lenses in their contracts because these lenses rendered unflattering images of them due to mumps! If you wanna know more about the subject, check Chapter IIIC of the Anamorphic on a Budget guide. This bizarre effect comes from the cylindrical glass of the anamorphics, which isn’t even across its width, resulting in an uneven stretch across the frame.

Here’s a shot with a case of mumps and here’s the same shot, fixed. So let’s get to work. Start by firing up Photoshop and bringing in your messed-up frame. Now go to “Filter”, “Distort”, “Spherize”.

Spherize’s pop-up window controls how much you distort things. The default Normal mode will make you wonder whether this filter is any good, but once you change it to Horizontal Only, you’ll see its power. You can go with either positive or negative values. Positive values grow the image from the center, which is the opposite of what we want; negative values pinch the image towards the center, stretching the edges and compressing the middle. The most important part is that it doesn’t change the width of the file. This process involves a lot of guessing if you don’t have a reference, like a circular object in the center of the frame. The good thing is, even if you guess wrong, it’ll still look better than the original! When you’re done, just press “OK”.

Save your fixed frame. Yay! Now repeat the process for the next five hundred frames in this shot! Sounds painful, right?

Over the next few minutes I’ll explain how to convert a video file into an image sequence using Premiere, then we’ll record a Photoshop action, run the action on a batch of images automatically and re-import everything into Premiere.

In Premiere, create a New Sequence using your shot and then export it, either through the menus – “File”, “Export”, “Media” – or by pressing Ctrl + M. On the Export window we’re gonna change the encoding settings to an image sequence. Under “Format” you can go with Targa or PNG for little to no loss of information but rather big files, or just JPEG for lighter files and heavier compression. Since this video is going to YouTube, I’ll go with JPEGs, but if it were 4k for a feature film, I’d choose a lossless format such as Targa.


Choose the destination of the rendered files and hit “Export”. Let it process.

Back in Photoshop, open the Actions panel – “Window”, “Actions” – and create a new action; I’ll name it “Fix Mumps” and set it to record by pressing the small red circle. Now open your first frame, apply the Spherize filter – tweak the settings until it looks right –, save the file in a new folder – mine is the shot name + fixed – and close the image. Hit Stop to finish recording the action.

Now on the “File” menu, go down to “Automate” and then “Batch”. On the Batch window, select “Fix Mumps” from the “Action” dropdown, leave the “Source” as Folder and click “Choose…” to navigate to the folder where you exported your frames. Check “Override Action ‘Open’ Commands” and “Suppress File Open Options Dialogs” so you don’t have to click something every time an image opens. On the “Destination” side, pick Folder as well and choose where the fixed frames go – I’ll use the same folder I just created, with the shot name + fixed. Tick “Override Action ‘Save As’ Commands”. In the File Naming section, set the first field to the shot name, add a _fixed_ suffix in the second field just for organization, and pick “4 Digit Serial Number” from the dropdown in the third field – input your starting serial number below, in case it starts at a value other than 1. In the fourth field, add the extension.

Click “OK” and let your computer think for a little while. Photoshop is gonna flicker a bit while it runs the action on all the frames and saves them to the new destination.
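
If you’d rather script this step than babysit Photoshop, the same idea can be sketched outside it. Below is a minimal Python/OpenCV stand-in for a horizontal-only Spherize – note this is a sine/arcsine approximation of the effect, not Adobe’s exact algorithm, and the folder names and the -0.3 amount are placeholders you’d tweak per shot:

```python
import glob
import os

import cv2
import numpy as np

def horizontal_spherize(img, amount):
    """Rough stand-in for Photoshop's Spherize in Horizontal Only mode.
    amount runs from -1.0 to 1.0; negative pinches the centre and
    stretches the edges (the mumps fix). The width never changes."""
    h, w = img.shape[:2]
    # Destination x coordinates, normalised to [-1, 1].
    nx = np.linspace(-1.0, 1.0, w, dtype=np.float32)
    if amount >= 0:
        # Bulge: the centre samples a narrower source region, so it grows.
        src = (np.arcsin(nx) / (np.pi / 2.0)) * amount + nx * (1.0 - amount)
    else:
        # Pinch: the centre samples a wider source region, so it compresses
        # while the edges stretch out.
        a = -amount
        src = np.sin(nx * np.pi / 2.0) * a + nx * (1.0 - a)
    map_x = np.tile((((src + 1.0) / 2.0) * (w - 1)).astype(np.float32), (h, 1))
    map_y = np.repeat(np.arange(h, dtype=np.float32)[:, None], w, axis=1)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LANCZOS4)

# Batch over an exported image sequence (hypothetical folder names),
# echoing the 4-digit serial numbering from the Photoshop batch.
os.makedirs("shot01_fixed", exist_ok=True)
for i, path in enumerate(sorted(glob.glob("shot01/*.jpg")), start=1):
    frame = cv2.imread(path)
    fixed = horizontal_spherize(frame, -0.3)  # eyeball the amount per shot
    cv2.imwrite(f"shot01_fixed/shot01_fixed_{i:04d}.jpg", fixed)
```

Only the horizontal map varies, so the width and frame count stay identical and the re-import into Premiere works exactly as before.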

After that’s done, we’re back in Premiere: in the Import window, go to the folder with the new frames, select the first one, check the box that says “Image Sequence”, hit “OK” and that’s done! If it comes in with a weird frame rate, right click it in the Media Bin, go to “Modify”, “Interpret Footage” and input your frame rate manually.

Ok, that ran a little longer than I planned for. Again, I hope this is a useful tool now added to your skill set. Subscribe to the channel for new videos and check my blog for all the cool anamorphic content. I’m Tito Ferradans and I’ll see you soon! BUY THE SHIRT!

Day-to-Day

Finnegan!

March 22, 2016

I’ve come to realize I haven’t been posting anything about life lately, just anamorphics and technical things. Time to fix that!

Langara has gotten less worrying than it was in the beginning and we’re heading towards the end of the first semester. – I wrote this sentence a week and a half ago and I couldn’t have been more wrong. These last few days have been an overload of essays and research and editing at full speed. Anyway, it’s the end of my first semester and I’m not taking any classes during the summer. It’s time to hunt for work and shoot lots and lots of amazing footage. I’m not yet 100% sure how I’m going to achieve that, but it will happen.

The end of Winter and the start of Spring were incredibly (and unpleasantly) rainy. It rained almost every day. Whenever a day started out sunny and warm, I’d take the bike outside – and at my furthest point from home, the rain would start and soak me to my soul. Even waterproof clothes weren’t enough, and I had to wash my waterproof shoes (see the contradiction there?) and dry them on the heating system at least twice a week. Truly annoying stuff.

On the less annoying side, I replaced the chainring on the bike with a harder-geared one. That actually made riding better, since the original chainring already felt too light for flat stretches of road. I struggle a little more going uphill, but I’m much faster going down and on the straights. This is also good because it encourages me to explore different paths, trying as hard as possible to avoid steep hills. It’s an unusual way to explore the neighborhood, I admit, but it’s very efficient, and riding is so much fun that I don’t even mind the rain while it pours down my face. The post-biking stage is when the complaining starts.

Enough of rain and back to Langara: we’re shooting some short films in one of my classes. I didn’t expect this at all when I enrolled and I’ve been having a blast with it. Being on set is an amazing feeling I had been away from for too long. We shoot a short per week, in a 3-hour window, and every week I realize something I needed and forgot. This led me to buy lots of tape, clothespins, a foldable knife and a decent box-cutter, a voltmeter, zip ties, spare lamps, pin-to-thread adapters, rope, an extra dimmer and a couple of adapters for connecting several lamps to one outlet. Thankfully none of these items cost more than $5. Now I have a box in my closet full of electrical and gaffer stuff, something I promised myself back in Brazil I’d never have again (though I had one there too, in even greater volume).

Being on set so constantly switched my cinematographer side back on, and I’ve been studying cameras, lenses and stabilizers a lot more recently. I’m putting together all the stuff I need to sell, sorting what I’m selling here from what I’m keeping to sell back home. The goal is to reduce the overall gear count. Later on I’ll start planning around optics and consider upgrading my lens kit (which hasn’t changed much over the last few years and isn’t that amazing).

I’ve also been shooting many things for YouTube and raising the complexity level, so videos take longer to be completely done. This ties into another thing I noticed recently: an unhealthy habit of cramming too many challenges into one single project and having nightmares in both pre- and post-production, because I’m still learning how to sort out the files created in the process, or a new software package I decided to experiment with, or something weirder. Realizing this has allowed me to drop some of the major issues that came up from unnecessary experiments. It’s a moment of clarity in which I think: “Which one of these two clashing things is more important for this project? Is it the lighting or a shady camera setting? Is it the new software and processing, or the camera and resolution?”

ALSO, we got a cat! His name is Finnegan and he’s the cutest, sweetest and craziest cat ever. At first he was scared of everything and everyone, but now he’s quite familiar with us and the house. I think he’ll start appearing in some of my test videos soon enough, so you get to meet him in motion, not just in pictures like this one.

It was weird writing this post. Deal with it.

Day-to-Day

Films and Sound: an Unbalanced Relationship

March 21, 2016

Modern cinema follows a strict set of rules, a long pre-established language composed of conventions and agreements employed and accepted, respectively, by filmmakers and audiences. For example, when we see a close-up of a character’s face followed by another close-up of his or her hands tightly clasped, we understand that the character is trying to hide emotions from another character who is looking at them, but can’t hide them from the filmmaker’s gaze. We do not think that the first close-up is a bodiless head, nor that the second is a pair of severed hands; but creating this meaning for the audience – that these shots were clues given by the filmmaker about the inner workings of said character, showing contrasting emotions expressed through the body – took years to perfect. Just as these conventions hold for images (framing, lighting, movement), sound has its own language. The first movies had no sound the same way the first recordings had no images; these were separate mediums, and their merging is what truly constitutes cinema. With the passing of time, filmmakers’ use of sound evolved into meanings other than plainly representing what is shown on screen. Sound can heighten or even completely invert our perception of images. If a movie’s sound just portrays the obvious, its filmmaker is missing out on great opportunities to add depth and meaning to the story.

Cinema can’t exist without images – that’s obvious – the same way it can’t exist without sound. At first this might sound a little daunting since, for a long time, films were silent. Silent, yes, but not completely devoid of sound. Michel Chion is a French music composer and professor of audio-visual relationships at the University of Paris, a theoretician whose thinking runs very close to Walter Murch’s ideas (Chion dedicated his book Film, a Sound Art to Murch, and Murch wrote the preface for the translation of Chion’s Audio-Vision). Chion states that “the suggestion of sound was in the air at the time. . . . In the silent cinema, instead of sound for sound, spectators were given an image for a sound: the image of a bell suggests the sound of a bell” (Film, a Sound Art, 5) – and even the exaggerated gestures were meant to translate what the characters were saying, so the audience could follow along and not assume all characters were communicating telepathically. In the very early ages, films such as Edwin S. Porter’s would have a commentator by the screen, narrating what was going on. Later on, this person was replaced by intertitle cards that would directly represent dialogue and bind together what would otherwise seem disconnected sequences. Even though we call them silent films, the silence of a movie theater has always been filled with music: at first any music would do, then came music composed to be played specifically with the movie, and after that a kind of sound-effect layer added to certain elements on the screen, represented by musical instruments – for example, cannon fire cued a drum in the theatre (Film, a Sound Art, “When Film was Deaf”).

Seeing this fundamental connection, several people tried to blend the two. Thomas Edison had both moving-picture and sound apparatuses by the end of the 19th century. The Phonograph was a device that could reproduce sounds recorded on a wax cylinder. The Kinetograph was Edison’s moving-picture camera, and it came associated with the Kinetoscope, a one-man booth where the user could watch a short clip projected in a loop. The main person behind the Kinetoscope’s research and development was William Kennedy Laurie Dickson, one of Edison’s employees. The first record of sound and image recorded together is the Dickson Experimental Sound Film: a clip, shorter than 20 seconds, of a man playing a violin in front of a recording Phonograph while two other men dance in front of the camera. These two elements (image and sound) were not synchronized until recently, by none other than Walter Murch. “It was very moving, when the sound finally fell into synch: the scratchiness of the image and the sound dissolved away and you felt the immediate presence of these young men playing around with a fast-emerging technology.” (Murch, Filmsound.org)

The main difference between the results achieved by Murch and what audiences saw and heard at the Kinetophones (the resulting mix of the Kinetoscope and the Phonograph) back in the early 1900s was that the Kinetophone had no synchronizing mechanism. The images would play on their own, and so would the sound, each from a different platform, so getting the starting points to match precisely was nearly, if not completely, impossible. It would be another two decades before true synch was achieved, by Lee De Forest, an American inventor who managed to record sound on an optical track directly on film. All previous attempts relied on separate devices – film for the image and discs, cylinders or tubes for the sound – and the main problem hinged on starting both at the same time and remaining in sync for the duration of the show. The transition from silent films into talking pictures is wonderfully depicted in Singin’ in the Rain, a 1952 film directed by Stanley Donen and Gene Kelly. In one of the scenes, the cast and crew are watching the premiere of the film along with the audience, and we get plenty of examples of all the awkwardness introduced by synchronized sound, including one of the most memorable examples of how out-of-sync sound and image can completely change the meaning of a scene.

Live-recorded sound didn’t have its place under the sun secured until late 1927, with the release of The Jazz Singer. Until then, talking pictures were still considered a fad expected to bore audiences soon enough. The difference between other sound films and The Jazz Singer is that the latter had not only music accompanying the film, but also two very brief excerpts of live-recorded audio in sync with the picture. That drove audiences wild and the movie’s box office through the roof. From that moment on, for purely economic reasons, talking pictures were here to stay. The Warner Bros hit was sounded with the Vitaphone – still a dual system: the sound played from a disc, not straight from the film. According to Michel Chion, the main problem that came with the definitive establishment of sound in movies is that “the Vitaphone process was perceived . . . as an improvement, not a revolution” (Film, A Sound Art, 34). The audience’s approval of spoken lines was also a trap for films, since speech prevailed as the main element of film sound for a good fifteen years, from 1935 to 1950, as presented by Chion, either through dialogue or voice-overs. Filmmakers chose to make films verbocentric, as that was the easiest way to please the audience and the producers at the same time, which left the sound design aspect of movies less developed than, for example, the visual aspect (Film, a Sound Art, 73).

From 1927 onwards, the technology of sound recording and playback, as well as the techniques for capturing, editing and mixing it, improved vastly into what we hear at the theatres today. What didn’t change so much was the filmmakers’ and studios’ perception that sound has an

“added value, . . . [an] expressive and informative value with which a sound enriches a given image so as to create the definite impression, . . . that this information or expression ‘naturally’ comes from what is seen, and is already contained in the image itself. Added value is what gives the (eminently incorrect) impression that sound is unnecessary, that sound merely duplicates a meaning which in reality it brings about, either all on its own or by discrepancies between it and the image.

The phenomenon of added value is especially at work in the case of sound/image synchronism, via the principle of synchresis” (Chion, Audio-Vision, 5).

Synchresis – Chion’s blend of the words “synchronization” and “synthesis” – is the thought process that binds together an image and a sound perceived at the same time (Audio-Vision, 63). For example, if we see a gun firing and hear the sound of a whistle, we automatically bind the two together without so much as questioning how they are connected. “Synchresis is what makes dubbing, postsynchronization, and sound-effects mixing possible” (63), and this is the ultimate tool for re-signifying the role of sound.

In his book about editing, In the Blink of an Eye, Murch talks about the difference between good and bad sound mixes, but his words also apply to the use of sound in general: “[i]t depends on . . . how capable the blend of those sounds was of exciting emotions hidden in the hearts of the audience. . . . Past a certain point, the more effort you put into wealth of detail, the more you encourage the audience to become spectators rather than participants” (15). When the audience becomes spectators, they stop thinking about the sound’s meanings and take it as a very detailed representation of what their eyes are absorbing, with no unique storytelling features – an empty attempt at immersion (Magalhães, 10). Examples of this excess are most action blockbusters released in the past few years – namely, any of Michael Bay’s Transformers movies: our ears are filled with small gears and engines and switching and pumping and nuts and bolts to add realism to ludicrous robots, but in the end, seeing those sequences without sound would not change the story even a little bit.

Back in 1929, René Clair – a French filmmaker and critic – raised the point that “[t]he visual world at the birth of the cinema seemed to hold immeasurably richer promise. . . . However, if imitation of real noises seems limited and disappointing, it is possible that an interpretation of noises may have more of a future in it” (93). It is quite worrisome that this is almost the same issue brought up by Chion in 1994: “[r]evaluating the role of sound in film history and according it its true importance is not purely a critical or historical enterprise. The future of the cinema is at stake. It can be better and livelier if it can learn something valuable from its own past” (Audio-Vision, 142). The challenges are no longer dictated by technological limitations, as they were at the beginning of the 20th century; the challenge now is not to succumb to the common pitfall of using sound as a mere echo of the image, or of relying exclusively on dialogue to explain every event that could have been conveyed differently. Fortunately there’s hope: the number of movies (and TV shows) where sound plays a role as important as the image has been growing, and it’s not hard to come up with a few names, such as Barton Fink, No Country for Old Men, Breaking Bad, Apocalypse Now and The Conversation. An increasing title count does not solve the matter, though. To reduce this gap between sound and image, the teaching of filmmaking has to change: from scriptwriting to directing, from shooting to editing, sound can’t be something that just fills any awkward silence. Proper sound, with meaning, has to be planned and conceived from the earliest stages of a film.

Works cited

Chion, Michel. “Audio-Vision: Sound on Screen”. New York: Columbia University Press, 1994. Print.

Chion, Michel. “Film, a Sound Art”. New York: Columbia University Press, 2009. Print.

Clair, René. “The Art of Sound.” (1929) Film Sound: Theory and Practice. Ed. Elisabeth Weis, John Belton. New York: Columbia University Press, 1985. 92-95. Print.

Dickson Experimental Sound Film. Dir. William K. L. Dickson. Edison Manufacturing Company, 1895. Web. 13 Mar 2016.

Magalhães, Mayara. “O Som Escrito”. São Paulo: Universidade de São Paulo, 2014. Web. 13 Mar 2016.

Murch, Walter. “Dickson Experimental Sound Film 1895”. Filmsound.org (2000). Web. 19 Mar 2016.

Murch, Walter. “In the Blink of an Eye”. 2nd Ed. Los Angeles: Silman-James Press, 2001. Print.

Singin’ in the Rain. Dir. Gene Kelly and Stanley Donen. Perf. Gene Kelly, Donald O’Connor and Debbie Reynolds. MGM, 1952. Film.

Anamorphic

Anamorphic Chop Shop – Cropping in Post

March 20, 2016

Continuing the Chop Shop series on post-processing, I’ll explain how to properly crop your footage and do digital pans in case you have resolution to spare!

USEFUL LINKS:

All the RED links on this post are part of eBay’s Partner Network, so if you purchase anything through them, you’re helping me to keep this project going.

You can support this project on Patreon. Make your contribution and help the Anamorphic Cookbook!

Tito Ferradans here for another post-processing Chop Shop. I received some questions about cropping in post – a necessary tool if your camera only shoots 16:9 and you want full control over your final aspect ratio. I’ll go over After Effects and Premiere, which are the ones I use in my daily life, but I’m pretty sure the idea translates to any other editing/compositing software.

Building on the previous video (aspect ratio), load up After Effects, import your footage and apply the proper stretch – check that tutorial if you’re not sure how. Then create your composition with the desired aspect ratio – to find the proper height, divide the horizontal resolution by the aspect ratio.

Now drag the footage over to the composition, right click its layer and select “Fit to Comp Height” – voilà! The coolest thing about this method is that you can use the horizontal position attribute and keyframes to create digital pans and adjust the framing as you please!
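
To put numbers on both of those steps – the comp height and how much room “Fit to Comp Height” actually leaves for a pan – here’s a quick sketch; the 3840×1080 input is an assumed 2x-desqueezed HD clip:

```python
def comp_height(width, aspect):
    # Comp height for a target aspect ratio: height = width / aspect.
    return round(width / aspect)

def pan_headroom(src_w, src_h, comp_w, comp_h):
    # "Fit to Comp Height" scales the clip by comp_h / src_h; whatever
    # width sticks out past the comp is your horizontal panning range.
    scaled_w = src_w * comp_h / src_h
    return max(0, round(scaled_w - comp_w))

print(comp_height(1920, 2.4))               # 800 -> build a 1920x800 comp
print(pan_headroom(3840, 1080, 1920, 800))  # about 924 px of pan to play with
```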


In Premiere, the process is quite similar but has a few more steps. Import your footage, apply the stretch and then drag it to a new sequence. Now right click the sequence in the Project tab and select “Sequence Settings”. Here you’ll change the frame size based on your project’s output: as I’m doing HD, I’ll input 1920 by 800. Now the clip is too tall, so right click it in the sequence and select “Set to Frame Size”, which adjusts the clip’s dimensions so it fits properly. Then move back to the Project tab and open the sequence’s settings again, since it still looks super wide: on the “Pixel Aspect Ratio” dropdown, select “Square Pixels (1.0)”.



You can do the digital pan in Premiere too, but it’s a less friendly process. On the Effect Controls tab, under Motion, set the horizontal position to its initial point and create a keyframe by clicking the little stopwatch; then move along the timeline to where you want the pan to end and adjust the horizontal position to its final… position, and Premiere automatically creates a keyframe there.

There you go, a few simple steps to control your final aspect ratio. I hope you found this tutorial useful, let me know what you think in the comments section. Subscribe to the channel for the upcoming videos and head on to the blog for the Anamorphic on a Budget guide and lots of cool stuff such as the Anamorphic Calculator and the pitch for the Anamorphic Cookbook! I also have an awesome t-shirt in the works, so grab yours now before the stock runs out. Ferradans out.

Anamorphic

Anamorphic Chop Shop – Proper Aspect Ratio

March 13, 2016

Addressing one of the most common questions out there, here’s how to properly stretch your anamorphic shots using Adobe Premiere, After Effects, Photoshop and Final Cut Pro X.

USEFUL LINKS:

All the RED links on this post are part of eBay’s Partner Network, so if you purchase anything through them, you’re helping me to keep this project going.

You can support this project on Patreon. Make your contribution and help the Anamorphic Cookbook!

Tito Ferradans checking in – how’s it going, anamorphic buddies? Here the Fall/Winter season continues, so today we have another post-processing chop shop. This is one of the most common questions I see around the web from people who are just starting out in this distorted reality we love: “How do I stretch the footage properly?”. Well, there are several different methods. I’m gonna cover Premiere, After Effects, Photoshop and Final Cut Pro X – even though I hate that last one.

First, WHY do we need to stretch the footage? Well, the visual answer is pretty easy: “because things don’t look as they should”. Ok, and why is that? Because the anamorphic glass compresses the horizontal axis while keeping the vertical untouched. Good, so how much is that compression? The lens usually tells you, and the most common values are 1.33x, 1.5x and 2x. There are a few odd 1.75x and other numbers out there, but once you get how to do it, the exact stretch doesn’t matter much. So now we know that the image is compressed by 1.33x, 1.5x or 2x in the horizontal axis.

Let’s trail off a bit. A pixel is the smallest area of an image, a tiny square with a unique color value. You noticed I used the word “square” just now, right? Nowadays most imaging devices (cameras and monitors) use square pixels, meaning each pixel is exactly as wide as it is tall: divide one side by the other and you get 1. Back in the DV era, cameras played with this ratio: 4:3 footage had a 0.9 pixel aspect ratio, while widescreen (16:9) had a 1.21 pixel aspect ratio. The sensor size never changed, and neither did the recorded image, but the pixel aspect ratio told the editing program to stretch or compress the HORIZONTAL axis by that value. Hmm, this is becoming kind of useful, isn’t it?

Now we know that one can change a pixel’s aspect ratio, making every pixel wider or narrower than recorded according to a specific value. It seems this is exactly what we need for all that stretch-factor business.
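
Since it’s all just multiplication, a couple of lines make the idea concrete – a minimal sketch, assuming plain 1920×1080 footage (a pixel of rounding either way is irrelevant):

```python
def desqueezed_size(width, height, stretch):
    # Stored pixels never change; the pixel aspect ratio only multiplies
    # the *displayed* width by the lens' stretch factor.
    return round(width * stretch), height

for stretch in (1.33, 1.5, 2.0):
    print(stretch, desqueezed_size(1920, 1080, stretch))
# 1.33 -> (2554, 1080), 1.5 -> (2880, 1080), 2.0 -> (3840, 1080)
```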

Starting with Premiere, import your footage and then right click the file in the Media Bin. From the menu, go to “Modify”, then “Interpret Footage”. A pop-up will open and, look at that, there’s a “Pixel Aspect Ratio” section! There you just need to pick your lens’ stretch and voilà, it’s done, looking just like it should.

After Effects has a few different options. First, you can import your footage and do the same thing as Premiere: right click, “Interpret Footage”, then “Main”, and in “Other Options”, change the “Pixel Aspect Ratio”. To preview this change, enable the Pixel Aspect Ratio correction on the viewer by clicking this icon.


Another way: take your squeezed footage file, drop it into a new Composition, go into its Transform attributes, open up Scale and uncheck the chainlink – this allows you to scale width and height separately. Once that’s done, just input the stretch factor multiplied by 100 into the width field (it’s the first one): 133%, 150% or 200%.


This part here is not strictly necessary, but it’s how I like to do it since I always output everything at 1080p: messing with the Scale attribute gives me an image that’s larger than my final resolution, so I do the step above first, then re-check the chainlink – since this is the new ratio I want between width and height – and bring the width back down to 100%. This reduces the height of the clip and makes it fit inside the composition. It also preserves more vertical detail, since the vertical pixels are downscaled – like shooting at 4k to output at 1080p.
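
The net result of that two-step dance is easy to verify: stretching the width and then pulling the linked pair back to 100% is the same as leaving the width alone and dividing the height by the stretch factor. A quick sketch with assumed 1080p numbers:

```python
stretch = 2.0
w, h = 1920, 1080

scaled = (w * stretch, h)        # step 1: unlinked width stretch -> (3840, 1080)
final = (scaled[0] / stretch,    # step 2: re-link, width back to 100% ->
         scaled[1] / stretch)    #         both axes divided by the stretch
print(final)                     # (1920.0, 540.0): 1080 source rows squeezed
                                 # into 540, i.e. extra vertical detail
```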


Photoshop is almost like After Effects. I’m gonna use a still picture here, since I still do a lot of anamorphic photography. To open the Image Size panel, go through Image > Image Size in the menus or press Ctrl+Alt+I. Select “Percent” as the measurement – it’s usually set to “Pixels”. You can see a chainlink here too – look familiar? It’s ON by default, so turn it off and, again, input the stretch factor multiplied by 100: 133, 150 or 200, or any other value if you have a different lens. Hit “OK” and enjoy your correct-looking photo.


For Final Cut Pro X, import your footage, then create a “New Project” and, in the pop-up window, select “Use Custom Settings”. Then, in the “Video Properties” section, instead of “Set based on first video clip”, pick “Custom” from the dropdown menu. In the first field, for the width, input your shooting resolution multiplied by your lens’ stretch factor – for example, 1920 multiplied by 1.33x gives roughly 2553. Adjust any other fields you might want to change, and press “OK”.


Now drag your clip to the timeline, click on it and go to the “Transform” menu. Click “Show” to expand the properties, then the little arrow by “Scale” to reveal both horizontal and vertical scale. Finally, put your lens stretch times a hundred in the X scale, and the footage should be perfectly stretched to fit the project window. The process is exactly the same for other stretch values: for 1.5x I’d use 2880 by 1080 and change the X Scale to 150%, and for 2x lenses I’d go with 3840 by 1080, X Scale set to 200%. Then edit, export, do whatever, with your now good-looking footage!

Phew, that was a lot of different options. I hope I was able to help anyone out there having a hard time with the software or with the concept of stretching the footage. If you liked this video, subscribe to the channel for the upcoming ones and then head to the blog for a lot more anamorphic content, such as the Anamorphic on a Budget guide, the Anamorphic Cookbook and a Calculator that helps you figure out whether your lens will vignette or not. I also have an awesome t-shirt you can get to help me with this project! Now go out and shoot some pretty pictures for me, since I’m stuck at home. Tito Ferradans out for a rainy Sunday.

Anamorphic Day-to-Day

Season 12: Anamorphic Cookbook.

March 9, 2016

I’ve been working really hard on lensporn lately. This work resulted in these cool-looking new banners to represent the twelfth season of this blog, plus an opening for my upcoming videos, starting a few weeks from now (I’m still fine-tuning everything, grading shots and trimming the music, so the end piece is not quite done yet).

In the meantime, here’s the perfect chance to grab one of my new t-shirts and show people how crazy you are about anamorphics! It costs around US$30 and you can get it here! Help me fund this book and all these posts!

Lastly, I’m also selling some of my lenses and gear, so I opened up a yard sale page which will be constantly updated with gear that must go!

Anamorphic

Anamorphic Chop Shop – Corner Pin

March 6, 2016

First post-processing tutorial: how to fix a slightly misaligned shot that you definitely need to include in the edit. Corner pin is a very simple and easy trick to pull off in either After Effects or Premiere (or Nuke!), and every anamorphic user should be familiar with it.

USEFUL LINKS:

All the RED links on this post are part of eBay’s Partner Network, so if you purchase anything through them, you’re helping me to keep this project going.

You can support this project on Patreon. Make your contribution and help the Anamorphic Cookbook!

Hey there guys and girls, Tito Ferradans here for the first “post-processing” chop shop! Corner pinning is a very simple and useful trick for that slightly misaligned shot that you’d like to keep. The Corner Pin effect was added to Adobe Premiere in version CS6: just go into the Effects tab and find it under “Distort”. Drag it over the shot you want to fix and open the Effect Controls tab for that shot. You have two options now: one is playing with the numbers here, the other is using the mouse to drag the corners manually. For the numbers, which ones to adjust depends on the direction the image is skewed. If the image is skewed to the left, use the Upper Left and Bottom Right; if it’s skewed to the right, use the Upper Right and Bottom Left. They need to move in opposite directions and always outwards (otherwise you’ll get black edges in the frame).
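
If it helps to see what the effect is doing mathematically, a corner pin is just a four-point perspective warp. Here’s a minimal OpenCV sketch of the left-skew case above – the file name and the ±12 px offsets are made-up values you’d eyeball per shot:

```python
import cv2
import numpy as np

img = cv2.imread("frame.png")  # one exported frame (hypothetical name)
h, w = img.shape[:2]

# Where the four corners start (UL, UR, BL, BR)...
src = np.float32([[0, 0], [w, 0], [0, h], [w, h]])
# ...and where they get pinned: upper-left and bottom-right pushed
# outwards, in opposite directions, so no black edges creep in.
dst = np.float32([[-12, 0], [w, 0], [0, h], [w + 12, h]])

M = cv2.getPerspectiveTransform(src, dst)
fixed = cv2.warpPerspective(img, M, (w, h), flags=cv2.INTER_LINEAR)
cv2.imwrite("frame_fixed.png", fixed)
```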


The way I check if it’s right is by looking at perpendicular lines in the frame, like buildings against the horizon. If the lens is misaligned they won’t be truly perpendicular, and squaring them up again is what you’re aiming for. I strongly advise against using this to fix flares, though, since that’s likely to mess with every real horizontal line you have in the frame.

The other option, besides playing with the numbers on the tab, is pressing the little square with dots on its corners. This shows the corner pin targets on the footage window, and you can drag them at will until it looks right. Be careful not to change anything VERTICALLY, since that will make your footage look weird and even more misaligned. Double-check the numbers to make sure the second row hasn’t changed values.

If you prefer working in After Effects, it’s almost the same thing. Import your footage, drop it in a composition and drag the Corner Pin effect onto it. Here I prefer to just drag the control points instead of playing with the numbers, but that’s because I’m super comfortable working in After Effects. I also keep Shift pressed while I drag, to make sure the point only moves along one axis, not messing up the footage vertically. If you’re not so at home with AE, I strongly recommend keeping an eye on the values to be sure you’re not making any mistakes.


I’m gonna reinforce that this cheat only works for footage that is slightly off. If your shot is all messed up, fixing it will either wreck the resolution or never look right, no matter how much you distort it, so always DOUBLE CHECK ALIGNMENT!

I don’t think I need to mention you can do this in Nuke as well since, if you know how to handle Nuke, you don’t need a tutorial on Corner Pin!

That’s it for this week. Let me know how interesting you find these post-processing tricks and I’ll keep working on them! Subscribe now – you really should! – and check my blog for additional articles and videos. Ferradans out.