Since Hovering Lights is kind of an episodic story, with scenes taking place across several different dates and times, I wanted to make those clear to the audience while avoiding “one week later” lettering. Each clip’s length is limited to 15 seconds, so cards between them aren’t that bad. For these timestamp cards I wanted something sci-fi-looking, but not full-blown sci-fi, otherwise it wouldn’t match the amateur side of shooting with a phone. All the work was done in After Effects, without any third-party plugins.
Good references were the beginning and ending of Cloverfield, where we can see some classified-looking cards, since the tape was collected and archived for future research.
Luckily for me, in the week I had to create these I started watching Knights of Sidonia on Netflix, and its colorless palette caught my attention. Everything is kind of black or white – not many grey midtones – while not being too clean-white either. I can’t explain the look very well, so just check the trailer below. It’s futuristic, has to do with aliens and space, and at the same time tells a very personal story instead of just focusing on combat.
And the luck I mentioned is related to how they present the episode’s number and title.
If something works, mix it up and test it out. I liked Black Mirror’s blinking, and the crosses and general shape from Knights of Sidonia made a good layout; the result was this, where almost every single element was hand-keyframed since it’s so fast. In the final design I’ll have a dirty glass overlay on top of some areas and a background that isn’t 100% black or static. It looks good, but it’s not totally done yet, and I didn’t want to spoil it completely by posting it here (the GIF also turned out pretty heavy, which was an even stronger reason for the simplified version below).
For the other text elements (intro and credits), I still wanted to keep that feeling, so no other colors, while adding the whole redacted-state-secret idea. Cloverfield was one of the starting points, but Watch Dogs had some pretty neat motion graphics that relate to the same idea, in more of a techy and dynamic way (one of my other reel pitches was strongly based on Watch Dogs, since we mentioned it).
There are several tutorials and templates on how to achieve that effect, but I had a lot more text than those examples, and I didn’t want all the flickering or that much glitching on the text, since it’s gonna be on screen very quickly and people still have to read it.
The idea of white blocks over a black background – instead of the traditional black lines over white paper, from printed media and real redacted documents – comes from the thought that the audience might not want to be blinded by staring at a full-white screen with blinking stuff on it. Also, the movement that reveals and covers the names is like peeking briefly into that information, unlike Watch Dogs, where the information is revealed for good.
I still kept some of Black Mirror’s blinkiness to switch the words on, combined with expressions to randomize the intensity and which characters blink on each frame. The white lines are masks and mattes combined with the text. Doing these animated titles instead of simple cards was only possible because of my After Effects and motion graphics background from back in Brazil. Each section (intro, timestamps and ending credits) took me about a day to set up, and a couple more days to fine-tune, organize and adjust everything in a way that changes and fixes can be made quickly and in a civilized manner.
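The expression setup is hard to show outside of After Effects, but the logic is simple enough to sketch. Here’s a hypothetical Python version of the per-frame randomization – seeded per frame so it’s repeatable, like a seeded `random()` expression in AE. The 0.25 drop chance and 0.3 intensity floor are made-up values, not the ones in the project:

```python
import random

def blink_frame(text, frame, min_intensity=0.3):
    # Seed per frame so the flicker is random but repeatable,
    # the way a seeded random() expression behaves in After Effects.
    rng = random.Random(frame)
    # Overall intensity flickers but never drops below the floor.
    intensity = min_intensity + rng.random() * (1.0 - min_intensity)
    # Each character has a chance of dropping out on this frame.
    chars = [c if rng.random() > 0.25 else " " for c in text]
    return "".join(chars), intensity
```

Rendering a different frame number gives a different blink pattern, but re-rendering the same frame always gives the same one – which matters when you’re rendering passes separately.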
Starting off easy to see if this is gonna work or if it’s too confusing.
This was a very simple shot, designed just to introduce May’s character and the whole Instagram-point-of-view thing. Since there’s a crazy TV in the room, I thought I could enhance that feeling by adding some flickering to May’s face, using The Kick. We set it to a bright blue tone and Petar shook it at random intervals, aiming at the wall, so the light would bounce back on her face without harsh shadows, in a very soft way that could read as a flickering TV.
In post, the effect wasn’t as clear as I thought it would be – too subtle, almost invisible – so it was time to try and enhance it. Happily, while checking the separate RGB channels I noticed that the blue hue of the light was strongly represented in that channel, but not in the others. Isolating the blue channel, I still had an alpha that was too “diffuse”, including the side of her face that shouldn’t be so affected by the light.
Looking at the red channel, I saw it could be used to “focus” my current mask on the desired areas. So I subtracted the red channel values from my current blue one, blurred and increased contrast, of course, and got an alpha that looked like this, with an area of effect that behaves according to the light flickers on the wall. When there’s no bounce, there’s very little white in the alpha.
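The pixel math behind that matte is just a channel subtraction and a contrast boost. A minimal sketch in Python (the `gain` value is a stand-in for the blur-and-contrast step, not a number from the actual comp):

```python
def flicker_matte(blue, red, gain=2.0):
    """Subtract the red channel from the blue one and boost contrast.
    `blue` and `red` are rows of float pixel values in 0..1; the real
    comp also blurs the result before using it as an alpha."""
    def clip01(v):
        return max(0.0, min(1.0, v))
    return [[clip01((b - r) * gain) for b, r in zip(brow, rrow)]
            for brow, rrow in zip(blue, red)]
```

Pixels where blue dominates (the TV bounce) survive; pixels where red is as strong or stronger (skin under tungsten) go to zero, which is exactly the “focusing” described above.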
The next step was to create a grade that represented the glowing areas. TVs are 5600K or 6500K screens, while most indoor lighting is 3200K, so the glow had to be bluer than my current white balance. Also soft, because May doesn’t have her face right up against the screen, and not too intense, for the same reasons.
Then, the last trick was to input the alpha as a mask to this grade and add a random expression to its mix, setting the minimum value at 0.5, which guarantees that the glow won’t ever be entirely off but can flicker much more (and faster) than Petar’s movements, giving it more of a digital feeling from the cuts and lighting changes on the imaginary TV screen – yes, I have watched TV without looking at the screen just to see how its light should act. And it does look like this.
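The mix logic can be sketched like this (Python standing in for the expression; the only real number from the shot is the 0.5 floor):

```python
import random

def glow_mix(frame, floor=0.5):
    # New random value on every frame, but never below the floor,
    # so the glow flickers fast without ever switching fully off.
    rng = random.Random(frame)
    return floor + rng.random() * (1.0 - floor)
```

Because the value changes every single frame, the digital flicker is faster than any physical light waving could be, while the floor keeps the base glow alive between flickers.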
It’s much more interesting in movement, but this GIF will have to do for the breakdown. The full reel is coming out soon enough.
Well, it’s my last break, between Terms 5 and 6, so it’s better to get things moving around here. I thought I would have plenty of time to keep posting useful information here during the development of the reel, and it turned out that I was… immeasurably wrong.
During Term 4 I kind of got one of the shots working, but the first three weeks of Term 5 were a real nightmare, with the (I’m yet to discover a worse) feeling of not moving forward at all, in spite of working day in and day out. Half the issues came from tracking, but I’ll get to that properly when writing about each specific shot. This post is just to introduce and name them all: seven REAL VFX shots, along with some small tweaks (or weird blends that I wouldn’t call VFX, since we didn’t have classes to learn these) to shots that didn’t NEED them but benefited greatly from a couple of nodes down the pipe.
I was thinking about saving the story and all, but that would stand in the way of the explanations, so be warned: spoilers ahead!
001_000 – Intro titles and timestamp. I don’t find Nuke very friendly for motion graphics work, so I just went back to After Effects, in which I have a good deal of experience, to make these.
001_010 – Upshot of May talking about the TV while some light flickers on the wall and her face.
001_020 – The reverse shot, from her “point of view”. I had to replace the TV, remove the boom mic twice, replace the notebook screen and play a little with light colors and intensities. I also removed the TV and speaker brands and a small light on the speakers.
002_010 – May is working on her final assignment, and I’m bothering her at the break of dawn. This one had a lot of cleanup because I reshot it several times, since the tracking never worked. Then the objects fly around and there’s some more cleanup for the metal objects on the table, camera projections for the smaller cans, and texture painting, a little modeling, shading and lighting for the can close to the camera. I also replaced the Cheerios brand with Ceereals, so the school would advertise my reel.
003_010 – This was the first shot I worked on, and it felt like the hardest one. Tracking was hard, and then there were the ships and the foreground elements in front of them. I had to deal with 2D tracking, modeling and texturing, being smart about the render passes, some luma keying, integrating the ships, doing the “reveal from cloaking” effect, lens flares and volumetric light.
003_020 – My roto shot, I had May cut out of the plate in order to play better with the light behind her and added some dust/particle elements to the shot to make it more chaotic.
004_010 – Ruuuun! Lots of running, no VFX until the very last frames, where I added some flickering to the lights down the ramp in order to hook up with the following shot. Roto and expressions did the trick.
004_020 – The fastest and heaviest VFX shot. All skills are shown here. The alien silhouette walks around and jumps onto a car. I think that reshooting this one about three times helped in figuring out what was going on. 3D track, set extension, RGB lighting, painting, animation, camera projections, roto, 2D track, expressions for the flickering, integrating the shadow, darkening the ramp, changing the color of the spec highlights on the car, this was fun.
004_030 – Small bits and pieces of running around the parking lot. I wanted to make it more hectic, but it’s so fast that I thought it wouldn’t make much of a visual difference besides adding a ton of work for shots that aren’t even 12 frames long.
005_010 – The outside of the garage is in a totally different place from the garage itself, and there used to be a cut between these shots. I spent some time grabbing pieces from one and the other to achieve an almost seamless transition – a black line disappears mid-process, but you have to know it happens in order to see it. Then there’s some more running and hiding. Ships in the sky mean 3D tracking, rendering and compositing passes, heat distortion, lens flares and so forth. It feels like a 3D version of the balcony shot, but much easier to make work. Then I turn back to the garage and the alien is coming towards us.
005_020 – Another cut transition that was blended together combining pieces of two different shots. This one is still kind of a surprise thing because I haven’t put much thought into it yet. As soon as I have something working for May’s abduction, I’ll post it here. Meanwhile, we have all the other shots that we can talk about.
End Credits – will be analyzed with the opening titles and timestamps.
That’s it, shots have been introduced and will have their own posts.
During the last week, I spent most of my days working on one assignment instead of making real progress on my reel. Why the heck did I do that? Well, I have to go back to last Sunday, when I did the rigid bodies assignment linked in the previous post. During render time, while Maya wrote out all my frames and passes, an idea came to me. From the Set Extension class, I realized that this could be a very mindfucking assignment and I really wanted to play with portals. Have things crossing walls, or looking forward and seeing myself from up top – weird stuff like that. I couldn’t focus in class and didn’t take any of the plates we shot at the studio downstairs.
Then, the portal thing finally clicked, and I thought it would be super cool if I could go through it, using a phone app to choose my destination. I scribbled some notes on what had to be perfect, with ideas of how to shoot it and work it out, avoiding as many complications as I could from the beginning. For starters, I wanted a very wide-angle, first-person look, with the phone on the screen, so I could show the interaction between the app and the real world, several different destinations and, finally, the one I’d cross to.
I let the idea grow for one more day and got to work on Tuesday, because I really wanted to see if 1) I could get it done, and 2) I could get it done IN TIME (which meant before tonight). I finished yesterday, and you can see the result below. After the video, I’ll go really deep and try to explain how and what I did during the process, because the breakdowns aren’t nearly enough to explain the mindbending I went through.
First of all, I thought about having a greenscreen and markers on the phone and replacing that in post, but I already had too much work, so it was easier and faster to design and animate everything that would happen in the app in After Effects and just play it back while recording my plates. In the process I had to find out Android’s favorite video formats and screen resolution, how to output them from After Effects and, later on, add a guide soundtrack so I knew which animation was going to happen and when, in order to time my actions on the plate.
This took my Tuesday afternoon. When I got home I grabbed the camera and took a couple test shots of the corridor – where all the action takes place – because I wanted to go really wide angle, and my only available option was Canon’s 8-15mm fisheye lens. It has an amazing field of view, but the downside is that everything comes out fisheye-distorted, which means it’s impossible to camera project anything – which is key to my matchmoving and all the environments changing at the back.
With these distorted shots I tested Nuke’s LensDistortion node, which gave a kind-of-ok result, but it really messed up the corners of the frame. Also, it was impossible to bring it back to the original image – I couldn’t figure out why. It creates a weird circle with a black background and everything punched inside. Anyway. For my plates, this could work. For the camera projections, not so good. I then went to see if there were any good “defishing” techniques using Photoshop and got great results with its custom presets for the Lens Correction filter. It requires you to install some extra free packages with Adobe Air, but it’s very quick and simple. In there, sometimes the 8-15mm showed up on the list, sometimes not, so I picked the regular 15mm fisheye along with a full-frame sensor, and the results were very, very interesting. With these, I went into Maya and quickly built some cubes to see if it was camera “projectable”. It worked.
Green light for the shooting then.
I started with the pictures for all the environments. I set the tripod at the same height and level for all the pictures, lowered the ISO, closed the aperture and selected a very low shutter speed, so I could get as much depth detail and as little noise as possible. Not a hard achievement using the 8-15mm. Then I stuck a tape crosshair on my chest at the approximate tripod height, so things wouldn’t look too weird when combined. I chose environments that would be very simple to build (hallways, which are pretty much cubes) and with lots of depth to them. There are some examples below.
Before shooting the plates I also took some extra pictures to help me build those environments and fill the holes in the projections. Measurements were also very important to speed up the process and make sure everything would match completely. All the corridors have about the same width, so that would also be a good way of spotting any weirdness that could come up.
I shot my first plate a couple times to get the timing right, focus pulls and camera movement. I tried to avoid covering the door with the phone as much as possible – to avoid roto – but that wasn’t so successful. Then I went down to the Sub Basement and shot my second plate using the same technique. I was worried about how to link them together – no idea at all, at the time of shooting – and if they would track properly.
To help with the timing for the phone animation in the second shot, I had the animation play for ten seconds while I counted up. When it reaches ten, the glitches appear and the screen goes black. I needed to do all my transitioning and regretting in around seven seconds, then look down, tap the phone just to see it die, and turn to the elevator. It was more complex than it looks.
After shooting everything, I was still worried about this fisheye look, so I undistorted all my environments and tried to project them. I got three out of four done in less than an hour. The laundry room seemed to have a lot more detail, but it was already past 10pm and I needed some rest.
Day 2 – Wednesday – was a nightmare. Tracking worked well with the Nuke-undistorted plates, but whenever I tried to export the mesh created from the point cloud, it would crash, die, burn and so forth. Tracking took more than an hour and a half! In the meanwhile, I tried to figure out the logic behind crossing between plates and finally nailed down what should happen. I didn’t get it to work on this day, though.
I finished all my projections while the tracking went bonkers, and by the end of the day I decided to ditch Nuke’s undistortion and go with Photoshop’s for the plates as well. I kept the node because I wanted to bring the fisheye look back into the final result, and that was achievable, with some cropping.
With all my projections done in Maya, I brought them into Nuke, along with the re-tracked plate – didn’t even bother with the point cloud this time, just got the geometry through another (fifth, or sixth, by now) camera projection. Aligned it all to the grid and exported as alembic. In Nuke, I placed all my other environments behind the door and animated them according to the app animation. I used cards and a random concrete texture to cover the gaps between them. Defocus with keyframes solved the focus pulling issues and oFlow got me the proper motion blur.
For Day 3 – Thursday – I refined my script, checked the tracking a thousand times and did all the roto work. This day went fast. I ran out of songs to listen to while working, so I had to look for new stuff online. That doesn’t happen every day. So I rotoed (?!) the door out, and brought back the phone and my fingers when they went in front of the hole. The work on the door was awful, jumping like crazy, because there isn’t much stability and continuous motion when you’re walking handheld while doing everything else at the same time. Each roto had around 180 keyframes, for 250 frames of footage.
On Day 4 – Friday – my goal was connecting both plates. This was the cherry on top, because it wasn’t mandatory for the assignment, but I really wanted to do it, so I left it for the end.
What happens is: I have two 3D-tracked plates, which means two cameras. The first camera, inside my place, goes out the door, which means that, from a certain point on, there is no more reference to the initial environment. The cut has to happen after this point. From the second plate, I needed to pick a frame not too close to the beginning, so I could transition from the first camera into the second – using a third camera with a keyframed parent constraint, plus keyframes for focal length, because Nuke gave me slightly different numbers (12mm for one and 14mm for the other).
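The focal-length part of that handover is just a keyframed blend; sketched in Python (the 12mm and 14mm are the values Nuke’s solver gave me, but the linear ramp is an assumption – in Nuke it’s whatever curve the keyframes describe):

```python
def blend_focal(t, focal_a=12.0, focal_b=14.0):
    """Blend the third camera's focal length from the first solve's
    value to the second's as t goes from 0 to 1 across the handover."""
    t = max(0.0, min(1.0, t))  # clamp outside the transition window
    return focal_a + (focal_b - focal_a) * t
```

The parent-constraint blend for the camera transform works the same way, just interpolating position and rotation instead of a single number.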
This part took half my morning; then I had to decide whether to fix the door’s roto or to add a portal effect that would benefit from the jaggedness of the mask. The principle behind the portal is the same as heatwaves: noise that distorts what is around it, affecting both foreground and background, changing constantly and waving around. I based it off the door’s alpha, with good results. I had to do some keyframing to make it bigger near the end, fix colors and stuff. Then I noticed the portal was cutting into my hand as soon as the phone and finger masks ended. More roto. Yay! Luckily, it wasn’t that much, and there wasn’t much movement either.
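As a sketch of that heatwave principle (hypothetical Python; the real version is Nuke noise driven through a displacement, with the door’s alpha as the mask, and all the constants here are made up):

```python
import math

def portal_offset(x, y, frame, alpha, strength=4.0):
    """Where to sample from for one pixel: an animated pseudo-noise
    value, scaled by the portal alpha so the distortion dies out
    exactly where the matte does."""
    noise = math.sin(x * 0.31 + frame * 0.7) * math.cos(y * 0.27 - frame * 0.5)
    return x + noise * strength * alpha, y - noise * strength * alpha
```

Multiplying by the alpha is what makes the jagged roto edge an asset instead of a problem: the wobble fades to nothing wherever the mask does.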
As soon as the portal looked good, I went back to the connection challenge.
I needed to decide exactly on which frame I was going back to live footage. Once that was picked, I had to camera-project the frame just before it, using this final animated camera, and paint the gaps in the projection so it would fit between going out the door and turning into the other tracked camera. This was by far the most confusing part, in which I camera-projected three different images that didn’t work, because I couldn’t figure out how to do it.
When I finally got it to work, all that was left was a crazy amount of painting and perspective work to cover the huge stretch of corridor that was untextured. Even harder, the painting had to be done UNDER the original texture, so the image wouldn’t jump on the frame where it went back to footage. How do I know this? Of course, I painted it wrong the first time. The seamless paint took me way longer – hours longer – to the point of painting single pixels that looked wrong. This was my Friday night.
Day 5 – Saturday – was light. I had to clean up the markers outside the door – barely visible, and only for a couple of frames – and figure out how to do breakdowns for this thing that almost melted my brain. I ended up oversimplifying, because there was no way to explain all of this in a few seconds of video. Then: add sound effects, bring back the lens distortion and do a final grade.
This post had been sitting in the drafts folder since long before I shot, but it was only recently that I was able to go through the whole workflow, so now I can really explain how things work. For my previous project (Zona SSP), we shot RAW, thanks to MagicLantern, using Canon’s 50D and 5D3. The results were terrific, but I can’t say it was easy dealing with the huge amount of files and their versions. Also, processing power was a must during color correction, because each DNG file requires debayering before rendering, so the final render took me some days.
This time, I wanted the increase in dynamic range from shooting RAW, but lighter files and less back and forth between conversions and small adjustments. While looking for decent information on this process, I came across this workflow at hackermovies.com which is quite interesting since it gets rid of the DNGs while keeping all the data. Since it was written in 2013, the steps are a bit outdated, so here’s a 2015 version of it.
First off, shoot your plates using the latest version of MagicLantern (for the 5D3, the latest version was from August 27th, 2014). In ML’s menus, pick the MLV format, because it allows you to record audio along with the pictures. Now you’ll have huge MLV files on the card that you need to copy onto your hard drive.
I always get rid of their own folders and keep just the files in a single folder per card. It works better with the following steps.
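That reorganizing can be scripted; here’s a rough sketch in Python (the folder layout and the `.MLV` extension casing are assumptions about how your card is laid out – check against your own files before pointing it at real footage):

```python
import shutil
from pathlib import Path

def flatten_card(card_root, dest):
    """Move every MLV file out of the card's nested folders into a
    single flat folder, which is what the later steps expect."""
    dest = Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    for mlv in Path(card_root).rglob("*.MLV"):
        shutil.move(str(mlv), str(dest / mlv.name))
    return sorted(p.name for p in dest.iterdir())
```

Run it once per card, with a separate destination folder each time, and you end up with the one-folder-per-card structure described above.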
Now, there’s a lot of installing before we can take our files forward. You’ll need MLV Converter (v1.9.2 at this time) and the Converter requires a few things to be installed first.
1 – MLV RawViewer (1.3.3) – This is a quick playback tool that doesn’t require installing and reads raw files real quick. It only works with MLV files, as the name says, and has some nice controls for Exposure and White Balance. MLV RawViewer already offers conversion to DNG or proxy QuickTimes, plus some other tools. At first, I was using it as the only step of the process, but there are some problems:
a) the QuickTime files are HUGE. Very little compression, and they follow whatever you have on the screen for exposure and white balance.
b) the output DNG files are regular DNGs and not Cinema DNGs (more on this right below)
c) accessing the program’s functions is a bit confusing, and I kept going back to the download source to check which commands I needed.
2 – Adobe Cinema DNG Converter (8.7) – As said above, with this installed, MLV Converter will convert your MLV files into Cinema DNGs. These can be imported directly into DaVinci Resolve and Adobe Premiere, for a faster editing workflow if you don’t want any proxies in your way. For my workflow, the DNGs were not the final files, so they end up deleted anyway.
3 – IrfanView and its plugins – MLV Converter needs IrfanView to generate the thumbnails and help you decide which clips should be converted and which ones skipped. Well, I don’t care about thumbnails because I’m not shooting randomly. If I shot a clip, there’s a chance it will be useful, so I end up converting all of them into proxies right off the bat.
After you install everything, load MLV Converter and navigate to the folder with your MLV files. Then you set the DNG folder and the proxies folder. At this point, I didn’t need the DNGs, just the proxies, for editing. So, that’s what I rendered out.
Then, using these files, I went into Adobe Premiere and edited the crap out of all my takes, getting to what I think would be the “final cut”. From this I knew exactly which takes and clips I would need. I wrote them all down on a piece of paper and went back to MLV Converter. Now I didn’t want proxies, and I also didn’t want to convert every single clip. I just checked the ones I needed in my edit in order to get them into post, and this time as Cinema DNGs.
Extracting the image files from the MLV container can take a while, so let it think for a night (or class).
Ok, by now you’ve lost a huge amount of storage to all these files and conversions. At this point, I backed up the MLV files onto an external hard drive, just for safety, and deleted them from my main hard drive. The same goes for all the unused proxies. If you have the room to keep them, they might be useful in case you need to change something in your edit. The DNG folders are also massive, but we’re gonna get rid of them in a bit.
The next step is converting the DNGs into LOG files and encoding them with AVID’s DNxHD codec, because it stores 10-bit frames and has a very nice ratio between file size and image quality. This will all be done in Adobe After Effects, and it requires a couple of things too. Everything is still free, except for After Effects.
1 – Adobe’s CameraRaw (7.1) – Easy and quick, just install it. It allows you to open RAW files like CDNG, Canon’s CR2, Nikon’s NEF and many others. It’s automatically installed when you have Photoshop on your computer; you might just need to update.
2 – Vision Color’s VisionLOG Image Profile for CameraRaw – This is the key to deleting the DNGs. When importing the sequences into After Effects, CameraRaw will pop up. Go to the camera tab and, from the profiles dropdown menu, pick VisionLOG. It will squeeze almost (if not) all the information from the RAW file into a regular 8-bit image that can be expanded in post through grading and color correction. Tweak any settings you think you might want to improve before clicking OK. You’ll need to do this step for every DNG sequence you convert.
3 – AVID’s DNxHD Codec (2.3.7) – Just download and install it. Inside After Effects, create an Output Template with the settings you need (frame rate, bit depth and so forth). If you’re not sure what to do while setting up the template, go back to the hackermovies.com tutorial, where they explain every detail of this process.
For exporting, be sure to import your audio files for each MLV as well, and then you can follow their path for rendering out automatically using a script. I had just a couple of clips (less than 20 total), so I did it all by hand. The render from DNG to DNxHD also takes a lot of time, and you might have trouble if you didn’t shoot at the standard 1080p resolution (I shot at 1728x1290px) – check this at hackermovies too, if necessary.
At this point you’ll have a bunch of flat, ugly-looking clips that will make your director/producer worried if they see them. In Nuke I used the VectorField node, combined with the 3D LUT provided by Vision-Color, to bring everything back to a “standard” look, and I work with this for previewing instead of the LOG clips. If you don’t have any idea of what a LUT is or does, check this and be happier. I’m still having a little trouble at this step – because I’m not 100% sure of my input and output colorspaces – but I’m getting there.
If you’re not going into Nuke, I know Premiere has an effect for LUTs (Lumetri) and After Effects CC might have something internal as well. If it doesn’t, just get LUT Buddy for free at Red Giant.
Of course, you can use dozens of other LUTs instead of just bringing the LOG images back to REC.709, but that’s up to you. I’ll just say that Vision-Color has some amazing options with ImpulZ.
After you finish the conversion of the DNGs into DNxHD, you can delete them all and your hard drive will feel relieved. If you’re brave, you can also get rid of all those QuickTime proxies and replace them with the DNxHDs plus LUTs. Chances are the DNxHD files are around 40% lighter than the proxies, which also gives you some more free storage. I wish I had some pictures to make this post look better, but I don’t want to give away any more spoilers than I already have! If you try following these steps and get stuck, feel free to leave a comment or message me and I’ll be glad to help!
The second part of the previous post, and this is where things should start to get at least a little more interesting.
Working on my demo reel for Vancouver Film School, I decided to, besides all the VFX stuff and technical aspects of it, try some new things with cinematography, experimenting with a style that always gets my attention because it brings together a series of elements I believe work amazingly in terms of immersion and getting the audience into the story head-on. That is diegetic cinematography: making the camera a part of the characters’ world. It’s an object that they see, use and interact with, which is also used to tell the story. This has some immediate consequences that aren’t standard throughout film history: there is no fourth wall, the characters know they’re being filmed, and they interact with the camera – but only because they have a relationship with the one holding it.
We’ve seen it several times in sci-fi – Chronicle (2012), Cloverfield (2008), Project X (2012), Project Almanac (2014) – and across the whole “found footage” horror genre – The Blair Witch Project (1999), Paranormal Activity (2007), V/H/S (2012) or [REC] (2007), among many others. I won’t focus on the horror movies at this point. There are huge articles about the found footage genre, and I’m no expert, but I’d like to discuss what this kind of camera work brings to the story. First of all, the audience knows exactly as much as the characters. Hitchcock said the key to tension is giving the audience key information that the characters don’t know about – like whenever we know the killer is at the victim’s house way before the crime takes place on screen. We – as the audience – worry because we foresee what’s going to happen, and it’s the wait that causes the thrill.
When the camera is a character, if the audience knows something, so do the characters, and here the thrill comes from the suspicion that something bad might happen, or WILL happen, but we don’t know exactly what, when, or to whom. Whenever it hits, we’ll be as surprised as they are, thinking of ways out the same way they’re doing. For me, this is a boost in terms of immersion, and also a challenge. Since we’re so close to the characters, whenever they act in really stupid ways we’re thrown out of the movie; they’re not convincing anymore. Like in any horror movie, when people go “check their basement when the lights go off”, or think it’s “a good idea to face the bad guys breaking into their homes”. Regular people like you and me would never do these things. I don’t have a hero complex; if I think it might be dangerous, I’ll flee or hide!
While reading on this subject to see what other people think, I came across a very small number of articles, none of them really deep, and with very different opinions, so it’s time to make clear that I’m not arguing that the viewer is a character in the movie just because we see through a character’s camera. A movie is much different from a game; all the choices have been made from the start. There’s no interactivity, and I’m not saying we’re seeing through the eyes of a character. I think first-person videos are awesome, but I wouldn’t call them diegetic cinematography, because there is no camera. We act differently when we’re just talking to someone versus when we’re on tape.
Using the argument that “we remember things” as a comparison to recording is not valid: we haven’t gotten to the point where people have cameras embedded in their eyes yet. Proof that people act weirdly on camera, even when the camera is someone else’s eye, is all the hard times Dr. Steve Mann goes through. There’s also a great Black Mirror episode (The Entire History of You, 2011) about implants that turn our eyes into cameras, but that’s still sci-fi, and it takes place somewhere in the future.
Following this line of cinematography, Christopher Campbell wrote an article discussing why it’s weird, and bad by conventional standards, but interesting if done properly, since it’s different from anything we’ve seen so far. He specifically talks about Hardcore, currently in post-production, made by the same guys behind the Biting Elbows – Bad Motherfucker music video, which is shot entirely from the protagonist’s point of view (POV). Campbell compares literature, games and movies, establishing a clear difference between books written in first person and movies shot in first person.
So, my definition of diegetic cinematography requires a physical camera held by one of the characters. When that happens, the camera usually has a specific purpose inside the film itself: in Project Almanac they’re documenting their progress through an experiment; in Cloverfield, the guy is in charge of filming the farewell party for one of the main characters. Project X and Chronicle, though, take very different approaches that make sense in today’s culture. The title says it all: a chronicle is “a historical account of events arranged in order of time usually without analysis or interpretation”. There’s no manipulation of time; the editing just moves forward. We don’t see the same moment twice, and there are no flashbacks or flash-forwards. Phones can shoot video, and we have a plethora of social networks built around video, or that support video uploads (YouTube, Vimeo, Facebook, WhatsApp, Snapchat, Vine, Instagram and so forth). We take way more pictures in our everyday lives just by having a half-decent camera in our phones, and we don’t worry much about framing, image stabilization and such. These are just small chunks of memories, shot in chronological order, usually more important to ourselves than to others. Sure, we share them; our current culture revolves around showing where we’ve been and who we met, all very much time-stamped.
Nikon just released a whole campaign based around that, calling the current generation “Generation Image”. Not so long ago we had (and still have) very long discussions about what “qualifies” a photographer. Are amateur photographs taken with a phone camera as valid as those taken by someone who studied the craft for years and uses expensive gear with the single purpose of taking photographs? If we’re talking about interviews and scheduled events, sure, that’s debatable. But what about natural disasters, conflict areas and other situations where things just happened, and by the time the professional gets there, the event is already past? What tells the story of a gas explosion inside a shopping mall better: a high-megapixel photograph of some ruins, sharply focused, hours after the event, or one taken in the food court with a phone at the moment of the explosion, all blurry, but good enough to understand what’s happening? Nightcrawler (2014) is a great movie related to this subject, with lots of cameras on screen, but no main diegetic cinematography.
John Powers, writing about Chronicle, makes an interesting point when he says this shift between traditional media/cinematography and amateur recordings began back in 2001, with the attacks on the World Trade Center. While most media networks were rushing toward the area to shoot their own footage, thousands of people around the buildings were already doing it on their own, simply because they could. I’ll come back to John’s review later on.
It’s not hard to tell the difference between a professionally shot video and one made by someone whose sole purpose was recording the events. Actually, it’s quite easy to spot which is the pro and which is the amateur. Then Hollywood comes in and turns the “amateur look” into a style. What are the benefits?
First off, it has a much more “real” look, as if it weren’t a movie, carefully written, planned and executed. We relate to the characters because that’s how we’d film if we were in that situation. The handheld, shaky camera, also called the “documentary camera”, got its name because documentaries usually have small budgets and are focused on reality: real people, real lives, real intentions, no actors. When the first portable cameras came out, documentaries blossomed. After some time, what was considered a flaw – the shakiness and curiosity of documentary cinematography – was brought into fiction through mockumentaries, behind-the-scenes footage that seems as amazing as the scenes themselves, and much more.
Steve Bryant, in his article about the camera becoming a character, argues this is a negative thing. What should work as a bridge between audience and show actually pushes them further apart, because there are now two layers of fiction (the behind-the-scenes AND the show) instead of just one (the show), and we don’t notice it. We feel closer to the actors and think we know the people behind the characters, when what’s really happening is that the actors are acting as themselves as well. It’s confusing, but it makes a lot of sense in the end. How does this relate to diegetic cinematography? Well, the show wouldn’t count as diegetic cinematography, but the behind-the-scenes would, since many times the camera operators are just as real as their subjects.
Ok, so we get a reality and empathy bonus because that’s how the audience would film. I also believe this is much easier on the actors, because it’s much more natural. The hard part is making it feel right. Once you know how to handle a camera professionally, it’s easy to make a mess and call it amateur. The key is knowing how much messier it should look: which moments and reactions the audience has to see, and which ones work best just outside the frame. How close to danger are our characters willing to go? What’s their relationship with the person holding the camera? Do they care? Are they pissed about it? Are they filming just to keep a record of events, or to share? And then there’s the ever-present question, “what’s more important right now, the camera or whatever’s happening around it?”, which will mostly dictate framing and might influence editing as well.
There’s one issue, though, one downside that’s very hard to avoid: sequences of shitty, blurry, shaky images while our characters run. For many people, these are huge turn-offs; they feel sick, dizzy or worse. I myself have a high threshold for shakiness, but every once in a while I see something so confusing that it makes me question the whole process.
This is what I’ve been experimenting with recently to tell my story loaded with visual effects. I’m wondering which side will win: the reality of the cinematography, or the out-of-this-world aspect of the visual effects involved. I’m also gonna question the editing process, but this post is already too long and confusing to include that!
When I decided to go Instagram all the way, I hadn’t thought about technical specs for even a second. I created my account and started uploading random things – at least one picture a day – and today I thought I’d try a short video, stop-motion style. I edited the thing together in After Effects, rendered it and tried uploading. I got an error saying “the type of video you’re uploading is not compatible with Instagram”. Ok, nice. How do I get a compatible one, then? Why isn’t this thing like YouTube, where you can upload whatever you like and the conversion fixes all the issues? That would be too easy, I guess.
It wasn’t hard to find the proper specs for Instagram, though. Weirdly enough, the only way I got it to work was rendering through Premiere; any file that came straight out of After Effects was “incompatible”. So, for the record, these are the output specs for Instagram: