
Hovering Lights

pOrtal! – a technical making of.

February 16, 2015

Last week I spent most of my days working on one assignment instead of making real progress on my reel. Why the heck did I do that? Well, I have to go back to last Sunday, when I was doing the rigid bodies assignment linked in the previous post. During render time, while Maya wrote out all my frames and passes, an idea came to me. From the Set Extension class, I realized this could be a very mindfucking assignment and I really wanted to play with portals: have things crossing walls, or looking forward and seeing myself from up top, weird stuff like that. I couldn't focus in class and didn't take any of the plates we shot at the studio downstairs.

Then the portal thing finally clicked, and I thought it would be super cool if I could go through one, using a phone app to choose my destination. I scribbled some notes on what had to be perfect, ideas for how to shoot it and work it out, avoiding as many complications as I could from the beginning. For starters, I wanted a very wide angle, first person look, with the phone in frame, so I could show the interaction between the app and the real world, several different destinations, and finally the one I'd cross into.

I let the idea grow for one more day and got to work on Tuesday, because I was really inspired to see if 1) I could get it done, and 2) I could get it done IN TIME (which meant before tonight). I finished yesterday, and you can see the result below. After the video, I'll go really crazy and try to explain how and what I did during the process, because the breakdowns aren't nearly enough to explain the mindbending I went through.

First of all, I thought about having a greenscreen and markers on the phone and replacing it in post, but I already had too much work on my plate, so it was easier and faster to design and animate everything that happens in the app in After Effects and just play it back while recording my plates. In this process I had to find out Android's favorite video formats and screen resolution, how to output that from After Effects and, later on, add a guide soundtrack so I knew which animation was going to happen and when, in order to time my actions on the plate.

This took my Tuesday afternoon. When I got home I grabbed the camera and took a couple of test shots of the corridor – where all the action takes place – because I wanted to go really wide, and my only available option was Canon's 8-15mm fisheye lens. It has an amazing field of view, but the downside is that everything comes out fisheye-distorted, which means it's impossible to camera project anything – and that is key to my matchmoving and to all the environments changing in the back.

With these distorted shots I tested Nuke's LensDistortion node, which gave a kind-of-OK result, but it really messed up the corners of the frame. It was also impossible to bring the image back to its original distorted state – I couldn't figure out why. It creates a weird circle on a black background with everything punched inside. Anyway. For my plates, this could work; for the camera projections, not so much. I then went to see if there were any good "defishing" techniques using Photoshop and got great results with the custom presets for the Lens Correction filter. It requires installing some extra free packages with Adobe Air, but it's very quick and simple. In there, sometimes the 8-15mm showed up on the list, sometimes not, so I picked the regular 15mm fisheye along with a full frame sensor, and the results were very, very interesting. With these, I went into Maya and quickly built some cubes to see if it was camera "projectable". It worked.
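Just to make the idea concrete: Photoshop's Lens Correction presets are a black box, but the underlying defishing math is simple. Below is a minimal Python sketch – not what I actually used, I stayed inside Photoshop – assuming an equidistant fisheye model and a made-up focal length in pixels.

import cv2
import numpy as np

def defish(img, f_pixels):
    # Remap an equidistant fisheye image (r = f * theta) to a rectilinear one
    # (r = f * tan(theta)). f_pixels is the focal length in pixels -- a guess here.
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    dx, dy = xs - cx, ys - cy
    r_rect = np.sqrt(dx * dx + dy * dy)
    theta = np.arctan2(r_rect, f_pixels)   # angle from the optical axis
    r_fish = f_pixels * theta              # where that angle lands on the fisheye
    scale = np.divide(r_fish, r_rect, out=np.ones_like(r_rect), where=r_rect > 0)
    map_x = (cx + dx * scale).astype(np.float32)
    map_y = (cy + dy * scale).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# plate = cv2.imread("corridor_fisheye.jpg")   # file name is hypothetical
# cv2.imwrite("corridor_rectilinear.jpg", defish(plate, f_pixels=800.0))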

Green light for the shooting then.

I started with the pictures for all the environments. I set the tripod at the same height and level for all the pictures, lowered the ISO, closed down the aperture and picked a very slow shutter speed, so I could get as much depth detail and as little noise as possible. Not a hard achievement with the 8-15mm. Then I stuck a tape crosshair on my chest at the approximate tripod height, so things wouldn't look too weird when combined. I chose environments that would be very simple to build (hallways, which are pretty much cubes) and with lots of depth to them. There are some examples below.




Before shooting the plates I also took some extra pictures to help me build those environments and fill the holes in the projections. Measurements were also very important to speed up the process and make sure everything would match completely. All the corridors have about the same width, so that would also be a good way of spotting any weirdness that might come up.

I shot my first plate a couple of times to get the timing, focus pulls and camera movement right. I tried to avoid covering the door with the phone as much as possible – to avoid roto – but that wasn't very successful. Then I went down to the Sub Basement and shot my second plate using the same technique. I was worried about how to link them together – no idea at all at the time of shooting – and whether they would track properly.

To help me with the timing for the phone animation in the second shot, I had the animation play for ten seconds while I counted up. When it reaches ten, the glitches appear and the screen goes black. I needed to do all my transition and regret in around seven seconds, then look down, tap the phone just to see it die and turn to the elevator. It was more complex than it looks.

After shooting everything, I was still worried about this fisheye look, so I undistorted all my environments and tried to project them. I got three out of four done in less than an hour. The laundry room had a lot more detail to it, but it was already past 10pm and I needed some rest.

Day 2 – Wednesday – was a nightmare. Tracking worked well with the Nuke-undistorted plates, but whenever I tried to export the mesh created from the point cloud, it would crash, die, burn and so forth. Tracking took more than an hour and a half! In the meantime I tried to figure out the logic behind crossing between plates and finally nailed what should happen. I didn't get it to work that day, though.

I finished all my projections while the tracking went bonkers, and by the end of the day I decided to ditch Nuke's undistortion and go with Photoshop's for the plates as well. I kept the node, though, because I wanted to bring the fisheye look back into the final result, and that was achievable with some cropping.

With all my projections done in Maya, I brought them into Nuke, along with the re-tracked plate – I didn't even bother with the point cloud this time, and just got the geometry through another (fifth or sixth, by now) camera projection. I aligned it all to the grid and exported it as alembic. In Nuke, I placed all my other environments behind the door and animated them according to the app animation. I used cards and a random concrete texture to cover the gaps between them. A keyframed Defocus solved the focus pulling issues and oFlow got me proper motion blur.
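For anyone curious about the structure, here's a rough Nuke Python sketch of that projection setup: tracked camera plus alembic geometry plus a projected still, rendered through a ScanlineRender. I built the real comp by hand, the paths are placeholders, and node class/knob names are from memory, so they may differ between Nuke versions.

import nuke

# The tracked shot camera and the geometry exported from Maya as alembic
shot_cam = nuke.nodes.Camera2(read_from_file=True, file='shot/corridor_cam.abc')
geo = nuke.nodes.ReadGeo2(file='shot/corridor_geo.abc')

# A separate, locked-off camera sitting where the environment still was taken
proj_cam = nuke.nodes.Camera2()

# Project the undistorted still through that camera...
still = nuke.nodes.Read(file='stills/subbasement_defished.jpg')
proj = nuke.nodes.Project3D()
proj.setInput(0, still)
proj.setInput(1, proj_cam)

# ...apply it as the geometry's material...
mat = nuke.nodes.ApplyMaterial()
mat.setInput(0, geo)
mat.setInput(1, proj)

# ...and render everything through the moving shot camera
render = nuke.nodes.ScanlineRender()
render.setInput(1, mat)       # obj/scn input
render.setInput(2, shot_cam)  # cam input

The keyframed Defocus and the oFlow retime then just sit downstream of the ScanlineRender output.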

On Day 3 – Thursday – I refined my script, checked the tracking a thousand times and did all the roto work. This day went fast. I ran out of songs to listen to while working, so I had to look for new stuff online. Doesn't happen every day. So I rotoed (?!) the door out, and brought back the phone and my fingers whenever they went in front of the hole. The work on the door was awful, jumping like crazy, because there isn't much stability or continuous motion when you're walking handheld while doing everything else at the same time. Each roto had around 180 keyframes, for 250 frames of footage.

On Day 4 – Friday – my goal was connecting both plates. This was the cherry on top: it wasn't mandatory for the assignment, but I really wanted to do it, so I left it for the end.

What happens is: I have two 3D tracked plates, which means two cameras. The first camera, inside my place, goes out the door, which means that, from a certain point, there is no more reference to the initial environment. The cut has to happen after this point. From the second plate, I needed to pick a frame not too close to the beginning, so I could transition from the first camera into the second – using a third camera with a keyframed parent constraint, plus keyframes for focal length, because Nuke gave me slightly different numbers for each solve (12mm for one and 14mm for the other).
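To give an idea of the mechanics, here's a tiny Nuke Python sketch of just the focal length part of that hand-off. The frame numbers are made up (only the 12mm and 14mm values come from the actual solves), and the translation/rotation blend – the "parent constraint" bit – was keyframed by hand, so it's not shown.

import nuke

blend_cam = nuke.nodes.Camera2(name='Camera_transition')

focal = blend_cam['focal']
focal.setAnimated()
focal.setValueAt(12.0, 1045)  # while we still match the first camera's solve
focal.setValueAt(14.0, 1060)  # by here we're fully on the second solve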

This part took half my morning; then I had to decide whether to fix the door's roto or to add a portal effect that would benefit from the jaggedness of the mask. The principle behind the portal is the same as heat waves: noise that distorts whatever is around it, affecting both foreground and background, constantly changing and waving around. I drove it with the door's alpha, with good results. I had to do some keyframing to make it bigger near the end, fix colors and stuff. Then I noticed the portal was cutting into my hand as soon as the phone and finger masks ended. More roto. Yay! Luckily, it wasn't that much, and not much movement either.
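For the curious, here's a toy Python/numpy version of that idea. It isn't the Nuke tree I built, just the same structure in code: smooth animated noise, masked by the door's alpha, pushing pixels around the way an IDistort would. All names and numbers are made up.

import cv2
import numpy as np

def portal_distort(frame, door_alpha, strength=12.0, seed=0):
    # Warp 'frame' by smooth noise, but only where 'door_alpha' (0..1) is solid.
    # Change 'seed' per frame to animate the waves.
    h, w = frame.shape[:2]
    rng = np.random.default_rng(seed)

    def smooth_noise():
        # Low-res random values, scaled up and blurred = soft heat-wave noise
        coarse = rng.standard_normal((h // 16, w // 16)).astype(np.float32)
        up = cv2.resize(coarse, (w, h), interpolation=cv2.INTER_CUBIC)
        return cv2.GaussianBlur(up, (0, 0), 5)

    du = smooth_noise() * strength * door_alpha   # horizontal offsets
    dv = smooth_noise() * strength * door_alpha   # vertical offsets

    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x = (xs + du).astype(np.float32)
    map_y = (ys + dv).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)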

As soon as the portal looked good, I went back to the connection challenge.

I needed to decide exactly on which frame I was going back to live footage. Once that was picked, I had to camera project the frame right before it, using this final animated camera, and paint the gaps in the projection so it would fit between going out the door and turning into the other tracked camera. This was by far the most confusing part, and I camera projected three different images that didn't work before I figured out how to do it.

When I finally got it to work, all that was left was a crazy amount of paint and perspective work to cover the huge stretch of corridor that was untextured. Even harder, the painting had to be done UNDER the original texture, so the image wouldn't jump on the frame where it went back to footage. How do I know this? Of course, I painted it wrong the first time. The seamless paint took me way longer – hours longer, to the point of fixing single pixels that looked wrong. That was my Friday night.

Day 5 – Saturday – was light: I had to clean up the markers outside the door – barely visible, and only for a couple of frames – and figure out how to do breakdowns for this thing that almost melted my brain. I ended up oversimplifying, because there was no way to explain all of this in a few seconds of video. Then it was just adding sound effects, bringing back the lens distortion and doing a final grade.

Day-to-Day

Set Extension Rush.

February 11, 2015

After spending more than 8 hours on Sunday tweaking and adjusting my rigid bodies assignment (read: Newton applied in 3D, with gravity, impact, mass, all that fun stuff), I literally didn't notice the time go by while I set up the camera and render passes and struggled to process the motion blur. Then it was another four hours of rendering and half an hour of comp to get to the result below. I liked it quite a bit, and I love watching the little bricks flying around madly as the camera passes through the middle of the chaos.

After that, I found out there's a Set Extension assignment due next week, which I thought I had two weeks to do. We had a class to shoot things in the studio, but I was so full of ideas I couldn't settle on anything. I wanted something with portals, mindfuck, that kind of thing. After several days with the thought shelved, thinking about how to blend camera projections and video imperceptibly, transitioning from one to the other with planned cuts, the final idea also came this Sunday. On Monday I scribbled down a few technical questions, but since I was without a camera, I couldn't even get started.

The plan was to have a phone app that interacts with the world, opening portals. Two challenges right off the bat: first, there's a whole graphic interface and set of animations for the app. I decided to do that during the afternoon at VFS, hit by a sudden nostalgia for motion graphics in After Effects. In the process I also learned a lot about the phone's operating system, its fonts, resolution, supported video formats and everything else, until I got the thing working. It ended up a bit small, but cute.

Second challenge: it had to be a seriously wide-angle lens. The 24-105mm was way too tight for what I wanted, so I decided to run more tests using the 8-15mm fisheye we used for the HDRIs. The big problem with the fisheye is that it has a huge amount of distortion, so you can't do camera projections with the images unless they're rectilinear. I spent a few hours yesterday testing with a photo of the hallway here at home, and managed to remove the distortion and get things to line up properly in Maya. Next step: go down to all my teleport locations and take pictures, both for the camera projections and for the images inside the app.

I carried a tape measure, a notebook and a pen with me, because it's essential that everything is at real-world scale; otherwise, when it's time to put everything together, you get a phenomenal mess that takes forever to fix. It's faster and easier to do it right from the start, even if it means spending a few minutes measuring things. I went back home, corrected the distortion on all the photos and, just to be safe, jumped straight into Maya to camera project everything. If it hadn't worked, I would have gone back and taken new pictures. The last camera projection I did took me about three days; yesterday I did three in an hour, all of them working quite nicely. There's some texture painting to do, but the locations were chosen to make my life easier – corridors and cubic/square places. It's coming along.




Early today I shot some test takes to nail the timing of the animations inside the app, and tested wearing gloves to touch the phone – the "problem" of the glove blocking the screen's touch sensitivity was actually a big advantage here, since it lets me tap and swipe at a thousand things on the screen without menus popping up over my video. Well, with the timings sorted, I started shooting. I think I have what I need, and I'm now converting the material from RAW into something more editable and trackable. There are still a lot of steps that need to go right.

I hope to finish over the weekend, because the deadline is next Tuesday, and then I'll show up here with the results. I got really excited about this one because I managed to fit a little story into it.

Hovering Lights

ML RAW Workflow.

February 3, 2015

This post had been sitting in the drafts folder since long before I shot, but it was only recently, after going through the whole workflow, that I could really explain how things work. For my previous project (Zona SSP), we shot RAW, thanks to MagicLantern, using Canon's 50D and 5D3. The results were terrific, but I can't say it was easy dealing with the huge amount of files and their versions. Also, processing power was a must during color correction, because each DNG file requires debayering before rendering, so the final render took me a few days.

This time, I wanted the increase in dynamic range from shooting RAW, but with lighter files and less back and forth between conversions and small adjustments. While looking for decent information on this process, I came across this workflow at hackermovies.com, which is quite interesting since it gets rid of the DNGs while keeping all the data. Since it was written in 2013, the steps are a bit outdated, so here's a 2015 version of it.

First off, shoot your plates using the latest version of MagicLantern (for the 5D3, the latest version was August 27th, 2014). In ML's menus, pick the MLV format because it allows you to record audio along with the frames. Now you'll have huge MLV files on the card, which you need to copy to your hard drive.

I always get rid of their own folders and keep just the files in a single folder per card. It works better with the following steps.

Now, there's a lot of installing to do before we can take our files forward. You'll need MLV Converter (v1.9.2 at this time), and the Converter requires a few things to be installed first.

1 – MLV RawViewer (1.3.3) – This is a playback tool that doesn't require installing and reads raw files really quickly. It only works with MLV files, as the name says, and has some nice controls for exposure and white balance. MLV RawViewer already offers conversion to DNG or proxy QuickTimes, plus some other tools. At first, I was using it as the only step of the process, but it has some problems:

a) the QuickTime files are HUGE. Very little compression, and they follow whatever you have on the screen for exposure and white balance.
b) the output DNG files are regular DNGs and not Cinema DNGs (more on this right below).
c) accessing the program's functions is a bit confusing, and I kept going back to the download page to check which commands I needed.

2 – Adobe Cinema DNG Converter (8.7) – As mentioned above, with this installed, MLV Converter will convert your MLV files into Cinema DNGs. These can be imported directly into DaVinci Resolve and Adobe Premiere, for a faster editing workflow if you don't want any proxies in your way. In my workflow the DNGs were not the final files, so they ended up deleted anyway.

3 – IrfanView and its plugins – MLV Converter needs IrfanView to generate the thumbnails and help you decide which clips should be converted and which ones should be skipped. Well, I don't care about thumbnails because I'm not shooting randomly. If I shot a clip, there's a chance it will be useful, so I end up converting all of them into proxies right off the bat.

After you install everything, load MLV Converter and navigate to the folder with your MLV files. Then you set the DNG folder and the proxies folder. At this point I didn't need the DNGs, just the proxies for editing, so that's what I rendered out.

Then, using these files, I went into Adobe Premiere and edited the crap out of all my takes, getting to what I think will be the "final cut". From this I knew exactly which takes and clips I would need. I wrote them all down on a piece of paper and went back to MLV Converter. This time I didn't want proxies, and I also didn't want to convert every single clip. I just checked the ones I needed in my edit in order to get them into post, and this time as Cinema DNGs.

Extracting the image files from the MLV container can take a while, so let it think for a night (or class).

OK, so now you've lost a huge amount of storage to all these files and conversions. At this point, I backed up the MLV files to an external hard drive, just for safety, and deleted them from my main hard drive. The same goes for all the unused proxies. If you have the space to keep them, it might be useful in case you need to change something in your edit. The DNG folders are also massive, but we're gonna get rid of them in a bit.
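I do this shuffling by hand, but if you'd rather script it, a minimal Python sketch could look like the one below. The paths are placeholders, and it only deletes an MLV after the copy's size checks out.

import os
import shutil

SRC = 'D:/shoot/card_01'          # main drive (placeholder path)
DST = 'E:/backup/shoot/card_01'   # external drive (placeholder path)

os.makedirs(DST, exist_ok=True)
for name in os.listdir(SRC):
    if not name.lower().endswith('.mlv'):
        continue
    src_path = os.path.join(SRC, name)
    dst_path = os.path.join(DST, name)
    shutil.copy2(src_path, dst_path)
    if os.path.getsize(src_path) == os.path.getsize(dst_path):
        os.remove(src_path)       # only delete once the copy checks out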

The next step is converting the DNGs into LOG footage and encoding it with Avid's DNxHD codec, because it stores 10-bit frames and has a very nice ratio between file size and image quality. This will all be done in Adobe After Effects, and it requires a couple of things too. Everything is still free, except for After Effects.

1 – Adobe's CameraRaw (7.1) – Easy and quick, just install it. It allows you to open RAW files like the CDNG, Canon's CR2, Nikon's NEF and many others. It's automatically installed when you have Photoshop on your computer; you might just need to update.

2 – Vision Color's VisionLOG Image Profile for CameraRaw – This is the key to deleting the DNGs. When importing the sequences into After Effects, CameraRaw will pop up. Go to the camera tab and, from the profiles dropdown menu, pick VisionLOG. It will squeeze almost (if not) all the information from the RAW file into a regular 8-bit image that can be expanded in post through grading and color correction. Tweak any other settings you think need improving before clicking OK. You'll need to do this step for every DNG sequence you convert.

3 – Avid's DNxHD Codec (2.3.7) – Just download and install. Inside After Effects, create an Output Template with the settings you need (frame rate, bit depth and so forth). If you're not sure what to do while setting up the template, go back to the hackermovies.com tutorial, where they explain every detail of this process.

For exporting, be sure to import your audio files for each MLV as well, and then you can follow their path for rendering everything out automatically using a script. I had just a couple of clips (fewer than 20 total), so I did it all by hand. The render from DNG to DNxHD also takes a lot of time, and you might have trouble if you didn't shoot at standard 1080p resolution (I shot at 1728x1290px); check hackermovies for this too if necessary.

At this point you'll have a bunch of flat, ugly-looking clips that will make your director/producer worried if they see them. In Nuke I used the Vectorfield node, combined with the 3D LUT provided by Vision-Color, to bring everything back to a "standard" look and work with that for previewing instead of the LOG clips. If you have no idea what a LUT is or does, check this and be happier. I'm still having a little trouble at this step – because I'm not 100% sure of my input and output colorspaces – but I'm getting there.
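In case it helps, here's roughly what that looks like in Nuke Python. The Vectorfield ("Apply LUT") knob name is from memory and the file paths are placeholders, so double-check them in your own setup.

import nuke

log_clip = nuke.nodes.Read(file='footage/shot_010_visionlog.mov')  # placeholder path

view_lut = nuke.nodes.Vectorfield()
view_lut['vfield_file'].setValue('luts/VisionLOG_to_Rec709.cube')  # placeholder LUT name
view_lut.setInput(0, log_clip)
# The colorspace in/out settings are the part I'm still unsure about (see above).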

If you’re not going into Nuke, I know Premiere has an effect for LUTs (Lumetri) and After Effects CC might have something internal as well. If it doesn’t, just get LUT Buddy for free at Red Giant.

Of course, you can use dozens of other LUTs instead of just bringing the LOG images back to REC.709, but that’s up to you. I’ll just say that Vision-Color has some amazing options with ImpulZ.

After you finish converting the DNGs into DNxHD, you can delete them all and your hard drive will feel relieved. If you're brave, you can also get rid of all those QuickTime proxies and replace them with the DNxHDs plus LUTs. Chances are the DNxHD files are around 40% lighter than the proxies, which gives you some more free storage. I wish I had some pictures to make this post look better, but I don't want to give away any more spoilers than I already have! If you try following these steps and get stuck, feel free to leave a comment or message me, and I'll be glad to help!

Hovering Lights Specials

Diegetic Cinematography.

February 1, 2015

This is the second part of the previous post, and it's where things should start to get at least a little more interesting.

Working on my demo reel for Vancouver Film School, I decided that, besides all the VFX and technical aspects of it, I'd try something new with cinematography, experimenting with a style that always gets my attention because it brings together a series of elements I believe work amazingly well for immersion and for getting the audience into the story head on. That is diegetic cinematography: making the camera a part of the characters' world. It's an object that they see, use and interact with, and which is also used to tell the story. This has some immediate consequences that aren't standard throughout film history: there is no fourth wall, the characters know they're being filmed, and they interact with the camera, but only because they have a relationship with the person holding it.

We've seen it several times before in sci-fi – Chronicle (2012), Cloverfield (2008), Project X (2012), Project Almanac (2014) – and in the whole genre of "Found Footage" horror – The Blair Witch Project (1999), Paranormal Activity (2007), V/H/S (2012) or [REC] (2007), among many others. I won't focus on the horror movies at this point. There are huge articles about the Found Footage genre, and I'm no expert, but I'd like to discuss what this kind of camera work brings to the story. First of all, the audience knows exactly as much as the characters. Hitchcock said the key to tension is giving the audience key information that the characters don't know about – like whenever we know the killer is at the victim's house way before the crime takes place on screen. We – the audience – worry because we foresee what's going to happen, and it's the wait that causes the thrill.

When the camera is a character, if the audience knows something, so do the characters, and here the thrill comes from the suspicion that something bad might happen or WILL happen, but we don't know exactly what, when, or to whom. Whenever it hits, we're as surprised as they are, thinking of ways out the same way they are. For me, this is a boost in immersion, and also a challenge. Since we're so close to the characters, whenever they act in really stupid ways we're thrown out of the movie; they're not convincing anymore. Like in any horror movie, when people go "check the basement when the lights go off", or think it's "a good idea to face the bad guys breaking into their home". Regular people like you and me would never do these things. I don't have a hero complex; if I think it might be dangerous, I'll flee or hide!

While reading on this subject to see what other people think, I came across a very small number of articles, none of them really deep, and with very different opinions, so it's time to make clear that I'm not arguing that the viewer is a character in the movie just because we see through a character's camera. A movie is very different from a game; all the choices have been made from the start. There's no interactivity, and I'm not saying we're seeing through the eyes of a character. I think first person videos are awesome, but I wouldn't call that diegetic cinematography, because there is no camera. We act differently when we're just talking to someone versus when we're on tape.

Using the argument that "we remember things" as a comparison to recording is not valid. We haven't gotten to the point where people have cameras embedded in their eyes yet. The proof that people act weirdly on camera, even when the camera is someone else's eye, is all the hard times Dr. Steve Mann goes through. There's also a great Black Mirror episode (The Entire History of You, 2011) about implants that turn our eyes into cameras, but that's still sci-fi, and it takes place somewhere in the future.

Following this line of cinematography, Christopher Campbell wrote an article discussing why it is weird, bad in the conventional sense, but interesting if done properly, since it's different from anything we've seen so far. He specifically talks about Hardcore, currently in post, made by the same guys who did the Biting Elbows – Bad Motherfucker music video, which is shot entirely from the protagonist's point of view (POV). Campbell makes a comparison between literature, games and movies, establishing a clear difference between books written in first person and movies in first person.

So, my definition of diegetic cinematography requires a physical camera being held by one of the characters. When that happens, the camera usually has a specific purpose inside the film itself. In Project Almanac they're documenting their progress through an experiment; in Cloverfield, the guy is in charge of filming the farewell party for one of the main characters. Project X and Chronicle, though, have very different approaches that make sense in today's culture. The title says it all: Chronicle, "a historical account of events arranged in order of time usually without analysis or interpretation". There's no manipulation of time; the editing just moves forward. We don't see the same moment twice, and we don't have flashbacks or flash-forwards. Phones can shoot video, and we have a plethora of social networks based around video, or that support video uploads (YouTube, Vimeo, Facebook, WhatsApp, Snapchat, Vine, Instagram and so forth). We take way more pictures in our everyday lives just by having a half-decent camera on our phones. We don't worry that much about framing, image stabilization and such for our videos. These are just small chunks of memories, shot in chronological order. They're usually more important to ourselves than to others. Sure, we share them; our current culture revolves around showing where we've been and who we met, all very much time stamped.

Nikon just released a whole campaign based around that, calling the current generation "Generation Image". Not so long ago we had (and still have some) very long discussions regarding what "qualifies" a photographer. Are amateur photographs taken with a phone camera as valid as the ones taken by someone who studied the craft for years and uses expensive gear with the single purpose of taking photographs? If we're talking about interviews and scheduled events, sure, that is debatable, but what about natural disasters, conflict areas and other situations where stuff just happened, and by the time the professional gets there, the event is already over? What tells the story of a gas explosion inside a shopping mall better: a high-megapixel photograph of some ruins in sharp focus hours after the event, or one taken in the food court with a phone, at the time of the explosion, all blurry, but good enough to understand what's happening? Nightcrawler (2014) is a great movie somewhat related to this subject, with lots of cameras on screen, but no diegetic cinematography as its main device.

John Powers, writing about Chronicle, makes an interesting point when he says this shift between traditional media/cinematography and amateur recordings began back in 2001, with the attacks on the World Trade Center. While most media networks were rushing towards the area to shoot their own footage, thousands of people around the buildings were already doing it on their own, simply because they could. I'll come back to John's review later on.

It's not hard to tell the difference between a professionally shot video and one done by someone whose sole purpose was recording the events. Actually, it's quite easy to spot which is the pro and which is the amateur. Then Hollywood comes in and turns the "amateur look" into a style. What are the benefits?

First off, it has a much more "real" look, as if it weren't a movie, carefully written, planned and executed. We relate to the characters because that's the way we'd film if we were in that situation. The handheld, shaky camera, also called "documentary camera", has that name because documentaries usually have small budgets and are focused on reality. Real people, real lives, real intentions, no actors. When the first portable cameras came out, documentaries blossomed. After some time, what was considered a flaw – the shakiness and curiosity of documentary cinematography – was brought into fiction with mockumentaries, behind-the-scenes footage that seems as amazing as the real scenes, and much more.

Steve Bryant, in his article about the camera becoming a character, says this is a negative thing. What should work as a bridge between audience and show actually sets them further apart, because there are now two layers of fiction (the behind the scenes AND the show) instead of just one (the show), and we don't notice that. We feel closer to the actors, we think we know the people behind the characters, when what's really happening is that the actors are playing themselves as well. It's confusing, but it makes a lot of sense in the end. How does this relate to diegetic cinematography? Well, the show wouldn't count as diegetic cinematography, but the behind the scenes would, since many times the camera operators are just as real as their subjects.

OK, so we get a reality and empathy bonus because that's how the audience would film. I also believe this is much easier for the actors, because it's much more natural. The hard part is making it feel right. Once you know how to handle a camera professionally, it's easy to make a mess and call it amateur. The key is knowing how much messier it should look, which moments and reactions the audience has to see, and which ones work best just outside the frame. How close to danger are our characters willing to go? What's their relationship with the person handling the camera? Do they care, are they pissed about it? Are they filming just to keep a record of events, or to share? And the ever-present question, "what's more important right now, the camera or whatever's happening around it?" – this will mostly dictate framing and might influence the editing as well.

There's one issue, though. One downside that's very hard to avoid: sequences of shitty, blurry, shaky images while our characters run. For many people, these are huge turn-offs. They feel sick, dizzy or worse. I myself have a high threshold for shakiness, but every once in a while I see something so confusing that it makes me question the process.

This is what I've been experimenting with recently to tell my story loaded with visual effects. I'm wondering which side will win: the reality of the cinematography, or the out-of-this-world aspect of the visual effects involved. I'm also going to question the editing process, but this post is already too long and confusing to include that!