It’s been a little while since I decided to make video reviews of the anamorphic lenses I have. Ever since then, I’ve thought a lot about the format and the “rules” to follow so the videos can be compared to each other and I don’t end up doing a lot of subjective work (since these are lenses that can be held against each other, and some have clear advantages over others). I also didn’t want to make boring, endless charts and stuff like that, because it’s hard to stay focused when nothing interesting is happening on the screen. I say that because I’m terrible at watching lens tests whenever they’re too boring. I want to convey the feeling of what can be achieved with each lens, but I also gotta set myself some limits: the last time I thought of making a “test video” I ended up doing a full-on webseries pilot followed by a 100-page essay, so I gotta tone down the creativity a little.
This post describes what I’ve got so far, and I’d love to hear what you think might work, what might not, and any other interesting things to include in the videos.
First of all, everything will be shot on a Canon 5D3. That’s the camera I have and I don’t plan on buying another anytime soon, which means full frame. The good thing is, from there you can easily convert to smaller sensor sizes and figure out what is or isn’t covered on different cameras. If anyone wants to give me another camera for the tests, free of charge, I have no problem with that! Hahahah!
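Converting a full-frame framing to smaller sensors boils down to crop-factor arithmetic. Here’s a minimal sketch of it (my own illustration, not from the original posts); the sensor widths are the commonly quoted nominal values and can differ slightly per camera model:

```python
# Nominal sensor widths in mm (assumed typical values, may vary by model).
SENSOR_WIDTH_MM = {
    "full_frame": 36.0,   # Canon 5D3
    "aps_c_canon": 22.3,
    "super35": 24.89,
    "mft": 17.3,
}

def crop_factor(sensor):
    """Crop factor relative to full frame (36mm-wide sensor)."""
    return SENSOR_WIDTH_MM["full_frame"] / SENSOR_WIDTH_MM[sensor]

def equivalent_focal(focal_mm, sensor):
    """Full-frame-equivalent focal length of `focal_mm` used on `sensor`."""
    return focal_mm * crop_factor(sensor)

# The 58mm Helios on Canon APS-C frames roughly like a 94mm on full frame:
print(round(equivalent_focal(58, "aps_c_canon"), 1))  # → 93.6
```

The same ratio tells you how much of the full-frame anamorphic image circle a smaller sensor actually sees, which is why shooting coverage tests on full frame lets everyone else extrapolate downward.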
As for taking lenses, again, I’m not going far: I’m using what I have, which is also a standard prime set. Mir 1B, Helios 44, Jupiter 9 and Tair 11 (37mm f/2.8, 58mm f/2, 85mm f/2 and 135mm f/2.8, respectively). They’re all Russian glass and are known to work well with anamorphics. I wish I had a more “modern” set, such as Contax/Zeiss, to compare vintage and modern looks, but I’m not spending any money on this.
Whenever I’m filming the lenses themselves to show build quality, how they work or anything like that, I’ll use Canon’s native H.264 codec, since there’s no need to spend extra bytes on that. For the actual technical testing (charts and such), I’m shooting RAW with Magic Lantern at 1080p, with no post processing other than the VisionLog camera profile, so the footage is as flat as can be. We want to see the maximum amount of detail the footage can hold, so no grading in this part, no contrast, no nothing. Plain log. Videos will be uploaded to both Youtube and Vimeo, so users can download them and check the finer detail without streaming compression.
The technical aspects to be analyzed are build quality, sensor coverage (both full sensor and a standard 2.4:1 Cinemascope crop), current price and availability, and sharpness (at f/2, f/2.8, f/4 and f/8, comparing corners and center). Sharpness tests will be done with charts in the first part of the video. I’m also gonna test them with the diopters I have here: Minolta +0.4, Iscorama +0.5, Fujinon +1.25 and Canon +2, all achromatic doublets that should improve the lenses’ performance. I’ll always comment on the focusing method for each lens, since there are several different ways of doing it and people are always confused about it. Flares will also be tested, using a regular smartphone flashlight.
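For anyone wondering what those diopter numbers mean in practice: under the usual thin-lens assumption, a +D close-up diopter in front of a lens focused at infinity puts focus at 1/D meters, and the powers of stacked diopters simply add. A quick sketch of that arithmetic (mine, not from the post):

```python
# Thin-lens sketch: diopter power D focuses an infinity-set lens at 1/D meters.
# Stacked diopter powers add (ignoring the spacing between elements).
def focus_distance_m(*diopter_powers):
    """Focus distance in meters with the taking lens set to infinity
    and the given close-up diopters stacked in front."""
    total = sum(diopter_powers)
    if total <= 0:
        return float("inf")
    return 1.0 / total

print(focus_distance_m(0.5))        # Iscorama +0.5 → 2.0 m
print(focus_distance_m(1.25, 2.0))  # Fujinon +1.25 stacked with Canon +2 → ~0.31 m
```

That’s why the weak +0.4 and +0.5 diopters matter so much for anamorphics: they pull the often-terrible minimum focus distance into usable territory without heavy optical penalties.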
The output footage will be unsqueezed by REDUCING THE HEIGHT instead of increasing the width, which holds more detail and is the most common process to properly unsqueeze footage. This will lead to black bars above and below the frame. On these areas I’ll put all the technical information I can about each shot (f-stop, ISO, shutter speed, taking lens, anamorphot, diopter, white balance) so anyone can quickly see what changed from shot to shot.
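The unsqueeze arithmetic above is straightforward; here is a small sketch of both options (my own illustration; the 2x squeeze factor is an assumption, since different anamorphots squeeze by different amounts):

```python
# Unsqueezing anamorphic footage: either reduce height or stretch width.
# Reducing height keeps the recorded horizontal resolution untouched,
# while stretching width has to interpolate new pixels.
def unsqueeze_by_height(width, height, squeeze=2.0):
    """Output (width, height) when unsqueezing by reducing height."""
    return width, round(height / squeeze)

def unsqueeze_by_width(width, height, squeeze=2.0):
    """Output (width, height) when unsqueezing by stretching width."""
    return round(width * squeeze), height

print(unsqueeze_by_height(1920, 1080))  # (1920, 540) — a 3.56:1 frame from a 2x lens
print(unsqueeze_by_width(1920, 1080))   # (3840, 1080)
```

With the height-reduction route, a 1920x540 image sitting inside a 1920x1080 frame is exactly what creates the black bars where the shot metadata will go.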
After all this technical stuff, which shouldn’t run too long, there’s a “real world test” consisting of 10 specific handheld shots, 5 well lit and 5 in low light: a close-up, a medium shot, infinity focus, a rack focus and one extra that I haven’t decided on yet. These will also be shot in RAW, like the charts, but will be presented graded. I’d love to tell short stories with them, but I’m not sure it’ll be possible.
After all this craziness, I plan on sharing a couple of DNG frames from both the charts and the real-world tests, so anyone can push them to any limits they like or stare at each individual pixel. Hell, I don’t know, not my problem!
So, is it kind of clear or confusing? Am I missing something? Is there anything else you’d like to see?
I’m also thinking of shooting a short video explaining all of this, instead of having it just written here.
Yesterday I was scribbling down some theories on how to finish an assignment, planning my course of action for when the computer finished rendering some frames, when a peculiar thought occurred to me: “if I had to explain this assignment to my parents, what would I say?”. That’s when I realized it’s hard (nearly impossible) to talk about the things I’m studying here with anyone who hasn’t been through the same process or doesn’t work in the same field. Screw the rest of the world, but it bothered me a bit that my parents don’t have much of a clue about what I do, and the main reason is: everything I do here is meant to look as real as possible, when in fact it isn’t. The problem is that when the result works, there’s no way to grasp the path taken to get there. There are breakdowns, where we show some parts of the process, but that’s a more technical thing, and less of a “look what I did!!”. Quite different from a more “traditional” profession, let’s say, where the things you make are visible and tangible.
I hope this post doesn’t ramble on too much, because I’m going to try to pin down exactly what keeps nagging at me in this whole story.
When I went into film school, there was no comparison. You start with nothing and end up with a film. You made that film, probably not in every role, but you can describe it. Working as a camera operator, everyone knows what a camera is, lenses, rails, hanging off things, improvising with light, because all of that is part of everyday life, not just the set. On set we just “reuse” things from life. Even during my undergrad thesis (TCC), my mother was part of the review process, because if she could follow the text, it was accessible to anyone interested, even with little or no prior knowledge. That was the goal of the thesis: to spread knowledge that took me a hell of a long time to gather, in a simple way, opening it up to more people who didn’t have the time or the resources I had during the research. On several occasions I said, and I keep repeating: if it can go on the internet, for free, that’s my preferred path. Why? Because that way you don’t limit access, you expand it. It’s free, anyone can have it; it’s on the internet, anyone can reach it. It doesn’t get much freer than that.
Now back to VFS, drawing a parallel with film school. In film, I was behind the cameras. In VFX, I’m behind the people who are behind the cameras. There are more layers of “fiction disguised as reality” before you get to my work. It’s easy to know that the people in a film are actors and not the characters themselves; it’s easy to know the light is manipulated by someone, as are the sound and all the environments that appear on screen. It’s much harder to know whether, in a given shot, the garden outside the house – out of focus and with no obvious narrative purpose – is real or was placed there digitally. It’s hard to know when a tattoo has vanished, a microphone was painted out, an ENTIRE PERSON was erased from the shot, whether objects X and Y are real or digital. And one of the problems in this whole story is: if the audience can tell, the work should have been done more carefully.
Ok, spaceships in the sky, some explosions, futuristic buildings and that sort of thing are easier to call out with an “Ah! Digital!”, because we know those don’t exist nowadays, but it’s all so grounded in reality that even the wildest absurdities get stitched into the real world, and nobody sits there examining the glass-texture-of-that-skyscraper-that-doesn’t-exist-in-the-real-world-but-exists-in-the-film, because that’s not the point anyway. Even worse, some of them might be real, built for the film, in the real world.
Argh, I feel like I’m losing the point of this discussion.
I’m sure every job has its more technical and complicated parts, the ones you can only really discuss with people in your own profession, but it also has simple elements you can explain to anyone in five or ten minutes. What I’m looking for now is that element in my work. Objectively, the question is: “How do I build a sentence about what I do that is understandable but not superficial, without technical jargon or excessive glamour?”. Because I could just turn around and say “ah, I make illusions”, but, for god’s sake, right? Let’s not push it – besides, that’s a topic for another post.
Well, I’ll wrap this one up here, and if anyone comes up with the sentence, please post it in the comments. Meanwhile, I’ll keep thinking it over, and keep pushing the demo reel forward.
- I think the fact that I’m learning everything in another language adds to the difficulty of this puzzle.
After the lazy break post, the pace still hasn’t changed around here, even though I have the distinct feeling I should be more worried about how things are moving along. I’m doing everything calmly, maybe too calmly, taking small steps, giving more weight to what I feel like doing than to what I’m obliged to do. Even so, I’m up to date with school stuff, even ahead on some assignments, and the demo reel is moving along with some stability.
We’re getting closer and closer to spring – daylight saving time started early this morning – and it’s been over fifteen days since we last saw a cloudy day: just sunshine, all the time. The days also start earlier and end later, which is wonderful because the Sun livens up life as a whole. It’s still cold in general, hellishly cold (lows of 1°C, highs of 12°C), but it should improve over the coming weeks.
I bought a lens – one I owned back when I lived in São Paulo – gorgeous and wonderful (50mm f/1.2), for two thirds of its original price, and it’s in perfect shape. May and I are planning a photo shoot, inspired by the new Punch Brothers album, Phosphorescent Blues, which we’ve been listening to like crazy – even though the songs have been here for almost two months, we only started listening this week. There’s a really cool interview where a critic talks about what’s behind the lyrics and the overall feeling of the production, which is what sparked the idea for the photos: the whole record, despite its bluegrass leanings, has modern lyrics and talks about/criticizes, in a very elegant way, our dependence on phones, computers, social media and all those things that keep us hooked on screens. The photos will use very little light, which makes the lens even more valuable in this game.
Besides all that, I’ve been playing a lot of Dead State – from the previous post – but I think I’ve reached a turning point – in life, not in the game – because yesterday was the first day in… fifteen? days that I spent without playing anything, just doing reel stuff and assignments, and having a good deal of fun in the process. We’re also watching a bunch of TV shows, and movies are back on the menu.
During the break – or right before it, I can’t remember – we went to see What We Do In The Shadows, a mockumentary about vampires in the present day. Totally worth it: it’s not a long film, and we laughed a lot because the jokes are really smart. Best part: there’s already a torrent out there.
Well, this post is looking a lot like a diary, so I’ll stop here, just because. I think I’ll try to write one more, on a less mundane topic.
For a good two months now I’ve been wanting to switch seasons around here, with no ideas for the title or the banners. I ended up going with a more minimalist style compared to the last ones, but I think it matches my more recent style.
10th Season: Hovering Lights
I spent days (weeks) without the slightest inspiration to write anything here, no topics and no courage, and today, when I have a gazillion things to do, four almost-finished posts pop into my head. The rhythm is a little weird because I’ve been writing more in English than in Portuguese, but let’s try. Well, this one was just to register the season change, so the job is done. I already miss May in the banners!
On Term 4′s last week of classes, during the break and up until now, I’ve been playing Dead State a lot. A. LOT. Before I start talking about the game itself, a quick reminder from past experiences: I’m kind of traumatized by Fallout 2. It was a turn-based game – an awesome one – that I never managed to finish, even though I tried several times over many years. I got very close to the end once, and the computer simply fried, burning my save files along with the motherboard and hard drive.
The main point of mentioning Fallout 2 is that I felt many similarities between the two games, and this might have fueled my addiction to Dead State.
First off, I bought it when it was still in Early Access on Steam, with only 7 playable days and limited locations. It’s a zombie game different from the standard go, kill, keep moving, first person shooter, gore, save the world. It’s more like an elaborate RPG, where you play as a survivor from a plane crash in the middle of the zombie apocalypse. From there, you are brought into a school and the best chance for survival is fortifying the place and going out to gather supplies. The school can be upgraded in various ways – garage, workshop, chicken coop, generator, fences, and so forth. You also have a limit of four people in your raiding party, which means that most of the time someone is gonna be left at the school. From the job board you can assign everyone’s tasks and see their progress as the hours and days go by.
Dead State has many variables, and this is one of the game’s strongest aspects. The group’s morale depends on how well you’re doing gathering food for everyone, medicine for the infected, fuel for the generator, and keeping the place working (broken fridges, toilets, damage to the fence). As you explore the school’s surroundings, you end up finding other survivors like yourself. From this point you can take different approaches: try to gather as many people as you can find to improve the shelter and offer them safety and food – which makes it harder to keep the food and fuel supplies up – or keep a small number of loyal people who will never question your decisions. And, boy, there are lots of decisions to be made. Every once in a while a survivor comes up to ask for a day off because they’re sick, tired, not feeling well or something like that, and you have to decide whether you can go one day without their work or whether you need them on point that day. There are also crisis events where more complex situations arise and you must make decisions that won’t please everyone in the shelter – once I had to decide between cleaning our water supply and fortifying the fence. I didn’t have enough spare parts to clean the well, which ended up poisoning more than half the crew, rendering them useless for a couple of days. In these crisis meetings there are key characters with their own ally bases, like in politics, so keeping a majority on your side is always a good thing.
There are also conflicts between different survivors because their interests are totally different and it’s up to you to decide what to do to solve their problems.
Combat mechanics are a little weird at first, because it’s a turn-based game, so you do your thing and wait/watch your enemies react/counter. In the beginning, each battle takes hours to play through. After some days you’ll get better gear and improve the group’s stats, which speeds up the undead killing. Then you start to meet other looters, gangs, mercs and soldiers, just to make your life harder again. I’m still on day 40 of the infection and you simply can’t take a day off. If you don’t go out scavenging, food might run short, the generator runs out of fuel, that kind of thing. Once I had to keep exploring through the night – much more dangerous due to the increased number of undead, and also harder to see and strike – because I didn’t have enough antibiotics for everyone back home. After midnight, the game considers that your party isn’t going to get enough rest, so everyone takes a fatigue penalty during the following day.
Every once in a while I think the game is becoming repetitive, and then it surprises me with hardcore enemies, or allies asking for very specific items in hard-to-reach areas – like a tattoo gun, medicine books, guitar strings and that kind of thing. If you ignore an ally’s request for too long, they get pissed and respect you less. They have their own “wanted” items that improve their mood, which also affects the shelter’s overall morale, so whenever you come across specific items in the field – cigars, deodorant, chocolates, coffee, rechargeable batteries and such – it’s better to grab them than food itself, just because they’re harder to find.
More on exploring: at first everything is on foot, you can only walk to places, and this takes a long time. After a while, if you rescue the right ally, she tells you there’s a horse farm nearby, and you can raid the place to get some horses for the group. Of course, the horses require feeding every day. Some time later, you’ll find a mechanic who can fix the car in the school’s yard, which moves even faster, besides providing a sizeable trunk to carry more loot back home. It uses fuel, so it’s always good to take that into account before going out with it – and if you run out of fuel in the field, you’ll have to send someone out there to rescue the car, which also takes time. Each character’s weight capacity is determined by their strength attribute, which also affects their melee damage.
I started writing with a very clear idea in mind and I totally lost it. Overall, if you played Fallout 1 or 2 and liked it, give Dead State a chance. It still has some minor bugs and glitches but, overall, it’s an innovative take on a zombie genre that has been getting pretty exhausted lately.
We had our break between Terms this weekend. Lots of plans: bike rides, walks, going to the movies, and so on. In the end, we decided to stay home the whole time, going out only to buy food. Spending that time with no obligations or things to do, just with May, was wonderful. We watched a bunch of stuff together, baked a cake, played video games, all sorts of domestic activities. Laziness won, and for four days I didn’t even think about the demo reel.
This post is just to say I’m still alive, and I have some things to write, but the laziness persists. See you soon!
During the last week, I spent most of my days working on one assignment instead of making real progress on my reel. Why the heck did I do that? Well, I have to go back to last Sunday, while I was working on the rigid bodies assignment linked in the previous post. During render time, while Maya wrote out all my frames and passes, an idea came to me. From the Set Extension class, I realized this could be a very mindfucking assignment and I really wanted to play with portals. Have things crossing walls, or looking forward and seeing myself from up top, weird stuff like that. I couldn’t focus in class and didn’t take any of the plates we shot at the studio downstairs.
Then, the portal thing finally clicked, and I thought it would be super cool if I could go through it, using a phone app to choose my destination. I scribbled some notes on what had to be perfect, ideas of how to shoot it and work it out, avoiding as many complications as I could from the beginning. For starters, I wanted a very wide-angle, first-person look, with the phone in frame, so I could show the interaction between the app and the real world, several different destinations, and finally the one I’d cross into.
I let the idea grow for one more day and got to work on Tuesday, because I was really inspired to see if 1) I could get it done, 2) I could get it done IN TIME (which meant before tonight). I finished yesterday, and you can see the result below. After the video, I’ll go really crazy and try to explain how/what I did during the process, because these breakdowns aren’t nearly enough to explain the mindbending I went through.
First of all, I thought about having a greenscreen and markers on the phone and replacing that in post, but I already had too much work on my plate, so it was easier and faster to design and animate everything that would happen in the app in After Effects and just play it back while recording my plates. In this process I had to find out Android’s favorite video formats, the screen resolution, how to output it from After Effects and, later on, add a guide soundtrack so I knew which animation was going to happen and when, in order to time my actions on the plate.
This took my Tuesday afternoon. When I got home I grabbed the camera and took a couple test shots of the corridor – where all the action takes place – because I wanted to go really wide angle, and my only available option was Canon’s 8-15mm fisheye lens. It has an amazing field of view, but the downside is that everything comes out fisheye-distorted, which means it’s impossible to camera project anything – which is key to my matchmoving and all the environments changing at the back.
With these distorted shots I tested Nuke’s LensDistortion node, which gave a kind-of-ok result, but really messed up the corners of the frame. Also, it was impossible to bring it back to the original image – I couldn’t figure out why. It creates a weird circle with a black background and everything punched inside. Anyway. For my plates, this could work. For the camera projections, not so good. I then went looking for good “defishing” techniques using Photoshop and got great results with its custom presets for the Lens Correction filter. It requires you to install some extra free packages through Adobe Air, but it’s very quick and simple. In there, sometimes the 8-15mm showed up on the list, sometimes not, so I picked the regular 15mm fisheye along with a full frame sensor, and the results were very, very interesting. With these, I went into Maya and quickly built some cubes to see if it was camera “projectable”. It worked.
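For the curious, the core idea behind “defishing” can be sketched in a few lines. This is my own illustration, not what Nuke or Photoshop actually do internally: it assumes an ideal equidistant fisheye (r = f·θ) with the optical center at the image center, and real lenses like the Canon 8-15mm deviate from that model.

```python
import numpy as np

# Remap an equidistant fisheye image to rectilinear: for each output
# (rectilinear) pixel, find the source coordinates in the fisheye image.
def defish_coords(w, h, focal_px):
    """Return (src_x, src_y) arrays of fisheye sample coordinates
    for every pixel of a w x h rectilinear output."""
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    dx, dy = xs - cx, ys - cy
    r_rect = np.hypot(dx, dy)                  # radius in the rectilinear image
    theta = np.arctan2(r_rect, focal_px)       # angle from the optical axis
    r_fish = focal_px * theta                  # equidistant model: r = f * theta
    r_safe = np.where(r_rect > 0, r_rect, 1.0)
    scale = np.where(r_rect > 0, r_fish / r_safe, 1.0)
    return cx + dx * scale, cy + dy * scale

# The center maps to itself; corner samples pull inward toward the center,
# which is why defished frames lose the extreme corners.
src_x, src_y = defish_coords(640, 480, focal_px=300.0)
print(round(src_x[240, 320], 1), round(src_y[240, 320], 1))  # → 320.0 240.0
```

The corner blow-up this implies (tan(θ) racing off to infinity) is exactly why any defish trades the fisheye’s huge field of view for messy, stretched corners.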
Green light for the shooting then.
I started with the pictures for all the environments. I set the tripod at the same height and level for all of them, lowered the ISO, closed the aperture and selected a very low shutter speed so I could get as much depth of field and as little noise as possible. Not a hard achievement using the 8-15mm. Then I stuck a tape crosshair on my chest at the approximate tripod height, so things wouldn’t look too weird when combined. I chose environments that would be very simple to build (hallways, which are pretty much cubes) and with lots of depth to them. There are some examples below.
Before shooting the plates I also took some extra pictures to help me build those environments and fill the holes in the projections. Measurements were also very important to speed up the process and make sure everything would match completely. All the corridors have about the same width, so that would also be a good way of spotting any weirdness that might come up.
I shot my first plate a couple times to get the timing right, focus pulls and camera movement. I tried to avoid covering the door with the phone as much as possible – to avoid roto – but that wasn’t so successful. Then I went down to the Sub Basement and shot my second plate using the same technique. I was worried about how to link them together – no idea at all, at the time of shooting – and if they would track properly.
To help me with the timing for the phone animation in the second shot, I have the animation playing for ten seconds and myself counting up. When it reaches ten, the glitches appear and the screen goes black. I needed to do all my transition and regret in around seven seconds, then look down, click the phone just to see it die and turn to the elevator. It was more complex than it looked.
After shooting everything, I was still worried about this fisheye look, so I undistorted all my environments and tried to project them. I got three out of four done in less than an hour. The laundry room seemed to have a lot more detail, but it was already past 10pm, so I got some rest.
Day 2 – Wednesday – was a nightmare. Tracking worked well with the Nuke-undistorted plates, but whenever I tried to export the mesh created from the point cloud, it would crash, die, burn and so forth. Tracking took more than an hour and a half! In the meantime, I tried to figure out the logic behind crossing between plates and finally nailed what should happen. I didn’t get it to work on this day, though.
I finished all my projections while the tracking went bonkers, and by the end of the day I decided to ditch Nuke’s undistortion and go with Photoshop’s for the plates as well. I kept the node because I wanted to bring the fisheye look back into the final result, and that was achievable, with some cropping.
With all my projections done in Maya, I brought them into Nuke, along with the re-tracked plate – I didn’t even bother with the point cloud this time, just got the geometry through another (fifth, or sixth, by now) camera projection. I aligned it all to the grid and exported it as Alembic. In Nuke, I placed all my other environments behind the door and animated them according to the app animation. I used cards and a random concrete texture to cover the gaps between them. Defocus with keyframes solved the focus-pulling issues, and oFlow got me the proper motion blur.
For Day 3 – Thursday – I refined my script, checked the tracking a thousand times and did all the roto work. This day went fast. I ran out of songs to listen to while working, so I had to look for new stuff online. Doesn’t happen every day. So I rotoed (?!) the door out, and brought back the phone and my fingers whenever they went in front of the hole. The work on the door was awful, jumping like crazy, because there isn’t much stability or continuous motion when you’re walking handheld while doing everything else at the same time. Each roto had around 180 keyframes, for 250 frames of footage.
On Day 4 – Friday – my goal was connecting both plates. This was the cherry on top: it wasn’t mandatory for the assignment, but I really wanted to do it, so I left it for the end.
What happens is: I have two 3D-tracked plates, which means two cameras. The first camera, inside my place, goes out the door, which means that, from a certain point, there is no more reference to the initial environment. The cut has to happen after this point. From the second plate, I needed to pick a frame not too close to the beginning, so I could transition from the first camera into the second – using a third camera with a keyframed parent constraint, plus keyframes for the focal length, because Nuke gave me slightly different numbers (12mm for one and 14mm for the other).
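The third-camera trick above is essentially a keyframed blend between the two solved cameras. A tiny sketch of the idea (my own illustration; the frame numbers and positions are made up, only the 12mm/14mm focal lengths come from the post):

```python
# Blend a transition camera between two tracked cameras over a frame range,
# interpolating both position and focal length (12mm -> 14mm here).
def lerp(a, b, t):
    return a + (b - a) * t

def blend_camera(cam_a, cam_b, frame, start, end):
    """Keyframed blend from cam_a to cam_b between frames `start` and `end`.
    Cameras are dicts with 'pos' (x, y, z) and 'focal' (mm)."""
    t = min(1.0, max(0.0, (frame - start) / float(end - start)))
    pos = tuple(lerp(pa, pb, t) for pa, pb in zip(cam_a["pos"], cam_b["pos"]))
    return {"pos": pos, "focal": lerp(cam_a["focal"], cam_b["focal"], t)}

cam_a = {"pos": (0.0, 1.6, 0.0), "focal": 12.0}   # hypothetical solve values
cam_b = {"pos": (0.0, 1.6, -4.0), "focal": 14.0}
print(blend_camera(cam_a, cam_b, frame=110, start=100, end=120)["focal"])  # → 13.0
```

In Nuke this is what the parent-constrained camera with animated weights does for you; the focal-length keyframes are needed precisely because the two solves disagree on the lens.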
This part took half my morning. Then I had to decide whether to fix the door’s roto or to add a portal effect that would benefit from the jaggedness of the mask. The principle behind the portal is the same as heat waves: noise that distorts what is around it, affecting both foreground and background, changing constantly and waving around. I based it on the door’s alpha, with good results. I had to do some keyframing to make it bigger near the end, fix colors and stuff. Then I noticed the portal was cutting into my hand as soon as the phone and finger masks ended. More roto. Yay! Luckily, it wasn’t that much, and not much movement either.
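The heat-wave principle is simple enough to sketch outside of Nuke. This is just the idea in numpy, not the actual comp: pixels get pushed around by animated noise, masked by the portal alpha so the effect dies off away from the edge (a real setup would use smooth noise like Perlin instead of this per-pixel random stand-in):

```python
import numpy as np

# Heat-wave style distortion: displace pixels by noise scaled by an alpha mask.
def heatwave(image, alpha, frame, amplitude=4.0, seed=0):
    """Displace `image` (h, w) by per-pixel noise scaled by `alpha`
    (h, w, values 0..1). `frame` reseeds the noise so it wobbles over time."""
    h, w = image.shape
    rng = np.random.default_rng(seed + frame)   # new noise pattern each frame
    offset = (rng.random((h, w, 2)) - 0.5) * 2.0 * amplitude * alpha[..., None]
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs + offset[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + offset[..., 1]).astype(int), 0, h - 1)
    return image[sy, sx]

# Where alpha is 0 the image is untouched:
img = np.arange(16.0).reshape(4, 4)
out = heatwave(img, np.zeros((4, 4)), frame=1)
print(np.array_equal(out, img))  # → True
```

Because the displacement samples both sides of the mask edge, the effect chews into foreground and background alike, which is exactly why the ragged door roto stopped mattering.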
As soon as the portal looked good, I went back to the connection challenge.
I needed to decide exactly on which frame I was going back to live footage. Once that was picked, I had to camera project the frame just before it, using this final animated camera, and paint the gaps in the projection so it would fit between going out the door and turning into the other tracked camera. This was by far the most confusing part, in which I camera projected three different images that didn’t work, because I couldn’t figure out how to do it.
When I finally got it to work, all that was left was a crazy amount of painting and perspective work to cover the huge stretch of corridor that was untextured. Even harder, the painting had to be done UNDER the original texture, so the image wouldn’t jump on the frame where it went back to footage. How do I know this? Of course, I painted it wrong the first time. The seamless paint took me way longer – like, hours longer, to the point of painting single pixels that looked wrong. This was my Friday night.
Day 5 – Saturday – was light. I had to clean up the markers outside the door – barely visible, and only for a couple of frames – and figure out how to do breakdowns for this thing that almost melted my brain. I ended up oversimplifying, because there was no way to explain all of this in a few seconds of video. Then: add sound effects, bring back the lens distortion and do a final grade.
Depois de passar mais de 8h no Domingo ajustando e modificando meu assignment de rigid bodies (leia-se: newton aplicado em 3D, com gravidade, impacto, massa, essas coisas legais todas), eu literalmente não vi o tempo passar enquando acertava câmera, render passes e morria pra processar o motion blur. Depois foram mais quatro horinhas de render e meia hora de comp pra poder chegar nesse resultado aí embaixo. Gostei um bocado, e adoro ficar vendo os tijolinhos voando loucamente quando a câmera passa no meio do caos.
Depois disso, descobri que tem um assignment de Set Extension pra semana que vem, que eu achei que tinha duas semanas pra fazer. A gente teve uma aula pra filmar coisas no estúdio, mas eu tava cheio de idéias e não consegui pensar em nada. Queria algo com portais, mindfuck, essas coisas. Depois de vários dias com o pensamento esquecido, pensando em como integrar camera projections e vídeos de forma imperceptível, transitando de um pro outro com cortes planejados, a idéia final veio nesse Domingo também. Segunda feira eu rabisquei umas questões técnicas, mas como tava sem câmera, não dava nem pra começar.
O plano era ter um app de celular que interage com o mundo, abrindo portais. Dois desafios já de cara: primeiro, tem toda uma interface gráfica e animações pro app. Resolvi fazer isso durante a tarde, na VFS, acometido de uma súbita saudade de motion graphics usando o After Effects. Nesse processo, também aprendi várias coisas sobre o sistema operacional do celular, fontes próprias, resolução, formatos de vídeo reconhecidos e tudo mais, até conseguir fazer a parada funcionar. Ficou meio pequeno, mas bonitinho.
Segundo desafio, tem que ser uma puta lente grande angular. A 24-105mm era muito fechada pro que eu queria, então resolvi testar mais processos usando a 8-15mm fisheye, que a gente usou pra fazer os HDRIs. O grande problema da fisheye é que ela tem uma distorção imensa, então não dá pra fazer Camera Projections com as imagens a menos que elas estejam “retilíneas”. Fiquei umas horinhas ontem testando com uma foto do corredor daqui de casa, e consegui fazer funcionar pra tirar a distorção e as coisas encaixarem direito no Maya. Próximo passo, descer pra todas as minhas locações de teletransporte e tirar fotos, tanto pras camera projections quanto para as imagens dentro do app.
Carreguei comigo uma trena, caderno e caneta, porque é fundamental que tudo esteja em escala real, senão na hora de colar tudo, dá uma merda fenomenal que leva uma vida pra arrumar. Mais rápido e fácil fazer certo desde o começo, ainda que leve uns minutinhos medindo coisas. Voltei pra casa, corrigi a distorção em todas as fotos, e por segurança já pulei no Maya pra camera projetar as coisas. Se não tivesse dando certo, voltava e tirava novas fotos. Fazer a última Camera Projection me levou uns três dias da última vez. Ontem eu fiz três em uma hora, e todas funcionando bem bonitinho. Tem umas coisas de textura pra pintar, mas as locações foram escolhidas pra facilitar minha vida – corredores e lugares cúbicos/quadrados. Tá rolando.
Early today I shot some test takes to nail the timing of the animations inside the app, and tried touching the phone while wearing gloves – the fact that gloves block the touchscreen's sensitivity turned out to be a big advantage here, letting me press and poke at a thousand things on screen without menus popping up over my video. With the timings sorted, I started shooting for real. I think I have what I need, and I'm now converting the footage from RAW into something more editable and trackable. There are still plenty of steps that need to go right.
I hope to finish over the weekend, since the deadline is next Tuesday, and then I'll show up here with the results. I got really excited about this one because I managed to fit a little story into it.
The second part of the previous post, and this is where things should start to get at least a little more interesting.
Working on my demo reel for Vancouver Film School, I decided to, besides all the VFX stuff and its technical aspects, try something new with the cinematography, experimenting with a style that always gets my attention because it brings together a series of elements I believe work amazingly for immersion and for pulling the audience into the story head on. That style is diegetic cinematography: making the camera a part of the characters' world. It's an object they see, use and interact with, and it's also used to tell the story. This has some immediate consequences that aren't standard throughout film history: there is no fourth wall, the characters know they're being filmed, and they interact with the camera – but only because they have a relationship with whoever is holding it.
We've seen it several times before in sci-fi – Chronicle (2012), Cloverfield (2008), Project X (2012), Project Almanac (2014) – and throughout the "Found Footage" horror genre – Blair Witch Project (1999), Paranormal Activity (2007), V/H/S (2012) or [REC] (2007), among many others. I won't focus on the horror movies at this point. There are huge articles about the Found Footage genre, and I'm no expert, but I'd like to discuss what this kind of camera work brings to the story. First of all, the audience knows exactly as much as the characters. Hitchcock said the key to tension is giving the audience key information the characters don't have – like when we know the killer is at the victim's house well before the crime takes place on screen. We – as audience – worry because we foresee what's going to happen, and it's the wait that causes the thrill.
When the camera is a character, if the audience knows something, so do the characters, and here the thrill comes from the suspicion that something bad might happen, or WILL happen, but we don't know exactly what, when, or to whom. Whenever it hits, we're as surprised as they are, thinking of ways out right along with them. For me, this is a boost in immersion and also a challenge. Since we're so close to the characters, whenever they act in really stupid ways we're thrown out of the movie; they're not convincing anymore. Like in any horror movie, when people go "check the basement when the lights go off", or think it's "a good idea to face the bad guys breaking into their homes". Regular people like you and me would never do these things. I don't have a hero complex; if I think it might be dangerous, I'll flee or hide!
While reading up on this subject to see what other people think, I came across a very small number of articles, none of them really deep, and with very different opinions. So it's time to make clear that I'm not arguing the viewer is a character in the movie just because we see through a character's camera. A movie is much different from a game: all the choices have been made from the start, and there's no interactivity. Nor am I saying we're seeing through the eyes of a character. I think first-person videos are awesome, but I wouldn't call that diegetic cinematography, because there is no camera. We act differently when we're just talking to someone versus when we're on tape.
Using "we remember things" as an argument that our eyes are equivalent to a recording is not valid; we haven't gotten to the point where people have cameras embedded in their eyes yet. The proof that people act strangely on camera, even when the camera is someone else's eye, is all the hard times Dr. Steve Mann goes through. There's also a great Black Mirror episode (The Entire History of You, 2011) about implants that turn our eyes into cameras, but that's still sci-fi, and it takes place somewhere in the future.
Following this line of cinematography, Christopher Campbell wrote an article discussing why first-person filmmaking is strange – awkward by conventional standards, but interesting if done properly, since it's unlike anything we've seen so far. He specifically talks about Hardcore, currently in post, made by the same guys who did the Biting Elbows – Bad Motherfucker music video, which is shot entirely from the protagonist's point of view (POV). Campbell draws a comparison between literature, games and movies, establishing a clear difference between books written in first person and movies shot in first person.
So, my definition of diegetic cinematography requires a physical camera being held by one of the characters. When that happens, the camera usually has a specific purpose inside the film itself. In Project Almanac they're documenting their progress through an experiment; in Cloverfield, the guy is in charge of filming the farewell party for one of the main characters. Project X and Chronicle, though, have very different approaches that make sense in today's culture. The title says it all: Chronicle, "a historical account of events arranged in order of time usually without analysis or interpretation". There's no manipulation of time; the editing just moves forward. We don't see the same moment twice, and we don't have flashbacks or flash-forwards. Phones can shoot video, and we have a plethora of social networks based around video, or that support video uploads (YouTube, Vimeo, Facebook, WhatsApp, Snapchat, Vine, Instagram and so forth). We take far more pictures in our everyday lives just by having a half-decent camera in our phones. We don't worry that much about framing, image stabilization and such for our videos. These are just small chunks of memories, shot in chronological order, usually more important to ourselves than to others. Sure, we share them; our current culture revolves around showing where we've been and who we met, all very much time-stamped.
Nikon just released a whole campaign based around that, calling the current generation "Generation Image". Not so long ago we had (and still have some) very long discussions about what "qualifies" a photographer. Are amateur photographs taken with a phone camera as valid as ones taken by someone who studied the craft for years and uses expensive gear with the single purpose of taking photographs? For interviews and scheduled events, sure, that's debatable. But what about natural disasters, conflict areas and other situations where things just happened, and by the time the Professional arrives, the event is already over? Which tells the story of a gas explosion inside a shopping mall better: a high-megapixel, sharply focused photograph of some ruins taken hours after the event, or one taken in the food court with a phone at the moment of the explosion, all blurry, but good enough to understand what's happening? Nightcrawler (2014) is a great movie somewhat related to this subject, with lots of cameras on screen, but no diegetic cinematography as its main device.
John Powers, writing about Chronicle, makes an interesting point when he says this shift between traditional media cinematography and amateur recordings began back in 2001, with the attacks on the World Trade Center. While most media networks were rushing toward the area to shoot their own footage, thousands of people around the buildings were already doing it on their own, simply because they could. I'll come back to John's review later on.
It’s not hard to tell the difference between a professionally shot video and one done by someone who had the sole purpose of recording the events. Actually, it’s quite easy to spot which is the Pro and which is the amateur. Then Hollywood comes in and turns the “amateur look” into a style. What are the benefits?
First off, it has a much more "real" look, as if it weren't a movie that was carefully written, planned and executed. We relate to the characters because that's how we'd film if we were in that situation. The handheld, shaky camera, also called the "documentary camera", has that name because documentaries usually have small budgets and are focused on reality. Real people, real lives, real intentions, no actors. When the first portable cameras came out, documentaries blossomed. After a while, what was considered a flaw – the shakiness and curiosity of documentary cinematography – was brought into fiction through mockumentaries, behind-the-scenes footage that seems as amazing as the scenes themselves, and much more.
Steve Bryant, in his article about the camera becoming a character, says this is a negative thing. What should work as a bridge between audience and show actually sets them further apart, because there are now two layers of fiction (the behind-the-scenes AND the show) instead of just one (the show), and we don't notice it. We feel closer to the actors, we think we know the people behind the characters, when what's really happening is that the actors are acting as themselves as well. It's confusing, but it makes a lot of sense in the end. How does this relate to diegetic cinematography? Well, the show wouldn't count as diegetic cinematography, but the behind-the-scenes would, since many times the camera operators are just as real as their subjects.
Ok, so we get a reality and empathy bonus because that's how the audience would film. I also believe this is much easier on the actors, because it's much more natural. The hard part is making it feel right. Once you know how to handle a camera professionally, it's easy to make a mess and call it amateur. The key is knowing how much messier it should look, which main points and reactions the audience has to see, and which ones are best left just outside the frame. How close to danger are our characters willing to go? What's their relationship with the person handling the camera? Do they care, or are they annoyed by it? Are they filming just to keep a record of events, or for sharing? And there's the ever-present question "what's more important right now, the camera or whatever's happening around it?" – this will mostly dictate framing, and might influence editing as well.
There's one issue, though – one downside that's very hard to avoid: sequences of shoddy, blurry, shaky images while our characters run. For many people, these are huge turn-offs; they feel sick, dizzy or worse. I myself have a high threshold for shakiness, but every once in a while I see something so confusing it makes me question the whole approach.
This is what I’ve been experimenting recently to tell my story loaded with visual effects. I’m wondering which side will win: the reality from the cinematography, or the out-of-this-world aspect of the visual effects involved. I’m also gonna question the editing process, but this post is already too long and confusing to include that!