First time shooting like this on my own. The video turned out WAY LONGER than I expected, so I’ll be much more concise during the actual reviews. This one can run longer, though, because there are lots of rules and I wanted to make them all very clear. I planned to do this much more carefully, with better lighting and all, but since one of the lenses is going away on Monday I had to start early or miss one of the reviews entirely. I’ll think of it as a pilot episode, a test for feedback and response.

It’s been a little while since I decided to make video reviews of the anamorphic lenses I have. Ever since then I’ve thought a lot about the format and the “rules” to follow, so the videos can be compared to each other and I don’t end up doing a lot of subjective work (these are lenses that can be held against each other, and some have clear advantages over others). I also didn’t want to make endless boring charts and things like that, because it’s hard to stay focused when there’s nothing interesting happening on the screen. I say that because I’m terrible at watching lens tests whenever they’re too dull. I want to convey the feeling of what can be achieved with each lens, but I also have to set myself some limits: the last time I thought of making a “test video” I ended up doing a full-on webseries pilot followed by a 100-page essay, so I have to tone down the creativity a little.

This post describes what I’ve got so far, and I’d love to hear what you think might work, what might not, and anything else that would be interesting to include in the videos.

First of all, everything will be shot on a Canon 5D3. That’s the camera I have and I don’t plan on buying another one soon, which means full frame. The good thing is that from there you can easily convert to smaller sensor sizes and figure out what is or isn’t covered on different cameras. If anyone wants to give me another camera for the tests, free of charge, I have no problem with that! Hahahah!

For taking lenses, again, I’m not going far and will use what I have, which is also a standard prime set: Mir 1B, Helios 44, Jupiter 9 and Tair 11 (37mm f/2.8, 58mm f/2, 85mm f/2 and 135mm f/2.8, respectively). They’re all Russian glass and are known to work well with anamorphics. I wish I had a more “modern” set, such as Contax/Zeiss, to compare vintage and modern looks, but I’m not spending any money on this.

Whenever I’m filming the lenses themselves to show build quality, how they work or anything like that, I’ll be using Canon’s native H.264 codec, since there’s no need to spend any more bytes on that. For the actual technical testing (charts and such), I’m shooting RAW with Magic Lantern at 1080p, with no post processing other than the VisionLog camera profile, so the footage is as flat as can be. We want to see the maximum amount of detail the footage can hold, so no grading on this part, no contrast, no nothing. Plain log. Videos will be uploaded to both YouTube and Vimeo, so users can download them and check the finer detail without streaming compression.

The technical aspects to be analyzed are build quality, sensor coverage (both the full sensor and a standard 2.4:1, CinemaScope, crop), current price and availability, and sharpness (at f/2, 2.8, 4 and 8, comparing corners and center). Sharpness tests will be done with charts in the first part of the video. I’m also gonna test them with the diopters I have here: Minolta +0.4, Iscorama +0.5, Fujinon +1.25 and Canon +2, all achromatic doublets that should improve the lenses’ performance. I’ll always comment on the focusing method for each lens, since there are several different ways of doing it and people are always confused about it. Flares will also be tested, using a regular smartphone flashlight.
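
To give a rough idea of what that 2.4:1 crop means in sensor terms, here’s a back-of-the-envelope sketch in plain Python. The 36x24mm sensor and the squeeze factors are just illustrative assumptions, not numbers from the actual tests, but it shows why a lens that vignettes on the full sensor can still cover the CinemaScope crop:

```python
# Rough sketch: how much of a full-frame sensor a 2.4:1 anamorphic delivery uses.
# Assumes a 36x24 mm sensor; squeeze factors below are illustrative values.

SENSOR_W, SENSOR_H = 36.0, 24.0  # mm, full frame

def used_sensor_area(final_aspect, squeeze):
    """Sensor region (mm) needed so the desqueezed image has `final_aspect`."""
    capture_aspect = final_aspect / squeeze      # aspect ratio as recorded on the sensor
    if capture_aspect >= SENSOR_W / SENSOR_H:
        # Wider than the sensor's 3:2: full width, cropped height.
        return SENSOR_W, SENSOR_W / capture_aspect
    # Narrower than 3:2: full height, cropped width (the sides get cut off).
    return SENSOR_H * capture_aspect, SENSOR_H

print(used_sensor_area(2.4, 2.0))   # (28.8, 24.0) -> 2x squeeze only needs 28.8 mm of width
print(used_sensor_area(2.4, 1.5))   # (36.0, 22.5) -> 1.5x squeeze crops top/bottom instead
```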

The output footage will be unsqueezed by REDUCING THE HEIGHT instead of increasing the width, which avoids interpolating fake detail into the image and is the most common way to properly unsqueeze footage. This leads to black bars above and below the frame. In those areas I’ll put all the technical information I can about each shot (f-stop, ISO, shutter speed, taking lens, anamorphot, diopter, white balance), so anyone can quickly see what changed from shot to shot.
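
Just to make the two options concrete, here’s a minimal sketch, assuming a 1920x1080 recording and a hypothetical 2x anamorphot (swap in the real squeeze factor of whichever lens is being tested):

```python
# Minimal sketch: desqueezing anamorphic footage by height vs. by width.
# Assumes a 1920x1080 source and a 2x anamorphot; adjust for the real squeeze factor.

def desqueeze(width, height, squeeze, mode="reduce_height"):
    """Return the unsqueezed frame size for a given squeeze factor."""
    if mode == "reduce_height":
        # Keep every captured horizontal pixel; shrink the height instead.
        return width, round(height / squeeze)
    if mode == "stretch_width":
        # Interpolate new horizontal pixels to reach the same aspect ratio.
        return round(width * squeeze), height
    raise ValueError("mode must be 'reduce_height' or 'stretch_width'")

print(desqueeze(1920, 1080, 2.0, "reduce_height"))  # (1920, 540)  -> black bars above/below in a 1080p frame
print(desqueeze(1920, 1080, 2.0, "stretch_width"))  # (3840, 1080) -> upscaled width, no bars
```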

After all this technical stuff, which shouldn’t run for too long, there’s a “real world test”, which consists of 10 specific handheld shots, 5 well lit and 5 in low light: a close-up, a medium shot, infinity focus, a rack focus and one extra that I haven’t decided on yet. These will also be shot in RAW, like the charts, but will be presented graded. I’d love to tell short stories with them, but I’m not sure that will be possible.

After all this craziness, I plan on sharing a couple DNG frames from both charts and real-world tests so anyone can push them to any limits they like or look at each individual pixel, hell, I don’t know, not my problem!

So, is it kind of clear or confusing? Am I missing something? Is there anything else you’d like to see?
I’m also thinking of shooting a short video explaining all of this, instead of having it just written here.

Aboio Avoado – Lenine.

I don’t like posting song lyrics completely out of context, but after listening to this one a few hundred times this morning, I thought it was worth it! It’s worth a listen too.

It was a wild delirium
One to singe the lashes of your eyes
A tremor pounding in the chest
And this goodbye that tastes of earth

Ah! My love!
Don’t give yourself up without me
Ah! My love!
I just want to fly away

It’s Hard to Explain.

Yesterday I was scribbling down some theories on how to finish an assignment, planning my course of action for when the computer finished rendering some frames, when a peculiar thought hit me: “if I had to explain this assignment to my parents, what would I say?”. That’s when I realized it’s hard (almost impossible) to talk about the things I’m studying here with anyone who hasn’t gone through the same process or doesn’t work in the same field. To hell with the rest of the world, but it bothered me a little that my parents don’t really have a clue what I do, and the main reason is: everything I do here is meant to look as real as possible when, in fact, it isn’t. The problem is that when the result works, you can’t tell the path it took to get there. There are breakdowns, where we show some parts of the process, but those are more technical and less “look what I did!!”. Quite different from a more “traditional” profession, let’s say, where the things you make are visible and tangible.

I hope this post doesn’t ramble too much, because I’m going to try to pin down exactly what has been nagging at me in this whole story.

When I went to study film, there was no comparison. You start with nothing and end up with a film; you made that film, probably not handling every role, but it’s something you can describe. Working as a camera operator, everyone knows what a camera is, lenses, dolly tracks, hanging off things, rigging improvised lights, because all of that is part of everyday life, not just the set. On set we just “reuse” things from life. Even during my TCC (undergraduate thesis), revisions went through my mother, because if she could follow the text, it was accessible to anyone interested, even with little or no prior knowledge. That was the goal of the TCC: to spread knowledge that took me a hell of a long time to gather, in a simple way, so it could reach more people who didn’t have the time or the resources I had during the research. On several occasions I said, and keep repeating: if it can go on the internet, for free, that’s my preferred path. Why? Because that way you don’t limit access, you expand it. It’s free, anyone can have it, it’s on the internet, anyone can reach it. It doesn’t get much freer than that.

Now back to VFS, drawing a parallel with film school. In film I was behind the cameras. In VFX, I’m behind the people who are behind the cameras. There are more layers of “fiction disguised as reality” before you get to my work. It’s easy to know that the people in a film are actors and not the characters themselves, it’s easy to know that the light is manipulated by someone, as is the sound and every environment that shows up on screen. It’s already harder to know whether, in a given shot, the garden outside the house – out of focus and with no obvious narrative purpose – is real or was placed there digitally. It’s hard to know when a tattoo disappeared, a microphone was erased, an ENTIRE PERSON was erased from the shot, whether objects X and Y are real or digital. And one of the problems with all this is: if the audience can tell, it means the work should have been done more carefully.

OK, spaceships in the sky, some explosions, futuristic buildings and that kind of thing are easier to call out with “Ah! Digital!”, because we know those don’t exist nowadays, but it’s all so grounded in reality that even the biggest absurdities get stitched into it, and nobody sits there inspecting the glass texture of that skyscraper that doesn’t exist in the real world but exists in the film, because that’s not the point anyway. Worse, some of them might be real, built for the film, in the real world.

Argh, I feel like I’m losing the point of this discussion.

I’m sure every job has its more technical and complicated parts, the ones you can only really discuss with other people in your profession, but it also has simple elements you can explain to anyone in five or ten minutes. What I’m looking for now is that element in my own work. Objectively, the question is: “How do I build a sentence about what I do that is understandable but not superficial, without technical jargon or excessive glamour?”. Because I could turn around and say “oh, I make illusions”, but, for god’s sake, right? Let’s not push it – and anyway, that’s a topic for another post.

Well, I’ll wrap this one up here, and if anyone comes up with the sentence, please post it in the comments. Meanwhile, I’ll keep thinking it over and keep pushing the demo reel forward.
- I think the fact that I’m learning all of this in another language adds to the difficulty of this puzzle.

In Low Gear.

After the lazy break post, the pace still hasn’t changed around here, even though I have the clear feeling I should be more worried about how things are moving along. I’m doing everything calmly, maybe too calmly, taking small steps and giving more weight to what I feel like doing than to what I’m obliged to do. Even so, I’m up to date with school, even ahead on some assignments, and the demo reel is moving along with some stability.

We’re getting closer and closer to spring – daylight saving time started early this morning – and it’s been over two weeks without a single cloudy day, just sun, all the time. The days start earlier and end later too, which is wonderful because the sun cheers up life as a whole. It’s still cold in general, hellishly cold (lows of 1°C, highs of 12°C), but it should improve over the next few weeks.

I bought a lens – one I owned while living in São Paulo – gorgeous and wonderful (50mm f/1.2), for two thirds of the original price, and it’s in perfect shape. May and I are planning to shoot some photos inspired by the new Punch Brothers album, Phosphorescent Blues, which we’ve been listening to like crazy – even though the songs have been here for almost two months, we only started listening this week. There’s a really nice interview where a critic talks about what’s behind the lyrics and the overall feeling of the production, and that’s what sparked the idea for the photos: the whole record, despite its bluegrass vibe, has modern lyrics and, in a very elegant way, talks about and criticizes our dependence on phones, computers, social networks and all those things that keep us hooked on screens. The photos will have very little light, which makes the lens even more valuable for this little game.

Besides all that, I’ve been playing a lot of Dead State – from the previous post – but I think I’ve reached a turning point – in life, not in the game – because yesterday was the first day in… fifteen? days that I spent without playing anything, just working on reel stuff and assignments, and having quite a bit of fun in the process. We’re also watching a bunch of TV shows, and cinema is back on the menu.

During the break – or right before it, I don’t remember – we went to see What We Do In The Shadows, a mockumentary about vampires in the present day. It’s really worth it, it’s not a long film, and we laughed a lot because the jokes are very clever. On the brighter side: there’s already a torrent out there.

Well, this post is feeling a lot like a diary, so I’ll stop here, just because. I think I’ll try to write another one, on a less mundane topic.

Tenth Season.

It’s been a good two months since I started wanting to switch seasons here, and no ideas came for the title or the banners. I ended up going with a more minimalist look compared with the last few, but I think it matches my more recent style.



10th Season: Hovering Lights

I went days (weeks) without the slightest inspiration to write anything here, no topics and no courage, and today, when I have a gazillion things to do, four almost-finished posts pop into my head. The rhythm is a bit strange because I’ve been writing more in English than in Portuguese, but let’s try. Well, this one was just to record the season change, so the job is done. I already miss May in the banners!

Dead State.

During Term 4′s last week of classes, through the break and up until now, I’ve been playing Dead State a lot. A. LOT. Before I start talking about the game itself, a quick reminder from past experiences: I’m kind of traumatized by Fallout 2. It was a turn-based game – an awesome one – that I never managed to finish, even though I tried several times over many years. I got very close to the end once, and the computer simply fried, taking my save files along with the motherboard and hard drive.

The main point of mentioning Fallout 2 is that I felt many similarities between the two games, and that might have fed my addiction to Dead State.

First off, I bought it when it was still in Early Access on Steam, with only 7 playable days and limited locations. It’s a zombie game different from the standard go, kill, keep moving, first-person shooter, gore, save the world. It’s more like an elaborate RPG, where you play as a survivor of a plane crash in the middle of the zombie apocalypse. From there, you are brought to a school, and your best chance of survival is fortifying the place and going out to gather supplies. The school can be upgraded in various ways – garage, workshop, chicken coop, generator, fences, and so forth. You also have a limit of four people in your raiding party, which means that most of the time someone is gonna be left at the school. From the job board you can assign everyone’s tasks and see their progress as the hours and days go by.

Dead State has many variables, and this is one of the game’s strongest aspects. The group’s morale is based on how you’re doing gathering food for everyone, medicine for the infected, fuel for the generator, and keeping the place working (broken fridges, toilets, damage to the fence). As you explore the school’s surroundings you end up finding other survivors like yourself. From this point you can take different approaches: gather as many people as you can find to improve the shelter and offer them safety and food – which makes it harder to keep the food and fuel supply up – or keep a small number of loyal people who will never question your decisions. And, boy, there are lots of decisions to be made. Every once in a while a survivor comes up to ask for a day off because they’re sick, tired, not feeling well or something like that, and you have to decide whether you can go one day without their work or whether you need them on point that day. There are also crisis events, where more complex situations arise and you must make decisions that won’t please everyone in the shelter – once I had to decide between cleaning our water supply and fortifying the fence. I didn’t have enough spare parts to clean the well, which ended up poisoning more than half the crew, rendering them useless for a couple of days. In these crisis meetings there are key characters with their own ally bases, like in politics, so keeping a majority on your side is always a good thing.

There are also conflicts between different survivors because their interests are totally different and it’s up to you to decide what to do to solve their problems.

Combat mechanics are a little weird at first, because it’s a turn-based game, so you do your thing and wait/watch your enemies react/counter. In the beginning each battle takes hours to play through. After some days you’ll get better gear and improve the group’s stats, which speeds up the undead killing. Then you start to meet other looters, gangs, mercs and soldiers, just to make your life harder again. I’m still on day 40 of the infection and you simply can’t take a day off. If you don’t go out scavenging, food might run short, the generator runs out of fuel, that kind of thing. Once I had to keep exploring through the night – much more dangerous due to the increased number of undead and also harder to see and strike – because I didn’t have enough antibiotics for everyone back home. After midnight, the game considers that your party isn’t going to get enough rest, so everyone has a fatigue penalty the following day.

Every once in a while I think the game is becoming repetitive, and then it surprises me with hardcore enemies, or allies asking for very specific items in hard-to-reach areas – like a tattoo gun, medicine books, guitar strings and that kind of thing. If you ignore an ally’s request for too long, they get pissed and respect you less. They have their own “wanted” items that improve their mood, which also affects the shelter’s overall morale, so whenever you come across specific items in the field like cigars, deodorant, chocolate, coffee, rechargeable batteries and such, it’s better to grab them than food itself, just because they’re harder to find.

More on exploring: at first everything is on foot, you can only walk to places, and that takes a long time. After a while, if you rescue the right ally, she mentions there’s a horse farm nearby, and you can raid the place to get some horses for the group. Of course, the horses need feeding every day. Some time later, you’ll find a mechanic who can fix the car in the school’s yard; it moves even faster and provides a sizeable trunk to carry more loot back home. It uses fuel, so it’s always good to take that into account before going out with it – and if you run out of fuel in the field, you’ll have to send someone out there to rescue the car, which also takes some time. Each character’s weight capacity is determined by their strength attribute, which also affects their melee damage.

I started writing with a very clear idea in mind and I totally lost it. Overall, if you played Fallout 1 or 2 and liked it, give Dead State a chance. It still has some minor bugs and glitches but, overall, it’s an innovative take on a zombie genre that’s been getting kind of exhausted lately.

Breaking the Break.

We had our break between Terms this weekend. Lots of plans: bike rides, going out, going to the movies, and so on. In the end, we decided to stay home the whole time, only going out to buy food. Spending that time with no obligations and nothing to do, just with May, was wonderful. We watched a bunch of stuff together, baked a cake, played video games, all kinds of domestic activities. Laziness won, and for four days I didn’t even think about the demo reel.

This post is just to say I’m still alive, and I have some things to write, but the laziness persists. See you soon!

During the last week, I spent most of my days working on one assignment instead of making real progress on my reel. Why the heck did I do that? Well, I have to go back to last Sunday, while I was doing the rigid bodies assignment linked in the previous post. During render time, while Maya wrote out all my frames and passes, an idea came to me. From the Set Extension class, I realized this could be a very mindfucking assignment and I really wanted to play with portals. Have things crossing walls, or looking forward and seeing myself from up top, weird stuff like that. I couldn’t focus in class and didn’t take any of the plates we shot at the studio downstairs.

Then the portal thing finally clicked, and I thought it would be super cool if I could go through it, using a phone app to choose my destination. I scribbled some notes on what had to be perfect, ideas on how to shoot it and work it out, avoiding as many complications as I could from the beginning. For starters, I wanted a very wide-angle, first-person look, with the phone in frame, so I could show the interaction between the app and the real world, several different destinations, and finally the one I’d cross to.

I let the idea grow for one more day and got to work on Tuesday, because I was really inspired to see if 1) I could get it done, and 2) I could get it done IN TIME (which meant before tonight). I finished yesterday, and you can see the result below. After the video, I go really crazy and try to explain how and what I did during the process, because the breakdowns aren’t nearly enough to explain the mindbending I went through.

First of all, I thought about having a greenscreen and markers on the phone and replacing that in post, but I already had too much work going on, so it was easier and faster to design and animate everything that would happen in the app in After Effects and just play it back while recording my plates. In this process I had to find out Android’s preferred video formats and screen resolution, how to output that from After Effects and, later on, add a guide soundtrack so I knew which animation was going to happen and when, in order to time my actions on the plate.

This took my Tuesday afternoon. When I got home I grabbed the camera and took a couple of test shots of the corridor – where all the action takes place – because I wanted to go really wide, and my only available option was Canon’s 8-15mm fisheye lens. It has an amazing field of view, but the downside is that everything comes out fisheye-distorted, which means it’s impossible to camera project anything – and that is key to my matchmoving and to all the environments changing in the back.

With these distorted shots I tested Nuke’s LensDistortion node, which gave a kind-of-OK result, but really messed up the corners of the frame. Also, it was impossible to bring it back to the original image – couldn’t figure out why. It creates a weird circle with a black background and everything punched inside. Anyway. For my plates, this could work. For the camera projections, not so much. I then went to see if there were any good “defishing” techniques using Photoshop and got great results with its custom presets for the Lens Correction filter. It requires you to install some extra free packages with Adobe Air, but it’s very quick and simple. In there, sometimes the 8-15mm showed up on the list, sometimes not, so I picked the regular 15mm fisheye along with a full-frame sensor, and the results were very, very interesting. With these, I went into Maya and quickly built some cubes to see if it was camera “projectable”. It worked.
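
For anyone curious what the defishing actually does, here’s a small numpy sketch of the idea, assuming the common equidistant fisheye model (r = f·θ) remapped to a rectilinear target (r = f·tanθ). The 8-15mm’s real projection, and whatever Nuke’s LensDistortion and Photoshop’s Lens Correction do internally, are certainly more sophisticated, so treat this purely as an illustration of why the corners get stretched so hard:

```python
# Minimal sketch: defishing an equidistant fisheye image into a rectilinear one.
# Assumes r_fisheye = f * theta and r_rectilinear = f * tan(theta); real lenses
# (and the Nuke/Photoshop models) are more complex, so this is only an illustration.
import numpy as np

def defish(img, focal_px):
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    ys, xs = np.indices((h, w), dtype=np.float64)
    dx, dy = xs - cx, ys - cy
    r_out = np.hypot(dx, dy)                      # radius of each pixel in the rectilinear output
    theta = np.arctan2(r_out, focal_px)           # angle of that ray off the optical axis
    r_src = focal_px * theta                      # where that angle lands on the fisheye image
    scale = np.divide(r_src, r_out, out=np.ones_like(r_out), where=r_out > 0)
    src_x = np.clip(np.round(cx + dx * scale).astype(int), 0, w - 1)
    src_y = np.clip(np.round(cy + dy * scale).astype(int), 0, h - 1)
    # Corners pull from much closer to the centre (r_src < r_out), so they get
    # stretched hard; running the mapping the other way is what produces the
    # "everything punched inside a circle on black" look.
    return img[src_y, src_x]                      # nearest-neighbour resample
```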

Green light for the shooting then.

I started with the pictures for all the environments. I set the tripod at the same height and level for all of them, lowered the ISO, closed the aperture and selected a very slow shutter speed, so I could get as much depth detail and as little noise as possible. Not a hard thing to achieve with the 8-15mm. Then I stuck a tape crosshair on my chest at the approximate tripod height, so things wouldn’t look too weird when combined. I chose environments that would be very simple to build (hallways, which are pretty much cubes) and with lots of depth to them. There are some examples below.




Before shooting the plates I also took some extra pictures to help me build those environments and fill the holes in the projections. Measurements were also very important to speed up the process and make sure everything would match properly. All the corridors have about the same width, so that would also be a good way of spotting any weirdness that might come up.

I shot my first plate a couple of times to get the timing, focus pulls and camera movement right. I tried to avoid covering the door with the phone as much as possible – to avoid roto – but that wasn’t very successful. Then I went down to the Sub Basement and shot my second plate using the same technique. I was worried about how to link them together – no idea at all at the time of shooting – and whether they would track properly.

To help with the timing of the phone animation in the second shot, I had the animation play for ten seconds while I counted up. When it reaches ten, the glitches appear and the screen goes black. I needed to do all my transition and regret in around seven seconds, then look down, tap the phone just to see it die, and turn to the elevator. It was more complex than it looked.

After shooting everything, I was still worried about the fisheye look, so I undistorted all my environments and tried to project them. I got three out of four done in less than an hour. The laundry room seemed to have a lot more detail to deal with, it was already past 10pm, and I needed some rest.

Day 2 – Wednesday – was a nightmare. Tracking worked well with the Nuke-undistorted plates, but whenever I tried to export the mesh created from the point cloud, it would crash, die, burn and so forth. Tracking took more than an hour and a half! In the meantime I tried to figure out the logic behind crossing between plates and finally nailed down what should happen. I didn’t get it to work that day, though.

I finished all my projections while the tracking went bonkers, and by the end of the day I decided to ditch Nuke’s undistortion and go with Photoshop’s for the plates as well. I kept the node because I wanted to bring the fisheye look back into the final result, and that was achievable, with some cropping.

With all my projections done in Maya, I brought them into Nuke, along with the re-tracked plate – I didn’t even bother with the point cloud this time, I just got the geometry through another (fifth, or sixth, by now) camera projection. I aligned it all to the grid and exported it as Alembic. In Nuke, I placed all my other environments behind the door and animated them according to the app animation. I used cards and a random concrete texture to cover the gaps between them. Keyframed Defocus solved the focus-pulling issues and oFlow got me proper motion blur.

On Day 3 – Thursday – I refined my script, checked the tracking a thousand times and did all the roto work. This day went fast. I ran out of songs to listen to while working, so I had to look for new stuff online. Doesn’t happen every day. So I rotoed (?!) the door out, and brought back the phone and my fingers whenever they went in front of the hole. The work on the door was awful, jumping like crazy, because there isn’t much stability or continuous motion when you’re walking handheld while doing everything else at the same time. Each roto had around 180 keyframes, for 250 frames of footage.

On Day 4 – Friday – my goal was connecting the two plates. This was the cherry on top, because it wasn’t mandatory for the assignment, but I really wanted to do it, so I left it for the end.

What happens is: I have two 3D-tracked plates, which means two cameras. The first camera, inside my place, goes out the door, which means that, from a certain point on, there is no more reference to the initial environment. The cut has to happen after this point. From the second plate, I needed to pick a frame not too close to the beginning, so I could transition from the first camera into the second – using a third camera with a keyframed parent constraint, plus keyframes for focal length, because Nuke gave me slightly different numbers (12mm for one and 14mm for the other).
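
Just to make that hand-off a bit more concrete, here’s a tiny sketch of what the third camera is doing. Only the 12mm and 14mm focal lengths come from the shot; the poses are made up, and in the actual comp everything was hand-keyframed in Nuke rather than scripted:

```python
# Tiny sketch of the hand-off between two tracked cameras: a third "bridge"
# camera blends from camera A's pose/focal length to camera B's over a few frames.
# Pose values are invented; only the 12mm and 14mm focals come from the real shot.

def lerp(a, b, t):
    return a + (b - a) * t

def bridge_camera(cam_a, cam_b, frame, start, end):
    """Blend position (x, y, z), rotation (rx, ry, rz) and focal length.
    Naive per-channel lerp of Euler rotations is fine only for small changes."""
    t = min(max((frame - start) / float(end - start), 0.0), 1.0)
    return {key: lerp(cam_a[key], cam_b[key], t) for key in cam_a}

cam_a = {"x": 0.0, "y": 1.6, "z": 0.0,  "rx": 0, "ry": 0,  "rz": 0, "focal": 12.0}
cam_b = {"x": 4.2, "y": 1.6, "z": -8.0, "rx": 0, "ry": 35, "rz": 0, "focal": 14.0}

for f in range(100, 111):                 # blend over a hypothetical 10-frame window
    print(f, bridge_camera(cam_a, cam_b, f, start=100, end=110))
```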

This part took half my morning, then I had to decide whether to fix the door’s roto or to add a portal effect that would benefit from the jaggedness of the mask. The principle behind the portal is the same as heat waves: noise that distorts what is around it, affecting both foreground and background, constantly changing and waving around. I based it on the door’s alpha, with good results. I had to do some keyframing to make it bigger near the end, fix colors and stuff. Then I noticed the portal was cutting into my hand as soon as the phone and finger masks ended. More roto. Yay! Luckily, it wasn’t that much, and not much movement either.
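
The heat-wave principle fits in a few lines, so here’s a hedged numpy sketch of it – nothing to do with the actual Nuke node graph: animated noise, gated by the door’s alpha, shifts where each pixel samples from, so everything inside the portal ripples, foreground and background alike:

```python
# Sketch of the portal/heat-wave principle: noise, gated by the door's alpha matte,
# displaces the lookup coordinates. Purely illustrative; the real comp used Nuke nodes.
import numpy as np

def portal_distort(frame, alpha, time, amplitude=8.0):
    """frame: HxWx3 image, alpha: HxW matte of the portal, time: frame number."""
    h, w = alpha.shape
    ys, xs = np.indices((h, w), dtype=np.float64)
    # Cheap animated pseudo-noise; swap in proper fractal noise for nicer waves.
    noise_x = np.sin(ys * 0.15 + time * 0.4) * np.cos(xs * 0.11 + time * 0.3)
    noise_y = np.cos(ys * 0.13 + time * 0.5) * np.sin(xs * 0.17 + time * 0.2)
    # Only displace where the matte is on; amplitude can be keyframed up over time.
    src_x = np.clip(np.round(xs + noise_x * alpha * amplitude).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + noise_y * alpha * amplitude).astype(int), 0, h - 1)
    return frame[src_y, src_x]
```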

As soon as the portal looked good, I went back to the connection challenge.

I needed to decide exactly at which frame I was going back to live footage. Once that was picked, I had to camera project the frame right before it, using this final animated camera, and paint the gaps in the projection so it would fit between going out the door and turning into the other tracked camera. This was by far the most confusing part, in which I camera projected three different images that didn’t work because I couldn’t figure out how to do it.

When I finally got it to work, all that was left was a crazy amount of painting and perspective work to cover the huge stretch of corridor that was untextured. Even harder, the painting had to be done UNDER the original texture, so the image wouldn’t jump on the frame where it cut back to footage. How do I know this? Of course, I painted it wrong the first time. The seamless paint took way longer – like hours longer, to the point of painting single pixels that looked wrong. This was my Friday night.

Day 5 – Saturday – was light. I had to clean up the markers outside the door, barely visible and only there for a couple of frames, and figure out how to do breakdowns for this thing that almost melted my brain. I ended up oversimplifying, because there was no way to explain all of this in a few seconds of video. Then: sound effects, bringing back the lens distortion, and a final grade.

Set Extension Rush.

After spending more than 8 hours on Sunday tweaking and reworking my rigid bodies assignment (read: Newton applied in 3D, with gravity, impacts, mass, all that cool stuff), I literally didn’t notice time passing while I set up the camera and render passes and struggled to process the motion blur. Then it was another four hours of rendering and half an hour of comp to get to the result below. I liked it quite a bit, and I love watching the little bricks flying wildly as the camera passes through the middle of the chaos.

After that, I found out there’s a Set Extension assignment due next week, which I thought I had two weeks to do. We had a class to shoot plates in the studio, but my head was all over the place and I couldn’t come up with anything. I wanted something with portals, mindfuck, that kind of thing. After several days with the thought shelved, thinking about how to blend camera projections and video imperceptibly, moving from one to the other with planned cuts, the final idea also came this Sunday. On Monday I scribbled down some technical questions, but since I didn’t have the camera, I couldn’t even get started.

The plan was to have a phone app that interacts with the world, opening portals. Two challenges right off the bat: first, there’s a whole graphical interface and set of animations for the app. I decided to do that during the afternoon at VFS, struck by a sudden nostalgia for motion graphics in After Effects. In the process, I also learned a lot about the phone’s operating system, its fonts, resolution, supported video formats and so on, until I got the thing to work. It ended up a bit small, but cute.

Second challenge: it has to be a really wide-angle lens. The 24-105mm was too tight for what I wanted, so I decided to test more workflows using the 8-15mm fisheye we used for the HDRIs. The big problem with the fisheye is that it has huge distortion, so you can’t do Camera Projections with the images unless they are “rectilinear”. I spent a few hours yesterday testing with a photo of the hallway here at home, and managed to get it working so the distortion is removed and things line up properly in Maya. Next step: go down to all my teleport locations and take photos, both for the camera projections and for the images inside the app.

I carried a tape measure, notebook and pen with me, because it’s essential that everything is at real-world scale, otherwise when it’s time to put it all together you get a phenomenal mess that takes forever to fix. It’s faster and easier to do it right from the start, even if it takes a few minutes measuring things. I went back home, corrected the distortion on all the photos, and, to be safe, jumped straight into Maya to camera project everything. If it hadn’t been working, I would have gone back and taken new photos. The last Camera Projection I did took me about three days. Yesterday I did three in one hour, all working nice and pretty. There are some texture bits to paint, but the locations were chosen to make my life easier – hallways and cubic/square places. It’s coming along.




Early today I shot some test takes to nail the timing of the animations inside the app, and tested using gloves to touch the phone – the “problem” of the glove blocking the screen’s touch sensitivity was actually a big advantage here, letting me tap and swipe all over the screen without menus popping up over my video. Well, with the timings sorted, I started shooting, and I think I have what I need. I’m converting the material from RAW into something more editable and trackable. There are still a lot of steps that have to go right.

I hope to finish over the weekend, because the deadline is next Tuesday, and then I’ll show up here with the results. I got really excited to do it because I was able to fit a little story into it.
