Modern cinema follows a strict set of rules: a long-established language of conventions and agreements employed by filmmakers and accepted by audiences. For example, when we see a close-up of a character’s face followed by another close-up of his or her tightly clasped hands, we understand that the character is trying to hide emotions from another character who is looking at them, but can’t hide them from the filmmaker’s gaze. We do not think that the first close-up is a bodiless head, nor that the second is a pair of severed hands; but creating this meaning for the audience – the sense that these shots are clues the filmmaker gives us about the character’s inner workings, shown through contrasting emotions expressed by the body – took years to perfect. Just as these conventions hold for images (framing, lighting, movement), sound has its own language. The first movies had no sound, just as the first recordings had no images; these were separate mediums, and their merging is what truly constitutes cinema. With the passing of time, filmmakers’ use of sound evolved beyond plainly representing what is shown on screen. Sound can heighten or even completely invert our perception of images. If a movie’s sound merely portrays the obvious, its filmmaker is missing great opportunities to add depth and meaning to the story.
Cinema can’t exist without images – that’s obvious – and it can’t exist without sound either. At first this might sound a little daunting since, for a long time, films were silent. Silent, yes, but not completely devoid of sound. Michel Chion is a French music composer and professor of audiovisual relationships at the University of Paris, a theoretician whose thinking aligns closely with Walter Murch’s (Chion dedicated his book Film, a Sound Art to Murch, and Murch wrote the preface for the translation of Chion’s Audio-Vision). Chion states that “the suggestion of sound was in the air at the time. . . . In the silent cinema, instead of sound for sound, spectators were given an image for a sound: the image of a bell suggests the sound of a bell” (Film, a Sound Art, 5). Sound was also suggested by the exaggerated gestures meant to convey what the characters were saying, so the audience could follow along rather than assume the characters were communicating telepathically. In the earliest days, films such as Edwin S. Porter’s would have a commentator beside the screen, narrating what was going on. Later on, this person was replaced by intertitle cards that directly represented dialogue and bound together what would otherwise seem disconnected sequences. And even though we call them silent films, the silence of the movie theater was always filled with music: at first any music would do, then music was composed to be played specifically with a given movie, and eventually a layer of sound effects was added to certain elements on the screen, performed on musical instruments – cannon fire on screen, for example, cued a drum in the theater (Film, a Sound Art, “When Film Was Deaf”).
Seeing this fundamental connection, several people tried to blend the two. Thomas Edison had both moving-picture and sound apparatuses by the end of the 19th century. The Phonograph was a device that could reproduce sounds recorded on a wax cylinder. The Kinetograph was Edison’s moving-picture camera, and it was paired with the Kinetoscope, a one-person booth where the user could watch a short clip projected in a loop. The main person behind the Kinetoscope’s research and development was William Kennedy Laurie Dickson, one of Edison’s employees. The first record of sound and image recorded together is the Dickson Experimental Sound Film: a clip, shorter than 20 seconds, of a man playing a violin in front of a recording Phonograph while two other men dance in front of the camera. These two elements (image and sound) were not synchronized until recently – by none other than Walter Murch. “It was very moving, when the sound finally fell into synch: the scratchiness of the image and the sound dissolved away and you felt the immediate presence of these young men playing around with a fast-emerging technology” (Murch, Filmsound.org).
The main difference between the result achieved by Murch and what audiences saw and heard at the Kinetophones (the combination of the Kinetoscope and the Phonograph) back in the early 1900s is that the Kinetophones had no synchronizing mechanism. The images would play on their own, and so would the sound, each from a different device, so getting the starting points to match precisely was nearly, if not completely, impossible. True synchronization would not be achieved for another two decades, by Lee De Forest, an American inventor who managed to record sound on an optical track directly on the film. All previous attempts relied on separate devices – film for the image and discs, cylinders or tubes for the sound – and the main problem hinged on starting both at the same time and remaining in sync for the duration of the show. The transition from silent films to talking pictures is wonderfully depicted in Singin’ in the Rain, a 1952 film directed by Stanley Donen and Gene Kelly. In one of the scenes, the cast and crew are watching the premiere of their film along with the audience, and we get plenty of examples of the awkwardness introduced by synchronized sound, including one of the most memorable examples of how out-of-sync sound and image can completely change the meaning of a scene.
Live-recorded sound didn’t secure its place in the sun until late 1927, with the release of The Jazz Singer. Until then, talking pictures were still considered a fad, expected to bore audiences soon enough. The difference between The Jazz Singer and other sound films is that it had not only music accompanying the film but also two very brief excerpts of live-recorded audio in sync with the image. That drove audiences wild and the movie’s box office through the roof. From that moment on, for purely economic reasons, talking pictures were here to stay. The Warner Bros. hit was sounded with the Vitaphone – still a dual system: the sound played from a disc, not straight from the film. According to Michel Chion, the main problem with the definitive establishment of sound in movies is that “the Vitaphone process was perceived . . . as an improvement, not a revolution” (Film, a Sound Art, 34). The audience’s approval of spoken lines was also a trap, since speech prevailed as the main element of film sound for a good fifteen years – from 1935 to 1950, as presented by Chion – whether through dialogue or voice-over. Filmmakers chose to make films verbocentric because that was the easiest way to please the audience and the producers at the same time, which left the sound design of movies less developed than, for example, their visual aspect (Film, a Sound Art, 73).
From 1927 onwards, the technology of sound recording and playback, as well as the techniques for capturing, editing and mixing it, improved vastly into what we hear in theatres today. What didn’t change so much was filmmakers’ and studios’ perception that
sound has an “added value, . . . [an] expressive and informative value with which a sound enriches a given image so as to create the definite impression, . . . that this information or expression ‘naturally’ comes from what is seen, and is already contained in the image itself. Added value is what gives the (eminently incorrect) impression that sound is unnecessary, that sound merely duplicates a meaning which in reality it brings about, either all on its own or by discrepancies between it and the image. The phenomenon of added value is especially at work in the case of sound/image synchronism, via the principle of synchresis” (Chion, Audio-Vision, 5).
Synchresis is Chion’s coinage, fusing the words “synchronization” and “synthesis”: it names the mental process that binds together an image and a sound perceived at the same time (Audio-Vision, 63). For example, if we see a gun firing and hear the sound of a whistle, we automatically bind the two together without so much as questioning how they are connected. “Synchresis is what makes dubbing, postsynchronization, and sound-effects mixing possible” (63), and it is the ultimate tool for re-signifying the role of sound.
In his book on editing, In the Blink of an Eye, Murch discusses the difference between good and bad sound mixes, but his words also apply to the use of sound in general: “[i]t depends on . . . how capable the blend of those sounds was of exciting emotions hidden in the hearts of the audience. . . . Past a certain point, the more effort you put into wealth of detail, the more you encourage the audience to become spectators rather than participants” (15). When the audience become spectators, they stop thinking about the sound’s meanings and take it as a very detailed representation of what their eyes are absorbing – no unique storytelling features, just an empty attempt at immersion (Magalhães, 10). Examples of such excessive sound abound in the action blockbusters released in the past few years, namely any of Michael Bay’s Transformers movies: our ears are filled with small gears and engines, switching and pumping, nuts and bolts, all to add realism to ludicrous robots; yet in the end, watching those sequences without sound would not change the story one bit.
Back in 1929, René Clair – a French movie critic – raised the point that “[t]he visual world at the birth of the cinema seemed to hold immeasurably richer promise. . . . However, if imitation of real noises seems limited and disappointing, it is possible that an interpretation of noises may have more of a future in it” (93). It is quite worrisome that this is almost the same issue raised by Chion in 1994: “[r]evaluating the role of sound in film history and according it its true importance is not purely a critical or historical enterprise. The future of the cinema is at stake. It can be better and livelier if it can learn something valuable from its own past” (Audio-Vision, 142). The challenges are no longer dictated by technological limitations, as they were at the beginning of the 20th century; the challenge now is not to succumb to the common pitfall of using sound as a mere echo of the image, or of relying exclusively on dialogue to explain every event that could have been conveyed differently. Fortunately, there is hope: the number of movies (and TV shows) where sound plays a role as important as the image’s has been growing, and it is not hard to come up with a few names, such as Barton Fink, No Country for Old Men, Breaking Bad, Apocalypse Now and The Conversation. An increasing title count does not solve the matter, though. To close the gap between sound and image, the teaching of filmmaking has to change – from scriptwriting to directing, from shooting to editing – so that sound is no longer something used merely to fill awkward silences. Proper sound, with meaning, has to be planned and conceived from the earliest stages of a film.
Chion, Michel. “Audio-Vision: Sound on Screen”. New York: Columbia University Press, 1994. Print.
Chion, Michel. “Film, a Sound Art”. New York: Columbia University Press, 2009. Print.
Clair, René. “The Art of Sound.” (1929) Film Sound: Theory and Practice. Ed. Elisabeth Weis, John Belton. New York: Columbia University Press, 1985. 92-95. Print.
Dickson Experimental Sound Film. Dir. William K. L. Dickson. Edison Manufacturing Company, 1895. Web. 13 Mar 2016.
Magalhães, Mayara. “O Som Escrito”. São Paulo: Universidade de São Paulo, 2014. Web. 13 Mar 2016.
Murch, Walter. “Dickson Experimental Sound Film 1895”. Filmsound.org (2000). Web. 19 Mar 2016.
Murch, Walter. “In the Blink of an Eye”. 2nd Ed. Los Angeles: Silman-James Press, 2001. Print.
Singin’ in the Rain. Dir. Gene Kelly and Stanley Donen. Perf. Gene Kelly, Donald O’Connor and Debbie Reynolds. MGM, 1952. Film.