Everything sounds bad, and there’s nothing we can do about it

From left to right: The Sandman (Photo: Netflix);  Tenet (Photo: Warner Bros.);  The Dark Knight Rises (Photo: Warner Bros.)

Graphics: Rebecca Fasola

Television today is better read than watched, and frankly, we don’t have much choice in the matter. Over the past decade, the rise of streaming has driven a surge in the use of subtitles. And before we blame the millennials with wax in their ears: a study conducted earlier this year revealed that 50 percent of TV viewers use subtitles, and 55 percent of those surveyed said dialogue on TV is hard to hear. The demographic most likely to use them: Gen Z.

Audio problems in Hollywood productions have gotten worse in the streaming era, exacerbated by the endless variety of consumer audio products. Big scores and explosive sound effects overpower the dialogue, while mixers have their hands tied by streamer specifications and artist requests. Very few viewers can solve the problem without turning on subtitles. And who can blame them?

“It’s scary,” said Jackie Jones, senior director at Formosa Group, an industry leader in post-production audio. “There’s a lot of time and client money spent on getting this right. So that’s not great to hear.”

Formosa is one of many post-production houses struggling to deliver clear dialogue across a fractured media landscape. “Every network has different audio levels and specifications,” Jones told The A.V. Club. “Whether it’s Hulu or HBO or CBS, you have to hit a certain level to pass their spec. But how it airs, and how it sounds when it airs, is really out of our control.”

After it leaves a house like Formosa, the mix may go through additional processing at the streamer and then again, so to speak, at the viewer’s device. That is, of course, the last thing anyone in the audio industry wants. “Dialogue is king,” sound editor Anthony Venture told us. “I want all dialogue to be as clean as possible, so when you hear people struggling to make these things out, you’re upset.” And yet we still end up with subtitles. If we’re just going to read along with The Sandman on Netflix, why even bother mixing it?


“Everybody’s very upset about it,” said David Bondelwich, an associate professor of music and entertainment studies at the University of Denver. “We work very hard in the industry to make every line of dialogue intelligible. If the audience doesn’t understand the dialogue, they won’t follow anything else.”

Streamers and devices together make a mess of the sound

With all this technology at our fingertips, dialogue has never been harder to make out, and the proliferation of streaming services has made the landscape impossible to navigate. Beyond the wide variety of devices people watch on, no two streamers are the same; each may hand post-production houses a different set of requirements.

As far as streamers go, editors say Netflix is the best for good sound; it has even released its audio specifications publicly. But the service is an outlier. “They’ve put a lot of money into building their standards, while some of the other streamers seem to have pulled theirs out of their asses,” Bondelwich said. “With some of these streamers, editors get like 200 pages of specifications that [they] have to sit there and read to make sure they don’t violate anything.”

Not all streamers are so careful. “I was at lunch with a couple of friends recently, and they were answering emails at lunch because they did the mix, they finished the mix, and everybody’s happy,” Venture said. “And then the director got a screener, or was able to watch it at home on, you know, whatever streaming service he was using. And he was like, ‘Hey, this sounds completely different.’”


Today, sound designers usually create two mixes for a film. The first is the theatrical mix, assuming the film is released theatrically. The other is called a “near-field mix,” which has less dynamic range (the difference between the loudest and quietest parts of the mix), making it more suitable for home speakers. But just because the mixes are getting better doesn’t mean we’ll be able to hear them.

“‘Near field’ means you’re closer to the speakers, like you would be in your living room,” said Brian Vissa, executive director of digital audio mastering at Sony Pictures. “It just has the speaker closest to you so that what you hear is what’s coming from the speakers themselves and not what’s being contributed by the room. And you hear it at a quieter level than you hear in a movie theater.”

“What a near-field mix is really about is bringing your content to a place where you can comfortably listen in the living room and still get all the information that was actually placed in the program, the stuff that might otherwise disappear.”
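For readers who want to see what “less dynamic range” means concretely, here is a minimal Python sketch of downward compression, the basic tool behind taming a theatrical mix for the living room. It is purely illustrative; the threshold and ratio values are assumptions for the example, not any studio’s actual settings.

```python
import math

def db(x):
    """Convert a linear amplitude (0, 1] to decibels."""
    return 20 * math.log10(x)

def compress(sample, threshold_db=-20.0, ratio=4.0):
    """Downward compressor: levels above the threshold are scaled by 1/ratio.
    `sample` is a linear amplitude in (0, 1]."""
    level = db(sample)
    if level > threshold_db:
        level = threshold_db + (level - threshold_db) / ratio
    return 10 ** (level / 20)

# Theatrical mix: a whisper at -40 dBFS, an explosion near 0 dBFS (~40 dB range).
whisper, explosion = 0.01, 1.0
near_whisper = compress(whisper)      # below threshold, passes unchanged
near_explosion = compress(explosion)  # 0 dB is pulled down to -15 dB

print(round(db(explosion) - db(whisper), 1))            # range before: 40.0 dB
print(round(db(near_explosion) - db(near_whisper), 1))  # range after: 25.0 dB
```

The whisper survives untouched while the explosion is pulled down, so both can sit comfortably between the noise floor of a living room and a neighbor-friendly ceiling.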

Vissa wrote a white paper on near-field mixing, helping to establish an industry standard. He believes a large part of the problem is “psychoacoustic,” meaning we simply don’t hear sound at home the way we do in a theater, so without a good near-field mix the audience is left to fend for itself.

Complicating matters, where things end up has never been more fluid. “On TV we prioritize the dialogue so it’s always even and clear, and build everything around it,” said Andy Hay, who delivered the first Dolby Atmos project to Netflix and helped develop the service’s standards. “On features, we let the story drive our decisions.” A particularly dynamic theatrical mix can be very challenging to translate into a near-field mix. And because distribution plans are often settled only after a film is completed, audio engineers may not even know what format they are mixing for.

And then there’s the hardware to deal with. Consumer electronics give users a number of proprietary options that “reduce loud sounds” or “enhance dialogue.” Sometimes they simply have silly marketing names like “VRX” or “TruVol,” but they are essentially “motion smoothing” for sound. Those options, which may or may not be enabled by default by the manufacturer, attempt to respond to the audio in real time, usually trying to catch and “turn down” loud sounds such as explosions or musical cues as they happen. Unfortunately, they usually react too late and end up squashing whatever sits underneath the noise.
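To see why a device-side leveler that reacts “as it happens” ends up ducking the wrong thing, consider this toy Python sketch. It is not any manufacturer’s actual algorithm; the block sizes, threshold, and gain values are invented for illustration. Because a real-time detector can only measure audio it has already heard, the gain it chooses always lags the signal by one block.

```python
def auto_leveler(blocks, threshold=0.5, reduced_gain=0.25):
    """Naive device-side 'volume leveler': each block's gain is chosen
    from the PREVIOUS block's peak, the way a lagging detector behaves.
    `blocks` is a list of per-block peak amplitudes in [0, 1]."""
    out, gain = [], 1.0
    for peak in blocks:
        out.append(round(peak * gain, 3))
        # the detector only reacts after the block has already played
        gain = reduced_gain if peak > threshold else 1.0
    return out

# per-block peaks: dialogue, explosion onset, dialogue right after, dialogue
blocks = [0.2, 1.0, 0.2, 0.2]
print(auto_leveler(blocks))  # [0.2, 1.0, 0.05, 0.2]
```

The explosion sails through at full volume (the detector hasn’t noticed it yet), and the gain reduction lands instead on the quiet dialogue that follows, which is exactly the complaint the mixers describe.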

This is not just a speaker problem. The room, the placement of appliances, and the white noise created by fans and air conditioners can all make dialogue difficult to hear. Near-field mixing has to account for this as well. “I listen very carefully and very quietly, because that way all the other factors, the air conditioner, the noise next door, all the other things that can get thrown into the room, become apparent. And if I’m missing something, we have to bring it up.”

The long road to bad sound

The audio issues we experience today are the result of decades of de-emphasizing clear dialogue in production. Bondelwich points to the move away from shooting on sound stages as the first nail in the coffin. Sound stages provide a controlled space for capturing clean dialogue, usually with a standard boom mic “eight feet above the performers.” The popularity of location shooting made this impossible, leading to the standardization of radio mics in the ’90s and 2000s, which presented their own problems. Clothing rustle, for example, is difficult to edit out and leads to a lot of ADR, which actors and directors alike hate because it detracts from the performance given on set.

In the early days of cinema, when most actors were trained for the stage, performers projected toward the microphone. Naturalistic acting, however, made room for a lot of mumbling and thrown-away lines in the name of realism. This could be managed if more time were put into rehearsals, where actors could practice the volume and clarity of their lines, but very few productions have that luxury.

One name floated by sound editors for this change is Christopher Nolan, who popularized the mumbling style of acting through his Batman films. The problem persisted throughout the Dark Knight trilogy, with Batman’s and Bane’s voices two constant complaints even among fans of the films. When Bane’s voice was completely ADRed after a disastrous IMAX preview, it sat on top of the rest of the film. “The worst mix was The Dark Knight Rises,” he said. “The studio realized no one could understand it, so at the last minute they remixed it and literally made it painfully loud. But volume was not the problem. [Tom Hardy] speaks through a mask, and he has a British accent. Making it louder fixed nothing. It just made the movie less enjoyable to sit through.”


Volume is an ongoing battle not only among sound editors but within the government. In 2010, Congress passed the Commercial Advertisement Loudness Mitigation (CALM) Act, directing the Federal Communications Commission to rein in the volume of advertising. Instead, the networks simply turned up the volume of TV programs and reduced their dynamic range, making dialogue harder to hear. “They try to push things so hard that everything gets louder,” said Clint Smith, an assistant professor of sound design at the School of Filmmaking at the University of North Carolina School of the Arts, who previously worked as a sound editor at Skywalker Ranch.
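The trade-off the networks exploited can be sketched in a few lines of Python: if every program is scaled to the same average level but must stay under a fixed peak ceiling, the mix with the crushed dynamic range sits comfortably loud while the dynamic one pushes toward clipping. The sample values and target level below are illustrative assumptions, not broadcast measurements.

```python
import math

def rms(samples):
    """Root-mean-square level, a crude stand-in for average loudness."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def normalize(samples, target_rms=0.3):
    """Scale a program so its average level hits a fixed target."""
    gain = target_rms / rms(samples)
    return [s * gain for s in samples]

dynamic   = [0.05, 0.05, 0.9, 0.05]  # quiet dialogue plus one loud hit
flattened = [0.4, 0.4, 0.5, 0.4]     # same program with its range crushed

print(round(max(normalize(dynamic)), 2))    # peak ~0.6: near the ceiling
print(round(max(normalize(flattened)), 2))  # peak ~0.35: loud with room to spare
```

At the same average level, the flattened program’s quiet moments come out far hotter, which is why compressing everything is the easy way to sound “louder” without tripping a loudness rule.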

Smith has been teaching audio engineering for five years and encourages his students to embrace subtitles and work with them, so they can serve the film’s narrative in more creative ways. “What does that look like ten years down the road, 20 years down the road, when subtitles become more common? Because I don’t see them going away,” Smith asks his students. “I was kind of curious about that… how subtitles can really be part of the filmmaking process. Don’t try to escape them.”

As unintelligible dialogue becomes more common, we will have no choice but to accept subtitles. But how far are we from studios and streamers not even bothering to mix the audio properly, assuming the audience will just read the dialogue? With subtitles an option on every streamer, “we’ll fix it in post” may soon become “they’ll fix it at home.”

You can feel the sound

There are some things we can do. For starters, buy a good sound system. Even more important is setting it up correctly. Many of the sound mixers interviewed recommended professional calibration, but noted that many soundbars today come with microphones for automatic room correction. No one was too convinced by soundbars, though.

“If you’re going to use a soundbar,” Bondelwich said, “get the best soundbar you can. And if you’re listening on earbuds or headphones, get good headphones. If it’s a noisy environment, buy over-the-ear headphones; they isolate the sound very well. And don’t use noise-canceling headphones, because they really degrade the audio quality.”

But more than anything, they emphasized that this is a selling point for movie theaters. If you want good sound, there is still a place with “sound you can feel.”

“It’s a bummer, because you want the theatrical experience,” Venture said. “People don’t go to theaters as much nowadays because everything is just streamed. And that’s how you want people to hear this stuff. You mix it so it can be heard big and loud.”
