Lip Lock

If you were looking for the kissing trope, see Lip-Lock Sun-Block.

"If a guy says, "my mother", he's gonna have two closed-mouth sounds, although if you're saying it in Chinese, it's going to be totally different. So, this is the thing you had to get around by substituting various words, and the Ms, Bs, and Ps, the closed-mouth sounds, were always difficult."
Ted Thomas, arguably the founder of dubbed foreign films in Hong Kong.note 

When something gets dubbed into a language it wasn't originally in, that's when the trouble starts for this trope. The actors have to lip-sync along with the old footage, which is tricky if they don't want to turn it into a Hong Kong Dub, because nobody likes hearing someone talk when their mouth isn't moving. Contrary to popular belief, the people who have to deal with lip lock are not the voice actors (who only act out what is written in the script) but the people who translate and write the dub script (sometimes the translator and the script writer are the same person, sometimes not). Script writers usually read lines out loud while writing to make sure that they fit the Mouth Flaps. Even so, the voice actors sometimes have to deliver lines in a way that doesn't sound natural to them, such as talking very slowly or very quickly; a line may be urgent, for example, but there isn't enough mouth movement to let them say it at speed. An adaptor thus has to translate the original version, make it make sense in the new language, and make it fit the mouth movements before the script is released to the voice actors. If a line is too short, something in the script must be changed to fit the mouth movements; occasionally, an actor's delivery that runs slightly short can instead be digitally edited to lengthen it so that it fits the mouth flap.

If the script writer doesn't pay attention to matching the lip flaps, the result is that the new actors are forced to speak at strange tempos that don't feel natural in order to better fit the lip flap.

In anime, the Japanese studios create the animation first, and then record the voices. This means that characters' mouths often just move up and down; however, the larger the animation budget, the more effort animation studios make to match the lip flaps to the dialogue. (Honey and Clover is a good example, with mouth flaps that match the lines perfectly.) With Western cartoons, the voices are recorded first and the animation is built around them. This means the mouths move in a manner much more consistent with the dialogue, but at the cost of making it more difficult to translate into another language. This difference can be seen very clearly in the English dub of AKIRA, a Japanese animated movie which, unusually, recorded the voices before the animation and took pains to make the mouths match the dialogue; the result is that the English version looks distinctly off. Ironically, live-action dub scripts are easier to write, because the natural movements of the mouth while speaking are a lot more vague than in cartoons.

Due to the nature of dubs, no script writer can avoid the curse of Lip Lock. A skilled writer can make it a lot less noticeable, but can't do away with it entirely.

There are ways of avoiding it, but all have their own disadvantages:

  • Editing the footage/reanimating the characters' mouths to match the new dialogue. The latter is very expensive in animation, so it's very rarely used, and simply editing the footage can result in conspicuous cuts, especially with a busy background. It's also never used in live-action dubbing, because it's well-nigh impossible, and using AI to do so at this point in time will inevitably result in Unintentional Uncanny Valley. In video games, this depends on how the game handles its animation. If it's handled by the game's programming directly, it can be relatively easy, since all you need to do is edit the facial animation instructions (and the timing of events in a scene, in case the translation is shorter or longer than the original line). However, if the animation is done by manual 3D animation, hand-drawn 2D animation or even motion capture, then it's effectively just like redoing any other animated production. Mouth flap editing is also typically frowned upon in the dubbing industry, in part because it can compromise the artistic integrity of a work for multiple reasons.note  In fact, it is considered so taboo that most anime rights-holders explicitly forbid its use when licensing out dubs. The practice is, however, used in abridged series like Team Four Star's Dragon Ball Z Abridged.
  • The writers getting creative and editing the translation so that the lines fit the Mouth Flaps better. This is pretty standard practice, to the point where some studios hire translators who just directly translate what is being said, and writers to adapt said dialogue in order to match the character's lips. The degree to which it's done varies but most try to find a compromise to match the lip flaps but also get the meaning across... Unless they are short on time or just don't care. That being said, if the translators are skilled enough, they'll take these situations as opportunities to add some Woolseyism to elevate their version.
    • Note though that this is an unavoidable product of every translation, be it dub or subtitle. The only exceptions are voiceovers, and only to a certain extent.
  • Filming for Easy Dub: when there's no lip flap to match, say because the character is talking off-screen or standing with their back to the camera, the script writer's work is a lot easier. They can use lines that wouldn't match if the character's mouth were visible, and timing is much less of an issue, as the new audio can run longer than the character speaks for in the original without the audience knowing. Forcing dialogue off-screen, however, isn't really doable; even when it can be done, it screams of cost-cutting whatever the intention, and 4Kids and 1990s American anime dubs can once again demonstrate that it doesn't always work, and why you shouldn't add dialogue to these scenes just because you can.
    • A related and largely more effective method is when the character has either No Mouth or a mask that entirely covers it up, so the translator just needs to match the length of time and body language. (Lord Zedd from Power Rangers is an example of this, and was voiced by a scriptwriter; Deadpool and Darth Vader are others.)
    • Also often used to great effect in movie dubs when the camera is focused on something written. Instead of just subtitling it, sometimes the line will be read out loud by an off-screen character, when it makes sense in context. If used well, you won't even notice the voiced line wasn't in the original version.
  • Completely ignoring the issue, in the hope that not being constrained to the lip flaps allows for more natural-sounding dialogue, and that this benefit outweighs the obvious downside of ugly, visible de-sync. It also has the advantage of being extremely easy and cheap.
  • Subtitles. Although these also have their own share of challenges that need to be accounted for, like giving the subtitles a length, duration and faithfulness that lets most audiences read them easily, so they miss as little visual information as possible, which can be more complicated than a lot of people give it credit for.


Examples:


    Anime 
  • 4Kids Entertainment's tendency to do this is parodied mercilessly in this Gag Dub of Higurashi: When They Cry, along with the Macekre'd dialogue and premise.
    Keiichi/Casey: GET out - my way! I'm - going - to - a - FEE-ey-staaaa!
    Luffy: You and your NAVY...are ruiningCoby'slifelongDREAM!
  • Besides the normal edits to the dialogue necessary for timing, the North American dub of the Animated Adaptation of Ranma ½ used an audio editing system called WordFitnote  to match the dialogue to the mouth flaps.
  • The 1986 movie dub of Fist of the North Star suffered from this a lot, though not as much as some of the other titles.
    Raoh: See... It's different now... I'm a king... and a king... must demand respect from everyone.
  • The dub of Bobobo-bo Bo-bobo actually engages in some Lampshade Hanging regarding this. In episode 53, Bobobo states "Now I'm going to tell all of ya where we're...going. I just hope by the time we arrive I can speak without weird pauses."
  • As mentioned above, The Ocean Group dub of Dragon Ball Z has quite a few examples, the first being the (in)famous scene where Vegeta, voiced by Brian Drummond, is asked by Nappa what his scouter says about Goku's growing power level, at which point he takes the scouter off and growls "It's over nine thousa-aaaaaand!" before crushing it in his hands. This has since become an internet meme. Another instance of this elongated delivery is when Vegeta has Gohan by the scruff of his neck and says "I'm going to crush you like a grape in the palm of my hand, you understa-aaaaaand?!" in an especially raspy tone.
    • Also parodied in the World Tournament arc of DBZ with the (inaccurate) reenactment of the Cell Games, where the lip flaps are deliberately completely mismatched.
    • Dragon Ball Z Kai drifts in and out of this trope, and while it will generally match the lip flaps, it will break from them for more natural dialogue and better performances. One of the reasons why Frieza got recast from Linda Young to Chris Ayres is that Linda was deemed to not be able to speak fast enough to match the lip flaps.
    • Dragon Ball Super runs into some lip flap issues, too. The translation takes more liberties than Kai, but is still relatively faithful. Usually it's fine, but it does lead into some lines being said in a very hurried fashion, or some being rewritten so that they take longer.
  • In Gankutsuou, this actually resulted in the somewhat trite "Wait and hope!" of the original The Count of Monte Cristo being rendered into a memorable catchphrase uttered at the end of each On the Next Week's Episode teaser: "Bide your time, and hold out hope!"
  • The dub of Death Note had a very Narm-inducing line (which was also both of the original creators' favorite part in the series): L whispering "I-wanted-to-tell-you, I'm L!", translated from a considerably shorter Japanese sentence that would have left flaps to spare had the line been spoken at a normal pace. Thankfully, Alessandro Juliani made it sound creepily intimate.
    • The original line was "Watashi wa Eru desu", meaning: "I am L". The issue here was if it had been translated straight, there would have been about five extra syllables and mouth flaps left over, so they had to add something to get it to fit.
  • The dubbing of Transformers: Energon was notably bad about this; whenever there was an additional syllable needed, the dub had the characters say various things that sound like they were made up on the spot, causing a constant stream of "what?", "uh?", etc. (or as tfwiki.net calls it, "The Pain Count"). Transformers: Cybertron suffered less from this and had a better dub script overall.
  • Zatch Bell! suffered from this immensely during the musical numbers, in which the dubbers would insist on having the VAs sing along to the lip flaps at the expense of any sense of harmony and timing.
  • In JoJo's Bizarre Adventure: Stardust Crusaders, when Mohammed Avdol returns from what seemed like death, we have this exchange in the original Japanese version.
    Jean Pierre Polnareff: Mohammed Avdol!
    Mohammed Avdol: Yes, I am!
    • Due to the enunciation of every transliterated syllable in both Avdol's name (モハメド・アヴドゥル, or "Mohamedo Avuduru") and Avdol's response (イエス、アイ アム! or "Iesu, ai amu!"), the dub did this:
      Polnareff: Are you really Mohammed Avdol?
      Avdol: YES, youhadbetterbeLIEVE I am!
  • The English dub of Revolutionary Girl Utena was a hallmark offender. The characters often had long, awkward pauses between lines, occasionally rushed and abnormal dialogue, and words that flat-out didn't sync up properly; on top of that, there are numerous instances (most notably whenever the Student Council rides the elevator to the top of the school) where entire lines of dialogue are just cut-and-pasted from episode to episode (though given the Stock Footage nature of the series, this actually somewhat played into the theme). They did manage to improve somewhat for the movie.
  • Spider Riders appears to feature this in spades, to the point where it takes several full episodes to get over the fact that most of the characters come off as having serious mental illnesses. Luckily, it seems like the actors (or the sound editors) get better and better as the series rolls on, so gratuitous pauses grow more and more rare. Strangely, some characters seem almost entirely exempt from this throughout the show.
    • "Will you be...the Inner World's savior or...its destruction?" The line is awkward enough with the strange pauses without being so horribly acted.
  • Heroic Age has a rather hilarious example in the third episode when Age says that he likes to paint then enunciates it, so the dub has to act like "paint" has three syllables ("pa-ain-tu").
  • Digimon Adventure usually uses only one or two voice actors to say something in crowd shots; the rest of the scene is completely silent. The Saban dub, on the other hand, usually got a handful of voice actors for those scenes, which made the crowd scenes sound more natural, though it also made the "important" lines harder to hear.
  • Mega Man's infamous death scene from MegaMan NT Warrior. As he dies, Rockman's original final words are "Ne...tto...kun...". This was a tricky line for the dubbers to adapt: first, because Netto's English name is the monosyllabic "Lan", and second, because the lips were carefully animated in that scene. The dub opted for "De...le...ted...", which just seems random and loses most of the emotion.
  • Pokémon: The Series got it easy. Since Pokémon only say their names or unintelligible noises (generally the former in the dub), all the dubbers had to do was play with how the Mons used the syllables of their names. Of course, some of them have the same name (Pikachu being the best-known example), which means no dubbing of their lines has to be done at all.
    • Notably, the 4Kids dub didn't make much of an attempt to make the lips match, which was even lampshaded in a dub-only scene of one episode. There WERE a few renamings here and there (Lorelei in the games became Prima in the dub) which were mostly done to avoid making a couple lines too much longer than the original.
    • Ash stands out as he's one of the very few characters who uses a surname regularly. His Japanese name is just "Satoshi" so he was given a surname in the dub because "Ash" is too short on its own.
    • A particularly Narm-ful example shows up early in the series, when Ash's response to all the Joys looking alike is "Yeah, it's a Joy-ful... world" but the pause, combined with the actress' unenthusiastic delivery, makes it sound like he's doing a Who Writes This Crap?! in his head.
    • The original Team Rocket motto was gradually modified as the lip flaps became tighter and faster, inspiring a running gag where the first two lines ("Prepare for trouble... / Make it double...") were extended in a different way every episode.
  • Speaking of 4Kids, Yu-Gi-Oh! sometimes had a hard time with one of its renamings: Jounouchi to Joey. Usually it wasn't a problem, since the dub was more of a rewrite than a translation, but whenever Yugi said "Jounouchi-kun" isolated (which happened a lot), 4Kids had to think of new, inventive ways of filling the flaps. Sometimes it worked, sometimes it didn't ("Be careful, Joey!").
  • 4Kids's Sonic X dub had a little problem with the term "Chaos Control" — in Japanese, its pronunciation is slightly longer (Kaosu Kontorōru), so they always needed to add something to it. When characters used it, it became "Chaos Control, now!" — a little awkward, but works fine. When it was mentioned in conversation, however, things could get a little weirder (check Sonic X's Dub Induced Plothole entry for more information).
  • Hetalia: Axis Powers has some problems with this. Specifically the very first scene in the very first episode, which consists of tons of characters all trying to have their Establishing Character Moment at the same time. No, they're literally all talking at the same time. Needless to say, Hilarity Ensues. Can you keep up?
  • Hungarian voice actress Melinda Major, who had voiced a character in Nana, cited this as her reason for quitting anime dubs. Trying to emote to anime mouth movements (or in her words, "triangles that sometimes move") was simply way too frustrating for her. She loves dubbing western animation, though.
  • In the English dub of Children Who Chase Lost Voices, "kisu," the Japanese-ised version of "kiss," is replaced with... itself. Asuna's English actress says "kisu" because the "oo" part of her mouth movements was obvious enough that "kiss" would have looked weird.
  • This is presumably why the Tower of God anime's English dub has wordier lines of dialogue than the English subtitles, while often adding nothing informative and sometimes straying further from the source material.

    Films — Animation 
  • Although Disney's dubs of Studio Ghibli films tend to be quite good about this, Castle in the Sky has James Van Der Beek's dialogue as Pazu not matching up with his vocals.
  • Cult classic AKIRA has had its fair share of this in both dubs, due to the characters having their lip movements animated specifically for the original Japanese audio. Most notable is the scene where Kaneda and Tetsuo are about to fight, in which the 1989 dub has Kaneda say in a rushed tone "OkayletssettlethisonceandforALL!" after Tetsuo yells his name. The 2001 dub changed this to "That's Mr. Kaneda to you, punk!" which fit the context and the mouth movements better and sounded more natural.
  • Can be noticed a few times in the otherwise excellent English dub of Sky Blue, where Korean sounds don't match well with their English equivalents.
  • Final Fantasy VII: Advent Children has this because it's all CGI with accurate mouth movements for Japanese. The original version repeatedly uses the Japanese onomatopœia "zuruzuru", which mimics the sound of dragging a heavy load. Since the entire film is about Cloud letting his guilt and feelings of powerlessness weigh him down ("I feel... lighter"), it makes sense in context. The English dub replaces it with the rather confusing "dilly dally shilly shally." "Dilly dally" means to dawdle, while "shilly shally" is presumably a mocking Word, Schmord!.
  • Zootopia was retitled Zootropolis in Europe. For the UK version, the actors re-recorded lines with the city's alternative name, but the characters' mouth movements remain the same.
  • The UK English dub of The Big Bad Fox and Other Tales has faced some criticism for this, with some of the dialogue sounding somewhat rushed to match the pace of the original French audio.
    • For example, in one scene where Fox tries to scare a chicken in an attempt to eat her and gets scolded by her for it, in the French version, she bluntly rebukes his excuse of being hungry with "I don't care", while in English, she says "Go back to the woods."

    Films — Live-Action 
  • The English dub of Godzilla Raids Again (as Gigantis the Fire Monster) went to extreme lengths to make the English dialogue match the mouth movements of the Japanese actors, which has the unfortunate side effect of making the actual content of the dialogue almost incomprehensible.
    • One example was a Japanese word translating to 'stupid fool'. As the lips still had to be in sync, it was replaced by 'banana oil', roaring-'20s slang for talking insincerely or speaking nonsense, which makes for a very nonsensical insult.
    • And Mothra vs. Godzilla involves the great line "Yeswealwayskeepourpromises." Apparently, the equivalent Japanese word is really really short.
      • Titra/Titan's dubbing in general goes to great lengths to make the lip sync match while somehow still sounding natural compared to Gigantis.
  • A rare example in an English film, Pazuzu speaking through Regan in The Exorcist was done by Mercedes McCambridge in post-production rather than Regan's actress Linda Blair, due to the former having a deeper, androgynous, and more demonic voice. As such, there are moments when the voice doesn't match up with Blair's lip movements.
  • Italian productions are often filmed in English, with American or British lead actors. Sometimes, the supporting actors deliver their lines in Italian, with the English dubbed in later. This is much less noticeable than if all of the dialogue is dubbed, but it can lead to awkward situations such as in Suspiria (1977), which includes a scene with an American, an Italian, and a German, each speaking in their own native language.
    • Barbara Steele complained that the production company's policy of dubbing all the voices meant that her own dialogue was dubbed over in Mario Bava's Black Sunday, even though she was speaking English.
  • The 1989 version of Cinderella, Aschenputtel, was German (as you might tell from the name), and when dubbed into English, the dialogue did not match the mouth movements at all.
  • Another rare English example occurred in Mallrats, where some scenes had to be re-dubbed to eliminate references to a dropped plot thread; some more ADR was also enacted for the version seen on free-to-air and basic cable TV.
  • Roger Ebert had tremendous fun pointing out how dubbing the French film Little Indian, Big City into English resulted in ridiculous, backward-constructed sentences and the passive voice popping up at random.
    "You have a son—you hear?"
  • During post-production on Star Trek: The Motion Picture, it was decided to avert Aliens Speaking English in Spock's introductory scene on Vulcan. Alien dialogue was created to fit the English lip movements, while the subtitles were rephrased to make the dubbing less obvious. A similar technique was used for a conversation between Spock and Saavik in Star Trek II: The Wrath of Khan.

    Live-Action TV 
  • Brazilian dubbers highlight two kinds of works as the most difficult in this regard: anime, for reasons the folder above should make clear; and Mexican telenovelas, as not only is Spanish too similar to Portuguese, but everyone acts so exaggeratedly that their mouths become particularly hard to sync.
  • Happens from time to time in Power Rangers, given they're adapting a Japanese series. Typically it happens via Head Bob (where what they're saying doesn't quite match the nodding in the helmets). For the entire first season of Mighty Morphin' Power Rangers, this happened to Rita and other various villains, as they only had the Japanese footage to dub for them (new actors wouldn't be hired to portray them until season 2).

    Video Games 
  • Calling is an extremely chronic offender. At the end of the opening cutscene, when Rin answers her phone, she clearly mouths "Moshi moshi," but in the dub it was changed to a drawn-out "Hellloooo?" Half of what the characters say doesn't even sync up with their mouth flaps.
  • Devil May Cry 5: Surprisingly for a Capcom game, especially one that's released while Street Fighter V was still making the rounds, the cutscenes in this game are animated for the English dub, making the Japanese voice actors the ones to get lip locked this time around.
  • Final Fantasy X occasionally suffered from this because a fixed, "rhubarb-rhubarb" mouth-flap loop was being used. Ignoring serendipitous cheats like Auron's face-obscuring collar letting all his lines sound natural and smooth, Yuna in particular fared badly, as her voice was already soft and shy. Thankfully, the sequel smartened up the lip sync.
    • But the sequel only re-synced the lip flaps on certain shots of certain scenes. 90% of the time, you're watching the "rhubarb-rhubarb" flaps (and, more worryingly, the glazed expressions on the characters' faces while doing it). However, Yuna's voice actress still spends less time trying to fit the mouth flaps exactly.
    • FFX actually uses technology to speed up the voice clips if they're a little too long. Most of the time this isn't really noticeable, but if I quote the words "WithYunabymyside" I'm sure someone will recognize it.
    • It's also blatantly obvious when in the original Japanese script a character - usually Yuna - just said "hai", because it's usually replaced with a quick/mangled "okay" (or, at least once for Tidus, "oi"). This is presumably because the "s" at the end of "yes" is a fricative, but makes for some awkward scenes. note 
  • Since Dirge of Cerberus uses the same cutscene engine as Final Fantasy X, this was inevitable. There are many examples, but a particularly Narm-ish one comes to mind: where in the Japanese version Vincent, witnessing Azul's demon form, says "Nanda", the English version gives us "Whatthehell?"
  • One pivotal scene in Final Fantasy XII is rather derailed by the ringing declaration that a certain individual "Is not thetypetotakeBASEREVENGE!"
  • Almost all early PlayStation games with voiced cutscenes suffered from this. With No Budget or disc space to modify the scenes, and no budget to hire experienced voice actors either, lip lock was either ignored (leading to a Hong Kong Dub) or the delivery was completely ruined. Examples include Zero's "WhatamIfightingFOOOOOOOOOOOOR!" in Mega Man X4 and Xenogears (all of it).
    • To give you some idea of how bad this was, the bar was set so low that the above-mentioned X4 was generally considered to have unusually good voice acting for a video game when it came out.
  • This otherwise surprisingly good Kingdom Hearts Fan Dub suffers from this at a few places.
    Ven: We're friends. Therefore, I wanted to ask you...something.
  • Dynasty Warriors suffers from this as well.
  • Jeanne d'Arc contains quite a few anime sequences, but nearly all the dialogue sounds ridiculously rushed. It's even worse than usual because most characters speak with French accents of varying strength.
  • In Ghostbusters: The Video Game, none of the characters' mouths ever sync with what they're saying. Considering that they managed to get almost the entire cast of the movies involved in the voice acting, it's really disappointing that the animators couldn't have done a better job.
  • Dissidia Final Fantasy suffers heavily from this. Compare everyone's dialogue in the cutscenes to the pre- and post-battle lines, where there is no lip sync to deal with. Every other cutscene has the characters talking with random punctuation ("I mustn't ruin. Everybody's hopes."), making some of the dialogue sound uncomfortably awkward. Cloud gets hit particularly badly; he ends up stopping mid-sentence several times with very obvious pauses, and practically everything he says is a variant on "I just... (rest of sentence)." Considering Advent Children had much more complex lip sync, it's surprising how bad Steve Burton sounds at some points in comparison. Believe it or not, this is also a problem with some scenes in the Japanese dub due to how the mouths are animated—the scene with Cosmos and Warrior of Light in Ultimecia's Castle is a good example.
    They did better with certain characters: Garland, Golbez, Exdeath, Gabranth, and Dark Knight Cecil all have closed helms, making it easier to make good-sounding sentences, due to the lack of lips to sync. Though there's still the scene with Dark Knight Cecil saying, "I must. (Minute-long pause.) Do this."
  • The dubbed dialogue in Kingdom Hearts games that have pre-rendered cutscenes tends to feel awkward compared to the games with the usual game-rendered ones, since redoing the lip-syncing for the former is more expensive. And yet, they did it in some rare instances in 358/2 Days anyway: namely, when DiZ says "She?" (referring to Xion), which in Japanese had the three-syllable lip movements of "Kanojo?" For Kingdom Hearts II they didn't bother to change the lip movements for the flashbacks to the first game.
  • The English dub of Sakura Wars: So Long, My Love makes no attempt to match the voices to the Mouth Flaps outside of animated cutscenes. This makes the dialogue more natural at the expense of agreement between the visuals and spoken dialogue.
  • Digital Devil Saga. Even though the English voice actors are very experienced in voicing foreign animation, the dialogue has as many awkward pauses and speed variations as the other examples on this page; the care put into matching the lips to the Japanese dialogue certainly doesn't help. It's less noticeable when the characters are in their demon forms, but the rest of the time.... yeah.
  • Onimusha, oh so, so much. Especially in Dawn of Dreams.
  • Infinite Undiscovery suffered from this trope. There are multiple scenes where characters are talking to each other, yet their lips rarely, and in some cases never, move at all, giving the impression that everyone is either speaking telepathically or a skilled ventriloquist.
  • StarCraft and Warcraft III have "cinematics" (non-pre-rendered in-game cutscenes) that suffer from this. While the in-game models you're looking at have no speech animations, the "portraits" of the characters displayed at the bottom of the screen do. Due to engine limitations, the talking animations are all prerecorded; the animation and the audio playback are activated by separate triggers, and each character has only one speaking animation. You'll see the same generic lip movements whether the characters are delivering highly emotional dialogue or making funny pop culture references because you clicked them too many times.
  • StarCraft II, surprisingly (because, at the time, this wasn't considered too big a deal), avoids this due to its real-time lip-syncing system (similar to the ones found in most modern RPGs). Even the protoss (who lack lips, and indeed mouths in general) have unique animations for each of their lines; it would not be an exaggeration to say each unit portrait in StarCraft II has more animation and attention to detail than a unit model in Warcraft III.
  • Star Fox 64. Perhaps it's due to hardware limitations or Nintendo just being Nintendo, but the egregious mouth flapping of the communicating characters doesn't come remotely close to syncing with either the Japanese or the English dialogue.
  • Sly Cooper, being one of the very rare series fully translated into loads and loads of other languages, inevitably runs into this problem. Just play any of the games in any non-English mode and you're guaranteed to find at least one or two sentences that run 10 seconds shorter or longer than the original dialogue.
  • Sonic the Hedgehog (2006) has a major failure with its lip syncing. While Sonic Adventure and Sonic Adventure 2 suffered from some pretty bad Lip Lock as well, the series had knuckled down and improved the voice acting by the time Sonic Heroes came along. Then came Sonic 2006, which makes it apparent just how little quality control the game had. Characters either talk while their mouths don't move at all or have their mouths move before they even start to speak.
  • Cross Ange: Thanks to the rather primitive CGI used for its animations, the characters just flap their lips with zero attention paid to actually syncing them to the dialogue.
  • Yakuza was originally dubbed in English, and they attempted to match the dialogue with the characters' mouth movements. Keyword: attempted. This brought with it the problems of bizarrely-spaced and enunciated dialogue ("Go! Kill this arrogant mo-ther-fuck-ker!"), among other things. When the game received a Video Game Remake ten years later, there was no English dub in sight, and one wouldn't be offered in the Yakuza series again until Yakuza: Like a Dragon, fifteen years on (which put in extra work to avoid this trope entirely).
  • Fist of the North Star: Lost Paradise: Kenshiro's iconic catchphrase is "Omae wa mou shindeiru" - "You're already dead." For the English dub, the line is expanded to account for the extra syllables: "It's too late for you; you're already dead."

    Western Animation 
  • For varying reasons (Limited and Recycled Animation, faces that don't re-draw well without a full reanimation, the possibility that the animations were done before the voice lines were recorded, etc.) Puppy in My Pocket: Adventures in Pocketville has absolutely horrific syncing. Even in the Italian dub (funnily enough, although the show comes from Italy, the English voice track was recorded first and the Italian one after, and even there the lip-syncing still doesn't match up). Characters often express words or even full sentences with barely any facial movement. The show's villainous kitten is a particularly noteworthy example, as she can apparently deliver virtually all of her lines with a clenched jaw.
  • The earliest Popeye cartoons at times appeared to be animated before being voiced (the opposite of what most cartoons do, and what the series itself would do later). As a result, the actors occasionally had to use strange phrases, or deliver ordinary phrases in strange ways, to fit the lip animations. More often, however, they ignored this completely, leading to the famous practice of having the characters continue to talk (usually with aside comments or under their breath) when their lips are obviously not moving.
  • Miraculous Ladybug oddly reverses the usual situation: the lip syncing was done to the English voice tracks, despite the series being produced in France, so French viewers are the ones who see the out-of-sync mouths.
  • Despite being a British show, the CGI era of Thomas & Friends is animated to the American dub, with the UK dub produced afterward. The problem is that certain words don't translate well to the existing lip flaps (particularly if they have more or fewer syllables), like "Sir Topham Hatt" versus "The Fat Controller". The only ways to work around this are to omit "The" (e.g. Jumping Jobi Wood), have the characters speak very fast, or have them speak offscreen.
  • Winx Club has this bad across all the dubs. Characters will go off on random tangents mid-sentence to match the Italian enunciation, or state the obvious when their intended line turns out to be too short (e.g. the Cinelume dub has Flora and Musa's reactions to Layla's Enchantix transformation be: "Wow, look at her! She's reached her last fairy form! Yes, this is her highest transformation!").

Exceptions:

    Film 
  • In countries with a long dubbing tradition, such as Spain, Germany, France or the Latin American countries, translators, script adapters and voice actors are particularly well trained and experienced in dubbing, so they know how to deal with these situations efficiently most of the time. Especially when English is the original language, since most of their imported media comes from the US.
  • The Chinese releases of the Kung Fu Panda movies are reanimated to match the Mandarin dialogue.
  • Disney has handled the dubbing of some anime imports, such as the films of Miyazaki. They tend to be meticulous in reworking the dialog to fit the lips and the meaning of the original script, even doing several takes in dubbing to see what works. Getting good voice actors doesn't hurt. Nor does the fact that the original creator told Disney in no uncertain terms that gratuitous changes to the movies were to be avoided.
    • Neil Gaiman, who wrote the English script for Princess Mononoke, said in an interview, "People have been asking if we reanimated it. There are two schools of thought coming out from the film. School of Thought #1 is that we reanimated the mouth movements. School #2 is that they must have made two different versions at the same time."
    • In the special features of Howl's Moving Castle, there is a clip of Christian Bale desperately trying to speak his line fast enough to match the animation, then commenting on how it's a lot of words. The script editors change it on the spot.
  • The translator who worked with Sergio Leone on the Dollars Trilogy was given basically free rein to rewrite dialogue to make it fit better with the Italian and Spanish lip movements, to the movies' infinite benefit (consider Leone's abuse of extreme closeups and now consider what might have been). He describes in the DVD special features of The Good, the Bad and the Ugly how he spent a whole day trying to figure out how to translate the band conductor's line "più forte" ("louder"), eventually opting for "more feeling".

    Video Games 
  • Half-Life 2 and other games based on Valve Software's Source Engine have a phoneme editor built in. It takes the written script and the recorded dialogue, and makes a near-perfect set of facial animation instructions for the character. So, changing languages, at least for languages built on the Latin alphabet, is as painless as feeding the game new scripts and new sounds. Looking at Team Fortress 2 voice clips in the Source Filmmaker shows that they accomplish this by attaching the spoken text to each voice clip, so the models' mouths match the letters and words.
  • Shadow Hearts: Covenant eliminates liplock in cutscenes by having the characters ad-lib or grumble for or in-between certain lines. The effect makes the dub sound much more natural than in most video games. Despite this, many lines still seem to come just before or after the mouths move.
  • Despite most of the Final Fantasy English dubs having to cope with severe cases of Lip Lock as stated above, Final Fantasy XIII completely averts the issue: Square Enix went through the effort of reanimating the lip movements, both in-game and for every cutscene, so as to fit the English dialogue. Whilst re-syncing in-game animations isn't particularly uncommon, doing it for pre-rendered CG cutscenes certainly is. This fortunately results in what can be considered a rather good dub.
    • Another Square Enix game, The Last Remnant, was also given this treatment in its cutscenes. There are exceptions, as some of them use generic animations, which also include mouth movements and facial expressions. Unfortunately, switching to Japanese voices does not change lip movement, resulting in occasional parts of the dialogue not being properly synced in any language.
    • Even before those games, Square Enix had already been averting the trope. The already-mentioned Kingdom Hearts games for the PS2 (except for the Chain of Memories remake) got this treatment, as did the PSP game Birth by Sleep. Crisis Core, which has a cutscene engine reminiscent of Kingdom Hearts', also redid the lip movements for the English version.
    • They first encountered this problem with Final Fantasy X. The localization team left the original Japanese lip movements intact, which made it very difficult for the English-speaking actors (Hedy Burress in particular) to read the lines in a cadence that sounded natural (Yuna's labored pauses became a minor meme). Final Fantasy X-2, however, scrapped the original mouth movements completely and re-animated the mouths for the American dub, freeing the actors to finally speak their lines like they weren't choking on their own tongues.
    • Final Fantasy XIV has properly lip-synced dialog in its voiced cutscenes for all four of the game's supported languages (Japanese, English, French and German), which allows the voice actors to speak more naturally and expressively like in Final Fantasy X-2.
  • Wing Commander has a built-in cut-scene lip-syncer that works according to the subtitled script text. It even takes into account and lip-syncs the name that you choose for the protagonist! This coming from a game that didn't even have digital speech.
  • Popful Mail actually reanimated the lip sync for the in-game dialogue sequences for Working Designs' English adaptation on the fly, based on the actual spoken dialogue - though the developers themselves admitted in the manual it occasionally results in a Hong Kong Dub.
  • Trinity Universe and Hyperdimension Neptunia also use separate lip animations for their English- and Japanese-language voice tracks. In the latter game, cutscenes that only have Japanese-language voice tracks (such as the ones for DLC characters Red and 5pb) don't use any lip movements when playing the English-language track.
  • Team Fortress 2: Valve "re-shot" the "Meet the [Class]" videos when creating the other language versions and lip-synced them perfectly down to the last syllable. This is due to the Source Filmmaker allowing the animator to type in accompanying text for each voice recording, so that the mouths match the words accurately.
  • The 3D cutscenes of Catherine and Persona 5 had the lip movements reanimated for their English dubs, though the 2D anime-styled cutscenes were not so easily edited.
  • The briefings in Star Fox: Assault are edited to match the lip flaps to the dubbed voices, though this results in the characters being less expressive than in Japanese.
  • Devil May Cry 4's character facial animations were motion-captured from the voice actors themselves as they spoke their lines, and the motion capture was redone for every language. Talk about dedication.
  • In a promotional short for the game Ninjala, which details the game's backstory, the animators went out of their way to reanimate the lip-syncing for both its English and Japanese dubs.
  • In the seventh Like a Dragon game, Yakuza: Like a Dragon, the characters' mouth movements are animated completely differently depending on whether the game is set to the English dub or the original Japanese audio, ensuring that they match regardless of audio setting. The same is true for the Gaiden Game Like a Dragon Gaiden: The Man Who Erased His Name.
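    The text-driven approach described in the Source Engine and Source Filmmaker entries above can be sketched in miniature: map each sound class to a mouth shape (a "viseme"), then regenerate the shapes whenever a new script is fed in. This is only an illustrative toy, not Valve's actual pipeline; the phoneme table and viseme names below are invented for the example.

    ```python
    # Toy sketch of text-driven lip-sync. Real systems work from
    # phonemes extracted from audio; here we crudely treat each
    # letter as a phoneme. All names are made up for illustration.

    # Map rough sound classes to mouth shapes ("visemes").
    PHONEME_TO_VISEME = {
        "m": "closed", "b": "closed", "p": "closed",   # closed-mouth sounds
        "a": "open_wide", "o": "round", "u": "round",
        "e": "mid_open", "i": "narrow",
        "f": "teeth_on_lip", "v": "teeth_on_lip",
    }
    DEFAULT_VISEME = "rest"

    def visemes_for_line(line: str) -> list[str]:
        """Produce one mouth shape per letter of the script line."""
        frames = []
        for ch in line.lower():
            if ch.isalpha():
                frames.append(PHONEME_TO_VISEME.get(ch, DEFAULT_VISEME))
        return frames

    # Swapping languages just means feeding in a new script line:
    print(visemes_for_line("mama"))
    # ['closed', 'open_wide', 'closed', 'open_wide']
    ```

    This also illustrates the quote at the top of the page: "my mother" produces a run of closed-mouth shapes that the dub script has to account for.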

    Web Original 
  • Ironically enough, Dragon Ball Z Abridged has better lip-syncing than the anime it is an Affectionate Parody of, as a result of Team Four Star editing in extra Mouth Flaps where necessary.
  • Epic Rap Battles of History has Clint Eastwood mock Bruce Lee over the state of his Hong Kong movies dubbed into English with this line:
    Clint: You're in the gym too much, Ringo, perfecting your kicks / You should spend more time matching your voice up to your lips
  • Bad Lip Reading makes something of an art form of dubbing over music videos, political broadcasts, NFL games, movie scenes and more in this manner, changing the dialogue to alternate sentences that use similar or identical mouth movements, deliberately forsaking sense in the process.

    Western Animation 
  • Like the Thomas example above, The Mr. Men Show is done in America, but the English dub for outside the country is done in the UK. While for the most part it's almost flawless, there are occasional flubs.
    • The British dub of Mr. Grumpy's dialogue matches, despite the cultural translation change. note  Except when Mr. Grumpy growls a remark at the beginning, which doesn't match his mouth.
    • Some of the dialogue from "Pirates" is like this, such as Miss Sunshine's mouth at the beginning when she talks to Mr. Nervous, and the mouth flaps for "class" not matching up with "lessons".
    • Averted with Mr. Fussy when he is called Persnickety/Pernickety in the first season, as the two words are pronounced very similarly. It also helps that his mouth is hidden behind a mustache, making it easy for any country to dub over.

Spoofs:

    Anime 
  • Spoofed to hell and back with the dub version of Ghost Stories. This dub manages the impressive feat of spoofing all four listed variants of this trope. A good example of the Speed Racer variant is this:
    Leo: (running at the camera in a panic) Oh-my-god-what-the-hell-is-happening-here-these-are-the-fastest-lip-flaps-I've-ever-had-to-sync!!!

    Films — Live-Action 
  • Ignored and spoofed in Kung Pow! Enter the Fist, where the writer/director/main actor went out of his way to write joke lines for the actors to speak so he could dub over them later. For instance: The main character says calmly 'I implore you to reconsider', even though it's very obvious on the screen that he's shouting. A bonus audio track on the DVD reveals that the line being dubbed over was "I'M SOMEBODY'S MOMMY!!" In the segments from Tiger & Crane Fists that feature actual Chinese being dubbed, it follows the Rule of Funny, with grammatically awkward sentences to fit the lip flaps ("People say I do things that are not... correct-to-do."), sentences filled out with Verbal Tics to fit the length ("Wi-ooh"), sentences that don't even come close ( "I don't know."), and dialog that just obviously doesn't fit the mouth movement at all ("THAT'S A LOT OF NUTS!!!").
  • The Japanese scientist in Attack of the Killer Tomatoes! talks this way as a parody of disaster and monster movie tropes.
  • Sorry to Bother You: The "white voice" that various black characters use throughout the movie is performed by white actors talking in a nasally, geeky voice, with the on-screen actors moving their mouths to vaguely mimic the words.

    Web Original 
  • Chapo Trap House once acted out a bizarre, incoherent Twitter slanging match between frequent Take That! targets The Baseball Crank and Joy Ann Reid "as if it was an anime". Joy Ann Reid's character is played with the Speed Racer style ultra-rapid speech with Accent Upon The Wrong Syllable affectations, and The Baseball Crank does a lot of the kind of gurgly laughing that would be used to pad out extra flaps. Bonus points for getting this effect across in audio-only form.
