Lip Lock
"If a guy says, 'my mother', he's gonna have two closed-mouth sounds, although if you're saying it in Chinese, it's going to be totally different. So, this is the thing you had to get around by substituting various words, and the Ms, Bs, and Ps, the closed-mouth sounds, were always difficult."
Ted Thomas, arguably the founder of dubbed foreign films in Hong Kong.

This is not a Kissing Trope. Just getting that out of the way.

When something gets dubbed into a language it wasn't originally in, that's when the trouble starts for this trope. The actors have to lip-sync to the old footage, which is tricky if they don't want to turn it into a Hong Kong Dub. Contrary to popular belief, the people who have to deal with Lip Lock are not the voice actors (who only act out what is written in the script) but the people who translate and write the dub script (sometimes the translator and the script writer are the same person, sometimes not). Script writers usually read lines out loud while writing to make sure they fit the mouth flaps.

If the script writer doesn't take care to match the lip flaps, the new actors are forced to speak at strange tempos to fit them. How badly this bites depends on how the original was made:

In anime, the Japanese studios create the animation first and then record the voices. This means that characters' mouths often just move up and down; however, the larger the animation budget, the more effort studios make to match the lip flaps to the dialogue (Honey and Clover is a good example, with mouth flaps that match the lines perfectly). With American cartoons, the voices are recorded first and the animation is built around them. This means the mouths move in a manner much more consistent with the dialogue, at the cost of making it more difficult to translate into another language. This difference can be seen very clearly in the English dub of AKIRA, a Japanese animated movie which, unusually, recorded the voices before the animation and took pains to make the mouth flaps match the dialogue. The result is that the English version looks distinctly off. Ironically, live-action dub scripts are easier to write, because the natural movements of the mouth while speaking are a lot vaguer than in cartoons.

Due to the nature of dubs, no script writer can avoid the curse of Lip Lock. A skilled writer can make it a lot less noticeable, but can't do away with it entirely.

There are ways of avoiding it, but all have their own disadvantages:

  • Editing the footage so that the Mouth Flaps match the new dialogue. This is expensive in animation, so it's very rarely used, and it's never used in live-action dubbing, because there it's well-nigh impossible. It's a lot cheaper in video games, where all you need to do is edit the facial animation instructions (and the timing of events in a scene, in case the translation is shorter or longer than the original line), which can be handled by software, so it tends to be used more often there. Doing it for in-engine scenes is simple; redoing a pre-rendered cutscene is effectively just like redoing an animated production. It is, however, used in abridged series like Team Four Star's Dragon Ball Z Abridged.
  • The translator/writer getting creative with the translation so that the lines fit the Mouth Flaps better. This is what usually happens, sometimes leading to meaning being lost, or useless fluff being gained. The degree to which it's done varies but most translators try to find a compromise to match the lip flaps but also get the meaning across. (Unless they just don't care.)
    • Note though that this is an unavoidable product of every translation, be it dub or subtitle. The only exceptions are voiceovers, and only to a certain extent.
  • Filming For Easy Dub: when there's no lip flap to mouth to, say because the character is talking off-screen or standing with his back to the camera, the script writer's work is a lot easier. Forcing dialogue off-screen, however, isn't really an option; even when it can be done, it smacks of cost-cutting, whatever the intention, and 4Kids and 1990s American anime dubs demonstrate that it doesn't always work and why you shouldn't add dialogue to these scenes just because you can.
    • A related, and generally more effective, method is when the character has No Mouth, so the translator only needs to match the length of time. (Lord Zedd from Power Rangers is an example of this, and was voiced by a scriptwriter.)
    • Also often used to great effect in movie dubs when the camera is focused on something written. Instead of just subtitling it, sometimes the line will be voiced, in translated form, by a character off-screen, when it makes sense in context. If done well, you won't even notice the voice-over wasn't in the original version.
  • Subtitles.
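The event-retiming trick mentioned for video games above can be sketched in a few lines. This is a hypothetical toy, not any real engine's API: it rescales a scene's event timestamps when the dubbed line runs longer or shorter than the original audio.

```python
# Toy sketch of cutscene event retiming (hypothetical; no real
# engine exposes exactly this). When the dubbed line runs longer or
# shorter than the original, rescale the scene's event times so they
# stay aligned with the new audio.

def retime_events(events, original_len, dubbed_len):
    """events: list of (seconds, event_name); returns a rescaled copy."""
    scale = dubbed_len / original_len
    return [(round(t * scale, 3), name) for t, name in events]

# A three-second scene whose dubbed line runs 3.6 seconds instead.
events = [(0.0, "start_flap"), (1.5, "head_turn"), (3.0, "stop_flap")]
retimed = retime_events(events, original_len=3.0, dubbed_len=3.6)
```

Real pipelines would retime individual mouth-flap keyframes the same way, rather than the whole scene uniformly.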

Considering the flak companies tend to get for playing with the original footage, it's usually best to leave the footage alone and rely on good translators and script writers to keep the worst of the effects at bay. Of course, this requires the companies to actually find good script writers. That's the hard part.

Examples:

Anime
  • 4Kids' tendency to do this is parodied mercilessly in this Gag Dub of Higurashi no Naku Koro ni, along with the Macekre'd dialogue and premise.
    Keiichi Casey: GET out - my way! I'm - going - to - a - FEE-ey-staaaa!
    Luffy: You and your NAVY...are ruiningCoby'slifelongDREAM!
  • Besides the normal edits to the dialogue necessary for timing, the North American dub of Ranma ½ used a video editing system (WordFit) to tweak the mouth flaps.
  • The 1986 movie dub of Fist of the North Star suffered from this a lot, though not as much as some of the other titles.
    Raoh: See... It's different now... I'm a king... and a king... must demand respect from everyone.
  • The dub of Bobobo-bo Bo-bobo actually engages in some Lampshade Hanging regarding this. In episode 53, Bobobo states "Now I'm going to tell all of ya where we're...going. I just hope by the time we arrive I can speak without weird pauses."
  • As mentioned above, The Ocean Group dub of Dragon Ball Z has quite a few examples, the first being the (in)famous scene where Vegeta, voiced by Brian Drummond, is asked by Nappa what his scouter says about Goku's growing power level, at which point he takes the scouter off and growls "It's over nine thousa-aaaaaand!" before crushing it in his hands. This has since become an internet meme. Another instance of this elongated delivery is when Vegeta has Gohan by the scruff of his neck and says "I'm going to crush you like a grape in the palm of my hand, you understa-aaaaaand?!" in an especially raspy tone.
  • In Gankutsuou, this actually resulted in the somewhat trite "Wait and hope!" of the original The Count of Monte Cristo being rendered into a memorable Catch Phrase uttered at the end of each On the Next Week's Episode teaser: "Bide your time, and hold out hope!"
  • The dub of Death Note had the very Narm-inducing line (which was also both of the original creators' favorite part in the series) of L whispering "I-wanted-to-tell-you, I'm L!", stretched out from a considerably shorter Japanese sentence that would have ended long before the lip flaps did if spoken at a normal pace. Thankfully, Alessandro Juliani made it sound creepily intimate.
    • The original line was "Watashi wa Eru desu", meaning "I am L". The issue was that a straight translation would have left about five extra syllables' worth of mouth flaps, so they had to add something to fill them.
  • The dubbing of Transformers Energon was notably bad about this; whenever there was an additional syllable needed, the dub had the characters say various things that sound like they were made up on the spot causing a constant stream of "what?", "uh?", etc. (or as tfwiki.net calls it, "The Pain Count"). Transformers Cybertron suffered less from this and had a better dub script overall.
  • Gash Bell suffered from this immensely during the musical numbers, in which the dubbers would insist on having the VAs sing along to the lip flaps at the expense of any sense of harmony and timing.
  • Spider Riders (of which an actual Japanese version may or may not exist) appears to feature this in spades, to the point where it takes several full episodes to get over the fact that most of the characters come off as having serious mental illnesses. Luckily, it seems like the actors (or the sound editors) get better and better as the series rolls on, so gratuitous pauses grow more and more rare. Strangely, some characters seem almost entirely exempt from this throughout the show.
    • "Will you be...the Inner World's savior or...its destruction?" The line is awkward enough with the strange pauses without being so horribly acted.
  • Heroic Age has a rather hilarious example in the third episode when Age says that he likes to paint then enunciates it, so the dub has to act like "paint" has three syllables ("pa-ain-tu").
  • Digimon Adventure usually uses only one or two voice actors to say something in crowd shots; the rest of the scene is completely silent. The Saban dub, on the other hand, usually got a handful of voice actors for those scenes, which made the crowds sound more natural, though it also made the "important" lines harder to hear.
  • The infamous death scene from Mega Man NT Warrior. As he dies, Rockman's original final words are "Ne...tto...kun...". This was a tricky line to adapt: first, because Netto's English name is the monosyllabic "Lan", and second, because the lips were carefully animated in that scene. The dub opted for "De...le...ted...", which just seems random and loses most of the emotion.
  • Pokémon got it easy. Since Pokemon only say their names or unintelligible noises (generally the former in the dub), all they (the dubbers) had to do was play with how they (the Mons) used the syllables of their names. Of course, some of them have the same name (Pikachu being the best-known example), which means no dubbing of their lines has to be done at all.
    • Notably, the 4Kids dub didn't make much of an attempt to make the lips match, which was even lampshaded in a dub-only scene of one episode. There WERE a few renamings here and there (Lorelei in the games became Prima in the dub) which were mostly done to avoid making a couple lines too much longer than the original.
    • A particularly Narm-ful example shows up early in the series, when Ash's response to all the Joys looking alike is "Yeah, it's a Joy-ful... world" but the pause, combined with the actress' unenthusiastic delivery, makes it sound like he's doing a Who Writes This Crap?! in his head.
  • Speaking of 4Kids, Yu-Gi-Oh! sometimes had a hard time with one of its renamings: Jounouchi to Joey. Usually it wasn't a problem, since the dub was more of a rewrite than a translation, but whenever Yugi said "Jounouchi-kun" isolated (which happened a lot), 4Kids had to think of new, inventive ways of filling the flaps. Sometimes it worked, sometimes it didn't ("Be careful, Joey!").
  • 4Kids's Sonic X dub had a little problem with the term "Chaos Control" — in Japanese, its pronunciation is slightly longer (Kaosu Kontorōru), so they always needed to add something to it. When characters used it, it became "Chaos Control, now!" — a little awkward, but works fine. When it was mentioned in conversation, however, things could get a little weirder (check Sonic X's Dub Induced Plothole entry for more information).
  • Axis Powers Hetalia has some problems with this. Specifically the very first scene in the very first episode, which consists of Loads and Loads of Characters all trying to have their Establishing Character Moment at the same time. No, they're literally all talking at the same time. Needless to say, Hilarity Ensues. Can you keep up?
  • Hungarian voice actress Melinda Major, who had voiced a character in Nana, cited this as her reason for quitting anime dubs. Trying to emote to anime mouth movements (or in her words, "triangles that sometimes move") was simply way too frustrating for her. She loves dubbing western animation, though.
  • In the English dub of Children Who Chase Lost Voices, "kisu," the Japanese-ised version of "kiss," is replaced with... itself. Asuna's English actress says "kisu" because the "oo" part of her mouth movements was obvious enough that "kiss" would have looked weird.
  • Although Disney's dubs of Studio Ghibli films tend to be quite good about this (see the entry under "Exceptions" below), it was quite noticeable in Castle in the Sky that James Van Der Beek was not experienced at anime dubbing. His character came out sounding a little like the intentionally clipped, rapid speech of the Canadians in South Park.
  • Cult classic AKIRA has had its fair share of this in both dubs, due to the characters having their lip movements animated specifically for the original Japanese audio. Most notable is the scene where Kaneda and Tetsuo are about to fight, in which the 1989 dub has Kaneda say in a rushed tone "OkayletssettlethisonceandforALL!" after Tetsuo yells his name. The 2001 dub changed this to "That's Mr. Kaneda to you, punk!" which fit the context and the mouth movements better and sounded more natural.

Film
  • Ignored and spoofed in Kung Pow!: Enter The Fist, where the writer/director/main actor went out of his way to write joke lines for the actors to speak so he could dub over them later. For instance: The main character says calmly 'I implore you to reconsider', even though it's very obvious on the screen that he's shouting.
    • A bonus audio track on the DVD reveals that the line being dubbed over was "I'M SOMEBODY'S MOMMY!!"
    • In the segments of actual foreign language being dubbed, it follows the Rule of Funny, with grammatically awkward sentences to fit lip flap ("People say I do things that are not... correct-to-do."), sentences filled out with Verbal Tics to fit the length ("Wi-ooh"), sentences that don't even come close (<several seconds of silent lip flap> "I don't know."), and dialog that just obviously doesn't fit the mouth movement at all ("THAT'S A LOT OF NUTS!!!").
  • The English dub of Godzilla Raids Again (as Gigantis the Fire Monster) went to extreme lengths to make the English dialogue match the mouth movements of the Japanese actors, which has the unfortunate side effect of making the actual content of the dialogue almost incomprehensible.
    • An example of which was a Japanese word translating to 'stupid fool'. As the lips still had to be in sync, it was replaced by 'banana oil', which makes for a very nonsensical insult.
      • "Banana oil" was a slang back in The Roaring Twenties, roughly equivalent to "that's baloney" or in other words, "bullshit." Makes sense in the context of the scene, though not really the setting.
    • And Mothra vs. Godzilla involves the great line "Yeswealwayskeepourpromises." Apparently, the equivalent Japanese word is really really short.
  • Can be noticed a few times in the otherwise excellent English dub of Sky Blue, where Korean sounds don't match well with their English equivalents.
  • A rare example in an English film, Pazuzu speaking through Regan in The Exorcist was done by Mercedes McCambridge in post-production rather than Regan's actress Linda Blair, due to the former having a deeper, androgynous, and more demonic voice. As such, there are moments when the voice doesn't match up with Blair's lip movements.
  • Final Fantasy VII: Advent Children has this because it's all CGI with accurate mouth movements for Japanese. We quote: "Dilly dally shilly shally."
  • Italian productions are often filmed in English, with American or British lead actors. Sometimes, the supporting actors deliver their lines in Italian, with the English dubbed in later. This is much less noticeable than if all of the dialogue is dubbed, but it can lead to awkward situations such as in Suspiria, which includes a scene with an American, an Italian, and a German, each speaking in their own native language.
    • Barbara Steele complained that the production company's policy of dubbing all the voices meant that her own dialogue was dubbed over in Mario Bava's Black Sunday, even though she was speaking English.
  • The 1989 version of Cinderella, Aschenputtel, was (as the name suggests) German, and when dubbed into English, the dialogue did not match the mouth movements at all.

Video Games
  • Calling is a chronic offender. At the end of the opening cutscene, when Rin answers her phone, she clearly mouths "Moshi moshi", but in the dub it was changed to a drawn-out "Hellloooo?" Half of what the characters say doesn't even sync up with their mouth flaps.
  • Final Fantasy X occasionally suffered from this because a fixed, 'rhubarb-rhubarb' mouth-flap loop was being used. Ignoring serendipitous cheats like Auron's face-obscuring collar letting all his lines sound natural and smooth, Yuna in particular was hit badly, since her voice was already soft and shy. Thankfully, the sequel smartened up the lip sync.
    • But the sequel only re-synced the lip flaps on certain shots of certain scenes. 90% of the time you're watching the "rhubarb-rhubarb" flaps (and, more worryingly, the glazed expressions on the characters' faces). Yuna's voice actress does, however, spend less time trying to fit the lip flaps exactly.
    • FFX actually uses technology to speed up the voice clips if they're a little too long. Most of the time this isn't really noticeable, but if I quote the words "WithYunabymyside" I'm sure someone will recognize it.
    • It's also blatantly obvious when in the original Japanese script a character - usually Yuna - just said "hai", because it's usually replaced with a quick/mangled "okay" (or, at least once for Tidus, "oi"). This is presumably because the "s" at the end of "yes" is a fricative, but makes for some awkward scenes.
  • Since Dirge of Cerberus uses the same cutscene engine as Final Fantasy X, this was inevitable. There are many examples, but a particularly Narm-ish one comes to mind: when, in the Japanese version, Vincent witnesses Azul's demon form and says "Nanda", the English version gives us "Whatthehell?"
  • One pivotal scene in Final Fantasy XII is rather derailed by the ringing declaration that a certain individual "Is not thetypetotakeBASEREVENGE!"
  • The game Yakuza suffers from this really badly towards the end, possibly because they ran out of budget. Without warning, characters will suddenly start using every trick in the book: emphasizing random syllables, pausing in the middle of lines, and speeding up and slowing down their speech at random. It's all the more painful because the game has a high-quality voice cast, rendered unable to act by insanely strict lip lock.
  • The Codec dialogues in the original Metal Gear Solid were surprisingly well-synced to whoever was talking. The remake, The Twin Snakes, however, suffered from a lazy fix of making the character's mouth move based on how many letters were displayed on the screen, paying no attention to pauses. Most egregiously, Visible Silence caused the characters' mouths to jabber meaninglessly while they said nothing.
  • Almost all early PlayStation games with voiced cutscenes suffered from this. With No Budget or disc space to modify the scenes, and none to hire experienced voice actors, Lip Lock was either ignored (leading to a Hong Kong Dub) or the delivery was completely ruined. Examples include Zero's "WhatamIfightingFOOOOOOOOOOOOR!" in Mega Man X4 and Xenogears (all of it).
    • To give you some idea of how bad this was, the bar was set so low that the above-mentioned X4 was generally considered to have unusually good voice acting for a video game when it came out.
  • This otherwise surprisingly good Kingdom Hearts Fan Dub suffers from this at a few places.
    Ven: We're friends. Therefore, I wanted to ask you...something.
  • Jeanne d'Arc contains quite a few anime sequences, but nearly all the dialogue sounds ridiculously rushed. It's even worse than usual because most characters speak with varying strengths of French accent.
  • In the Ghostbusters game for the PS3, none of the characters' mouths ever synced with what they were saying. Considering that they managed to get almost the entire cast of the movies involved in the voice acting, it's really disappointing that the animators couldn't have done a better job.
  • Dissidia: Final Fantasy suffers heavily from this. Every other cutscene has the characters talking with random pauses ("I mustn't ruin. Everybody's hopes."), making some of the dialogue sound uncomfortably awkward.
    • They did better with certain characters: Garland, Golbez, Exdeath, Gabranth, and Dark Knight Cecil all have closed helms, making it easier to make good-sounding sentences, due to the lack of lips to sync.
      • Though there's still the scene with Dark Knight Cecil saying, "I must. (Minute-long pause.) Do this."
      • Cloud gets hit with this particularly badly. He ends up stopping mid-sentence several times with very obvious pauses, and practically everything he says is a variant on "I just... (rest of sentence)." Considering Advent Children had much more complex lip sync, it's surprising how bad Steve Burton sounds at some points in comparison.
      • And compare everyone's dialogue in the cutscenes to the pre- and post- battle lines, where there is no lip sync to deal with.
      • This can also be a problem with some scenes in the Japanese dub (the scene with Cosmos and the Warrior of Light in Ultimecia's Castle is a good example) due to how the mouths are animated.
  • Kingdom Hearts went from being an example at the top of this page in the two main installments to a lip-locked mess in Re:Chain of Memories and 358/2 Days.
    • Both were the result of having pre-rendered cutscenes instead of the usual in-engine ones; redoing the lip-syncing for the former is more expensive. And yet they did it in some rare instances in 358/2 Days anyway: namely, when DiZ says "She?" (referring to Xion), which in Japanese had the three-syllable lip movements of "Kanojo?"
    • For Kingdom Hearts II they didn't bother to change the lip movements for the flashbacks to the first game.
  • The English dub of Sakura Wars: So Long, My Love makes no attempt to match the voices to the Mouth Flaps outside of animated cutscenes. This makes the dialogue more natural at the expense of agreement between the visuals and spoken dialogue.
  • Digital Devil Saga. Even though the English voice actors are very experienced in dubbing foreign animation, the dialogue has as many awkward pauses and speed variations as the other examples on this page; the care put into matching the lips to the Japanese dialogue certainly doesn't help. It's less noticeable when the characters are in their demon forms, but the rest of the time... yeah.
  • Onimusha, oh so, so much. Especially in Dawn of Dreams.
  • Infinite Undiscovery suffered from this trope hard. There are multiple scenes where characters are talking to each other, yet their lips rarely, and in some cases never, move at all, giving the impression that everyone is either telepathic or a skilled ventriloquist.
  • Starcraft and Warcraft III had "cinematics" (non pre-rendered in-game cutscenes) that suffered from this. While the in-game models you were looking at had no speech animations, the "portraits" of the characters displayed at the bottom of the screen did. Which is to say, they had one speaking animation each. You'd see the same generic lip movements from the characters if they were delivering highly emotional dialog, or if they were making funny pop culture references because you clicked them too many times.
    • That's due to engine limitations. The talking animations are all prerecorded; the animation and the audio playback are activated by separate triggers.
    • Surprisingly (because, despite the entry above, this wasn't generally considered a big deal), this was changed in Starcraft II to a real-time lip-syncing system (similar to the ones found in most modern RPGs). Even the protoss (who lack lips, and indeed mouths in general) have unique animations for each of their lines; it would not be an exaggeration to say each unit portrait in Starcraft II has more animation and attention to detail than a unit model in Warcraft III.
  • Star Fox 64. Perhaps it was due to hardware limitations, or just Nintendo being Nintendo, but the egregious mouth flapping of the communicating characters didn't come remotely close to syncing with either the Japanese or English dialogue.
  • Sly Cooper, being one of the very rare series fully translated into loads and loads of other languages, inevitably runs into this problem. Play any of the games in any non-English mode and you're guaranteed to find at least one or two lines that run ten seconds shorter or longer than the original dialogue.
  • Sonic the Hedgehog (2006) had major lip-syncing failures. While the previous 3D Sonic games had their own issues with lip syncing, Sonic '06 made it apparent that Sega did not give a damn about quality control, among other things. Characters either talked while their mouths didn't move at all or had their mouths moving before they even started to speak.

Western Animation
  • For varying reasons (dialogue differences, faces that didn't re-draw well without a full reanimation, etc.), Puppy in My Pocket: Adventures in Pocketville has absolutely horrific syncing. Characters often deliver full sentences with barely any facial movement. The show's villainous kitten is a particularly noteworthy example, as she can apparently deliver virtually all of her lines with a clenched jaw.
  • The earliest Popeye cartoons at times appear to have been animated before being voiced (the opposite of what most cartoons do, and of what the series itself would do later). As a result, the actors would occasionally use strange phrases, or phrase things in strange ways, to fit the lip animations. More often, however, they ignored the problem completely, leading to the famous practice of having the characters keep talking (usually with aside comments or under their breath) when their lips were obviously not moving.

Exceptions:

Film
  • In countries with a long dubbing tradition, such as Spain, Germany, France, or the Latin American countries, the translators, script adapters, and voice actors are particularly well trained and experienced in dubbing, so they know how to deal with these situations efficiently most of the time. Especially when English is the original language, since most imported media comes from the US.
  • Disney has handled the dubbing of some anime imports, such as the films of Miyazaki. They tend to be meticulous in reworking the dialog to fit the lips and the meaning of the original script, even doing several takes in dubbing to see what works. Getting good voice actors doesn't hurt. Or the fact that the original creator has told Disney in no uncertain terms that gratuitous changes to the movies were to be avoided.
    • Neil Gaiman, who wrote the English script for Princess Mononoke, said in an interview, "People have been asking if we reanimated it. There are two schools of thought coming out from the film. School of Thought #1 is that we reanimated the mouth movements. School #2 is that they must have made two different versions at the same time."
    • In the special features of Howl's Moving Castle, there is a clip of Christian Bale desperately trying to speak his line fast enough to match the animation, then commenting on how it's a lot of words. The script editors change it on the spot.
  • The translator who worked with Sergio Leone on the Dollars Trilogy was given basically free rein to rewrite dialogue to make it fit better with the Italian and Spanish lip movements, to the movies' infinite benefit (consider Leone's abuse of extreme closeups and now consider what might have been). He describes in the DVD special features of The Good, the Bad and the Ugly how he spent a whole day trying to figure out how to translate the line "più forte" ("louder"), eventually opting for "more feeling".

Video Games
  • Half-Life 2 and other games based on Valve Software's Source Engine have a phoneme editor built in. It takes the written script and the recorded dialogue, and makes a near-perfect set of facial animation instructions for the character. So, changing languages, at least for languages built on the Latin alphabet, is as painless as feeding the game new scripts and new sounds. Looking at Team Fortress 2 voice clips in the Source Filmmaker shows us that the way they accomplish this task is by adding accompanying text to each voice clip, so the models' mouths match the letters and words.
  • Shadow Hearts II eliminates liplock in cutscenes by having the characters ad-lib or grumble for or in-between certain lines. The effect makes the dub sound much more natural than in most video games.
    • And yet many lines still seem to come just before or after the mouths move.
  • Despite most of the Final Fantasy English dubs having to cope with severe cases of Lip Lock as stated above, Final Fantasy XIII completely averts the issue: Square Enix went through the effort of reanimating the lip movements, both in-game and for every cutscene, so as to fit the English dialogue. Whilst re-syncing in-game animations isn't particularly uncommon, doing it for pre-rendered CG cutscenes certainly is. This fortunately results in what can be considered a rather good dub.
    • Another Square Enix game, The Last Remnant, was also given this treatment in its cutscenes. There were exceptions, as some of them were using generic animations, which also included mouth movement and facial expressions. Unfortunately, switching to Japanese voices does not change lip movement, resulting in occasional parts of the dialogue not being properly synced.
    • Even before those games, Square Enix had already abandoned this trope. There are the already-mentioned Kingdom Hearts games for the PS2 (except for the Chain of Memories remake), as well as the PSP game Birth by Sleep. Besides those, Crisis Core, with a cutscene engine reminiscent of Kingdom Hearts', also redid the lip movements for the English version.
    • They first encountered this problem with Final Fantasy X. The localization team left the original Japanese lip movements intact, which made it very difficult for the English-speaking actors (Hedy Burress in particular) to read the lines in a cadence that sounded natural (Yuna's labored pauses became a minor meme). Final Fantasy X-2 however, scrapped the original mouth movements completely and re-animated the mouths for the American dub, freeing the actors to finally speak their lines like they weren't choking on their own tongues.
  • Old Wing Commander had a built-in cut-scene lip-syncer that worked according to the subtitled script text. It even took into account and lip-synced the name that you chose for the protagonist - see it for yourself the next time you try out the original! This coming from a game that didn't even have digital speech.
  • Similar to the Wing Commander example above, Working Designs' localization of Popful Mail actually animated the lip sync for the in-game dialogue sequences based on the actual spoken dialogue - though the developers themselves admitted in the manual it occasionally resulted in a Hong Kong Dub.
  • Trinity Universe and Hyperdimension Neptunia also use separate lip animations for their English- and Japanese-language voice tracks. In the latter game, cutscenes that only have Japanese-language voice tracks (such as the ones for DLC characters Red and 5pb) don't use any lip movements when playing the English-language track.
  • Team Fortress 2: Valve "re-shot" the "Meet the [Class]" videos when creating the other language versions and lip-synced them perfectly down to the last syllable. This is due to the Source Filmmaker allowing the animator to type in accompanying text for each voice recording, so that the mouths match the words accurately.
  • Catherine likewise averts this by having its animation edited to match the dub for its English release.
  • Devil May Cry 4's character facial animations are motion captured from the voice actors themselves while speaking the lines. They redid all motion capturing for all the different languages. Talk about dedication.
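Phoneme-driven systems like Valve's, described above, can be sketched in miniature. This is a hypothetical toy, not any shipping engine's code (real systems work from phonemes extracted from the audio, not raw letters): it maps each letter of a dubbed line to a coarse mouth shape and spreads the shapes evenly across the clip's length.

```python
# Toy text-driven viseme assignment (hypothetical; real systems like
# Valve's phoneme editor work from phonemes, not raw letters). Each
# letter maps to a coarse mouth shape, and the shapes are spread
# evenly across the clip's duration.

VISEME_LETTERS = {
    "closed": set("bmp"),   # the closed-mouth sounds from the page quote
    "round": set("ouw"),
    "wide": set("aei"),
}

def letter_viseme(ch):
    for shape, letters in VISEME_LETTERS.items():
        if ch in letters:
            return shape
    return "neutral"

def viseme_track(line, clip_seconds):
    """Return (start_time, shape) keyframes for one spoken line."""
    letters = [c for c in line.lower() if c.isalpha()]
    if not letters:
        return []
    step = clip_seconds / len(letters)
    return [(round(i * step, 3), letter_viseme(c))
            for i, c in enumerate(letters)]

# Ted Thomas's "my mother" example: two closed-mouth Ms to hit.
track = viseme_track("my mother", 2.0)
```

Swapping in a translated line simply regenerates the track, which is why this approach makes language changes nearly painless.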
