Any way to do "automatic" lip syncing?

Hello fellow Unity users

My brother and I are in the process of making a game that will contain a LOT of recorded dialogue, so we want to be able to lip sync without having to do every single line by hand.

We don’t care if it’s top-of-the-line lip syncing; it just has to move the character’s mouth while it’s talking and stop when finished. Think of the old PlayStation era, where there was no real lip sync, only mouth movement.

Is there an addon for this, maybe on the Asset Store, and where can I find it? Or is there an easy way to do it ourselves?

And keep in mind that we have limited money, so we probably won’t have hundreds of dollars to spare.


Is there at least a way to make a character open his mouth more or less depending on the volume of the sound coming out? I’d imagine this is a fairly easy script to make?

This script uses audio.GetSpectrumData to analyze the audio data and calculate the instantaneous volume of a given range of frequencies. To use GetSpectrumData, we must supply a power-of-two sized float array as the first argument, which the function fills with the spectrum of the sound currently playing. Each element in this array contains the instantaneous volume (0…1) of its corresponding frequency, which is N * 24000Hz / arraySize, where N is the element index.

The function BandVol(fLow, fHigh) below calculates the averaged volume of all frequencies between fLow and fHigh. In this case, where voice sounds must be analyzed, we can set the range to 200Hz-800Hz; it produces good results, although other ranges can be tested as well (voice sounds range from about 150Hz to 3kHz). If bass sounds were to be analyzed, for instance, we should use a lower range like 50Hz to 250Hz.
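For example, with nSamples = 256 and fMax = 24000 as in the script below, the 200Hz-800Hz band maps to elements n1 = floor(200 * 256 / 24000) = 2 through n2 = floor(800 * 256 / 24000) = 8, so BandVol averages seven spectrum values per frame.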

To test it, I used a simple object (assigned to the mouth variable) which has its Y position elevated proportionally to the output of BandVol. A variable called volume sets how much the mouth rises. You can change this, and use the value returned by BandVol to control the mouth’s vertical scale instead, for instance.

This script must be added to the object that contains the Audio Source, and another object must be assigned to the mouth variable. It plays the audio clip defined in the Audio Source and moves the mouth up and down following the sound played. To reproduce several different sounds, you can use PlayOneShot(audioClip) instead of Play().

EDITED: PlayOneShot doesn’t affect GetSpectrumData, as @FutureRobot observed in his answer below. To play different sounds, declare an AudioClip array and populate it with the clips in the Inspector. To play one of these clips, assign it to audio.clip and use good old Play() (the array and the function PlaySoundN are included below):

var sounds: AudioClip[]; // set the array size and the sounds in the Inspector    
private var freqData: float[];
private var nSamples: int = 256;
private var fMax = 24000;
private var audio: AudioSource; // AudioSource attached to this object

function BandVol(fLow:float, fHigh:float): float {

	fLow = Mathf.Clamp(fLow, 20, fMax); // limit low...
	fHigh = Mathf.Clamp(fHigh, fLow, fMax); // and high frequencies
	// get spectrum: freqData[n] = vol of frequency n * fMax / nSamples
	audio.GetSpectrumData(freqData, 0, FFTWindow.BlackmanHarris); 
	var n1: int = Mathf.Floor(fLow * nSamples / fMax);
	var n2: int = Mathf.Floor(fHigh * nSamples / fMax);
	var sum: float = 0;
	// average the volumes of frequencies fLow to fHigh
	for (var i = n1; i <= n2; i++){
		sum += freqData[i];
	}
	return sum / (n2 - n1 + 1);
}

var mouth: GameObject;
var volume = 40;
var frqLow = 200;
var frqHigh = 800;
private var y0: float;

function Start() {

	audio = GetComponent.<AudioSource>(); // get the AudioSource component
	y0 = mouth.transform.position.y;
	freqData = new float[nSamples];
	audio.Play();
}

function Update() {

	mouth.transform.position.y = y0 + BandVol(frqLow, frqHigh) * volume;
}

// A function to play sound N:
function PlaySoundN(N: int){

audio.clip = sounds[N];
audio.Play();
}

Great stuff aldonaletto, I was looking for exactly this! However, the BandVol function doesn’t seem to respond to audio played with PlayOneShot, only to clips directly assigned to the audio component. Any idea how to get around that?

In the spirit of sharing: I managed to get OK-looking results by taking the value returned by your script and hooking it up to an additive animation.

I basically made a mouth animation which starts closed, opens with an O-like shape, then goes on to a wider shout style pose. This animation is enabled from the start with a maximum weight and zero playback speed. Since it’s additive it won’t affect any objects at frame 0. I then used your BandVol function to control the normalized time of the additive animation. Since the additive animation has nonlinear movement and some variation in it, it gave a more organic result than if I were to rotate the jaw or maybe fade a pose in and out by controlling its weight.

I also used a cutoff value that makes the character close his mouth at low values, encouraging a more “talky” motion as opposed to the half-open vibrating pose that can happen at lower volumes. Finally, I added a Lerp so I could tweak how smooth the mouth movements should be. In the end it worked well for my cartoony flappy-mouth character.

The extra variables used:

private float mouthCurrentPose;
private float mouthTargetPose;
public float voiceVolumeCutoff;
public float mouthBlendSpeed;

The setup of the additive animation, from Start():

animation["anim_talk"].layer = 5;
animation["anim_talk"].blendMode = AnimationBlendMode.Additive;
animation["anim_talk"].speed = 0.0f;
animation["anim_talk"].weight = 1.0f;
animation["anim_talk"].enabled = true;
animation["anim_talk"].wrapMode = WrapMode.ClampForever;

and the function running the mouth pose:

void LipSynch()
{
	mouthTargetPose = BandVol(frqLow, frqHigh) * volume;

// Tweak the voiceVolumeCutoff to get a good result, I used 0.1f myself
	if(mouthTargetPose<voiceVolumeCutoff)
		mouthTargetPose = 0.0f;
	
	mouthCurrentPose = Mathf.Lerp(mouthCurrentPose,mouthTargetPose,Time.deltaTime*mouthBlendSpeed);

// I didn't bother clamping the result since the additive animation clamps itself.
// Tweak the volume value to get results between 0.0 and 1.0 from your voice samples.
    animation["anim_talk"].normalizedTime = mouthCurrentPose;
}

You can get better results if you tweak the volume and voiceVolumeCutoff to match each voiceclip.
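If you have a lot of lines, one way to organize that (just a sketch; VoiceClipSettings and its fields are made-up names, not part of the scripts above) is to keep the tuned values next to each clip, and copy them into the lip-sync fields before playing a line:

using UnityEngine;

// Hypothetical per-clip settings, tuned in the Inspector.
[System.Serializable]
public class VoiceClipSettings
{
    public AudioClip clip;
    public float volume = 40f;             // gain applied to BandVol's output
    public float voiceVolumeCutoff = 0.1f; // below this, the mouth closes
}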

Here’s what I’ve found:

  1. Forum topic (discusses some alternatives)
  2. Script on the wiki (if you model the individual sounds as separate meshes, it looks like this allows you to smoothly transition from one state to the other)

If you don’t particularly care about the accuracy of the animations, just rig your characters’ mouths and play an animation any time they’re supposed to speak.
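A minimal sketch of that route, assuming a legacy Animation component with an "anim_talk" clip set to loop (all the names here are placeholders):

using UnityEngine;

// Flaps the mouth while the AudioSource is playing and stops when it finishes.
public class FlapWhileTalking : MonoBehaviour
{
    public Animation mouthAnimation; // legacy Animation holding an "anim_talk" clip
    public AudioSource voice;        // source playing the dialogue line

    void Update()
    {
        if (voice.isPlaying)
        {
            // isPlaying stays true for the whole clip, so the animation
            // keeps looping as long as the character is talking.
            if (!mouthAnimation.IsPlaying("anim_talk"))
                mouthAnimation.Play("anim_talk");
        }
        else
        {
            mouthAnimation.Stop("anim_talk");
        }
    }
}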

Alternatively, bones are just transforms, so you should be able to reference them like any other transform; just have your script adjust the magnitude of the movement based on the audio clip’s volume. See Al’s note below.
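As a rough sketch of that (jawBone, openAngle and gain are invented names, and the RMS measurement is just one way to estimate loudness):

using UnityEngine;

// Rotates a jaw bone proportionally to the current output volume.
public class VolumeJaw : MonoBehaviour
{
    public AudioSource voice;
    public Transform jawBone;     // a bone is just a Transform
    public float openAngle = 20f; // max jaw rotation in degrees
    public float gain = 10f;      // scales the measured volume

    private float[] samples = new float[256];
    private Quaternion closedRotation;

    void Start()
    {
        closedRotation = jawBone.localRotation;
    }

    // LateUpdate so the animation system doesn't overwrite the rotation.
    void LateUpdate()
    {
        voice.GetOutputData(samples, 0);
        float sum = 0f;
        for (int i = 0; i < samples.Length; i++)
            sum += samples[i] * samples[i];
        float rms = Mathf.Sqrt(sum / samples.Length); // rough loudness
        float open = Mathf.Clamp01(rms * gain);
        jawBone.localRotation = closedRotation * Quaternion.AngleAxis(open * openAngle, Vector3.right);
    }
}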

If I were doing lip sync, I would probably rig up some standard phoneme mouth shapes/blends (M-E-O-W) and write some sort of custom blending system that would interpret a Magpie (or similar) script and blend to the appropriate mouth shape.

Magpie, and other software like it, will take recorded dialogue and attempt to generate a ‘timing script’ based on the phonemes it detects. In my experience it usually needed very little cleanup work; after that you have a pretty usable script from which you can get all the timing info about your dialogue. From there you could easily write something that parses the lip sync timing and, whenever a particular phoneme is detected, blends to that shape.

This is all just theory; I haven’t implemented it before, but if accuracy were a concern, that’s probably how I would do it.
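To make the idea concrete, here’s a hypothetical sketch; the cue format and all the names are assumptions, not Magpie’s actual output, and it assumes the mouth shapes are blend shapes on a SkinnedMeshRenderer:

using UnityEngine;

// Steps through a pre-baked phoneme timing list and eases the matching
// blend shape in while easing the others out.
public class PhonemeLipSync : MonoBehaviour
{
    [System.Serializable]
    public struct PhonemeCue
    {
        public float time;          // seconds from the start of the clip
        public int blendShapeIndex; // which mouth shape (M, E, O, W...)
    }

    public AudioSource voice;
    public SkinnedMeshRenderer face;
    public PhonemeCue[] cues; // authored offline, sorted by time
    public float blendSpeed = 12f;

    private float[] weights;

    void Start()
    {
        weights = new float[face.sharedMesh.blendShapeCount];
    }

    void Update()
    {
        // Find the last cue whose time has passed.
        int active = -1;
        for (int i = 0; i < cues.Length && cues[i].time <= voice.time; i++)
            active = cues[i].blendShapeIndex;

        // Ease the active shape toward 100, every other shape toward 0.
        for (int i = 0; i < weights.Length; i++)
        {
            float target = (i == active) ? 100f : 0f;
            weights[i] = Mathf.Lerp(weights[i], target, Time.deltaTime * blendSpeed);
            face.SetBlendShapeWeight(i, weights[i]);
        }
    }
}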

Hi,

I’m trying the answers mentioned here directly on microphone input instead of a loaded AudioClip, and it doesn’t work :( Is there any reason for that?

I’m using these functions (adapted from the ones mentioned here and in another post):

private float GetVolume()
{
	if(audio==null)
		return 0;
    float[] data = new float[samples];
    audio.GetOutputData(data, 0);
	
	//take the median of the recorded samples
    ArrayList s = new ArrayList();
    foreach (float f in data)
    {
        s.Add(Mathf.Abs(f));
    }
    s.Sort();
    return (float)s[samples / 2];
}

float fMax = 24000;
private float HumanFreq(float fLow, float fHigh)
{
	if(audio==null)
		return 0;
    float[] data = new float[samples];
	fLow = Mathf.Clamp(fLow, 20, fMax); // limit low...
	fHigh = Mathf.Clamp(fHigh, fLow, fMax); // and high frequencies
	// get spectrum: data[n] = vol of frequency n * fMax / samples
	audio.GetSpectrumData(data, 0, FFTWindow.BlackmanHarris); 
	int n1 = (int)Mathf.Floor(fLow * samples / fMax);
	int n2 = (int)Mathf.Floor(fHigh * samples / fMax);
	float sum = 0;
	// average the volumes of frequencies fLow to fHigh
	for (int i = n1; i <= n2; i++){
		sum += data[i];
	}
	return sum / (n2 - n1 + 1);
}

GetVolume virtually always returns 0 (with random exceptions), and HumanFreq does not filter whether it’s a human voice or just noise. Am I missing something? I tried changing the samples value, but with no effect.

Hi,
never tried with a mic…
Did you check whether you actually get anything in data from audio.GetOutputData(data, 0)?
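One quick way to check (a sketch to drop into your Update(), using your existing audio field) is to log the peak of the buffer. Also note that GetOutputData and GetSpectrumData only see what the AudioSource itself is playing, so the mic clip has to be assigned and started, e.g. audio.clip = Microphone.Start(null, true, 10, 44100); audio.Play();

float[] data = new float[256];
audio.GetOutputData(data, 0);
float peak = 0f;
foreach (float f in data)
    peak = Mathf.Max(peak, Mathf.Abs(f));
// stays at 0 if the source isn't actually playing the mic clip
Debug.Log("output peak: " + peak);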

New tutorial for lip sync and Mecanim in Unity: www.noorvfx.com/2014/03/lipsync-macanim-animation-in-unity3d/

Thank you, really helpful. I used it together with Blend Trees and I really liked the result.

I really appreciate that you took the time to answer and even improve the answer; thank you again.
Voice: Garen from League of Legends
Character: Simon Bolivar, made by my team