Character Rigging: How to Set up Face Animation Controls in Blender
This tutorial describes some techniques for adding facial animation controls to an existing character model in Blender.
This tutorial assumes you have a character model, and have created an armature skeleton for it.
I’m going to present some techniques for adding several things to your animated character:
- eye movement
- blinking (either with Shape Keys, or with deformation bones)
- a mouth that opens
- eyebrow expression
- posable lips
This is an in-depth, step-by-step tutorial, with explanations, and navigation-centered descriptions. For a more high-level summary of the techniques presented in this tutorial, please check out the companion document here: character-rigging-faces-in-blender
This is a tutorial for Blender 2.9, but most of the steps of the tutorial will still apply for older versions. The UI for Blender 2.79 and below will look different from the screenshots I provide. I will note when a feature is new, and not available in Blender 2.79 or older.
I resisted switching from Blender 2.79 for a while, because I experienced low frame rates in the viewport, while the older version of Blender ran smoothly. I use an older Mac laptop with a retina display for game development. I eventually resolved this performance problem by enabling “Open in Low Resolution” for the Blender app.
Blender can be downloaded here: https://www.blender.org/download/
This tutorial also assumes the character is going to be used in a game made with Unity. Some of the steps will mention Unity, and the tutorial includes some steps for setup there. Most of the tutorial will still be useful if your destination is another game engine, or a use other than games.
My last tutorial ended with a capsule character that could be controlled in Unity. This one is starting with the assumption that you have a completed character. Which is a real “Draw the rest of the owl” energy, and I’m sorry. I want to make another tutorial about creating the character model and animation soon.
The character model this tutorial starts with is representative of the result of my character modelling process, and hopefully has enough similarity to those made in popular processes that the early steps will make sense.
The model is:
- Fairly low resolution (this one is about 6,000 faces, but a big difference in face count shouldn’t matter much).
- It has an armature skeleton, with a neck followed by a single head bone.
- The mesh object has been attached to the skeleton and the weights are painted so the deformation is convincing (improved from automatic weights, so that for example, the head is more completely weighted to the head bone and bending happens at the neck; and not much mass is lost at the hips and waist when the legs bend).
Some more details that shouldn’t impact the steps of the tutorial if your model doesn’t match:
- IK bones and constraints have been added for ease of animation of the arms and legs.
- The hands have finger bones.
- The armature has a root bone at its base.
- The model has been UV unwrapped, and has a painted, diffuse colour texture.
There are a couple things I will recommend setting up in Blender quickly before we start.
The command search function is incredibly useful, and I believe it to be the key feature in the change in Blender’s reputation from cryptic to easy-to-learn. In Blender 2.79, the command search was accessed by pressing the spacebar by default. The default in 2.8 and above maps the command search to the F3 key. This isn’t so much harder to access, but I hope they move it back to spacebar as the default, and I think you should put it there yourself until they do. This can be set under Preferences, in the Keymap section, and then in the Preferences section at the top of that, change “Spacebar Action” to “Search”.
The 3D Viewport camera has a near clipping plane and a far clipping plane, and those have default values that try to cover general use cases for Blender. I find that the default near clipping plane distance is too far for character modelling (because the character takes up a generally small space, and I often want to work on small details). You can change this property in the right panel of the 3D Viewport (press the N key, or click the tiny arrow on the right side of the view to open the right panel). In the View tab, change “Clip Start” to 0.01.
I also recommend reducing the viewport camera’s field of view a bit. Working on small details, I find the default perspective feels too exaggerated. Bring the “Focal Length” setting in the same place up to 90. This will have the effect of zooming in the camera a bit.
If you use a laptop, you might already have the “Emulate Numpad” setting on. This maps the numeric keys across the top of the keyboard to the Blender functions that are intended to be accessed with the numpad keys, such as quickly switching to front, side, and overhead views in the viewport. You can enable that in the Input category of the Preferences window. Using a laptop with Blender 2.8 and newer, I also found I needed to bind alternate shortcuts for the geometry select settings when in mesh edit mode. In Blender 2.79, Ctrl-Tab brought up a selection menu, and I could press 1, 2, or 3 to choose vertex, edge, or face. Ctrl-Tab now opens a mode selection menu. And there didn’t seem to be an alternate shortcut for those using emulate numpad. I bound these to Ctrl-1, Ctrl-2, and Ctrl-3. These functions are listed in the Keymap section of the Preferences window under “mesh.select_mode”, with “Vertex”, “Edge”, or “Face” set as the “Type” option.
Weight painting is a very visually difficult task for me. One thing that makes it difficult is not being able to easily see the difference between a weight of zero and a weight of nearly zero. A nearly zero weight often only shows up in motion, when a part of the body turns out to be slightly attached to a bone somewhere else on the body. You should definitely enable the “Zero Weights” visualization to make those problems easy to catch. With a model selected in Weight Paint mode, open the Show Overlays dropdown menu and find the Weight Paint section at the bottom. Set “Zero Weights” to “Active” to display geometry that is not weighted to the selected bone as black, instead of blue.
Another setting in the same place is “Show Weight Contours”, which is new in Blender 2.8. This will be a little more noisy to look at, but it adds stripes to the color gradient of the weight paint display. The stripes make it much easier to see and get a sense for the changes in weight value on the model. This setting can allow you to keep some sort of shading in the display, or show a bit of the texture color without limiting your ability to see the weights you are working on too much.
Eye Movement
In this section we will:
- make the character’s eyes separate
- create bones for the eyes
- create bones to be targets for the eyes to look at
- make a parent for the look targets
Make your character’s eyes spheres that are separate from the rest of the model. The eyes should be separate objects from each other as well. It’s fairly easy to update the other eye using a mirror modifier if you make changes to one. Or you can make a linked duplicate with Alt-D to have them share the same mesh and texture UVs.
Place bones starting from the centers of the eyes, with the tails of the bones pointing forward. If you select a sphere and use the command “Snap Cursor to Selected”, you can position the 3D cursor to create the bone in the right place. This command is also in the Snap menu, which you can pop open with Shift-S.
Name the bones symmetrically. You can create one bone, name it something ending with the appropriate “_L” or “_R”, and then use the “Symmetrize” command with that bone selected to create the opposite one.
Put the armature in pose mode (in pose mode, selected bones show blue instead of yellow) and select one eye bone. Then select the eye mesh object, and shift select the bone again. Use the “Parent” command, and select “Bone” from the menu options. When we eventually get the asset into Unity, this will result in the eye being attached to this bone as a static mesh instead of a skinned mesh.
Do the same with the other eye bone and the other eye.
Now if you rotate the eye bones in pose mode, the eyes rotate with them. Set the pivot mode to “Individual Origins”, and you can rotate both eyes together.
Select both eye bones in pose mode and press R, R to map your planar mouse movement to a track-ball-style rotation of the selected objects. You can reset the rotation again with Alt-R.
Let’s set the parent of the eye bones to be the head. Switch back to edit mode with the armature selected. Select the bones and go to the Bone tab in the Properties panel. Select the head bone by name in the list that shows when you click on the field labeled “Parent”. If you hold Alt when you click to select from the list, all selected bones will have their parent set, otherwise, only the parent of the active bone is set.
Now we are going to make a look target for the eyes.
First let’s check in the tool panel to see that X-Axis Mirror is on. Press N to toggle the right panel open if it is closed, then select the Tool tab. X-Axis Mirror is under “Options” in that tab, when an armature is selected and in edit mode.
In Blender 2.79 or lower, this option is in the left panel, in the Tools tab (press T to toggle the left panel).
In Blender 2.8, a second X-Axis Mirror setting was introduced for pose mode, which makes moving and rotating bones in pose mode symmetrical. The new setting is also in the Tool tab, but under “Pose Options”, when an armature is selected in pose mode.
Select one of the eye bones in edit mode and press Shift-D to duplicate, then press Y to lock to the Y-axis, and then press 2 to specify 2 units along this axis. If this puts the new bones behind your character, press the minus key to toggle the direction on that axis. Press Enter, or left-click, to complete the duplicate action.
If the X-Axis Mirror setting is on, and your eyes were named symmetrically (having the same name but with corresponding “_L” and “_R” at the end), this duplicate action will have created two bones.
Let’s rename the new bones to something like “EyeTarget_L” (and “EyeTarget_R”).
Now we’ll make a constraint for the eye bones to point to the target bones.
In Pose mode, select one of the look target bones, and then the corresponding eye bone. Press Ctrl-Shift-C to pop up the Add Constraint (With Targets) selection. Choose “Track To” from the bottom of the middle list. This adds a bone constraint to the active bone (the one selected last) and tries to supply other selected bones as the targets. The Eye bone should be green now instead of the default grey.
Bones have their own orientation. The pointy end of the bone is called the tail, and it points along the bone’s Y axis. If your eye suddenly “looked up” when you attached the constraint, we need to specify that it should track the Y axis in the Track To constraint (and also set Up to Z). You can view the bone constraints for a bone in the Bone Constraint tab of the properties panel. This tab only shows up when bones are selected in Pose mode.
The “Add Bone Constraint” button at the top of this panel brings up a similar selection screen as using Ctrl-Shift-C, but selecting a target requires you to pick an object instead of a bone, and then in the case that the object is an Armature, you can then select the bone. Using Ctrl-Shift-C saves two steps in this case.
Adding a bone constraint is not applied symmetrically the way that editing a bone is. Select the other eye target bone, then the other eye bone, and press Ctrl-Shift-C and choose “Track To”.
Switch back to Edit mode and select just the start (or Head) of both eye target bones.
Press Shift-S to open the Snap menu, and choose “Cursor to Selected”. This places the cursor midway between the starting points of the two bones. Now select both bones (the middle, connecting sections), and change the Pivot Point setting to 3D Cursor. In Blender 2.8 and above, the period key opens the Pivot Point menu, similar to the Snap Menu.
Rotate the look target bones 90 degrees, so they are pointing up instead of forwards.
Now add a new bone (still using this cursor position). Name it something like “Look_Target”. You can select the tips of the two eye target bones, and use “Snap Cursor to Selected” again, and then select the tip of the new “Look_Target” bone, and use “Snap Selected to Cursor” to position the tip so that all three bones are the same length and in a row. Set this new bone as the parent for each of the eye targets, so you can move them together. Also set Inherit Scale to “None” for each eye target. In Blender 2.79, Inherit scale is a checkbox; disable it.
Set the parent for the look target bone to be nothing, or if your character has a root bone at its base, set it to that. Uncheck the Deform option on all three of these bones and the eye bones. With this off, the bones will be excluded from contributing to bone weights.
Now the position of the look target controls the eye rotation. You can also scale the look target to bring the eyes to focus at the center (this is important for looking at something close to their face, but not important if you just want to control the direction the character is looking). You can keyframe the position and scale of this bone like the rest of the armature to include eye movement in the character’s animations.
Another option is to control the look target in Unity with a script. Bone constraints will not be part of an exported model; the following script would reproduce the effect of the Track To constraints on the eyes.
using System.Collections.Generic;
using UnityEngine;

public class LookTargets : MonoBehaviour
{
    public Transform Eye_L;
    public Transform Eye_R;
    public Transform EyeTarget_L;
    public Transform EyeTarget_R;

    Quaternion originalRotation_L;
    Quaternion originalRotation_R;

    void Start()
    {
        // Remember the resting rotations so tracking can be switched off cleanly.
        originalRotation_L = Eye_L.localRotation;
        originalRotation_R = Eye_R.localRotation;
    }

    void Update()
    {
        // Point each eye at its target, reproducing the Track To constraints.
        // The up vector and the extra 90-degree rotation compensate for the
        // orientation of the eye bones as exported from Blender.
        Eye_L.rotation = Quaternion.LookRotation(EyeTarget_L.position - Eye_L.position, Vector3.down) * Quaternion.Euler(90, 0, 0);
        Eye_R.rotation = Quaternion.LookRotation(EyeTarget_R.position - Eye_R.position, Vector3.down) * Quaternion.Euler(90, 0, 0);
    }

    public void StopTracking()
    {
        Eye_L.localRotation = originalRotation_L;
        Eye_R.localRotation = originalRotation_R;
        enabled = false;
    }

    public void StartTracking()
    {
        enabled = true;
    }
}
To use a combination of animation and scripting, you might need to create one or more masks for the asset in Unity.
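If the eye bones end up keyframed in the exported clips, the Animator will overwrite any rotation you apply in Update every frame. One alternative to a mask, sketched below under that assumption, is to apply the tracking in LateUpdate instead; Unity runs LateUpdate after the Animator has written the bone transforms for the frame, so the script’s rotation takes priority. The fields and the rotation math are the same as in the script above.

void LateUpdate()
{
    // LateUpdate runs after the Animator has posed the bones for this frame,
    // so these rotations override whatever the animation set on the eyes.
    Eye_L.rotation = Quaternion.LookRotation(EyeTarget_L.position - Eye_L.position, Vector3.down) * Quaternion.Euler(90, 0, 0);
    Eye_R.rotation = Quaternion.LookRotation(EyeTarget_R.position - Eye_R.position, Vector3.down) * Quaternion.Euler(90, 0, 0);
}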
Let’s move on to blinking next.
Bones vs Shape Keys
There are two ways we could approach facial animation: using more deform bones, or using Shape Keys (or blend shapes, as they are called in Unity). Both of these techniques have some drawbacks that make them awkward to use, and the choice of which technique to use really depends on evaluating holistic factors. What I mean by holistic factors are things like “if I am working in a team, should I make the exported model have as few special steps as possible”; or “does this model use modifiers”, and “would I need to do extra maintenance work if I need to change something later”. Ultimately, Shape Keys are more complicated on the export side, while additional deform bones are more complicated to set up before you can start producing animations with them.
I think it makes sense to show both methods even though that makes this tutorial more complicated.
Shape Keys are mesh data that define alternate positions for vertices in the mesh. They give you control to blend to the alternate positions with a factor. At a factor of 1, the vertices use the alternate positions; at factor 0, they use the original positions. At a factor value between 0 and 1, the vertex positions will be morphed partway between the two. The morph is linear: each vertex will take a straight line between the positions, which can give quite a different result from using deformation bones. The linear interpolation is also the reason Shape Keys are not used for animating the limbs of a character. Shape Keys can still work well for blinking and mouth movement, and they give a great deal of sculptural control to the character artist.
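To make the “linear” part concrete, here is a tiny sketch (not Blender’s or Unity’s actual code, just an illustration of the math) of how a blended vertex position can be computed: each Shape Key contributes its offset from the Basis position, scaled by its value, and those offsets simply add together.

using UnityEngine;

public static class ShapeKeyBlend
{
    // basis: the vertex position stored in the Basis Shape Key.
    // keyPositions: the same vertex's position in each of the other Shape Keys.
    // values: the value set next to each Shape Key (usually 0 to 1).
    public static Vector3 BlendVertex(Vector3 basis, Vector3[] keyPositions, float[] values)
    {
        Vector3 result = basis;
        for (int i = 0; i < keyPositions.Length; i++)
        {
            // A value of 0 leaves the vertex at the Basis; a value of 1 moves it
            // all the way to this key's position, along a straight line between them.
            result += values[i] * (keyPositions[i] - basis);
        }
        return result;
    }
}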
The peril in using Shape Key animation is that there are a lot of situations where things seem like they will work, but don’t, and there’s no indication why (particularly when importing animation).
If your character uses a mirror modifier, you can make and use Shape Keys in Blender, but they will not be included in the exported model. The mirror modifier needs to be applied. Also, you cannot apply the mirror modifier if the model has any Shape Keys (so you need to delete the Shape Keys, apply the modifier, then make the Shape Keys again). The same is true for a subdivision surface modifier. There are some techniques to transfer data from a copy of the model to avoid losing too much work if you find yourself in this situation, but it could be that the deform bone technique further below is better suited to your situation if you want to preserve symmetry or subdivision on the character.
Shape Key animation is keyframed on the mesh object, and not the armature. When you create animations for a character using an armature, you create a type of animation in Blender called Actions. Actions are exported by default, and become available as animation clips on the imported asset in Unity. Shape Key animation is not included by default, and needs to be explicitly moved into a separate animation system in Blender called NLA or NonLinear Animation. Explicitly moving the animations also allows you to combine different kinds of animations (Actions and Shape Key animations, as well as others), which can be useful. Once you set the animations up in this way in Blender, they will export together and not introduce any extra complexity for the imported model in Unity.
Adding Shape Keys
With the mesh object selected, you can navigate to the Object Data tab in the Properties panel. The Vertex Groups section lists groups named after the bones; each group is a collection of vertex indices (the indices of the vertices in the mesh) along with their weights for the bone of that name.
Under Vertex Groups, Shape Keys is the next section. To add a new Shape Key, press the plus button to the right of that empty list.
The first Shape Key will appear with the name “Basis”. You should leave this Shape Key and add a second one. The second Shape Key will appear with the name “Key 1”. Let’s rename that to “Blink”. Selecting a Shape Key in this list means that changes you make to vertices in edit mode will modify that Shape Key.
The Basis Shape Key is essentially the original mesh positions. Other Shape Keys have a weighting value next to them in this list, which is where you control the factor to apply the Shape Key, described above. The Basis Shape Key does not have a value, and is essentially always 1. A Basis Shape Key is mandatory, and it’s a little confusing that you need to add it yourself before you can add your first real Shape Key.
If you want to edit the mesh (and not create deformation as a Shape Key), select the Basis Shape Key. This is important to remember the way that selecting the correct layer in a paint program is important.
Select the new “Blink” Shape Key and enter edit mode. Make a simple change to the model; select the edges of the top and bottom eyelid and scale them to 0 on the Z axis (with the pivot point setting set to “Bounding Box Center”), as an example.
Exit edit mode, and notice that the deformation is no longer visible. Toggle back into edit mode again, if you want to confirm it wasn’t actually undone. In Object Mode, drag the value next to “Blink” in the Shape Keys list to 1. The Shape Keys are applied and combined based on their values. In edit mode, you will see only the selected Shape Key, applied at a value of 1, over the Basis Shape Key.
You can select the “Blink” Shape Key and use the edit mode tools to move vertices around and create a more natural looking closed eyelid pose for it.
You can also use Blender’s sculpting tools to edit Shape Keys. In sculpt mode, the sculpting brushes will affect the selected Shape Key, but the appearance of the model will respect all the Shape Key values. Unlike when using edit mode, you should set the selected Shape Key to value 1, and all others to zero to edit that Shape Key in sculpt mode.
If you add or remove vertices, or edges or faces, those changes are made to the mesh data (essentially to the Basis Shape Key as well).
Modifying the Basis can affect Shape Keys you have already made. Blender is pretty good at keeping track of what is affected, but it is complicated. As long as we aren’t modifying the geometry that is moved in existing Shape Keys, we are free to modify the mesh (even delete and create new geometry) without changing those Shape Keys. If you change the geometry that an existing Shape Key deforms, you can expect strange looking changes. You will likely need to clean up, or re-make, Shape Keys if you change the geometry under them. This is true of the way Blender handles vertex colours and UV maps for textures as well; Blender has a way to guess at appropriate values when you extrude an edge, say, but it’s not assured to be appropriate for your needs (since there are many uses for the same operation).
You can check if Shape Keys are being included by exporting just the mesh. Without the armature, there should be no bones, and no animation, and normally, that would mean the imported object has a MeshRenderer instead of a SkinnedMeshRenderer. If the Shape Keys are included, the object will be a SkinnedMeshRenderer (without the armature, or animations). The SkinnedMeshRenderer will also have a dropdown called “BlendShapes” with the Shape Keys listed there as sliders.
You can control Shape Keys procedurally in Unity. Similarly to eye movement looking at targets above, blinking could be driven by a scripted system instead of by the animation playing on the character. If you want to control blinking procedurally, you may not need to worry about the extra trouble of including Shape Keys in animations.
To control these slider values procedurally from a C# script, use a reference to the SkinnedMeshRenderer, and set the weight of the blend shape by index (0 if you only have one Shape Key).
public SkinnedMeshRenderer skinnedMeshRenderer;
public int blinkShapeKeyIndex;

// The blink value is how closed the eyes are, from 0 (open) to 1 (closed).
public void SetBlinkValue(float blinkValue)
{
    if (skinnedMeshRenderer.sharedMesh.blendShapeCount == 0)
    {
        Debug.LogError("mesh has no blend shapes");
        return;
    }

    // Unity blend shape weights run from 0 to 100, so scale the 0-1 value up.
    skinnedMeshRenderer.SetBlendShapeWeight(blinkShapeKeyIndex, blinkValue * 100f);
}
If you want to match the Shape Key by name, you should capture the correct index with a method like this:
void Start()
{
    Mesh mesh = skinnedMeshRenderer.sharedMesh;
    if (mesh.blendShapeCount == 0)
    {
        Debug.Log("mesh has no blend shapes");
    }
    else
    {
        for (int i = 0; i < mesh.blendShapeCount; i++)
        {
            if (mesh.GetBlendShapeName(i) == "Blink")
            {
                blinkShapeKeyIndex = i;
            }
        }
    }
}
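As a usage example, here is a small sketch of a coroutine that drives SetBlinkValue over time to perform a single blink. The durations are arbitrary values chosen for illustration, and it assumes the script has using System.Collections; at the top so the coroutine type compiles. You would trigger it with StartCoroutine(Blink()), on a timer or at random intervals.

public IEnumerator Blink(float closeDuration = 0.08f, float openDuration = 0.12f)
{
    // Close the eyes over closeDuration seconds...
    for (float t = 0f; t < closeDuration; t += Time.deltaTime)
    {
        SetBlinkValue(t / closeDuration);
        yield return null; // wait one frame
    }
    // ...then open them again over openDuration seconds.
    for (float t = 0f; t < openDuration; t += Time.deltaTime)
    {
        SetBlinkValue(1f - t / openDuration);
        yield return null;
    }
    SetBlinkValue(0f); // finish with the eyes fully open
}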
Create the inside of the mouth
This Shape Key deformation method can be used to control opening the mouth as well.
Before I cover animating Shape Keys, let’s create another Shape Key for opening the mouth. But first, the character mesh needs to have mouth geometry to open. If the character model you are using has a mouth that can’t open, and the lips are modelled closed, we can modify the mesh to open the mouth and create an interior (the part that shows when the mouth is open). It can be easier to make the inside of the mouth when the model is using a Mirror modifier for symmetry, but it can still be done when it is not.
Disable the Mirror modifier while working on the interior to be able to see inside the model through the open half. Use the “Clipping Region” command (Alt-B) to isolate what is visible. Dragging a box with this tool culls the rendering of anything outside the frustum created by the projection of that box to the position of the view camera. After enabling a clipping region, you can rotate the view to look in the sides of that 3D slice of your model. Clipping Region may not render correctly for some viewport shading options, and you might need to set the viewport shading to “Solid” to use it. Press Alt-B again to toggle the clipping region off.
Select the edges along the line of the mouth and run “Rip Vertices”. This will split each vertex along the edges into two, and each edge into two, essentially cutting the surface of the mesh along that line. The original vertices or edges will remain selected, and the new ones will not be selected; this results in the edges of either the top or bottom lip being selected in isolation. Move the selected edges up or down, to open the mouth a small amount.
Select all the edges along the mouth opening and extrude them in, to begin creating the mouth cavity. Leave an edge loop close to the original lip edges, and extrude again. Make the second set more circular and repeat that to trace out a spherical shape.
Close off the inside after a few more extrude steps by pressing F with the open edges selected to create a face closing it, then use “Poke Faces” to divide that face into triangles with a point at the center.
If you like, you can reduce the detail on this shape by merging vertices in the extruded segments. Because the edges we started from at the lips already had bone weights, all the points we created by extruding will have the same weights, and should be already weighted to the head bone. The same is true for the UV coordinates, but the result of having the same value is that the colour at the edge of the lips will be stretched across the inside of the mouth. You may want to update the UVs by unwrapping the inside of the mouth. Try selecting the edges where we started at the lips and running “Mark Seam”, and then selecting the inside with the L key to Select Linked (you might need to select the “Seam” setting in the redo action panel when it pops up). Now you can unwrap that selection. The inside of the mouth is not very important for texture detail, so it can be fit in between other texture islands.
Finally, you may want to bring the edges of the lips together again, or overlap them slightly to restore the representation of the lips being closed.
Now check that the blink Shape Key is still working properly. If it is, let’s add another Shape Key and name it “MouthOpen”. Select that Shape Key and go into edit mode. Place the 3D cursor where the hinge of the jaw would be, just in front of the ear on one side. We can rotate around this point to help create the Shape Key. In Blender 2.8 and later, the shortcut Shift-Right click will place the cursor on the surface under your mouse.
If you had to apply the Mirror modifier and are missing the benefits of symmetrical modelling, you can turn on the X Mirror setting for mesh editing. This is in the Options section under the Tool tab in the right panel. (in Blender 2.79 and earlier, this setting was in the left panel under the Options tab).
Enable Proportional Editing and turn on the “Connected Only” setting. Proportional Editing can be toggled on and off with the O key.
Select vertices along the jawline and rotate around the X axis. Before completing the rotation, scroll the mouse wheel up or down to change the Proportional Editing radius to bring connected points of the surrounding geometry with the selected points.
Now select parts of the lower lip, and using smaller radii with Proportional Editing, make adjustments to sculpt the open mouth.
Exit edit mode, and scrub the value for the MouthOpen Shape Key to see if it looks natural.
Don’t forget to turn off Proportional Editing.
Animating Shape Keys
If you want to include Shape Key deformation in the character animations, you can animate the values by adding keyframes. You can create a keyframe on a Shape Key by right-clicking on the value next to its name in the Shape Keys section in the Properties View (in the Object Data tab, with the mesh object selected) and selecting “Insert Keyframe”. The value background will turn yellow to show this value is keyed on this frame. This keyframe is created at the current time in the timeline, so be aware of the playhead position.
Once a key has been set, the value background will show green at any position that doesn’t have a keyframe. This colour-coding helps you to know which values are animated, and to avoid creating extra keyframes.
Create a keyframe on the blink Shape Key at the start of the timeline (frame 0), with value 0. Then move to the last frame before the character should blink, and set another keyframe at value 0. Now move to the next frame (or you can leave a few frames if you’d like the blink to happen a bit slowly), and set another keyframe at value 1. Now advance one or more frames and set another keyframe at value 0. Now play the animation to see how that looks. You can adjust the position of the keyframes in the timeline to change the timing of the animation, and drag the playhead at the top of the timeline to scrub through the animation at arbitrary speeds.
If you export the model now, the Shape Key animation won’t be included, and won’t show up in Unity. In order to include it, we need to know about some different animation systems in Blender.
If you switch to the Animation workspace, the two windows on the left will be the Dope Sheet, and the Graph Editor by default. The Dope Sheet window displays a timeline with the keyframes of animated objects in the scene.
The Dope Sheet window has a dropdown next to the editor type button in the bottom left. By default, that will be set to Dope Sheet, but it can also be set to “Action Editor”, and “Shape Key Editor” (as well as some other modes). If you have been making character animations already, you may have set it to Action Editor.
When the Dope Sheet editor is set to Dope Sheet (its default mode), the armature pose animation shows here as well as the Shape Key animation. If your character has armature animation, the armature object will be visible here as a section, with the animated bones listed as rows (with the keys in the rows). If you created a Shape Key animation by setting a keyframe on a Shape Key, the mesh object will also show up as a section here.
If you switch from the Dope Sheet view to the Action Editor, a similar view of keys is shown, but the Action Editor shows animation data for only the active object, where the Dope Sheet view shows animation data for all animated objects in the scene. The Action Editor has a dropdown that lets you choose from a set of animations (or select the Linked Action for this object). This interface is very useful to create multiple animations for a character.
If you switch back to the Dope Sheet view, you may see that there is no way to select actions, and that the keys that are shown for the selected object are those of the action that was selected in the Action Editor view. Though a mesh object can have an action selected, the Shape Key animation cannot be selected in the Action Editor.
If you switch the Dope Sheet view’s mode to “Shape Key Editor”, you can select Shape Key animations for objects. The Shape Key Editor is almost identical to the Action Editor. With the mesh object selected, and this view open, we can select Shape Key animations and create new ones.
The Action Editor and the Shape Key Editor allow you to set the Linked Action for objects, which you can think of as setting their current animation. The Dope Sheet shows the keys of the currently linked actions on all the objects in the scene.
The export settings (for the FBX format) includes a checkbox for “All Actions”, but not one for Shape Key actions, so there isn’t a convenient way to include them, even as separate animations. There is another setting though, for something called “NLA Strips”. NLA stands for “NonLinear Animation”. The NLA system provides a way to have an animation that includes switching the linked actions on objects, which allows you to create and play an animation where an object plays the keyframes of multiple actions. This can be useful in a situation where you might want to make a short animation and have it loop over a long time without needing to copy the keyframes, or needing to copy them again later if you wanted to change the motion a little.
The Dope Sheet does not have a view for NLA Strips, but instead there is a separate editor window for Nonlinear Animation. I find it convenient to change the Graph Editor to the Nonlinear Animation window in the animation workspace layout.
If you have the armature selected in pose mode, and the Action Editor open next to the Nonlinear Animation window, you may notice that the name of the selected Action is shown in an orange row in the Nonlinear Animation window. This orange row shows the currently linked action.
You can think of the Nonlinear Animation window as the real, or superseding animation system. Arranging strips in the Nonlinear Animation window determines what the state of the objects in the scene will be when you move the playhead along the timeline. Creating an animation sets the linked action on an object, which causes it to show up in the Nonlinear Animation window (as well as in the Dope Sheet, and in the Action Editor), but when you create an NLA Strip, you can place actions there, and unlink the actions from the objects; and the contents of the NLA Strips are what plays even when the Dope Sheet and Action Editors have no keys in them.
In order to create an NLA Strip, we can “Push Down” this action either in the Action Editor, or using a button on that orange row in the Nonlinear Animation window. Pushing an action down creates an NLA Track, with that action in it, and clears the current linked action of the object.
If you export now, the imported asset in Unity will include an additional animation (you might need to reimport the asset in Unity sometimes for animations to update, or sometimes just select another object to get the inspector to refresh). The name of that animation will be, not the name of the NLA Track, but the name of the strip in that track. The strip will have the name of the action you pushed down, but the name of the strip won’t update if you rename that action. You cannot rename these strips. You can, however, delete the NLA Track, select the action in the Action Editor again, and push it down again to get the updated name.
Once the action is in an NLA Strip, you can move it left and right in the NLA Track, but also move it up and down to different rows, which are different NLA Tracks. The timeline in the Nonlinear Animation window is separated vertically, grouping NLA Tracks by the objects those animations will play on. An NLA Strip that is an armature action can’t be dragged into an NLA Track for Shape Keys.
If you push down a Shape Key action and export, the asset in Unity will not include an animation for the Shape Key NLA Strip. But you might find that the Shape Key animation is now part of the NLA Strip you made from an armature action. Shape Key animation is being included in exported NLA Strips, and the position of the strips in the Nonlinear Animation window is determining the timing.
Essentially, you should only need one NLA Track for the armature, with strips for actions that need to include Shape Key animation, set up in sequence, and one NLA Track for the mesh object, with strips for the Shape Key actions lined up to match the action strips of the armature.
Each Armature action NLA Strip will be exported as an animation, and will include the shape keys (and other animation data) that overlaps with that strip in the timeline.
Limitations for Shape Keys
As I said before, Shape Keys give you a great deal of direct artistic control; posing the eyelids or mouth uses Blender’s modelling tools, and you just put the geometry in a different position directly, without having to set up more bones, or manage bone weights on the mesh. However, you may have encountered or imagined some limitations already. You might have found that after posing the eyelids closed for the blink, though they looked correct open and closed, the eyeball clipped through the eyelid when the value was half-way (and then corrected it by pushing the eyelids out a bit when closed). This happens because Shape Keys are interpolated linearly; each vertex takes a straight line to its destination. When you blink, your eyelids curve around the shape of the eyes, and a linear interpolation on positions can’t create a curve. You could add a second Shape Key, and get a better approximation in sequence; or by controlling the values for two Shape Keys, have a bulge happen in the middle and fade off at the end; but that is a lot to manage. If you want to match a rotation, Shape Keys are more work than deformation bones.
How about the mouth? We created an open pose and can toggle open and closed, which might be just what you need to make it easy to tell which character is talking in a game. But you might have wanted to add some expressions after seeing how the character still looked emotionless, or fixed in one emotion. Or maybe you just want to better match the words the character is saying. If that’s the case, you can add Shape Keys for a smile, and a frown (and it’s easy to include the eyebrows in those without extra work), and there is a minimal set of mouth shapes to support for animation that you can look up, which can be made as shape keys as well. The mouth shapes match the sounds we make when we speak, which are called phonemes. But you might find yourself wishing you had an easier way to pose the lips, even while producing the phonemes set.
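If you do build a set of phoneme Shape Keys and want to drive them from a script in Unity rather than from baked animation, the blend shape calls shown earlier extend naturally. Here is a rough sketch; the blend shape names (like “Viseme_AA”) and the blend speed are placeholder assumptions, not anything Blender generates for you, and something else (a dialogue system or audio analysis) would be responsible for calling SetViseme as the character speaks.

using UnityEngine;

public class VisemeDriver : MonoBehaviour
{
    public SkinnedMeshRenderer skinnedMeshRenderer;
    public float blendSpeed = 10f; // full open/close transitions per second

    int currentViseme = -1;  // -1 means "no viseme", i.e. the mouth at rest
    int previousViseme = -1;

    // Call with the name of a Shape Key, e.g. SetViseme("Viseme_AA").
    public void SetViseme(string shapeKeyName)
    {
        previousViseme = currentViseme;
        // GetBlendShapeIndex returns -1 if no blend shape has that name.
        currentViseme = skinnedMeshRenderer.sharedMesh.GetBlendShapeIndex(shapeKeyName);
    }

    void Update()
    {
        // Unity blend shape weights run from 0 to 100, so scale the step up.
        float step = blendSpeed * 100f * Time.deltaTime;

        // Ease the current viseme in and the previous one out; leaving the
        // other blend shapes alone means a Blink shape is not disturbed.
        if (currentViseme >= 0)
        {
            float weight = skinnedMeshRenderer.GetBlendShapeWeight(currentViseme);
            skinnedMeshRenderer.SetBlendShapeWeight(currentViseme, Mathf.MoveTowards(weight, 100f, step));
        }
        if (previousViseme >= 0 && previousViseme != currentViseme)
        {
            float weight = skinnedMeshRenderer.GetBlendShapeWeight(previousViseme);
            skinnedMeshRenderer.SetBlendShapeWeight(previousViseme, Mathf.MoveTowards(weight, 0f, step));
        }
    }
}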
The alternative is to handle face animation with deformation bones, as part of the armature. A nice benefit to that is that you don’t need to use the NLA window to manage and combine different animation types (unless you still want to use Shape Keys for part of the animation). And the difficulty is that it’s awkward to set up and manage; for creating the bones, for seeing what you are doing, and for managing the keyframes for additional bones in the skeleton and reusing poses.
Blinking (with deformation bones)
To make the character blink without Shape Keys, we can create a set of bones along the eyelids. If you want to use the Shape Key blink from the steps above, you can still use this technique to make eyebrows.
The deform bones along the eyelids are going to stretch between another set of bones which will be the control points.
To set up the eyelids and make the character blink, we will:
- create the control bones
- create bones connecting the control bones
- set up constraints to make those bones stretch when the control bones move
- set the envelope radius on the deform bones, and add them with envelope weights
Editing the armature after a mesh is attached with weights, or after setting animation keys, can cause problems. The normal workflow for character models is to create the mesh; create the armature; attach the mesh (creating the bone weights); adjust the weights (weight painting); then begin animating. It’s good to not do too much work on a step until the step before feels finalized, but these steps are interdependent, and it’s important to be able to experiment and to make changes while testing the effects.
We are going to add some new bones to be control points along the top and bottom eyelid, and then connect those bones with another set that will be the deformation bones.
Before we do that, let’s move the neck and head bones to a different layer, so we can hide them when they would obstruct our view. Select the armature in edit mode, and select the head and neck bones. Run “Change Bone Layers”, or press M to pop up a window with two sets of sixteen boxes. You can use these thirty-two slots for your own organization.
Right now, all the bones in this armature are in the first bone layer. The selected bones are in the first layer, so the first box is coloured blue in this pop-up. Click on the second box to move the selected bones to the second bone layer. The head and neck bones will disappear; this is because the armature is set to show only the first bone layer. You can toggle visible bone layers in the Armature tab of the properties panel.
You can hold shift to select multiple layers. Bones can also be placed in multiple layers by holding shift when clicking on the Bone Layer pop-up. Hide the new bone layer to hide the head and neck bones.
Now let’s enable Snapping and set the snapping mode to Face; this means that while we move a bone in the viewport, its position will snap to the surface of our model.
To add the first bone, use Shift-Right-Click to position the 3D cursor at the inside corner of the character’s left eye.
Now with the armature selected, and in edit mode, add a new bone. This will create a bone starting at the cursor position, but pointing up, and one unit long. We won’t be able to see any of the face if the bones are this size, so let’s adjust that. In the right panel, in the View tab, and under the 3D Cursor section, the X, Y, and Z components of the cursor’s location are listed. These values are the location where we just placed the cursor, which is the starting position of the bone we just made. If you click into the Y field you can enter an arithmetic expression here. Place the text caret at the end of the current value, add “- 0.02”, and press Enter. This will evaluate to the previous Y value minus 0.02. The cursor should move forward 2cm. Now select the tip of the bone we made and use the command search to run “Snap Selected to Cursor”, or open the Snap menu with Shift-S and select it. Now we have a small bone pointing forward at the corner of the character’s eye. Name this bone “UpperEyelidControl_L”, or something similar.
Select the bone and press Shift-D to create and drag a duplicate. While you drag this bone, it continues to follow the surface of the model when the Snap mode is set to Face. Place five control bones in an arc across the top eyelid.
Now we are going to create bones connecting between these. Select the base of the first bone you created (so that just the ball at the wide end is highlighted), then select the base of the next control bone along the eyelid and press F to create a bone connecting those points.
When you add a new bone, Blender selects the tip of that bone. Leaving the tip of the new bone selected, shift-select the base of the next control bone, and press F again to create the next one. Now select the base of the next control bone and press F, and create one more with the last control bone.
Rename each bone in this new sequence to have names like “UpperEyelid0_L”, and numbered in sequence. Turn off Snapping mode when you are done positioning the bones on the surface of the model.
Creating the bones in this way causes each bone to have its parent set to the previous bone (and the first bone has its parent set to the first control bone we used). We want to change the parent of all of these bones (including the control bones we created first) to be the head bone. Blender supports editing properties of multiple selected objects to a degree. Only the properties of the active object are shown, but for some properties, you can hold the Alt key when making a change and have that value apply to all selected objects. We could select all of the bones and hold Alt while we select the head bone from the parent drop-down list, but that will cause some strange results on some of the bones. The “Connected” setting needs to be disabled first. “Connected” should only be on for the last three connecting bones (the first one was created with the first control bone as a parent; when a bone is connected to the base of another bone, it is not considered connected). Select the last three connecting bones and hold Alt when unchecking the “Connected” setting. Now select the rest of the new bones.
Now go to the Bone Properties and click the Parent field to bring up a choice of other bones in the armature. Select the Head bone, and then hold Alt and press Enter to set that as the parent for all selected bones. Now the upper eyelid bones are all direct children of the Head bone, and not connected to each other.
Next we’ll set up constraints on the deform bones so they stretch between the control bones when the control bones are moved in pose mode. Switch to Pose mode. Select the first control bone and then the first connecting bone. Press Ctrl-Shift-C to “Add Constraint (With Targets)” for the active bone (the active bone is the last selected bone, showing with a lighter highlight). From the list that appears, choose “Copy Location”.
Then select the next control bone, and the same connecting bone again (the first bone again, to add a second constraint). Press Ctrl-Shift-C again and choose “Stretch To”. You can try grabbing either of those control bones and moving them to check that it’s working for that first connecting bone. Press Alt-G to reset the position of selected bones.
Now repeat this process for the other connecting bones: Select the previous control bone, then the connecting bone, and add “Copy Location”. Then select the next control bone and the same connecting bone, and add “Stretch To”.
Now we are ready to add the vertex weights for these bones.
You can manually add vertex groups to a mesh with names that match the names of the bones. Or you can highlight each bone while in Weight Painting mode for the mesh and just start painting (the armature needs to be in pose mode; use shift-select to highlight bones). There are also some ways to get usable starting weights for bones, even when attaching them after having finished weight painting for previous bones.
Bones can be added, contributing automatic weights to the mesh, without destroying the weights of other bones. You can add the weight group for a new bone with the mesh selected and in the weight painting mode. Shift-select the bone you want to add weights for, and then search for the command “Weights > Assign Automatic From Bones”. Automatic weights are a good starting place for most other bones in a character armature, but invoking automatic weights like this is not likely to give us a good result for the deformation bones of the face. We can use another system called Bone Envelopes to get starting weights for these. Using Bone Envelopes can give us a hand setting up the weights for these new bones, and means we don’t need to rely on weight-painting skills as much. If you are comfortable painting bone-weights, you can skip the Bone Envelope section below.
With the Armature selected, go to the Armature properties tab, and in the section called “Viewport Display”, there is a setting called “Display As”. By default, this is set to “Octahedral”, and shows the bones as you are probably used to seeing them; with a wide end and a pointy end, with balls at the beginning and end, and four sided like pyramids. The other options for this setting can be useful to switch to at times when you’re having trouble seeing the model, the bones, and the weight data all overlapping. If we switch this to “Envelope” though, in addition to the bones looking like tubes now, selected bones will also have a larger transparent outline. This outline is the weight envelope. With the first connecting eyelid bone selected, go to the Bone Properties tab, and under the Deform section, change the Envelope Distance setting. You can also change the Radius Head, and Tail properties, to control the shape of this bone. Envelope weights can be used as an alternative to Blender’s automatic weights when first attaching a mesh to an armature. But you can also invoke the use of that system when adding new weights for a bone.
We’ll set these values to something very low for the connecting eyelid bones, but we should set low values on the control bones as well, just so they don’t obstruct our view of the connecting bones. Select the control bones and the connecting bones. Set “Radius Head” to 0.001, and hold Alt when you press Enter. Set “Tail” to 0.001 as well, holding Alt when you press Enter. Now set “Envelope Distance” to 0.01, again holding Alt when you press Enter.
The bones should look like this now, but if they don’t, try not to take it as a personal failing; the multi-select editing in Blender is not very good! Several things could have gone wrong here, and you might need to fall back on editing them one at a time. If you deselect the active object, your selection can contain no active object; and if the properties view shows the active object, what does it show when there is no active object in the selection? It shows whatever was the last selected active object, even though that object is not selected now. If I forget to hold Alt when I enter a new number, only the active object changes that value. If I enter the same value, the value doesn’t change (OK, that makes sense); but if I hold Alt and enter the same value as the active object, no values will change. If some of the bones have different values, entering a new one that is different from all of them will give that value to the active bone, and give scaled values to the others (which is truly a bizarre behaviour from my perspective).
Let’s select just the eyelid control bones now, and in the Bone Properties, hold Alt and uncheck the toggle for Deform. These bones won’t have any influence over the mesh, and are just targets for the bones connecting them. With Deform disabled, Envelope Distance, Radius Head, and Tail are disabled in the bone properties tab. We reduced the values first, while the fields were still editable.
In edit mode, select all the eyelid bones again and search for and run “Symmetrize”. Using Symmetrize won’t create new vertex groups for mirrored bones, so right now, before adding the vertex groups is the right time to mirror the bones.
Select the mesh, and press Ctrl-Tab to bring up a mode selection menu, and choose weight-painting mode. Shift-select the connecting eyelid bones from each side. Then search and run the command “Weights > Assign From Bone Envelopes”.
Try clearing the bone selection with Alt-A and selecting one or more control bones and moving them around. The eyelid should move with them, but it lags behind a bit; you would need to overshoot by a lot to make the eyelids meet for a blink. The weights were added according to the envelopes, but the existing weights did not change. The eye geometry was previously weighted to the head bone, so in order to give the new eyelid bones control of the vertices they are weighted to, we still need to erase the weights from the head bone on those vertices.
I introduced the Bone Envelopes settings to try to avoid weight painting as much as possible, but there isn’t a command to erase the envelopes from other bone groups, so you still need to manually erase the vertices from the head bone’s vertex group to match the weights added by the eyelid envelopes. This is the best I could do without adding a section on scripting.
To switch to an eraser, go to the Tool tab in the Right panel, and in the Brush Settings section, change the “Blend” property from “Mix” to “Subtract”.
To work more slowly, try setting the “Strength” setting lower, and taking more clicks/strokes to approach the values you want.
If you Shift-Click a bone while you are in weight painting mode, the weights for that bone will be displayed. Selecting a bone with no vertex group will show the whole model in magenta. You can also select the vertex groups list in the Object Data properties tab, which can let you view and edit the weights for bones that are hidden.
Now you should do a test where you pose the head and eyelids to one side, just to check that there are no vertices left with no bone weights at all.
The lower eyelid can be created by following this process a second time; or by duplicating the whole set of bones, repositioning them on the surface, and renaming them (replacing “Upper” with “Lower”). If you duplicate, keep in mind that in order to reposition the control bones, you should select the control bone, as well as the ends of any connecting bones hidden under its base. Alternately, you could move just the control bones in pose mode (where the constraints will position the connecting bones), then select the control bones and connecting bones and run “Apply Selected as Rest Pose”.
Use Ctrl-Shift-C (Add Constraint (With Targets)) to add the Copy Location constraint, with the control bone at the head end selected along with each connecting bone; and the Stretch To constraint, with the control bone at the tail end selected along with each connecting bone.
Set the Bone Envelope values as described above.
Run “Symmetrize” to create the mirrored bones for the other eye.
Run “Weights > Assign From Bone Envelopes” with the lower eyelid connecting bones, and remove the influence from the head bone for those as well.
Now these eyelid bones are done and ready to keyframe.
These bones give you a lot of control over the eyelids; but that can be a lot to manage. These bones will have a lot of keyframes to copy just in order to blink. Something I encourage you to look up on your own is using Action Constraints on the control bones. Essentially you can create an animation (an Action) for the character that is just blinking with the eyelids, and then by adding an Action Constraint to each of the control bones, and referencing that action and its start and end frames, you can control how far through the animation the bones should appear. An additional bone can be created, and set as the target for those Action Constraints (or you could use the eye look target, for example), and then by animating the target bone, we can control how far through that animation the eyelids should currently be. It is a bit of a confusing indirection, but for blinking and reusable facial expressions, it can work well.
Eyebrow Expression
For the eyebrows, we can use the same system of control bones connected by stretching bones that we did for the eyelids. The eyebrows and eyelids can work together to form facial expressions along with the mouth.
If you are feeling more comfortable with weight painting, you can skip the Bone Envelope settings and paint in the bone weights manually.
A mouth with posable lips
In this section we will:
- finish the inside of the mouth with simple teeth
- create a jaw bone
- create a set of bones with control bones similar to the eyelids
If you skipped the Shape Keys section above and your character doesn’t have an inside to their mouth yet, go back and follow that section to create the inside of the mouth now. We are going to use the same mouth, but add some teeth and gums inside.
Once you have the inside of the mouth cavity, we will add in top and bottom rows of teeth, with gums. The teeth rows will be separate meshes, and connected to the animation bones for the head and the bottom jaw. We won’t need to worry about bone weights on them, and we create gums just to cover the space above or below the teeth.
Place the cursor and add a new cube, then go into edit mode. Scale the whole thing down by pressing A, S, 0.01 (a tooth is closer to 2 cm across than 2 meters).
Now let’s scale it shorter in the Y direction a bit more.
Select the edges following the X direction (with a box selection down the front with wireframe on, or by holding Alt-Ctrl and clicking one of them), and then use “Subdivide”. Open the redo command panel if it shows up collapsed, and set the “Number of Cuts” to 2. Now select all the edges that don’t follow the X direction, and then use Subdivide again (with Number of Cuts: 1).
Teeth are categorized into 4 types: incisors, canines, molars, and premolars. Incisors are the long-edged, square looking teeth in the front of your mouth; canines are the more pointy ones to the sides of those, molars are large, boxy ones in the back; and premolars are a bit more round, but also like molars. We are going to use this subdivided box to make one of each type, and then arrange and scale copies together. None of the types will need bottom faces, so delete the faces on the bottom of this subdivided cube. I set “Shade Smooth” so the teeth appear rounded. It can be handy to do a simple UV Unwrap on the subdivided box before making the types of teeth, in case you want to add them into the character’s texture map.
Here are the four types of teeth, made from the same subdivided box.
Use a Mirror modifier, and arrange copies of these teeth to form the teeth of the bottom jaw. The teeth of the bottom jaw form a semicircle just inside the lips. You can make an arc first as a guide if you like. Use isolation mode, and the Alt-B frustum clipping to make it a bit easier to work where things are intersecting. Duplicate this row of teeth and move them up. Flip them vertically to become the top row of teeth. Select individual teeth with the L key, and adjust the scale. Scale the top center incisors a bit larger, and reposition the rest to fit.
Copy a pair of edges making a V from one of the teeth on the top or bottom, and center those points. We are going to extrude this edge to create the gums.
Select those edges and, from a top view, extrude them, adjusting the rotation, scale, and position at each step to create segments at the center of each tooth, and at the center of the space between each pair of teeth. When you get to the larger molars, you can make an extra segment for each tooth (a pair of edges before the tooth, 1/3rd along the tooth, and 2/3rds along the tooth). Select the pairs of edges you placed between each pair of teeth, then pull them up a bit.
Grab the edges along the inside and outside and extrude them to make a bit more of the gums. Copy and flip this to make the gums for the top. Adjust the edge positions to match the different teeth positions.
Split this object into two so that the top and bottom teeth will be able to move independently. Name the objects “TopTeeth”, and “BottomTeeth”.
As an additional step, I widened the inside of the mouth cavity to match the teeth a bit better.
Make the jaw bone
Place a new bone for the jaw; this can be created by extruding a bone from the head bone to the start position, and then extruding another to the tip of the jaw, and then deleting the first bone; or you could duplicate the head bone and set the parent to the head; or position the cursor and add a bone, and set the parent. Name this bone “Jaw”.
Select the bottom teeth and Shift-select the jaw bone. Then use Ctrl-P to set the parent and select “Bone”. Set the parent of the top teeth to the head bone in the same way.
At this point, the jaw bone will move the bottom teeth, but won’t open the lips at all. We need to paint the weights of the jaw area onto the new bone, and remove the weights of those points from the head bone.
The weights should look something like this. You can rotate the jaw open and continue painting, which can help get the lips weighted separately. Press Alt-R to reset the rotation. The bottom of the inside of the mouth should also be weighted to the jaw. This is probably the hardest step in this tutorial. Weight painting takes a lot of practice.
This is a bone-based mouth that works as a substitution for the Shape-Key version above, though it offers much less control for posing the lips. We can add deformation bones for the lips in the same way we did for the eyes. The only difference will be that the mouth is on the line of symmetry of the face. We’ll just make one half of the top and bottom lips, and use Symmetrize to create the other half.
Lips
Set Snapping Mode to “Face”, and duplicate one of the control bones from the eye. Rename it to “UpperLipControl”, and then delete the mirrored bone if one was created. Place it at the center of the upper lip. Then center the bone exactly by setting the “Head X” and “Tail X” values to zero in the Transform section of the Bone tab in the Properties view. Duplicate this bone and place it at the center of the bottom lip, and center it exactly as well. Rename this bone to “LowerLipControl”.
Duplicate the upper lip bone and name the duplicate “UpperLipControl_L”. Duplicate the lower lip bone and name the duplicate “LowerLipControl_L”.
Now duplicate those bones twice more along the top lip, and bottom lip.
Turn off snapping.
As you did with the eyes, select the bases of pairs of control bones, and press F to create the bone between them, starting from the center of the lips. Rename these bones in the pattern of “UpperLip_0_L”, and “LowerLip_0_L”. Disable “Connected”, and set the parent of these bones to the head bone.
Switch to pose mode to add the bone constraints like we did for the eyes. For each connecting bone, select the previous control bone, then the connecting bone, and press Ctrl-Shift-C to add a bone constraint and select “Copy Location”. Select the next control bone, and the same connecting bone, and press Ctrl-Shift-C and select “Stretch To”.
Select all the lip control and connecting bones, and switch the armature’s Display As property to “Envelope”. Holding Alt when you hit Enter, set the Envelope Distance to 0.01, and the Radius Head and Tail values to 0.001.
Switch back to edit mode and select all the mouth bones. Use “Symmetrize” to create the mirrored bones. All the bones named with “_L” will get mirror copies, and the two center ones will not. Check that symmetrize worked correctly and that the constraints also work.
Select the mesh in Weight Painting mode and Shift-select the connecting bones. Run “Weights > Assign From Bone Envelopes”. Remove the weights from the head and jaw bones in weight-painting mode. Paint to adjust the weights and especially remove crossover between the top and bottom lips.
Adjustment can take some time, and be very frustrating because of all the concurrent visual systems involved. Take breaks as needed.
Now the mouth is ready to keyframe.
For a little more help making some expressions, you might want to add some deformation controls on the cheeks. These can use the same construction principle as the eyelids, eyebrows, and lips.
More things to try on your own
Consider making an action the way that I suggested for blinking, but have that animation run through the set of phonemes. Then you can drive the mouth shape by syncing with a time in that action using the rotation or position of a bone.
Try making the bottom lip come down when the jaw opens by adding bone constraints to the control bones of the bottom lip.
Add a tongue.
Find a video clip of a character speaking or emoting (or just audio can work as well), and create an animation to match the dialogue or expressions.