How to animate a mesh in WebGL


I'm new to WebGL, and I need to animate a human face. I have a polygon mesh downloaded from https://sketchfab.com/models/4d07eb2030db4406bc7eee971d1d3a97. How do I select the points on the eyes, mouth, etc. and move them to create expressions? Thanks in advance.


There are 2 answers below.

Answer 1

WebGL is just a rasterization library. It draws lines, points and triangles. That's it. Everything else is up to you.

Loading and drawing a 3D model requires hundreds or thousands of lines of code. Being able to select parts of a model (eyes, mouth, tongue) requires even more code and structure, none of which has anything to do with WebGL. Animating such a model requires yet more code, which also has nothing to do with WebGL.

I'd suggest you use a library like three.js that supports loading models, selecting parts of them, and animating them. Telling you how to do all of that in WebGL directly would basically be an entire book and is far too broad a topic for one question on Stack Overflow.
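
To give a feel for what that looks like, here is a minimal sketch using three.js, assuming the model has been converted to glTF and that the asset contains a node named "Head" and a morph target called "smile". Both of those names are assumptions for illustration, not something the Sketchfab download guarantees.

    import * as THREE from 'three';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
    camera.position.set(0, 1.5, 3);

    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1.0));

    new GLTFLoader().load('head.gltf', (gltf) => {
      scene.add(gltf.scene);

      // Select a part of the model by name (assumes the artist named the nodes).
      const head = gltf.scene.getObjectByName('Head');

      // If the mesh has morph targets (blend shapes), drive an expression directly.
      const face = gltf.scene.getObjectByProperty('type', 'Mesh');
      if (face && face.morphTargetDictionary && 'smile' in face.morphTargetDictionary) {
        face.morphTargetInfluences[face.morphTargetDictionary['smile']] = 1.0;
      }

      renderer.setAnimationLoop(() => {
        if (head) head.rotation.y += 0.01;  // simple proof-of-life animation
        renderer.render(scene, camera);
      });
    });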

Otherwise, doing it in WebGL directly is a lot of work.

First you need to write a 3D engine because as I said above WebGL is just a rasterization library. It doesn't do 3D for you. You have to write the code to make it do 3D.

So you want to load a 3D model. You linked to this image:

[image: bust]

To render that image in WebGL you need to write multiple kinds of shaders. Looking at that image you'd need to write, at a minimum, some kind of shadow casting system, some kind of normal mapping shader with lighting, and a bloom post-processing system (see the glow on the top of his head). Each of those topics is an entire chapter in a book about 3D graphics. WebGL doesn't do it for you. It just draws triangles. You have to provide all the code to make WebGL draw that stuff.
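
To give a sense of scale, here is a minimal sketch of just one of those pieces: a single directional light with per-fragment Lambert shading, written as the GLSL shader pair you would have to supply yourself. Shadows, normal mapping and bloom would each need additional shaders and additional passes on top of this.

    // Shader sources you would compile and link yourself with gl.createShader / gl.createProgram.
    const vertexShaderSource = `
      attribute vec4 a_position;
      attribute vec3 a_normal;
      uniform mat4 u_projection;
      uniform mat4 u_view;
      uniform mat4 u_model;
      varying vec3 v_normal;
      void main() {
        gl_Position = u_projection * u_view * u_model * a_position;
        v_normal = mat3(u_model) * a_normal;   // assumes uniform scale
      }
    `;

    const fragmentShaderSource = `
      precision mediump float;
      varying vec3 v_normal;
      uniform vec3 u_lightDirection;   // normalized, pointing toward the light
      uniform vec3 u_baseColor;
      void main() {
        float diffuse = max(dot(normalize(v_normal), u_lightDirection), 0.0);
        gl_FragColor = vec4(u_baseColor * diffuse, 1.0);
      }
    `;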

On top of that you need to make some kind of scene graph to represent the different parts of the head (eyes, ears, nose, mouth, ...). That assumes the model is set up into parts. It might just be a single mesh.
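
A scene graph can be as small as a node with children and local/world matrices. Here is a minimal sketch using the gl-matrix library for the matrix math; the head/jaw/eye hierarchy shown is hypothetical, since the downloaded model may well be one undifferentiated mesh.

    import { mat4 } from 'gl-matrix';

    class Node {
      constructor(name) {
        this.name = name;
        this.children = [];
        this.localMatrix = mat4.create();   // transform relative to the parent
        this.worldMatrix = mat4.create();   // transform relative to the scene root
      }
      add(child) {
        this.children.push(child);
        return child;
      }
      updateWorldMatrix(parentWorldMatrix) {
        if (parentWorldMatrix) {
          mat4.multiply(this.worldMatrix, parentWorldMatrix, this.localMatrix);
        } else {
          mat4.copy(this.worldMatrix, this.localMatrix);
        }
        for (const child of this.children) {
          child.updateWorldMatrix(this.worldMatrix);
        }
      }
    }

    // Hypothetical head hierarchy -- the real model may not be split into parts at all.
    const head = new Node('head');
    const jaw = head.add(new Node('jaw'));
    const leftEye = head.add(new Node('leftEye'));
    head.updateWorldMatrix();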

Assuming it is set up into parts you'll need to implement a skinning system. That's another whole chapter of a book on 3D graphics. Skinning systems will let you open and close the eyelids or the mouth, for example. Without a skinning system the polygons making up the head will fly apart. Another option would be to use a shape blending system where you morph between multiple models that share the same topology, but it will be hard to animate the eyes and mouth separately using such a system.
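
For reference, the GPU half of a skinning system is often just a vertex shader that blends a few bone matrices per vertex, something like the sketch below. The attribute names and the bone count are assumptions; the CPU side that fills in u_bones every frame is the larger part of the work.

    const skinningVertexShader = `
      attribute vec4 a_position;
      attribute vec4 a_boneWeights;
      attribute vec4 a_boneIndices;
      uniform mat4 u_bones[40];            // assumed maximum bone count
      uniform mat4 u_projection;
      uniform mat4 u_view;
      void main() {
        // Final position is a weighted blend of up to four bone transforms.
        mat4 skinMatrix =
            u_bones[int(a_boneIndices.x)] * a_boneWeights.x +
            u_bones[int(a_boneIndices.y)] * a_boneWeights.y +
            u_bones[int(a_boneIndices.z)] * a_boneWeights.z +
            u_bones[int(a_boneIndices.w)] * a_boneWeights.w;
        gl_Position = u_projection * u_view * skinMatrix * a_position;
      }
    `;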

After all of that you can start to implement an animation system that lets you move the bones of your skinning system to animate.
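
Per animation channel, that system boils down to something like the sketch below: find the two keyframes surrounding the current time and interpolate between them. Real systems use quaternion slerp for rotations; a plain lerp is used here only to keep the sketch short, and the jaw channel is a made-up example.

    // Sample one keyframe channel at time t (seconds).
    function sampleChannel(times, values, t) {
      if (t <= times[0]) return values[0];
      if (t >= times[times.length - 1]) return values[values.length - 1];
      let i = 0;
      while (times[i + 1] < t) i++;
      const blend = (t - times[i]) / (times[i + 1] - times[i]);
      return values[i] + (values[i + 1] - values[i]) * blend;
    }

    // Hypothetical jaw-open channel: closed at 0s, open at 0.5s, closed at 1s.
    const jawAngle = sampleChannel([0, 0.5, 1.0], [0, 0.4, 0], 0.25); // 0.2 radians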

Then on top of all of that you'll need to figure out how to take the data from the model you linked to and turn it into data the engine you just spent several months writing above can use.

I'm only guessing you probably didn't know how much work it was going to be, because WebGL doesn't do any of it for you since it just draws triangles. If you really want to learn all of that and do it all yourself I'd start with http://webglfundamentals.org to learn the basics of WebGL and from there expand until you can do all of this. It would be a great learning experience. I'm only guessing it will take several months until you can load that head and animate the parts in your own WebGL code.

Or you can skip all that and just use a library that already does most of that for you.

Answer 2

I just manually decompressed the download and from what I can tell it's just a set of textures, bump maps, etc., with baked-in static lighting ... enough to enable you to render the model as a static 3D model using WebGL/OpenGL ... a quick cursory examination suggests it would be up to you to identify all the various putative moving/animated bits and bobs

This is not to discourage you ... in fact such a static dataset could be a nice bedrock atop which to roll your own animation

Once you render it statically, the next step would be to create a picker ... which is a process where you interactively move the model and/or camera and/or eye so that you can identify the XYZ location of desired segments (vertices/polygons), then groupings of such segments.
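
As a rough sketch of the picker idea: once a mouse click has been turned into a world-space point (by un-projecting it, or by reading back the depth buffer), you can collect every vertex near that point into a named group. meshPositions and clickedPoint are assumed inputs here, not something the dataset provides.

    // positions: flat Float32Array of xyz triples; pickedPoint: [x, y, z] in world space.
    function pickVertices(positions, pickedPoint, radius) {
      const picked = [];
      const r2 = radius * radius;
      for (let i = 0; i < positions.length; i += 3) {
        const dx = positions[i]     - pickedPoint[0];
        const dy = positions[i + 1] - pickedPoint[1];
        const dz = positions[i + 2] - pickedPoint[2];
        if (dx * dx + dy * dy + dz * dz <= r2) {
          picked.push(i / 3);                 // vertex index
        }
      }
      return picked;
    }

    // e.g. everything within 0.05 units of the clicked point on the left eyelid
    const eyelidVertices = pickVertices(meshPositions, clickedPoint, 0.05);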

Let's say you have used your picker to demarcate the entire head from the neck, or say one eye from the face. Now you would separate each into its own object in your model. This separation permits you to move each object as a whole, independently of the other objects, whereas initially the entire model was a single object and had to move as a monolithic blob.

Now that the original source dataset has become a set of independently movable objects you can introduce new graphic features which were not part of the original dataset. Here is where it gets creative. Programmatically you can dynamically move subsections of the mesh: groups of vertices and edges. The challenge is that you would also have to simultaneously update all the corresponding entries across the various texture files for those vertices/edges, which would be a nightmare and is not an intended use case for the given dataset.
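
As a rough sketch of what "dynamically move subsections of the mesh" looks like in raw WebGL, assuming you kept the picked vertex group from the step above along with a copy of the rest positions: gl, positionBuffer, meshPositions, restPositions and eyelidVertices are all assumed setup, not part of the dataset.

    // Oscillate the picked eyelid vertices and push the updated positions to the GPU.
    function animateEyelid(time) {
      const offset = Math.sin(time * 0.005) * 0.01;   // small up/down movement
      for (const v of eyelidVertices) {
        meshPositions[v * 3 + 1] = restPositions[v * 3 + 1] + offset;  // move along Y
      }
      gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
      gl.bufferSubData(gl.ARRAY_BUFFER, 0, meshPositions);  // re-upload the Float32Array
      requestAnimationFrame(animateEyelid);
    }
    requestAnimationFrame(animateEyelid);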

The dataset you have chosen is extremely lifelike due to the layered set of texture files, all in exact correspondence to the current state of the static rendering/lighting. Perhaps what would benefit you, to aid in dynamically animating the mesh, is to start with a simple mesh, or (in many ways easier) to dynamically synthesize your own mesh "ab initio", in which you can more easily identify the distinct objects to animate, and perform all your own lighting etc. yourself to give it that post-processing snap.
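
For example, a minimal sketch of synthesizing a mesh "ab initio": a UV sphere as a stand-in head, where every vertex index is known to you up front and therefore easy to group into regions (eyes, mouth, ...) for later animation. The function name and parameters are illustrative.

    // Build positions and triangle indices for a UV sphere.
    function makeSphere(rings, segments, radius) {
      const positions = [];
      const indices = [];
      for (let r = 0; r <= rings; r++) {
        const phi = (r / rings) * Math.PI;             // 0 .. PI, pole to pole
        for (let s = 0; s <= segments; s++) {
          const theta = (s / segments) * 2 * Math.PI;  // 0 .. 2PI around the axis
          positions.push(
            radius * Math.sin(phi) * Math.cos(theta),
            radius * Math.cos(phi),
            radius * Math.sin(phi) * Math.sin(theta));
        }
      }
      for (let r = 0; r < rings; r++) {
        for (let s = 0; s < segments; s++) {
          const a = r * (segments + 1) + s;
          const b = a + segments + 1;
          indices.push(a, b, a + 1, a + 1, b, b + 1);
        }
      }
      return { positions: new Float32Array(positions), indices: new Uint16Array(indices) };
    }

    const headStandIn = makeSphere(32, 32, 1.0);   // a simple mesh you fully control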