Lwjgl+Assimp animation importing

Started by Orty97, February 11, 2022, 12:37:36


Orty97

Hello.
Been using LWJGL for some time now, learning from documentation/forums/videos etc.
But I've hit a hard roadblock with animations that pushed me to make an account and post here.

What I am trying to do:
- skeletal animation (mesh skinning): I model in Blender -> export as a Collada file -> import with lwjgl-assimp;

*A small problem that might give some insight into my issue (or it might point to it being a Blender export problem): if I add more than one action to a mesh, the actions get "pushed" into NLA tracks and just don't appear as animations in the imported file. The number of animations in the imported file is 0 unless I have exactly one action when exporting the model.*

I am not looking for code corrections, as I can pretty much figure those out myself (and I actually enjoy the struggle). But it seems that I don't understand the process 100%, or maybe I'm just missing something. I even tried with simple meshes (a bunch of cubes with a few bones) to test things out.

So right now I am stuck at using the bone transforms from the aiNodeAnim's correctly. To go into more detail:

I can load a skinned mesh using the "bind-pose" transforms stored in the aiNodes.
Now for the "animated pose" of the mesh I would need the following:

1) Bone-Weight data
2) Bone "bind-pose" transform, in case there is no animation data for that particular bone
3) The offsetMatrix specific to that bone
4) Animation data for that bone (position/rotation/scaling keys from the aiNodeAnim)
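On item 1: Assimp stores the weights per bone (each aiBone holds a list of (vertexId, weight) pairs), while the vertex buffer needs them per vertex, so the layout has to be inverted. A minimal sketch of that inversion — `boneWeights` and `collectWeights` are hypothetical names, with plain arrays standing in for the aiBone data:

```java
public class WeightExtraction {
    static final int MAX_WEIGHTS = 4; // common cap on influences per vertex

    // boneWeights[boneId] is a hypothetical stand-in for aiBone.mWeights():
    // an array of {vertexId, weight} pairs. We flip it into per-vertex arrays
    // (MAX_WEIGHTS slots per vertex) ready for upload as vertex attributes.
    static void collectWeights(float[][][] boneWeights, int vertexCount,
                               int[] outBoneIds, float[] outWeights) {
        int[] counts = new int[vertexCount]; // influences seen so far per vertex
        for (int boneId = 0; boneId < boneWeights.length; boneId++) {
            for (float[] vw : boneWeights[boneId]) {
                int vertexId = (int) vw[0];
                float weight = vw[1];
                int slot = counts[vertexId];
                if (slot < MAX_WEIGHTS) { // extra influences past the cap are dropped
                    outBoneIds[vertexId * MAX_WEIGHTS + slot] = boneId;
                    outWeights[vertexId * MAX_WEIGHTS + slot] = weight;
                    counts[vertexId]++;
                }
            }
        }
    }
}
```

If more than MAX_WEIGHTS bones influence a vertex, you would normally keep the largest weights and renormalize rather than drop them blindly, but the cap-and-drop above keeps the sketch short.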

From what I understand, the process should be as follows:

- You call the root bone's "update local-transform" method with an identity 4x4 matrix as the argument. This method recursively goes through all the child nodes (bones), passing along the resulting "local transform" of each particular bone. The "local transform" of a bone should be either the regular bind-pose transform stored in the aiNode multiplied by the parent transform, or the "animated transform" calculated from the animation data in the aiNodeAnim, multiplied by the parent transform. Now you have a "local transform" for all the bones of the mesh. (I've seen this process called "transform concatenation".)
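The concatenation pass above can be sketched like this. `Bone`, `concatenate` and the column-major `float[16]` matrices are hypothetical stand-ins for whatever node/math types you actually use (e.g. JOML's Matrix4f):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class Skeleton {
    // Hypothetical bone node; in practice you would build this from the aiNode hierarchy.
    static class Bone {
        String name;
        float[] bindLocalTransform;      // from the aiNode
        float[] animatedLocalTransform;  // built from aiNodeAnim keys, or null if none
        List<Bone> children = new ArrayList<>();
        Bone(String name, float[] bind) { this.name = name; this.bindLocalTransform = bind; }
    }

    // The recursive pass: pick the animated local transform if this bone has
    // animation data, otherwise fall back to the bind pose, then pre-multiply
    // by the parent's concatenated transform and recurse into the children.
    static void concatenate(Bone bone, float[] parentGlobal, Map<String, float[]> out) {
        float[] local = bone.animatedLocalTransform != null
                ? bone.animatedLocalTransform
                : bone.bindLocalTransform;
        float[] global = mul(parentGlobal, local); // parent * local; order matters
        out.put(bone.name, global);
        for (Bone child : bone.children) concatenate(child, global, out);
    }

    // Column-major 4x4 multiply, r = a * b (matches OpenGL/JOML storage order).
    static float[] mul(float[] a, float[] b) {
        float[] r = new float[16];
        for (int c = 0; c < 4; c++)
            for (int row = 0; row < 4; row++)
                for (int k = 0; k < 4; k++)
                    r[c * 4 + row] += a[k * 4 + row] * b[c * 4 + k];
        return r;
    }

    static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f;
        return m;
    }

    static float[] translation(float x, float y, float z) {
        float[] m = identity();
        m[12] = x; m[13] = y; m[14] = z;
        return m;
    }
}
```

One thing worth double-checking in this area is the multiplication order (parent * local vs. local * parent) and row-major vs. column-major storage — Assimp's aiMatrix4x4 is row-major, so it usually needs transposing before use with OpenGL-style math.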

- Now we have all the data needed. I will try to describe, at the vertex level, what I THINK these matrices do; this might be where I am wrong. Each vertex is taken from global space to "bone space" by the offsetMatrix (which is just the inverse bind transform), and is then moved into position by the bone's new "local transform". These positions differ from one influencing bone to another, which is where the weights come into play: using the weights as scalars to blend the matrices, we get an "average transformation matrix" for each vertex of the mesh, which we apply to the position (and corresponding normal).
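A CPU-side sketch of that per-vertex blend (on the GPU this usually lives in the vertex shader). All names here are hypothetical, and the matrices are plain column-major `float[16]` arrays:

```java
public class Skinning {
    // One vertex influenced by up to N bones. For each influence,
    // finalMatrix = boneGlobalTransform * offsetMatrix: the offsetMatrix
    // (inverse bind transform) takes the vertex into bone space, and the
    // bone's concatenated transform moves it back out into its animated
    // position. The weighted sum blends the candidate positions.
    static float[] skinPosition(float[] pos, int[] boneIds, float[] weights,
                                float[][] globals, float[][] offsets) {
        float[] result = new float[3];
        for (int i = 0; i < boneIds.length; i++) {
            if (weights[i] == 0f) continue;
            float[] m = mul(globals[boneIds[i]], offsets[boneIds[i]]);
            float[] p = transformPoint(m, pos);
            result[0] += weights[i] * p[0];
            result[1] += weights[i] * p[1];
            result[2] += weights[i] * p[2];
        }
        return result;
    }

    // Transform a point (w = 1) by a column-major 4x4 matrix.
    static float[] transformPoint(float[] m, float[] p) {
        return new float[] {
            m[0] * p[0] + m[4] * p[1] + m[8]  * p[2] + m[12],
            m[1] * p[0] + m[5] * p[1] + m[9]  * p[2] + m[13],
            m[2] * p[0] + m[6] * p[1] + m[10] * p[2] + m[14],
        };
    }

    // Column-major 4x4 multiply, r = a * b.
    static float[] mul(float[] a, float[] b) {
        float[] r = new float[16];
        for (int c = 0; c < 4; c++)
            for (int row = 0; row < 4; row++)
                for (int k = 0; k < 4; k++)
                    r[c * 4 + row] += a[k * 4 + row] * b[c * 4 + k];
        return r;
    }

    static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f;
        return m;
    }

    static float[] translation(float x, float y, float z) {
        float[] m = identity();
        m[12] = x; m[13] = y; m[14] = z;
        return m;
    }
}
```

Note that blending the matrices first and then transforming once, or transforming per bone and blending the positions (as above), gives the same result for positions since the operation is linear in the matrix entries.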

Now, if that is all correct, I am quite lost as to why my mesh gets "stretched like crazy" when I try to use the animation data. I even added "armature bones", similar to Blender's, to check whether my transforms are being calculated incorrectly from the hierarchy. If the problem is in my logic, that would be amazing; if not, I will just take another close look at my code, I guess.

I know it's a long rant, but I would very much appreciate any help regarding this issue. Thanks in advance ;D