Discussion:
[Algorithms] Finding the best pose to re-enter animation graph from ragdoll
Richard Fine
2012-12-13 18:22:38 UTC
Hi all,

I've got a ragdolled character that I want to begin animating again.
I've got a number of states in my animation graph marked as 'recovery
points', i.e. animations that a ragdoll can reasonably be blended back
to before having the animation graph take over fully. The problem is,
I'm not sure how to identify which animation's first frame (the
'recovery pose') is closest to the ragdoll's current pose.

As I see it there are two components to computing a score for each
potential recovery point:

1) For each non-root bone, sum the differences in parent-space rotation
between current and recovery poses. This is simple enough to do; in
addition I think I need to weight the values (e.g. by the physics mass
of the bone), as a pose that is off by 30 degrees in the upper arm
stands to look a lot less similar to the ragdoll's pose than one that is
only off by 30 degrees in the wrist. The result of this step is some
kind of score representing the object-space similarity of the poses
(see the sketch below).

2) Add to (1) some value representing how similar the root bones are.
The problem I've got here is that I need to ignore rotation around the
global Y axis, while still accounting for other rotations. (I can ignore
position as well, as I can move the character's reference frame to
account for it).
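
For concreteness, here's the kind of thing I have in mind for (1) - a
totally untested sketch, using GLM just for the math, with the pose
layout made up for illustration:

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <vector>

struct Pose {
    std::vector<glm::quat> boneRot; // parent-space rotation per bone
};

// Higher score = worse match. 'weight' could be the physics mass of each bone.
float BoneRotationScore(const Pose& ragdoll, const Pose& recovery,
                        const std::vector<float>& weight)
{
    float score = 0.0f;
    for (size_t i = 0; i < ragdoll.boneRot.size(); ++i) {
        // Rotation taking the recovery bone orientation to the ragdoll one.
        glm::quat delta = ragdoll.boneRot[i] * glm::inverse(recovery.boneRot[i]);
        if (delta.w < 0.0f) delta = -delta;      // take the short way round
        score += weight[i] * glm::angle(delta);  // angle in [0, pi] radians
    }
    return score;
}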

Suppose I have a recovery pose animation that has been authored such
that the character is lying stretched out prone, on his stomach, facing
along +Z. If the ragdoll is also lying stretched out prone on his
stomach, facing -X, then the recovery pose is still fine to use - I just
need to rotate the character's reference frame around the Y axis to
match, so the animation plays back facing the right direction. But, if
the ragdoll is lying on his back, or sitting up, then it's not usable,
regardless of which direction the character's facing in. So, I've got
the world-space rotation of the ragdoll's root bone as a quaternion, and
a quaternion representing the rotation of the corresponding root bone in
the recovery pose in *some* space (I think object-space, but I'm not
sure?) as starting points. What can I compute from them that has this
ignoring-rotation-around-global-Y property?

It's been suggested that there's some canonicalization step I can
perform that would just eliminate any Y-rotation, but I don't know how
to do that other than by decomposing to Euler angles, and I suspect that
would have gimbal lock problems.

This is probably some pretty simple linear algebra at the end of the
day, but between vague memories of eigenvectors, and a general
uncertainty as to whether I'm just overcomplicating this entire thing, I
could use a pointer in the right direction. Any thoughts or references
you could give me would be much appreciated.

Cheers!

- Richard
Jeff Russell
2012-12-13 19:39:30 UTC
There are probably a number of ways to do it. My first guess would be to
compute the difference in rotation for the root bone (that is, what
rotation takes you from your starting frame to the current ragdoll
orientation), and then examine the "up" vector of the resulting transform.
If it's too far from vertical, you don't have a very good match. You can
compute a score perhaps based on the dot product between the "up" basis of
this transform and the global up direction.
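
Something like this, say - an untested sketch using GLM for the math, where
refFrameRot stands in for whatever orientation the recovery animation was
authored against:

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Returns 1.0 when the up axis is preserved (good match), -1.0 when fully
// inverted. A pure yaw leaves "up" unchanged, so rotation about global Y
// is ignored automatically.
float UpAxisScore(const glm::quat& refFrameRot, const glm::quat& ragdollRootRot)
{
    // Rotation taking the authored reference frame to the current ragdoll root.
    glm::quat delta = ragdollRootRot * glm::inverse(refFrameRot);

    // Rotate the global up axis by that delta and see how vertical it stays.
    const glm::vec3 up(0.0f, 1.0f, 0.0f);
    return glm::dot(delta * up, up);
}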
--
Jeff Russell
Engineer, Marmoset
www.marmoset.co
Alex Lindsay
2012-12-13 19:55:18 UTC
I figure you've got 3 normalized vectors:

dollForward = stomach of ragdoll vector, pointing along Y, downwards (not
sure if positive or negative)
dollUp = along spine towards head of ragdoll
recoveryForward = stomach of recovery pose

dollForward DOT recoveryForward will be near 1 if the doll is on its
stomach. The same dot can be run against other categories of recovery pose
for lying on the side or back. Camera look-at-style cross products with
dollUp and dollForward will get you three axes, and from those a quat or
matrix to apply to the recovery pose root, or to take vectors between
'doll space' and 'recovery space' when matching the other limbs against
your recovery-poses-on-stomach db.
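
Rough sketch of what I mean (untested, GLM for the vector math; getting the
vectors out of the ragdoll is hand-waved):

#include <glm/glm.hpp>

// Near 1 when the doll lies on its stomach the way the recovery pose does.
bool MatchesProneCategory(const glm::vec3& dollForward,
                          const glm::vec3& recoveryForward,
                          float threshold = 0.8f)
{
    return glm::dot(dollForward, recoveryForward) > threshold;
}

// Camera look-at style basis: three orthonormal axes spanning 'doll space'.
glm::mat3 DollSpaceBasis(const glm::vec3& dollForward, const glm::vec3& dollUp)
{
    glm::vec3 right = glm::normalize(glm::cross(dollUp, dollForward));
    glm::vec3 up    = glm::cross(dollForward, right); // re-orthogonalized
    return glm::mat3(right, up, dollForward);         // columns = axes
}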

Just writing aloud, hope it helps!
Michael De Ruyter
2012-12-13 20:09:46 UTC
Hi Richard,

I believe you will face a couple of problems with the approach you describe
in 1):

- when comparing local quaternions for the joints, the comparison will take
into account twist around the limbs, even though twist about a limb's own
axis doesn't change the limb's position. Therefore you could get drastic
differences in score when visually there are barely any.

- even if the joint orientations are different, the positions of the limbs,
especially their ends like the hands or feet, could still be very close -
potentially closer than limbs with similar rotations but with their root
joint (the shoulder, for instance) off by a bit. You mention a weighting
system, but that is going to be a pain to tune.

Another approach would be to find a comparison algorithm that compares the
overall position of the limbs.

For instance, you could consider:
- modeling triangles based on significant body joints, for instance
+ hips, shoulder, hand
+ hips, hand, foot
+ hips, shoulder, shoulder

Then use the normal of those triangles for your pose comparison.

You would still need to make the normals relative to the hips, and then
compare the hips orientation itself with a separate process.
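
Sketched out (untested, GLM; which joints you pick for the triangles is up
to you):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <vector>

// Normal of the triangle spanned by three joint positions, expressed
// relative to the hips so it is independent of where the character faces.
glm::vec3 TriangleNormalHipSpace(const glm::quat& hipsRot, const glm::vec3& a,
                                 const glm::vec3& b, const glm::vec3& c)
{
    glm::vec3 n = glm::normalize(glm::cross(b - a, c - a));
    return glm::inverse(hipsRot) * n; // world space -> hip space
}

// Higher = more similar; sums normal agreement over the chosen triangles.
float TriangleScore(const std::vector<glm::vec3>& ragdollNormals,
                    const std::vector<glm::vec3>& recoveryNormals)
{
    float s = 0.0f;
    for (size_t i = 0; i < ragdollNormals.size(); ++i)
        s += glm::dot(ragdollNormals[i], recoveryNormals[i]);
    return s;
}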

I haven't actually implemented this, but that's how I would go about it. I
hope this gives you a different perspective or more ideas.

Michael
Richard Fine
2012-12-13 20:44:52 UTC
Post by Michael De Ruyter
- when comparing local quaternions for the joints, the comparison will take
into account twist around the limbs, even though twist about a limb's own
axis doesn't change the limb's position. Therefore you could get drastic
differences in score when visually there are barely any.
True in the general case, but I'm using a hierarchical setup, so a twist
in an upper arm can drastically change the position/orientation of a
forearm and so on. It matters less and less as you approach the leaf
nodes, but this is something I intended to eliminate with weighting.
Post by Michael De Ruyter
- even if the joint orientations are different, the positions of the limbs,
especially their ends like the hands or feet, could still be very close -
potentially closer than limbs with similar rotations but with their root
joint (the shoulder, for instance) off by a bit. You mention a weighting
system, but that is going to be a pain to tune.
Ah, yes, OK. I'd assumed that a pose with leaf-node discrepancies would
be less visually different than one with trunk-node discrepancies, but
that's not a sound assumption.

This sounds like it would be a problem for *any* hierarchy-based
approach, so anything based on comparing local-space positions is
probably a non-starter.
Post by Michael De Ruyter
Another approach would be to find a comparison algorithm that compares the
overall position of the limbs.
For instance, you could consider:
- modeling triangles based on significant body joints, for instance
+ hips, shoulder, hand
+ hips, hand, foot
+ hips, shoulder, shoulder
Then use the normal of those triangles for your pose comparison.
You would still need to make the normals relative to the hips, and then
compare the hips orientation itself with a separate process.
Right OK. Sounds similar to the point cloud approach, maybe a little
less prone to small discrepancies, as using the normal instead of the
joint positions would equate similar triangles.

Cheers!

- Richard
Michael De Ruyter
2012-12-13 21:12:58 UTC
Also with the triangle normal idea, a notion of distance is missing,
so you should add something like the distances hips - hand, hips - foot,
foot - foot, etc. to your comparison algorithm if you were to try this
method.
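
E.g. something like this (untested):

#include <glm/glm.hpp>
#include <cmath>

// Pairwise joint distances are rotation-invariant, so they need no
// canonicalization; penalize how much each span differs between poses.
float DistanceTerm(const glm::vec3& hips, const glm::vec3& hand,
                   const glm::vec3& foot, const glm::vec3& rHips,
                   const glm::vec3& rHand, const glm::vec3& rFoot)
{
    return std::fabs(glm::distance(hips, hand) - glm::distance(rHips, rHand))
         + std::fabs(glm::distance(hips, foot) - glm::distance(rHips, rFoot));
}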

Though since there is already a paper on the point cloud approach,
it's probably safer to start with it.

Sent from my iPhone
Ben Sunshine-Hill
2012-12-14 10:35:20 UTC
Ironically, that paper actually compares joint orientations instead of
point clouds. For point cloud-based pose similarity estimation, Kovar's
original motion graph paper is probably a good reference.

http://pages.cs.wisc.edu/~kovar/mographs.pdf

Ben

Chris Green
2012-12-19 19:53:58 UTC
I'd think you'd want to compare not just the static configuration of the pose but also the point/joint velocities. Your ideal choice will be one where not only the pose matches well, but the body parts are also moving in similar directions and at similar speeds to the ragdoll's current state.
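
For instance (a sketch; poseDistance is whichever static metric gets chosen, and the animation-side velocities can be finite-differenced from adjacent frames):

#include <glm/glm.hpp>
#include <vector>

// Combined cost: static pose difference plus disagreement in point velocities.
float PoseAndVelocityCost(float poseDistance,
                          const std::vector<glm::vec3>& ragdollVel,
                          const std::vector<glm::vec3>& animVel,
                          float velocityWeight)
{
    float v = 0.0f;
    for (size_t i = 0; i < ragdollVel.size(); ++i) {
        glm::vec3 diff = ragdollVel[i] - animVel[i];
        v += glm::dot(diff, diff); // squared mismatch in speed and direction
    }
    return poseDistance + velocityWeight * v;
}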

Ben Sunshine-Hill
2012-12-13 19:54:42 UTC
Joint angles suck for pose comparisons -- they just aren't the basis of our
intuitive notion of similarity. IMHO, point clouds work much better.
Transform a few bone-attached points -- say, pelvis, left shoulder, right
shoulder, left elbow, right elbow, left knee, right knee -- canonicalize by
putting the pelvis at zero and the shoulder midpoint at +X, and find the
minimum squared distance to a recovery pose (with some per-point weighting,
if you like).
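
In rough code (GLM; the point list, indices, and weights are just
illustrative):

#include <glm/glm.hpp>
#include <vector>
#include <cmath>

// pts[0] = pelvis, pts[1]/pts[2] = left/right shoulder, then elbows, knees.
// Canonicalize: pelvis to the origin, shoulder midpoint rotated onto +X.
std::vector<glm::vec3> Canonicalize(std::vector<glm::vec3> pts)
{
    const glm::vec3 pelvis = pts[0];
    const glm::vec3 mid = 0.5f * (pts[1] + pts[2]) - pelvis;
    const float yaw = std::atan2(mid.z, mid.x); // heading about global Y
    const float c = std::cos(yaw), s = std::sin(yaw);
    for (glm::vec3& p : pts) {
        p -= pelvis;
        p = glm::vec3(c * p.x + s * p.z, p.y, -s * p.x + c * p.z);
    }
    return pts;
}

// Weighted squared distance between two canonicalized point clouds; pick
// the recovery pose that minimizes this.
float PoseDistanceSq(const std::vector<glm::vec3>& a,
                     const std::vector<glm::vec3>& b,
                     const std::vector<float>& w)
{
    float d = 0.0f;
    for (size_t i = 0; i < a.size(); ++i) {
        const glm::vec3 diff = a[i] - b[i];
        d += w[i] * glm::dot(diff, diff);
    }
    return d;
}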

For references, the one that immediately comes to mind is "Dynamic Response
for Motion Capture Animation". They're blending into a response animation
while the character's still fully ragdoll, so they have to look at multiple
frames to get velocity effects in there -- if you're waiting until the
guy's all fallen over, your task will be simpler.

Ben
Richard Fine
2012-12-13 20:16:57 UTC
Post by Ben Sunshine-Hill
Joint angles suck for pose comparisons -- they just aren't the basis of our
intuitive notion of similarity. IMHO, point clouds work much better. Transform a
few bone-attached points -- say, pelvis, left shoulder, right shoulder, left
elbow, right elbow, left knee, right knee -- canonicalize by putting the pelvis
at zero and the shoulder midpoint at +X, and find the minimum squared distance
to a recovery pose (with some per-point weighting, if you like).
Hm, right. I think my concern with this approach previously was that
when the character is prone, the reference points would all be pretty
much coplanar, making it hard to tell which way he's facing... but
thinking about it now, that's silly, because his left and right sides
always have to be a particular way around for a given faceup/facedown.
Using a point cloud of just a few reference points is going to be a lot
faster than calculating joint angles across all major skeleton bones,
too. So I'll give this a shot first.
Post by Ben Sunshine-Hill
For references, the one that immediately comes to mind is "Dynamic Response for
Motion Capture Animation". They're blending into a response animation while the
character's still fully ragdoll, so they have to look at multiple frames to get
velocity effects in there -- if you're waiting until the guy's all fallen over,
your task will be simpler.
Cool. Looks like they don't go into much detail in the paper, but it's a
starting point if I want to look for related work.

Thanks!

- Richard