I wish to use Bullet Physics or a similar physics engine to create a realistic skeleton simulation of a human-like body with two legs. That is, I want to simulate a "body" made of a round mass on top of two "legs", where each leg is made of three solid pieces connected through three joints, and each joint has some degrees of freedom and a limited movement range in each direction, similar to the human hip, knee and ankle.
I am aiming for a realistic model, so it will 'stand' only if all joints are balanced correctly, and it will fall otherwise.
Any directions, suggestions or pointers to existing tutorials or resources are appreciated! This looks like an awful lot of work to do from scratch...
Check out the OpenSim project (opensim.stanford.edu), which is heavily used by the biomechanics community to study human movement for, e.g., clinical applications. In OpenSim, the human models are often actuated by muscles, not simply by torques at the joints.
OpenSim is built on top of the Simbody multibody dynamics library. See this example from Simbody to hit the ground running with a model of a humanoid: https://github.com/simbody/simbody/blob/master/Simbody/examples/JaredsDude.cpp.
I'm working on similar code at the moment. My approach is to use the Bullet Physics Rag Doll Demo as a starting point. It has a rag doll with body parts connected by joints.
I'm then using the Bullet Physics Dynamic Control Demo to learn to bend the joints. The challenging part at the moment is setting all the parameters.
I suggest you learn how to create two rigid bodies connected by a constraint, and then activate the constraint motor to bend the joint.
The following is some code that I'm working with to learn how rigid bodies and constraints work in Bullet Physics. The code creates two blocks connected by a hinge constraint. The Update function bends the hinge constraint slowly over time.
Now that I have this working, I'll go back to the Rag Doll and adjust the joints.
#include <btBulletDynamicsCommon.h>

class Simple
{
private:
    btScalar targetAngle;
    btCollisionShape* alphaCollisionShape;
    btCollisionShape* bravoCollisionShape;
    btRigidBody* alphaRigidBody;
    btRigidBody* bravoRigidBody;
    btHingeConstraint* hingeConstraint;
    btDynamicsWorld* dynamicsWorld;

public:
    ~Simple( void )
    {
        // NOTE: for brevity this demo never deletes the bodies, shapes or constraint
    }

    btRigidBody* createRigidBody( btCollisionShape* collisionShape,
                                  btScalar mass,
                                  const btTransform& transform ) const
    {
        // calculate inertia
        btVector3 localInertia( 0.0f, 0.0f, 0.0f );
        collisionShape->calculateLocalInertia( mass, localInertia );

        // create motion state
        btDefaultMotionState* defaultMotionState
            = new btDefaultMotionState( transform );

        // create rigid body
        btRigidBody::btRigidBodyConstructionInfo rigidBodyConstructionInfo(
            mass, defaultMotionState, collisionShape, localInertia );
        btRigidBody* rigidBody = new btRigidBody( rigidBodyConstructionInfo );
        return rigidBody;
    }

    void Init( btDynamicsWorld* dynamicsWorld )
    {
        this->targetAngle = 0.0f;
        this->dynamicsWorld = dynamicsWorld;

        // create collision shapes
        const btVector3 alphaBoxHalfExtents( 0.5f, 0.5f, 0.5f );
        alphaCollisionShape = new btBoxShape( alphaBoxHalfExtents );
        const btVector3 bravoBoxHalfExtents( 0.5f, 0.5f, 0.5f );
        bravoCollisionShape = new btBoxShape( bravoBoxHalfExtents );

        // create alpha rigid body
        const btScalar alphaMass = 10.0f;
        btTransform alphaTransform;
        alphaTransform.setIdentity();
        const btVector3 alphaOrigin( 54.0f, 0.5f, 50.0f );
        alphaTransform.setOrigin( alphaOrigin );
        alphaRigidBody = createRigidBody( alphaCollisionShape, alphaMass, alphaTransform );
        dynamicsWorld->addRigidBody( alphaRigidBody );

        // create bravo rigid body
        const btScalar bravoMass = 1.0f;
        btTransform bravoTransform;
        bravoTransform.setIdentity();
        const btVector3 bravoOrigin( 56.0f, 0.5f, 50.0f );
        bravoTransform.setOrigin( bravoOrigin );
        bravoRigidBody = createRigidBody( bravoCollisionShape, bravoMass, bravoTransform );
        dynamicsWorld->addRigidBody( bravoRigidBody );

        // create a hinge constraint; the pivots meet halfway between the boxes
        const btVector3 pivotInA( 1.0f, 0.0f, 0.0f );
        const btVector3 pivotInB( -1.0f, 0.0f, 0.0f );
        btVector3 axisInA( 0.0f, 1.0f, 0.0f );
        btVector3 axisInB( 0.0f, 1.0f, 0.0f );
        bool useReferenceFrameA = false;
        hingeConstraint = new btHingeConstraint(
            *alphaRigidBody,
            *bravoRigidBody,
            pivotInA,
            pivotInB,
            axisInA,
            axisInB,
            useReferenceFrameA );

        // set constraint limit (SIMD_PI is Bullet's portable pi constant)
        const btScalar low = -SIMD_PI;
        const btScalar high = SIMD_PI;
        hingeConstraint->setLimit( low, high );

        // add constraint to the world
        const bool isDisableCollisionsBetweenLinkedBodies = false;
        dynamicsWorld->addConstraint( hingeConstraint,
                                      isDisableCollisionsBetweenLinkedBodies );
    }

    void Update( float deltaTime )
    {
        alphaRigidBody->activate();
        bravoRigidBody->activate();

        bool isEnableMotor = true;
        btScalar maxMotorImpulse = 1.0f; // 1.0f / 8.0f is about the minimum
        hingeConstraint->enableMotor( isEnableMotor );
        hingeConstraint->setMaxMotorImpulse( maxMotorImpulse );

        targetAngle += 0.1f * deltaTime;
        hingeConstraint->setMotorTarget( targetAngle, deltaTime );
    }
};
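To put the class in context, here is a minimal sketch of a host program. It assumes Bullet's standard world setup (default collision configuration, dispatcher, broadphase and sequential-impulse solver); the main() below is my own illustration, not part of the original demo.

int main()
{
    // standard Bullet world setup (assumed; not from the original demo)
    btDefaultCollisionConfiguration collisionConfiguration;
    btCollisionDispatcher dispatcher( &collisionConfiguration );
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;
    btDiscreteDynamicsWorld dynamicsWorld(
        &dispatcher, &broadphase, &solver, &collisionConfiguration );
    dynamicsWorld.setGravity( btVector3( 0.0f, -9.8f, 0.0f ) );

    Simple simple;
    simple.Init( &dynamicsWorld );

    // fixed-timestep loop; a real application would also render each frame
    const float deltaTime = 1.0f / 60.0f;
    for ( int i = 0; i < 600; ++i )
    {
        simple.Update( deltaTime );
        dynamicsWorld.stepSimulation( deltaTime );
    }
    return 0;
}

For human-like joints you would tighten the hinge limits rather than allow the full -pi..pi range, e.g. hingeConstraint->setLimit( 0.0f, SIMD_PI * 0.75f ) for something knee-like.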
Just found "Computer Simulation Study of Human Locomotion with a Three-Dimensional Entire-Body Neuro-Musculo-Skeletal Model" by Hase & Yamazaki (2002), which provides the following abstract:
A model having a three-dimensional entire-body structure and consisting of both the neuronal system and the musculo-skeletal system was proposed to precisely simulate human walking motion. The dynamics of the human body was represented by a 14-rigid-link system and 60 muscular models.
Doing it from scratch will be hard, no doubt about it. I don't know what your goal is, but you don't have to reinvent the wheel. Have you taken a look at the Unreal Engine? You might find something to reuse there. Many games are built on top of this engine.
I got a simd_float4x4 matrix from ARKit. I want to check the values of the matrix, but found that I do not know how to do it in Objective-C. In Swift, this can be written as matrix.columns.3 to fetch a vector of values, but I do not know how to do the same in Objective-C. Could someone point me in the right direction, please? Thanks!
simd_float4x4 is a struct (essentially four simd_float4 columns), and in Objective-C you can use
matrix.columns[index]
to access a column of the matrix.
/*! @abstract A matrix with 4 rows and 4 columns. */
struct simd_float4x4 {
    public var columns: (simd_float4, simd_float4, simd_float4, simd_float4)
    public init()
    public init(columns: (simd_float4, simd_float4, simd_float4, simd_float4))
}
Apple document link: https://developer.apple.com/documentation/simd/simd_float4x4?language=objc
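For a concrete example, here is a minimal sketch (simd is a plain C API, so this compiles as C, C++ or Objective-C); transform here is just a placeholder for whatever simd_float4x4 you received from ARKit:

#include <simd/simd.h>
#include <stdio.h>

// `transform` stands in for a matrix obtained from ARKit, e.g. an
// ARCamera or ARAnchor transform (placeholder for illustration).
static void printTranslation(simd_float4x4 transform) {
    simd_float4 column3 = transform.columns[3]; // fourth column holds the translation
    printf("translation: %f, %f, %f\n", column3.x, column3.y, column3.z);
}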
Hope this helps!
Here's an example from this project, a pretty nice reference for Objective-C ARKit adventures: https://github.com/markdaws/arkit-by-example/blob/part3/arkit-by-example/ViewController.m
- (void)insertGeometry:(ARHitTestResult *)hitResult {
    // Right now we just insert a simple cube; later we will improve these to be
    // more interesting and have better texture and shading
    float dimension = 0.1;
    SCNBox *cube = [SCNBox boxWithWidth:dimension height:dimension length:dimension chamferRadius:0];
    SCNNode *node = [SCNNode nodeWithGeometry:cube];

    // The physicsBody tells SceneKit this geometry should be manipulated by the physics engine
    node.physicsBody = [SCNPhysicsBody bodyWithType:SCNPhysicsBodyTypeDynamic shape:nil];
    node.physicsBody.mass = 2.0;
    node.physicsBody.categoryBitMask = CollisionCategoryCube;

    // We insert the geometry slightly above the point the user tapped, so that it
    // drops onto the plane using the physics engine
    float insertionYOffset = 0.5;
    node.position = SCNVector3Make(
        hitResult.worldTransform.columns[3].x,
        hitResult.worldTransform.columns[3].y + insertionYOffset,
        hitResult.worldTransform.columns[3].z
    );

    [self.sceneView.scene.rootNode addChildNode:node];
    [self.boxes addObject:node];
}
I'm new to OSG and I'm having some issues trying to solve a problem.
I've created a scene (a quad and two spheres, with a fixed background) and I'm trying to occlude one of the spheres with a transparent quad. That is, I want to make a "Cloak of Invisibility", so that I can see the background image through the quad, but not the sphere (or whatever lies along the line of projection) behind it.
I'm totally stuck, as none of the tests I've done have gotten me even close to what I want. I'd be really grateful if you could help me with this; any idea (or code!) would be more than welcome! =)
I'm attaching the code of the simple scene that I'm using for the tests.
#include <osg/Geometry>
#include <osg/Geode>
#include <osg/Depth>
#include <osg/Texture2D>
#include <osgDB/ReadFile>
#include <osgViewer/Viewer>
#include <osg/ShapeDrawable>
#include <osg/BlendFunc>
#include <osgGA/TrackballManipulator>

int main()
{
    osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
    osg::ref_ptr<osg::Image> image = osgDB::readImageFile( "C:/OpenSceneGraph-3.4.0/esc.jpg" );
    texture->setDataVariance( osg::Object::DYNAMIC );
    texture->setResizeNonPowerOfTwoHint( false );
    texture->setImage( image.get() );

    // Background
    osg::ref_ptr<osg::Drawable> quad = osg::createTexturedQuadGeometry(
        osg::Vec3(), osg::Vec3(1.0f, 0.0f, 0.0f), osg::Vec3(0.0f, 1.0f, 0.0f) );
    quad->getOrCreateStateSet()->setTextureAttributeAndModes( 0, texture.get() );
    osg::ref_ptr<osg::Geode> geodeBack = new osg::Geode;
    geodeBack->addDrawable( quad.get() );

    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    camera->setCullingActive( false );
    camera->setClearMask( 0 );
    camera->setAllowEventFocus( false );
    camera->setReferenceFrame( osg::Transform::ABSOLUTE_RF );
    camera->setRenderOrder( osg::Camera::POST_RENDER );
    camera->setProjectionMatrix( osg::Matrix::ortho2D( 0.0, 1.0, 0.0, 1.0 ) );
    camera->addChild( geodeBack.get() );
    osg::StateSet* ss = camera->getOrCreateStateSet();
    ss->setMode( GL_LIGHTING, osg::StateAttribute::OFF );
    ss->setMode( GL_DEPTH_TEST, osg::StateAttribute::ON );
    ss->setAttributeAndModes( new osg::Depth( osg::Depth::LEQUAL, 0.99, 1.0 ) );

    // Quad geometry
    osg::ref_ptr<osg::Vec3Array> vertices = new osg::Vec3Array;
    vertices->push_back( osg::Vec3(-1.0f, 1.0f, -1.0f) );
    vertices->push_back( osg::Vec3( 0.0f, 1.0f, -1.0f) );
    vertices->push_back( osg::Vec3( 0.0f, 1.0f,  0.0f) );
    vertices->push_back( osg::Vec3(-1.0f, 1.0f,  0.0f) );
    osg::ref_ptr<osg::Vec3Array> normals = new osg::Vec3Array;
    normals->push_back( osg::Vec3(0.0f, -1.0f, 0.0f) );
    osg::ref_ptr<osg::Vec4Array> colors = new osg::Vec4Array;
    colors->push_back( osg::Vec4(1.0f, 1.0f, 1.0f, 0.0f) );

    osg::ref_ptr<osg::Geometry> quad2 = new osg::Geometry;
    quad2->setVertexArray( vertices.get() );
    quad2->setNormalArray( normals.get() );
    quad2->setNormalBinding( osg::Geometry::BIND_OVERALL );
    quad2->setColorArray( colors.get() );
    quad2->setColorBinding( osg::Geometry::BIND_OVERALL );
    quad2->addPrimitiveSet( new osg::DrawArrays(GL_QUADS, 0, 4) );
    osg::ref_ptr<osg::Geode> geodeTransp = new osg::Geode;
    geodeTransp->addDrawable( quad2.get() );

    osg::ref_ptr<osg::BlendFunc> blendFunc = new osg::BlendFunc;
    blendFunc->setFunction( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
    osg::StateSet* stateset = geodeTransp->getOrCreateStateSet();
    stateset->setAttributeAndModes( blendFunc );
    // I'd see both spheres if I uncommented this; otherwise I see the blue OSG default background
    //stateset->setRenderingHint( osg::StateSet::TRANSPARENT_BIN );

    // Sphere 1
    osg::ref_ptr<osg::ShapeDrawable> shape1 = new osg::ShapeDrawable;
    shape1->setShape( new osg::Sphere(osg::Vec3(-0.3f, 0.5f, -0.3f), 0.2f) );
    shape1->setColor( osg::Vec4(0.0f, 0.0f, 1.0f, 1.0f) );

    // Sphere 2
    osg::ref_ptr<osg::ShapeDrawable> shape2 = new osg::ShapeDrawable;
    shape2->setShape( new osg::Sphere(osg::Vec3(-0.7f, 1.5f, -0.7f), 0.2f) );
    shape2->setColor( osg::Vec4(0.0f, 1.0f, 0.0f, 1.0f) );

    osg::ref_ptr<osg::Geode> geodeFront = new osg::Geode;
    geodeFront->addDrawable( shape1.get() );
    geodeFront->addDrawable( shape2.get() );

    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild( camera.get() );
    root->addChild( geodeTransp.get() );
    root->addChild( geodeFront.get() );

    osgViewer::Viewer viewer;
    viewer.setSceneData( root.get() );
    viewer.setCameraManipulator( new osgGA::TrackballManipulator() );
    osg::Vec3d eye( -0.5, -3.0, -0.5 );
    osg::Vec3d center( -0.5, 0.0, -0.5 );
    osg::Vec3d up( 0.0, 0.0, 1.0 );
    viewer.getCamera()->setViewMatrixAsLookAt( eye, center, up );
    viewer.realize();
    while ( !viewer.done() ) {
        viewer.frame();
    }
    return 0;
}
Thanks in advance,
Cheers,
Alvaro
If I correctly understood what you're after, you should simply render in this specific order:
the background plane
the "invisibility" quad
all the other objects (i.e. the two spheres, etc.)
That way, everything rendered after the quad will be masked by it if it happens to be behind it. In OSG you can achieve that by explicitly assigning the render bin to use for the different (groups of) nodes in their StateSet:
osg::StateSet* stateset = backgroundNode->getOrCreateStateSet();
stateset->setRenderBinDetails(1, "RenderBin");
// and increasing the number for the other nodes...
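Expanding on that, here is a sketch of how the three groups might be set up; it assumes the geodeBack / geodeTransp / geodeFront nodes from the question's code, and uses a depth-only "cloak" quad (one common way to do this, not necessarily the only one):

#include <osg/ColorMask>
#include <osg/Depth>

// 1. The background draws first.
geodeBack->getOrCreateStateSet()->setRenderBinDetails( 1, "RenderBin" );

// 2. The cloak quad draws second: it writes depth but not color, so the
//    already-drawn background remains visible where the quad is.
osg::StateSet* cloakSS = geodeTransp->getOrCreateStateSet();
cloakSS->setRenderBinDetails( 2, "RenderBin" );
cloakSS->setAttribute( new osg::ColorMask( false, false, false, false ) );
cloakSS->setAttributeAndModes( new osg::Depth( osg::Depth::LESS, 0.0, 1.0, true ) );

// 3. The spheres draw last: fragments behind the quad fail the depth test.
geodeFront->getOrCreateStateSet()->setRenderBinDetails( 3, "RenderBin" );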
EDIT: Solved the issue, see my own answer
Recently I've been working on a 3D world editor that will be using picking to raise or lower terrain. I'm using camera unprojection and ray casting to find the world position of the mouse screen coordinates.
However, it seems like the ray is on the wrong axis. As I recall, the ray produced by unprojection should come straight out of the camera.
Here's an example of what it currently looks like.
My question is, why is the ray over the Y-axis when it's supposed to be over the Z-axis?
XMFLOAT3 D3D11Camera::Unproject(const float& px, const float& py, const float& pz)
{
    const XMFLOAT2& res = D3D11RenderSettings::Instance()->resolution();
    XMVECTOR coords = XMVector3Unproject(
        XMVectorSet(px, res.y - py, pz, 0.0f),
        0.0f, 0.0f, res.x, res.y,
        near_plane_, far_plane_,
        projection_, view_, XMMatrixIdentity());
    XMFLOAT3 to_ret;
    XMStoreFloat3(&to_ret, coords);
    return to_ret;
}
This is the unprojection code.
And this is how I'm using it:
projectRay: function()
{
    var p = Mouse.position(MousePosition.Relative);
    p.x = (p.x + RenderSettings.resolution().w / 2);
    p.y = (p.y + RenderSettings.resolution().h / 2);

    var unprojA = this._camera.unproject(p.x, p.y, 0);
    var unprojB = this._camera.unproject(p.x, p.y, 1);

    var dir = Vector3D.normalise(Vector3D.sub(unprojB, unprojA));
    var ray = Ray.construct(unprojA, dir);

    var p1 = ray.origin;
    var p2 = Vector3D.add(ray.origin, Vector3D.mul(ray.direction, 1000));
    RenderTargets.ui.drawLine(p1.x, p1.y, p1.z, 1, 0, 0, p2.x, p2.y, p2.z, 1, 0, 0);

    return ray;
}
Cheers!
Stupidity aside, I was working with both an inverted Y-axis coordinate system and a default, non-inverted one. It looked like a ray oriented vertically, but in fact it was a ray oriented like '/' when it was supposed to be oriented like '\'. Multiplying the Y component of the result by -1 solved the issue.
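In code, the fix amounts to something like this two-line sketch (unprojected is a placeholder for a value returned by the Unproject function above):

// Flip the Y component to convert between the inverted and
// non-inverted Y-axis conventions described above.
XMFLOAT3 unprojected = Unproject( px, py, pz );
unprojected.y *= -1.0f;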
I am a newbie in programming, so I need help with my ultrasonic-sensor-driven NXT robot.
The sensor is attached to motor A, and I'd like it to scan the room from the robot's centerline to 90° left and 90° right in 30° increments (seven measurements total), store the data in an array, and, based on the largest distance, point my robot in the direction where that measurement was taken, so as to avoid obstacles.
Is this possible at all? Or is there some better solution?
Any advice or suggestion is more than welcome.
This would work somewhat for avoiding obstacles. As for implementing it, it depends on what you are programming the robot in. I only know leJOS (Java), in which a function for getting the angle to head toward would be something like:
public static int scanArea(RegulatedMotor motorTop, UltrasonicSensor sonar) {
    int theAngle = 0;
    int largestDist = 0;
    int currentDist;
    for (int rotateAngle = -90; rotateAngle <= 90; rotateAngle += 30) {
        motorTop.rotateTo(rotateAngle);
        currentDist = sonar.getDistance();
        if (currentDist > largestDist) {
            largestDist = currentDist;
            theAngle = rotateAngle;
        }
        Delay.msDelay(25);
    }
    motorTop.rotateTo(0);
    return theAngle;
}
If you're coding the robot in any text-based language, that should be fairly easy to convert (assuming you have functions such as rotateTo; otherwise you would have to use relative movements). I don't know how easy this would be in the graphical programming language you were using originally.
I would suggest attaching the ultrasonic sensor to a 180° servo mounted vertically. You can take a measurement at a specific angle by assigning the corresponding value to the servo. Using this:

int scanArea() {
    int largestDist = 0;
    int bestServoValue = 0;
    for (int servoValue = 0; servoValue <= 255; servoValue += 30) {
        Servo[servo1] = servoValue;
        wait1Msec(200); // give the servo time to reach the position
        if (SensorValue[Sonar] > largestDist) {
            largestDist = SensorValue[Sonar];
            bestServoValue = servoValue;
        }
    }
    return bestServoValue;
}

This assumes that servo1 is your vertical servo; it should work if you're programming in RobotC.
I am trying to rotate the camera around the X-axis of the scene.
At this point my code is like this:
rotation += 0.05;
camera.position.y = Math.sin(rotation) * 500;
camera.position.z = Math.cos(rotation) * 500;
This makes the camera move around, but during the rotation something weird happens: either the camera flips, or it skips some part of the imaginary circle it's following.
You have only provided a snippet of code, so I have to make some assumptions about what you are doing.
This code:
rotation += 0.05;
camera.position.x = 0;
camera.position.y = Math.sin(rotation) * 500;
camera.position.z = Math.cos(rotation) * 500;
camera.lookAt( scene.position ); // the origin
will cause the "flipping" you refer to because the camera is trying to remain "right side up", and it will quickly change orientation as it passes over the "north pole."
If you offset the camera's x-coordinate like so,
camera.position.x = 200;
the camera behavior will appear more natural to you.
Three.js tries to keep the camera facing up. When the camera crosses z = 0, it will "fix" the camera's rotation. You can just check and reset the camera's angle manually.
camera.lookAt( scene.position ); // the origin
if (camera.position.z < 0) {
camera.rotation.z = 0;
}
I'm sure this is not the best solution, but if anyone else runs across this question while playing with three.js (like I just did), it may take them one step further.
This works for me, I hope it helps.
Rotating around X-Axis:
var x_axis = new THREE.Vector3( 1, 0, 0 );
var quaternion = new THREE.Quaternion;
camera.position.applyQuaternion(quaternion.setFromAxisAngle(x_axis, rotation_speed));
camera.up.applyQuaternion(quaternion.setFromAxisAngle(x_axis, rotation_speed));
Rotating around Y-Axis:
var y_axis = new THREE.Vector3( 0, 1, 0 );
camera.position.applyQuaternion(quaternion.setFromAxisAngle(y_axis, angle));
Rotating around Z-Axis:
var z_axis = new THREE.Vector3( 0, 0, 1 );
camera.up.applyQuaternion(quaternion.setFromAxisAngle(z_axis, angle));
I wanted to move my camera to a new location while having the camera look at a particular object, and this is what I came up with [make sure to load tween.js]:
/**
 * Helper to move camera
 * @param loc Vec3 - where to move the camera; has x, y, z attrs
 * @param lookAt Vec3 - where the camera should look; has x, y, z attrs
 * @param duration int - duration of transition in ms
 **/
function flyTo(loc, lookAt, duration) {
    // Use the initial camera quaternion as the slerp starting point
    var startQuaternion = camera.quaternion.clone();

    // Use a dummy camera focused on the target as the slerp ending point
    var dummyCamera = camera.clone();
    dummyCamera.position.set(loc.x, loc.y, loc.z);
    dummyCamera.up.copy(camera.up);

    // Create dummy controls to avoid mutating the main controls;
    // updating them orients the dummy camera toward the look-at target
    var dummyControls = new THREE.TrackballControls(dummyCamera);
    dummyControls.target.set(lookAt.x, lookAt.y, lookAt.z);
    dummyControls.update();

    // Animate between the start and end quaternions
    new TWEEN.Tween(camera.position)
        .to(loc, duration)
        .onUpdate(function(timestamp) {
            // Slerp the camera quaternion for a smooth transition.
            // `timestamp` is the eased time value from the tween.
            THREE.Quaternion.slerp(startQuaternion, dummyCamera.quaternion, camera.quaternion, timestamp);
            camera.lookAt(lookAt);
        })
        .onComplete(function() {
            controls.target = new THREE.Vector3(lookAt.x, lookAt.y, lookAt.z);
            camera.lookAt(lookAt);
        })
        .start();
}
Example usage:
var pos = {
    x: -4.3,
    y: 1.7,
    z: 7.3,
};
var lookAt = scene.children[1].position;
flyTo(pos, lookAt, 60000);
Then in your update()/render() function, call TWEEN.update();
Full example