How to create water waves with the Ogre3D WaterMesh class?

I would like to use the Water and WaterMesh classes from the Ogre3D demos to create water with some waves. For the moment I have added the classes to my project and created a WaterMesh object like this:
WaterMesh *waterMesh;
waterMesh = new WaterMesh("waterMesh", 100.0f, 64);
Great, I have a 100×100 water surface. Now I would like to create some waves. How can I do it? Is it updateMesh that I should use?

Looking at the source code, I think you can only push the surface at some point, and the WaterMesh will compute the resulting waves after some delta time in its updateMesh method. There seems to be no way to create typical 'ocean' waves unless you modify the source code.
But if that's all you need, then use the push(Real x, Real y, Real depth, bool absolute) method.
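For example, a rough per-frame sketch of how you might drive it (hedged: this assumes the sample's push(x, y, depth, absolute) signature quoted above and an updateMesh() that takes the frame delta time, as in the Water demo; PUSH_PERIOD and the push position/depth are made-up values you would tune):
void updateWater(WaterMesh* waterMesh, Ogre::Real timeSinceLastFrame)
{
    static Ogre::Real timeSinceLastPush = 0.0f;
    const Ogre::Real PUSH_PERIOD = 0.5f; // seconds between disturbances (tune to taste)

    timeSinceLastPush += timeSinceLastFrame;
    if (timeSinceLastPush > PUSH_PERIOD)
    {
        // Disturb the surface at one point; the ripples spread out from here.
        // x and y are in the mesh's own plane coordinates - check push() in
        // WaterMesh.cpp for the exact convention used by your copy.
        waterMesh->push(50.0f, 50.0f, -1.5f, false);
        timeSinceLastPush = 0.0f;
    }

    // Let the class integrate the simulation and rebuild the vertex data.
    waterMesh->updateMesh(timeSinceLastFrame);
}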

How to Create a List of Vectors Using the Curve Class Traveling from A to B?

I'm trying to create a path for an object to travel along.
I want the path to follow a curve that rises from the start (A) and then falls back down to the end (B).
Like this:
I've looked at different ways to create a curved path, and they all either use SpriteBatch to draw the curve or don't use the Curve class at all. I need to create many curves in parallel, to be used by many different sprites, each with its own curve.
So I looked at the Microsoft.Xna.Framework.Curve class, but I can't find any good resources on how to use it correctly.
The Curve class uses CurveKey for its points, but it's unclear exactly how you convert a key to a Vector2, as a key is defined by a single numeric value rather than an XY coordinate.
I want a SortedList of all the points needed for a sprite to travel along the curve, which I can use with the Vector2 class, but again I don't really understand how to use the Curve class, and the documentation is rather confusing.
It mentions that if you want more dimensions you can use multiple Curve objects, but there is no example of that.
I have not been able to get the CurveEditor to compile either, and support for it seems to be long dead.
If anyone could show me how to make a curve with the Curve class and translate it into Vector2 coordinates, that would be a great starting point.
The Vector2 class has a static method defined as:
public static Vector2 Hermite(Vector2 value1, Vector2 tangent1, Vector2 value2, Vector2 tangent2, float amount)
This is similar to a lerp, where the amount is a number between 0 and 1.
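To make that concrete, here is a small standalone C++ sketch of the same cubic Hermite math (XNA's Vector2.Hermite evaluates these basis functions componentwise - this is just an illustration, not the XNA source). Sampling amount from 0 to 1 gives the points you could drop into your SortedList; the tangents here are made-up numbers chosen so the path rises and then falls:
#include <cstdio>

struct Vec2 { float x, y; };

// Cubic Hermite interpolation between value1 and value2 with the given
// tangents; amount runs from 0 (value1) to 1 (value2).
Vec2 hermite(Vec2 value1, Vec2 tangent1, Vec2 value2, Vec2 tangent2, float amount)
{
    float t = amount, t2 = t * t, t3 = t2 * t;
    float h00 =  2*t3 - 3*t2 + 1;   // weight of value1
    float h10 =      t3 - 2*t2 + t; // weight of tangent1
    float h01 = -2*t3 + 3*t2;       // weight of value2
    float h11 =      t3 - t2;       // weight of tangent2
    return { h00*value1.x + h10*tangent1.x + h01*value2.x + h11*tangent2.x,
             h00*value1.y + h10*tangent1.y + h01*value2.y + h11*tangent2.y };
}

int main()
{
    Vec2 a = {0, 0}, b = {100, 0};            // start A and end B
    Vec2 tanA = {50, 200}, tanB = {50, -200}; // up then down (y-up here; flip the
                                              // signs if your screen y axis points down)
    for (int i = 0; i <= 10; ++i)             // 11 sample points along the path
    {
        Vec2 p = hermite(a, tanA, b, tanB, i / 10.0f);
        std::printf("%.1f  %.1f\n", p.x, p.y);
    }
}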

MeshLab - how to transfer UVs from source .objs onto a Poisson reconstruction model

I've been struggling for some time to find a way in MeshLab to include or transfer UVs onto a Poisson model from the source meshes. I will try to explain more of what I'm trying to accomplish below.
My source meshes have UVs along with texture data. I need to build a fused model and include the texture data. This is for facial expression scan data reconstruction in a production pipeline that ultimately builds a facial rig for animation. Our source scan data includes marker information, which we use to register and build a fused scan model; that model is then used to generate a retopologized mesh for blendshapes.
Previously, we were using David3D. http://www.david-3d.com/en/support/downloads
David3D used Poisson surface reconstruction to create a fused model. The fused model it created brought along the UVs and optimized the source textures into one UV tile. I'll post a picture below of the result that I'm looking to recreate in MeshLab.
I need to find this solution in MeshLab so I can build tools to help automate this process. David3D version 5 does not have a development kit to program against.
Is it possible in MeshLab to apply the UVs from the regions of the source meshes onto the Poisson model? Could I use a filter to transfer them? Reproject them?
Or is there another reconstruction method/process within MeshLab that will keep the UVs?
Here is an image of what the resulting UV parametrization looks like from David3D (David3D UV Layout Result); the UVs are white on the left half of the image.
Thank you,
Dan
No, in MeshLab there is no direct way to transfer UV mapping between two layers.
This is because UV transfer is not, in the general case, a trivial task. It is not simply a matter of assigning to the new surface the "closest" UV of the original mesh: this would not work on UV discontinuities, which are present in the example you linked. Additionally, the two meshes should be almost coincident, otherwise you would also have problems defining the "closest" UV.
There are a couple of ways to do it, but they require manual work and a re-sampling of the texture:
create a UV mapping of the re-meshed model using whatever tool you may have, then resample the existing texture onto the new parametrization using "Transfer: Vertex Attributes to Texture (1 or 2 meshes)", with the texture color as the source
load the original mesh and, using the screenshot function, create "virtual" photos of the model (turn off illumination and do NOT use ortho views), adding them as raster layers until the model surface is fully covered. Then load the new model, which should be in the same space, and texture-map it using "parametrization + texturing" with those registered images
In MeshLab it is also possible to create a new texture from the original images, if you have a way to import the registered cameras...
TL;DR: UV coords to color channels → Vertex Attribute Transfer → Color channels back to UV coords
I have had very good results kludging it through the color channels, like this (say you are transferring from layer A to layer B):
Make sure A and B are roughly aligned with each other (you can use the ICP filter if needed).
Select layer A, then:
Texture → Convert Per Wedge UV to Per Vertex UV (if you've got wedge coords)
Color Creation → Per Vertex Color Function, and transfer the tex coords to the color channels (assuming UV range 0-1, you'll want to tweak these if your range is larger):
func r = 255.0 * vtu
func g = 255.0 * vtv
func b = 0
Sampling → Vertex Attribute Transfer, and use this to transfer the vertex colors (which now hold texture coordinates) from layer A to layer B.
source mesh = layer A
target mesh = layer B
check Transfer Color
set distance large enough to not miss any spots
Now select layer B, which contains the mapped vertex colors, and do the opposite of what you did for A:
Texture → Per Vertex Texture Function
func u = r / 255.0
func v = g / 255.0
Texture → Convert Per Vertex UV to Per Wedge UV
And that's it.
The results aren't going to be perfect, but in practice I often find them sufficient. In particular:
If the texture is not continuously mapped to layer A (e.g. maybe you've got patches of image mapped to certain areas), it's very possible for the attribute transfer to B (especially when upsampling) to interpolate some vertices across patch boundaries, which will probably lead to visual artifacts along those boundaries.
UV coords may be quantized by the conversion to a color channel and back - an 8-bit channel only gives you 256 distinct values per coordinate (see the round-trip sketch below). (You could maybe eliminate this by stretching U out over all three color channels, then transferring U, then repeating for V -- never tried it though.)
That said, there are a lot of cases where it works.
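To get a feel for how much precision the color-channel round trip costs, here is a standalone C++ sketch (not MeshLab code, just an illustration of the same encode/decode the per-vertex functions above perform, assuming 8-bit channels and UVs in the 0-1 range, with rounding on the store):
#include <cmath>
#include <cstdio>

// Encode a 0-1 texture coordinate into an 8-bit channel and back,
// roughly mirroring the per-vertex functions above (r = 255*u, u = r/255).
unsigned char encodeUV(float uv)        { return (unsigned char)std::lround(255.0f * uv); }
float         decodeUV(unsigned char c) { return c / 255.0f; }

int main()
{
    float worst = 0.0f;
    for (int i = 0; i <= 10000; ++i)
    {
        float u = i / 10000.0f;
        float err = std::fabs(decodeUV(encodeUV(u)) - u);
        if (err > worst) worst = err;
    }
    // Worst-case error is about 1/510 (~0.002), i.e. roughly 8 texels
    // on a 4096-pixel texture.
    std::printf("worst-case UV error: %f\n", worst);
}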
I may or may not add images / video to this post another day.
P.S. MeshLab is pretty straightforward to build from source; it might be possible to add a UV-coordinate option to the Vertex Attribute Transfer filter. But to make it more useful, you'd want to make sure you didn't interpolate across boundary edges in the mapped UV projection. Definitely a project I'd like to work on some day... in theory. If that ever happens I'll post a link here.

Maya MEL: project lookAt target into place after motion capture import

I have a facial animation rig which I am driving in two different ways: it has an artist UI in the Maya viewports, as is common for interactive animating, and I've connected it up with the FaceShift markerless motion capture system.
I envision a workflow where a performance is captured, imported into Maya, sample data is smoothed and reduced, and then an animator takes over for finishing.
Our face rig has the eye gaze controlled by a mini-hierarchy of three objects (global lookAtTarget and a left and right eye offset).
Because the eye gazes are controlled by this LookAt setup, they need to be disabled when eye-gaze-including motion capture data is imported.
After the motion capture data is imported, the eye gazes are now set with motion capture rotations.
I am seeking a short MEL routine that does the following: it marches through the motion capture eye rotation samples, back-calculates and sets each eye's LookAt target position, and averages the two to get the global LookAt target's position.
After that MEL routine is run, I can turn the eyes' LookAt constraints back on, eye gaze control returns to the rig, nothing has changed visually, and the animator will have their eye UI working in the Maya viewport again.
I'm thinking this should be common logic for anyone doing facial mocap. Anyone got anything like this already?
How good is the eye tracking in the mocap? There may be issues if the targets are far away: depending on the sampling of the data, you may get 'crazy eyes' which seem not to converge, or jumpy data. If that's the case you may need to junk the eye data altogether, or smooth it heavily before retargeting.
To find the convergence of the two eyes, try this (like @julian I'm using locators, etc., since doing all the math in MEL would be irritating).
1) constrain a locator to one eye so that one axis is oriented along the look vector and the other is in the plane of the second eye. Let's say the eye aims down Z and the second eye is in the XZ plane.
2) make a second locator, parented to the first, and constrained to the second eye in the same way: pointing down Z, with the first eye in the XZ plane
3) the local Y rotation of the second locator is the angle of convergence between the two eyes.
4) Figure out the focal distance using the law of sines and a cheat for the offset of the second eye relative to the first (a small numeric sketch follows after step 5). The local X distance of the second eye is one leg of a right triangle. The angles of the triangle are the convergence angle from #3 and 90 minus the convergence angle. In other words:
     focal distance             eye_locator2.tx
-------------------------  =  ---------------------
sin(90 - eye_locator2.ry)      sin(eye_locator2.ry)
so algebraically:
focal distance = eye_locator2.tx * sin(90 - eye_locator2.ry) / sin( eye_locator2.ry)
You'll have to subtract the local Z of eye2, since the triangle we're solving is shifted backwards or forwards by that much:
focal distance = (eye_locator2.tx * sin(90 - eye_locator2.ry) / sin( eye_locator2.ry)) - eye_locator2.tz
5) Position the target along the local Z direction of the eye locator at the distance derived above. It sounds like the actual control uses two look targets that can be moved apart to avoid cross-eyes - it's kind of a judgement call how much to use that vs. the actual convergence distance. For lots of real-world data the convergence may be way too far away for animator convenience: a target 30 meters away is pretty impractical to work with, but it might be simulated with a target 10 meters away with a big spread. Unfortunately there's no empirical answer for that one - it's a judgement call.
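For what it's worth, here is the same trigonometry from step 4 as a tiny C++ sketch (the locator attribute values are hypothetical inputs you would read off eye_locator2; note that sin(90 - a) is just cos(a)):
#include <cmath>
#include <cstdio>

int main()
{
    const double PI  = 3.14159265358979323846;
    const double DEG = PI / 180.0;

    // Hypothetical values read from the second eye's locator.
    double tx = 6.5;   // eye_locator2.tx: sideways offset of eye 2 from eye 1
    double ry = 3.0;   // eye_locator2.ry: convergence angle, in degrees
    double tz = -0.4;  // eye_locator2.tz: fore/aft shift of eye 2

    // focal = tx * sin(90 - ry) / sin(ry) - tz, i.e. tx / tan(ry) - tz
    double focal = tx * std::sin((90.0 - ry) * DEG) / std::sin(ry * DEG) - tz;
    std::printf("focal distance: %.2f\n", focal);
}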
I don't have this script, but it would be fairly simple. Can you provide an example Maya scene? You don't need any math. Here's how you could go about it:
Assume the axis pointing through the pupil is positive X, and focal length is 10 units.
Create 2 locators. Parent one to each eye. Set their translations to
(10, 0, 0).
Create 2 more locators in worldspace. Point constrain them to the others.
Create a plusMinusAverage node and set its operation to Average.
Connect the worldspace locators' translations to plusMinusAverage1 inputs 1 and 2.
Create another locator (the lookAt)
Connect the output of plusMinusAverage1 to the translation of the lookAt locator.
Bake the translation of the lookAt locator.
Delete the other 4 locators.
Aim constrain the eyes' X axes to the lookAt.
This can all be done in a script using commands: spaceLocator, createNode, connectAttr, setAttr, bakeSimulation, pointConstraint, aimConstraint, delete.
The solution ended up being quite simple. The situation is motion capture data on the rotation nodes of the eyes, while simultaneously wanting (non-technical) animator override control of the eye gaze. Within Maya, constraints have a weight factor: a parametric 0-1 value controlling the influence of the constraint. The solution is for the animator to simply key the eyes' lookAt constraint weights to 1 when they want control over the eye gaze, key those same weights to 0 when they want the motion-captured eye gaze, and use a smooth transition of those constraint weights to mask the switch. This is better than my original idea described above, because the original motion capture data remains in place, available as reference, allowing the animator to switch back and forth if need be.

Applying a vortex / whirlpool effect in Box2d / Cocos2d for iPhone

I've used Nick Vellios' tutorial to create radial gravity with a Box2D object. I am aware of Make a Vortex here on SO, but I couldn't figure out how to implement it in my project.
I have made a vortex object, which is a Box2D circleShape sensor that rotates with a constant angular velocity. When other Box2D objects contact this vortex object, I want them to rotate around it at the same angular velocity as the vortex, gradually getting closer to the vortex's centre. At the moment the object is attracted to the vortex's centre, but it heads straight for the centre rather than spinning around it slowly like I want. It will also travel against the vortex's rotation as often as with it.
Given a vortex and a Box2D body, how can I set the body to rotate with the vortex as it gets 'sucked in'?
I set the rotation of the vortex when I create it like this:
b2BodyDef bodyDef;
bodyDef.type = b2_dynamicBody;
bodyDef.angle = 2.0f;
bodyDef.angularVelocity = 2.0f;
Here is how I'm applying the radial gravity, as per Nick Vellios' sample code.
-(void)applyVortexForcesOnSprite:(CCSpriteSubclass*)sprite spriteBody:(b2Body*)spriteBody withVortex:(Vortex*)vortex VortexBody:(b2Body*)vortexBody vortexCircleShape:(b2CircleShape*)vortexCircleShape{
//From RadialGravity.xcodeproj
b2Body* ground = vortexBody;
b2CircleShape* circle = vortexCircleShape;
// Get position of our "Planet" - Nick
b2Vec2 center = ground->GetWorldPoint(circle->m_p);
// Get position of our current body in the iteration - Nick
b2Vec2 position = spriteBody->GetPosition();
// Get the distance between the two objects. - Nick
b2Vec2 d = center - position;
// The further away the objects are, the weaker the gravitational force is - Nick
float force = 1 / d.LengthSquared(); // the numerator can be increased to adjust the amount of force - Nick
d.Normalize();
b2Vec2 F = force * d;
// Finally apply a force on the body in the direction of the "Planet" - Nick
spriteBody->ApplyForce(F, position);
//end radialGravity.xcodeproj
}
Update: I think iForce2d has given me enough info to get on my way; now it's just tweaking. This is what I'm doing at the moment, in addition to the above code. What is happening is that the body gains enough velocity to exit the vortex's gravity well - somewhere I'll need to check that the velocity stays below that figure. I'm also a little concerned that I'm not taking the object's mass into account at the moment.
b2Vec2 vortexVelocity = vortexBody->GetLinearVelocityFromWorldPoint(spriteBody->GetPosition() );
b2Vec2 vortexVelNormal = vortexVelocity;
vortexVelNormal.Normalize();
b2Vec2 bodyVelocity = b2Dot( vortexVelNormal, spriteBody->GetLinearVelocity() ) * vortexVelNormal;
//Using a force
b2Vec2 vel = bodyVelocity;
float forceCircleX = .6 * bodyVelocity.x;
float forceCircleY = .6 * bodyVelocity.y;
spriteBody->ApplyForce( b2Vec2(forceCircleX,forceCircleY), spriteBody->GetWorldCenter() );
It sounds like you just need to apply another force according to the direction of the vortex at the current point of the body. You can use b2Body::GetLinearVelocityFromWorldPoint to find the velocity of the vortex at any point in the world. From Box2D source:
/// Get the world linear velocity of a world point attached to this body.
/// @param a point in world coordinates.
/// @return the world velocity of a point.
b2Vec2 GetLinearVelocityFromWorldPoint(const b2Vec2& worldPoint) const;
So that would be:
b2Vec2 vortexVelocity = vortexBody->GetLinearVelocityFromWorldPoint( suckedInBody->GetPosition() );
Once you know the velocity you're aiming for, you can calculate how much force is needed to go from the current velocity, to the desired velocity. This might be helpful: http://www.iforce2d.net/b2dtut/constant-speed
The topic in that link only discusses a 1-dimensional situation. For your case it is also essentially 1-dimensional, if you project the current velocity of the sucked-in body onto the vortexVelocity vector:
b2Vec2 vortexVelNormal = vortexVelocity;
vortexVelNormal.Normalize();
b2Vec2 bodyVelocity = b2Dot( vortexVelNormal, suckedInBody->GetLinearVelocity() ) * vortexVelNormal;
Now bodyVelocity and vortexVelocity will be in the same direction and you can calculate how much force to apply. However, if you simply apply enough force to match the vortex velocity exactly, the sucked in body will probably go into orbit around the vortex and never actually get sucked in. I think you would want to make the force quite a bit less than that, and I would scale it down according to the gravity strength as well, otherwise the sucked-in body will be flung away sideways as soon as it contacts the outer edge of the vortex. It could take a lot of tweaking to get the effect you want.
EDIT:
The force you apply should be based on the difference between the current velocity (bodyVelocity) and the desired velocity (vortexVelocity), i.e. if the body is already moving with the vortex then you don't need to apply any force. Take a look at the last code block in the sub-section titled 'Using forces' in the link I gave above. The last three lines there do pretty much what you need if you replace 'vel' and 'desiredVel' with the magnitudes of your bodyVelocity and vortexVelocity vectors:
float desiredVel = vortexVelocity.Length();
float currentVel = bodyVelocity.Length();
float velChange = desiredVel - currentVel;
float force = body->GetMass() * velChange / (1/60.0); //for a 1/60 sec timestep
body->ApplyForce( b2Vec2(force,0), body->GetWorldCenter() );
But remember this would probably put the body into orbit, so somewhere along the way you would want to reduce the size of the force you apply, e.g. reduce 'desiredVel' by some percentage, reduce 'force' by some percentage, etc. It would probably look better if you could also scale the force down so that it was zero at the outer edge of the vortex.
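Putting those pieces together, here is a hedged sketch of how the tangential "swirl" force might look alongside the existing radial pull (the 0.5 target fraction, the fade toward the edge, and vortexRadius are made-up tuning values of mine, not something from the tutorial):
// Sketch: ease the body toward a fraction of the vortex's local velocity,
// scaled down toward the vortex edge, so it spirals in instead of orbiting.
void applyVortexSwirl(b2Body* vortexBody, b2Body* suckedInBody,
                      float vortexRadius, float timeStep)
{
    b2Vec2 toBody = suckedInBody->GetPosition() - vortexBody->GetPosition();
    float dist = toBody.Length();
    if (dist > vortexRadius) return;              // outside the vortex, do nothing

    // Velocity of the vortex at the body's position, and the body's
    // current velocity projected onto that direction.
    b2Vec2 vortexVel = vortexBody->GetLinearVelocityFromWorldPoint(
                           suckedInBody->GetPosition());
    if (vortexVel.LengthSquared() < 0.0001f) return; // at the centre, nothing to match
    b2Vec2 dir = vortexVel;
    dir.Normalize();
    float currentVel = b2Dot(dir, suckedInBody->GetLinearVelocity());

    // Aim for only part of the vortex speed (0.5 here) and fade it out
    // toward the edge of the vortex.
    float desiredVel = 0.5f * vortexVel.Length() * (1.0f - dist / vortexRadius);
    float velChange = desiredVel - currentVel;
    float force = suckedInBody->GetMass() * velChange / timeStep;

    suckedInBody->ApplyForce(force * dir, suckedInBody->GetWorldCenter());
}
You would call this each step in addition to the radial gravity force above, passing your world's timestep (e.g. 1/60.0f).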
I had a project where I had asteroids swirling around a central point (there are things jumping between them... but that's a separate topic).
They are connected to the "center" body via b2DistanceJoints.
You can control the joint length to make them slowly spiral inward (or outward). This gives you fine-grained control, instead of balancing forces, which may be difficult.
You can also apply a tangential force to make them circle the center.
By applying different (or randomly changing) tangential forces, you can make them crash into each other, etc.
I posted a more complete answer to this question here.

OpenGL Diffuse Lighting Shader Bug?

The Orange Book, section 16.2, gives the following implementation of diffuse lighting:
void main()
{
vec3 N = normalize(gl_NormalMatrix * gl_Normal);
vec4 V = gl_ModelViewMatrix * gl_Vertex;
vec3 L = normalize(lightPos - V.xyz);
gl_FrontColor = gl_Color * vec4(max(0.0, dot(N, L)));
}
However, when I run this, the lighting changes when I move my camera.
On the other hand, when I change
vec3 N = normalize(gl_NormalMatrix * gl_Normal);
to
vec3 N = normalize(gl_Normal);
I get diffuse lighting that works like the fixed pipeline.
What is this gl_NormalMatrix, what did removing it do... and is this a bug in the Orange Book, or am I setting up my OpenGL code improperly?
[For completeness, the fragment shader just copies the color]
OK, I hope there's nothing wrong with answering your question after over half a year? :)
So there are two things to discuss here:
a) What should the shader look like
You SHOULD transform your normals by the modelview matrix - that's a given. Consider what would happen if you don't - your modelview matrix can contain some kind of rotation. Your cube would be rotated, but the normals would still point in the old direction! This is clearly wrong.
So: When you transform your vertices by the modelview matrix, you should also transform the normals. Your normals are vec3 not vec4, and you're not interested in translations (normals only contain direction), so you can just multiply your normal by mat3(gl_ModelViewMatrix), which is its upper-left 3×3 submatrix.
Then: This is ALMOST correct, but still a bit wrong - the reasons are well described on Lighthouse3D - go have a read. Long story short: instead of mat3(gl_ModelViewMatrix), you have to multiply by its inverse transpose.
And OpenGL 2 is very helpful and precalculates this for you as gl_NormalMatrix. Hence, the correct code is:
vec3 N = normalize(gl_NormalMatrix * gl_Normal);
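If you ever need to build that matrix yourself (e.g. in a modern-GL code path where gl_NormalMatrix doesn't exist), here is a minimal C++ sketch of "inverse transpose of the upper-left 3×3" using plain arrays; no particular math library is assumed:
#include <cstdio>

// Computes the normal matrix: the transpose of the inverse of the upper-left
// 3x3 block of the modelview matrix. m and out are row-major 3x3 arrays.
// Since inverse(m) = transpose(cofactor(m)) / det, the transpose of the
// inverse is simply cofactor(m) / det.
bool normalMatrix(const float m[3][3], float out[3][3])
{
    float c[3][3] = {
        { m[1][1]*m[2][2] - m[1][2]*m[2][1],
          m[1][2]*m[2][0] - m[1][0]*m[2][2],
          m[1][0]*m[2][1] - m[1][1]*m[2][0] },
        { m[0][2]*m[2][1] - m[0][1]*m[2][2],
          m[0][0]*m[2][2] - m[0][2]*m[2][0],
          m[0][1]*m[2][0] - m[0][0]*m[2][1] },
        { m[0][1]*m[1][2] - m[0][2]*m[1][1],
          m[0][2]*m[1][0] - m[0][0]*m[1][2],
          m[0][0]*m[1][1] - m[0][1]*m[1][0] },
    };
    float det = m[0][0]*c[0][0] + m[0][1]*c[0][1] + m[0][2]*c[0][2];
    if (det == 0.0f) return false;   // singular matrix, no inverse

    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out[i][j] = c[i][j] / det;
    return true;
}

int main()
{
    // A non-uniform scale: for rotation-only matrices the result would equal
    // the input, but here it does not, which is exactly why the inverse
    // transpose matters.
    float mv[3][3] = { {2, 0, 0}, {0, 1, 0}, {0, 0, 1} };
    float nrm[3][3];
    if (normalMatrix(mv, nrm))
        std::printf("%.2f %.2f %.2f\n", nrm[0][0], nrm[1][1], nrm[2][2]); // 0.50 1.00 1.00
}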
b) But it's different from fixed pipeline, why?
The first thing which comes to my mind is that "something's wrong with your usage of fixed pipeline".
I'm not really keen on FP (long live shaders!), but as far as I can remember, when you specify your lights via glLightfv(GL_LIGHT0, GL_POSITION, something), the position is transformed by the current modelview matrix. It was easy (at least for me :)) to make the mistake of specifying the light position (or light direction for directional lights) in the wrong coordinate system.
I'm not sure if I remember correctly how that worked back then, since I use GL3 and shaders nowadays, but let me try... what was the state of your modelview matrix? I think it just might be possible that you specified the directional light direction in object space instead of eye space, so that your light rotates together with your object. I don't know if that's relevant here, but make sure to pay attention to that when using the fixed pipeline. That's a mistake I remember making often myself when I was still using GL 1.1.
Depending on the modelview state at the time of that call, you could specify the light in (see the sketch after this list):
eye (camera) space,
world space,
object space.
Make sure which one it is.
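As a concrete illustration of that pitfall, here is a hedged fixed-function sketch (made-up camera and light positions, and in real code you would of course do only one of the three calls, not all of them); the only point is that glLightfv transforms the position you pass by whatever modelview matrix is current at that moment:
#include <GL/gl.h>
#include <GL/glu.h>

// Called from an existing GL context / window loop (context setup omitted).
void drawScene(float objectAngle)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // 1) Eye space: modelview is identity, so the light stays glued to the camera.
    const GLfloat eyeSpacePos[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, eyeSpacePos);

    // 2) World space: set after the camera transform but before any object
    //    transform, so the fixed pipeline converts it to eye space for you.
    gluLookAt(0.0, 2.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);
    const GLfloat worldSpacePos[4] = { 10.0f, 10.0f, 10.0f, 1.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, worldSpacePos);

    // 3) Object space (usually the mistake): set after the object's own
    //    transform as well, so the light rotates along with the object.
    glRotatef(objectAngle, 0.0f, 1.0f, 0.0f);
    glLightfv(GL_LIGHT0, GL_POSITION, worldSpacePos);

    // ... draw geometry here ...
}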
Huh... I hope that makes the topic clearer for you. The conclusions are:
always transform your normals along with your vertices in your vertex shaders, and
if it looks different from what you expect, think about how you specify your light positions. (Maybe you want to multiply the light position vector in a shader too? The remarks about light position coordinate systems still hold.)