OpenGL models show grid-like lines - objective-c

Shading problem solved; grid lines persist. See Update 5
I'm working on an .obj file loader for OpenGL using Objective-C.
I'm trying to get objects to load and render them with shading. I'm not using any textures, materials, etc. to modify the model besides a single light source. When I render any model, the shading is not distributed properly (as seen in the pictures below). I believe this has something to do with the normals, but I'm not sure.
These are my results:
And this is the type of effect I'm trying to achieve:
I thought the problem was that the normals I parsed from the file were incorrect, but after calculating them myself and getting the same results, I found that this wasn't true. I also thought not having GL_SMOOTH enabled was the issue, but I was wrong there too.
I have no idea what I'm doing wrong here, so sorry if the question seems vague. If any more info is needed, I'll add it.
Update:
Link to larger picture of broken monkey head: http://oi52.tinypic.com/2re5y69.jpg
Update 2: In case there's a mistake in how I calculate the normals, this is what I'm doing:
Create a triangle for each group of indices.
Calculate the normal for the triangle and store it in a vector.
Ensure the vector is normalized with the following function:
static inline void normalizeVector(Vector3f *vector) {
    GLfloat vecMag = VectorMagnitude(*vector);
    if (vecMag == 0.0) {
        vector->x /= 1.0;
        vector->y /= 0.0;
        vector->z /= 0.0;
    }
    vector->x /= vecMag;
    vector->y /= vecMag;
    vector->z /= vecMag;
}
Update 3: Here's the code I'm using to create the normals:
- (void)calculateNormals {
    for (int i = 0; i < numOfIndices; i += 3) {
        Triangle triangle;
        triangle.v1.x = modelData.vertices[modelData.indices[i]*3];
        triangle.v1.y = modelData.vertices[modelData.indices[i]*3+1];
        triangle.v1.z = modelData.vertices[modelData.indices[i]*3+2];
        triangle.v2.x = modelData.vertices[modelData.indices[i+1]*3];
        triangle.v2.y = modelData.vertices[modelData.indices[i+1]*3+1];
        triangle.v2.z = modelData.vertices[modelData.indices[i+1]*3+2];
        triangle.v3.x = modelData.vertices[modelData.indices[i+2]*3];
        triangle.v3.y = modelData.vertices[modelData.indices[i+2]*3+1];
        triangle.v3.z = modelData.vertices[modelData.indices[i+2]*3+2];
        Vector3f normals = calculateNormal(triangle);
        normalizeVector(&normals);
        modelData.normals[modelData.surfaceNormals[i]*3] = normals.x;
        modelData.normals[modelData.surfaceNormals[i]*3+1] = normals.y;
        modelData.normals[modelData.surfaceNormals[i]*3+2] = normals.z;
        modelData.normals[modelData.surfaceNormals[i+1]*3] = normals.x;
        modelData.normals[modelData.surfaceNormals[i+1]*3+1] = normals.y;
        modelData.normals[modelData.surfaceNormals[i+1]*3+2] = normals.z;
        modelData.normals[modelData.surfaceNormals[i+2]*3] = normals.x;
        modelData.normals[modelData.surfaceNormals[i+2]*3+1] = normals.y;
        modelData.normals[modelData.surfaceNormals[i+2]*3+2] = normals.z;
    }
}
Update 4: Looking further into this, it seems like the .obj file's normals are surface normals, while I need the vertex normals. (Maybe)
If vertex normals are what I need, could somebody explain the theory behind calculating them? I tried looking it up but I only found examples, not the theory (e.g. "take the cross product of each face and normalize it"). If I know what I have to do, I can look up each individual step if I get stuck and won't have to keep updating this.
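For what it's worth, the usual recipe is: take the cross product of two edges of each face to get a face normal, add that face normal to a running sum for each of the face's three vertices, then normalize each per-vertex sum. A rough Python sketch of the idea (illustrative only, not the Objective-C loader; it assumes `vertices` is a list of (x, y, z) tuples and `indices` a flat triangle index list):

```python
import math

def vertex_normals(vertices, indices):
    # accumulate area-weighted face normals on each vertex, then normalize
    normals = [[0.0, 0.0, 0.0] for _ in vertices]
    for i in range(0, len(indices), 3):
        a, b, c = (vertices[indices[i + k]] for k in range(3))
        e1 = [b[j] - a[j] for j in range(3)]   # first edge of the face
        e2 = [c[j] - a[j] for j in range(3)]   # second edge of the face
        n = [e1[1] * e2[2] - e1[2] * e2[1],    # cross product = face normal
             e1[2] * e2[0] - e1[0] * e2[2],
             e1[0] * e2[1] - e1[1] * e2[0]]
        for idx in indices[i:i + 3]:           # add to every corner's sum
            for j in range(3):
                normals[idx][j] += n[j]
    for v in normals:                          # normalize each sum
        mag = math.sqrt(sum(x * x for x in v)) or 1.0
        for j in range(3):
            v[j] /= mag
    return normals
```

Because the un-normalized cross product is proportional to the triangle's area, larger faces automatically contribute more to the averaged vertex normal.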
Update 5: I re-wrote my whole loader, and got it to work, somehow. Although it shades properly, I still have those grid-like lines that you can see on my original results.

Your normalizeVector function is clearly wrong: dividing by zero is never a good idea. It should work when vecMag != 0.0, though. How are you calculating the normals? Using the cross product? What happens if you let OpenGL calculate the normals?

It may be the direction of your normals that's at fault - if they're the wrong way around then you'll only see the polygons that are supposed to be pointing away from you.
Try calling:
glDisable(GL_CULL_FACE);
and see whether your output changes.
Given that we're seeing polygon edges I'd also check for:
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);

Related

How would one draw an arbitrary curve in createJS

I am attempting to write a function using createJS to draw an arbitrary function, and I'm having some trouble. I come from a d3 background, so I'm having trouble breaking out of the data-binding mentality.
Suppose I have two arrays xData = [-10, -9, ... 10] and yData = Gaussian(xData), which is pseudocode for mapping each element of xData to its value on the bell curve. How can I now draw yData as a function of xData?
Thanks
To graph an arbitrary function in CreateJS, you draw lines connecting all the data points you have. Because, well, that's what graphing is!
The easiest way to do this is a for loop going through each of your data points, and calling a lineTo() for each. Because the canvas drawing API starts a line where you last 'left off', you actually don't even need to specify the line start for each line, but you DO have to move the canvas 'pen' to the first point before you start drawing. Something like:
// first make our shape to draw into
let graph = new createjs.Shape();
let g = graph.graphics;
g.beginStroke("#000");
// move the 'pen' to the first data point before drawing
let xStart = xData[0];
let yStart = yourFunction(xData[0]);
g.moveTo(xStart, yStart);
for (let i = 1; i < xData.length; i++) {
    let nextX = xData[i];               // normalize to fit your graph area
    let nextY = yourFunction(xData[i]); // similarly normalized
    g.lineTo(nextX, nextY);
}
This should get a basic version of the function drawing! Note that the line will be pretty jagged if you don't have a lot of data points, and you'll have to treat (normalize) your data to make it fit onto your screen. For instance, if you start at -10 for X, that's off the screen to the left by 10 pixels - and if it only runs from -10 to +10, your entire graph will be squashed into only 20 pixels of width.
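The "normalize your data to fit the screen" step mentioned above is just a linear map from the data range to the pixel range; a sketch (function and parameter names are made up for illustration):

```python
def to_screen(value, data_min, data_max, screen_min, screen_max):
    # linear map: data_min -> screen_min, data_max -> screen_max
    t = (value - data_min) / (data_max - data_min)
    return screen_min + t * (screen_max - screen_min)
```

Note that the canvas y axis points down, so for the y coordinate you would typically pass screen_max and screen_min swapped to flip the graph the right way up.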
I have a codepen showing this approach to graphing here. It's mapped to hit every pixel on the viewport and calculate a Y value for it, though, rather than your case where you have input X values. And FYI, the code for graphing is all inside the 'run' function at the top - everything in the PerlinNoiseMachine class is all about data generation, so you can ignore it for the purposes of this question.
Hope that helps! If you have any specific follow-up questions or code samples, please amend your question.

Inconsistent Contrast Handling with Psychopy

I can't find the source of the difference in the handling of contrast between versions 1.75.01 and 1.82. Here are two images that show what it used to look like (1.75),
and what it now looks like:
Unfortunately, rolling back is not trivial as I run into problems with dependencies (especially PIL v PILLOW). The images are created from a numpy array, and I suspect there is something related to how the numbers are getting handled (?type, rounding) when the conversion from array to image occurs, but I can't find the bug. Any help will be deeply appreciated.
Edited - New Minimal Example
#!/usr/bin/env python
import numpy as np
from psychopy import visual, core

def makeRow(n, c):
    cp = np.tile(c, [n, n, 3])
    cm = np.tile(-c, [n, n, 3])
    cpm = np.hstack((cp, cm))
    return cpm

def makeCB(r1, r2, nr=99):
    # nr is repeat number
    (x, y, z) = r1.shape
    if nr == 99:
        nr = x / 2
    hnr = nr / 2
    rr = np.vstack((r1, r2))
    cb = np.tile(rr, [hnr, hnr / 2, 1])
    return cb

def makeTarg(sqsz, targsz, con):
    wr = makeRow(sqsz, 1)
    br = makeRow(sqsz, -1)
    cb = makeCB(wr, br, targsz)
    t = cb * con
    return t

def main():
    w = visual.Window(size=(400, 400), units="pix", winType='pyglet', colorSpace='rgb')
    fullCon_np = makeTarg(8, 8, 1.0)
    fullCon_i = visual.ImageStim(w, image=fullCon_np, size=fullCon_np.shape[0:2][::-1], pos=(-100, 0), colorSpace='rgb')
    fullCon_ih = visual.ImageStim(w, image=fullCon_np, size=fullCon_np.shape[0:2][::-1], pos=(-100, 0), colorSpace='rgb')
    fullCon_iz = visual.ImageStim(w, image=fullCon_np, size=fullCon_np.shape[0:2][::-1], pos=(-100, 0), colorSpace='rgb')
    fullCon_ih.contrast = 0.5
    fullCon_ih.setPos((-100, 100))
    fullCon_iz.setPos((-100, -100))
    fullCon_iz.contrast = 0.1
    partCon_np = makeTarg(8, 8, 0.1)
    partCon_i = visual.ImageStim(w, image=partCon_np, pos=(0, 0), size=partCon_np.shape[0:2][::-1], colorSpace='rgb')
    zeroCon_np = makeTarg(8, 8, 0.0)
    zeroCon_i = visual.ImageStim(w, image=zeroCon_np, pos=(100, 0), size=zeroCon_np.shape[0:2][::-1], colorSpace='rgb')
    fullCon_i.draw()
    partCon_i.draw()
    fullCon_ih.draw()
    fullCon_iz.draw()
    zeroCon_i.draw()
    w.flip()
    core.wait(15)
    core.quit()

if __name__ == "__main__":
    main()
Which yields this:
The three checkerboards along the horizontal have the contrast changed in the array when it is generated, before conversion to the image. The vertical column on the left shows that changing the image contrast afterwards works fine. The reason I can't use that approach is that a) I have collected a lot of data with the last version, and b) I want to grade the contrast of those big long bars in the centre programmatically by multiplying one array against another, e.g. using a log scale or some other function, and doing the math is easier in numpy.
I still suspect the issue is in the conversion from np.array -> pil.image. The dtype of these arrays is float64, but even if I coerce to float32 nothing changes. If you examine the array before conversion at half contrast, it is filled with 0.5 and -0.5, but all the negative numbers are being turned to black, and black is being set to zero at the time of conversion, by psychopy.tools.imagetools.array2image I think.
OK, yes, the problem was to do with the issue of the scale for the array values. Basically, you've found a corner case that PsychoPy isn't handling correctly (i.e. a bug).
Explanation:
PsychoPy has a complex set of transformation rules for handling images/textures; it tries to deduce what you're going to do with the image and whether it should be stored in a way that supports colour manipulations (signed float) or not (can be an unsigned byte). In your case PsychoPy was getting this wrong: the fact that the array was filled with floats made PsychoPy think it could do colour transforms, but the fact that it was NxNx3 suggests it shouldn't (we don't want to specify a "color" for something that already has its colour specified for every pixel as rgb vals).
Workarounds (any one of these):
Just provide your array as NxN, not NxNx3. This is the right thing to do anyway; it means less for you to compute/store and by providing "intensity" values these can then be recolored on-the-fly. This is roughly what you had discovered already in providing just one slice of your NxNx3 array, but the point is that you could/should only create one slice in the first place.
Use GratingStim, which converts everything to signed floating point values rather than trying to work out what's best (potentially then you'd need to work out the spatial frequency stuff though)
You could add a line to fix it by rescaling your array (*0.5+0.5) but you'd have to set something so that this only occurred for this version (we'll fix it before the next release)
Basically, I'm suggesting you do (1) because that already works for past, present and future versions and is more efficient anyway. But thanks for letting us know - I'll try to make sure we catch this one in future
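If it helps, workaround (1) might look roughly like this with numpy (a sketch with made-up sizes and names, not the exact code from the question): build a single NxN intensity plane in the -1..1 range instead of an NxNx3 array.

```python
import numpy as np

def make_checkerboard(n_squares, square_px, contrast):
    # one NxN "intensity" plane in the range -1..1, instead of NxNx3
    row = np.hstack([np.full((square_px, square_px), contrast),
                     np.full((square_px, square_px), -contrast)])
    pair = np.vstack([row, -row])      # a 2x2 block of squares
    reps = n_squares // 2
    return np.tile(pair, (reps, reps))
```

Because the array carries only intensity, the stimulus colour can then still be manipulated on the fly by PsychoPy.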
best wishes
Jon
The code is too long for me to read through and work out the actual problem.
What might be the problem is the issue of where zero should be. I think for a while numpy arrays were treated as having vals 0:1 whereas the rest of PsychoPy expects values to be -1:1 so it might be that you need to rescale your values with array=array*2-1 to get back to old (bad behaviour). Or check opacity too, which might have a similar issue. If you write a minimal example I'll read/test it properly
Thanks

Constructing a bubble trellis plot with lattice in R

First off, this is a homework question. The problem is ex. 2.6 from pg.26 of An Introduction to Applied Multivariate Analysis. It's laid out as:
Construct a bubble plot of the earthquake data using latitude and longitude as the scatterplot and depth as the circles, with greater depths giving smaller circles. In addition, divide the magnitudes into three equal ranges and label the points in your bubble plot with a different symbol depending on the magnitude group into which the point falls.
I have figured out that symbols, which is in base graphics, does not work well with lattice. Also, I haven't figured out whether lattice has the functionality to change symbol size (i.e. bubble size). I bought the lattice book in a fit of desperation last night, and as I see in some of the examples, it is possible to set symbol color and shape for each "cut" or panel. I am therefore working under the assumption that symbol size can also be manipulated, but I haven't been able to figure out how.
My code looks like:
plot(xyplot(lat ~ long | cut(mag, 3), data = quakes,
    layout = c(3, 1), xlab = "Longitude", ylab = "Latitude",
    panel = function(x, y) {
        grid.circle(x, y, r = sqrt(quakes$depth), draw = TRUE)
    }
))
Where I attempt to use the grid package to draw the circles, but when this executes, I just get a blank plot. Could anyone please point me in the right direction? I would be very grateful!
Here is the some code for creating the plot that you need without using the lattice package. I obviously had to generate my own fake data so you can disregard all of that stuff and go straight to the plotting commands if you want.
####################################################################
# Pseudo data
n = 20
latitude = sample(1:100, n)
longitude = sample(1:100, n)
depth = runif(n, 0, .5)
magnitude = sample(1:100, n)
groups = rep(NA, n)
for (i in 1:n) {
    if (magnitude[i] <= 33) {
        groups[i] = 1
    } else if (magnitude[i] > 33 & magnitude[i] <= 66) {
        groups[i] = 2
    } else {
        groups[i] = 3
    }
}
####################################################################
# The actual code for generating the plot
plot(latitude[groups == 1], longitude[groups == 1], col = "blue", pch = 19,
    ylim = c(0, 100), xlim = c(0, 100), xlab = "Latitude", ylab = "Longitude")
points(latitude[groups == 2], longitude[groups == 2], col = "red", pch = 15)
points(latitude[groups == 3], longitude[groups == 3], col = "green", pch = 17)
points(latitude[groups == 1], longitude[groups == 1], col = "blue", cex = 1/depth[groups == 1])
points(latitude[groups == 2], longitude[groups == 2], col = "red", cex = 1/depth[groups == 2])
points(latitude[groups == 3], longitude[groups == 3], col = "green", cex = 1/depth[groups == 3])
You just need to add default.units = "native" to grid.circle()
plot(xyplot(lat ~ long | cut(mag, 3), data = quakes,
    layout = c(3, 1), xlab = "Longitude", ylab = "Latitude",
    panel = function(x, y) {
        grid.circle(x, y, r = sqrt(quakes$depth), draw = TRUE, default.units = "native")
    }
))
Obviously you need to tinker with some of the settings to get what you want.
I have written a package called tactile that adds a function for producing bubbleplots using lattice.
tactile::bubbleplot(depth ~ lat*long | cut(mag, 3), data=quakes,
layout=c(3,1), xlab="Longitude", ylab="Latitude")

Vector Equation Implementation

After some time searching, I have revised my question.
I have found numerous examples of ball to ball collisions, but the only ones that seem to work use Vector2d or Vector2D.
This is a problem, because I am only allowed to use the regular java library, so my main question is: How do I convert the examples (which I will post below) to use what I can use?
I have several variables, both balls have the same mass, the velocities are broken into different variables, x and y. Also I have access to their x and y pos.
This is the ONLY problem left in my application.
I am at a total loss on how to convert the below example.
// get the mtd
Vector2d delta = (position.subtract(ball.position));
float d = delta.getLength();
// minimum translation distance to push balls apart after intersecting
Vector2d mtd = delta.multiply(((getRadius() + ball.getRadius())-d)/d);
// resolve intersection --
// inverse mass quantities
float im1 = 1 / getMass();
float im2 = 1 / ball.getMass();
// push-pull them apart based off their mass
position = position.add(mtd.multiply(im1 / (im1 + im2)));
ball.position = ball.position.subtract(mtd.multiply(im2 / (im1 + im2)));
// impact speed
Vector2d v = (this.velocity.subtract(ball.velocity));
float vn = v.dot(mtd.normalize());
// sphere intersecting but moving away from each other already
if (vn > 0.0f) return;
// collision impulse
float i = (-(1.0f + Constants.restitution) * vn) / (im1 + im2);
Vector2d impulse = mtd.multiply(i);
// change in momentum
this.velocity = this.velocity.add(impulse.multiply(im1));
ball.velocity = ball.velocity.subtract(impulse.multiply(im2));
Here is the URL for the question:
http://stackoverflow.com/questions/345838/ball-to-ball-collision-detection-and-handling
And I have taken a look at his source code.
Thank you for taking the time to read this issue.
SUCCESS!
I have found how to use Vector2d, and it works PERFECTLY!
Will edit later with answer!
I'm implementing my own 3D engine in C#, based on a really basic open-source 3D engine in JavaScript called a3. I don't know if I've understood you 100%, but it sounds like you can only find examples that use Vector2d, and you are not allowed to use that class?
If that is the case: as you can imagine, JavaScript does not have a native Vector2d type either, so someone had to implement it. Don't be afraid of giving it a try; it's just a few high-school maths functions, and you should be able to implement your own Vector2d class in just a few minutes.
The following link contains implementations of vector2d, vector3d, vector4d, matrix3, and matrix4 in JavaScript: https://github.com/paullewis/a3/tree/master/src/js/core/math hope it helps :)
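To give a feel for how little is involved, here is a sketch of just the operations the collision snippet above uses, written in Python for brevity (the method names mirror the snippet; a plain-Java class is a nearly line-for-line translation with float fields):

```python
import math

class Vector2d:
    # minimal 2-D vector: only the operations the collision code needs
    def __init__(self, x, y):
        self.x, self.y = x, y
    def add(self, o):
        return Vector2d(self.x + o.x, self.y + o.y)
    def subtract(self, o):
        return Vector2d(self.x - o.x, self.y - o.y)
    def multiply(self, s):
        return Vector2d(self.x * s, self.y * s)
    def dot(self, o):
        return self.x * o.x + self.y * o.y
    def get_length(self):
        return math.hypot(self.x, self.y)
    def normalize(self):
        m = self.get_length() or 1.0   # guard against the zero vector
        return Vector2d(self.x / m, self.y / m)
```

Returning new vectors from each operation (rather than mutating in place) keeps the translated collision code reading exactly like the Vector2d example.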

Applying a vortex / whirlpool effect in Box2d / Cocos2d for iPhone

I've used Nick Vellios' tutorial to create radial gravity with a Box2D object. I am aware of Make a Vortex here on SO, but I couldn't figure out how to implement it in my project.
I have made a vortex object, which is a Box2D circleShape sensor that rotates with a consistent angular velocity. When other Box2D objects contact this vortex object, I want them to rotate at the same angular velocity as the vortex, gradually getting closer to the vortex's centre. At the moment the object is attracted to the vortex's centre, but it heads straight for the centre rather than spinning around it slowly like I want it to. It will also travel against the vortex's rotation as well as with it.
Given a vortex and a Box2D body, how can I set the body to rotate with the vortex as it gets sucked in?
I set the rotation of the vortex when I create it like this:
b2BodyDef bodyDef;
bodyDef.type = b2_dynamicBody;
bodyDef.angle = 2.0f;
bodyDef.angularVelocity = 2.0f;
Here is how I'm applying the radial gravity, as per Nick Vellios' sample code.
-(void)applyVortexForcesOnSprite:(CCSpriteSubclass*)sprite spriteBody:(b2Body*)spriteBody withVortex:(Vortex*)vortex VortexBody:(b2Body*)vortexBody vortexCircleShape:(b2CircleShape*)vortexCircleShape {
    // From RadialGravity.xcodeproj
    b2Body* ground = vortexBody;
    b2CircleShape* circle = vortexCircleShape;
    // Get position of our "Planet" - Nick
    b2Vec2 center = ground->GetWorldPoint(circle->m_p);
    // Get position of our current body in the iteration - Nick
    b2Vec2 position = spriteBody->GetPosition();
    // Get the distance between the two objects. - Nick
    b2Vec2 d = center - position;
    // The further away the objects are, the weaker the gravitational force is - Nick
    float force = 1 / d.LengthSquared(); // the numerator can be changed to adjust the amount of force - Nick
    d.Normalize();
    b2Vec2 F = force * d;
    // Finally apply a force on the body in the direction of the "Planet" - Nick
    spriteBody->ApplyForce(F, position);
    // end RadialGravity.xcodeproj
}
Update I think iForce2d has given me enough info to get on my way, now it's just tweaking. This is what I'm doing at the moment, in addition to the above code. What is happening is the body gains enough velocity to exit the vortex's gravity well - somewhere I'll need to check that the velocity stays below this figure. I'm a little concerned I'm not taking into account the object's mass at the moment.
b2Vec2 vortexVelocity = vortexBody->GetLinearVelocityFromWorldPoint(spriteBody->GetPosition() );
b2Vec2 vortexVelNormal = vortexVelocity;
vortexVelNormal.Normalize();
b2Vec2 bodyVelocity = b2Dot( vortexVelNormal, spriteBody->GetLinearVelocity() ) * vortexVelNormal;
//Using a force
b2Vec2 vel = bodyVelocity;
float forceCircleX = .6 * bodyVelocity.x;
float forceCircleY = .6 * bodyVelocity.y;
spriteBody->ApplyForce( b2Vec2(forceCircleX,forceCircleY), spriteBody->GetWorldCenter() );
It sounds like you just need to apply another force according to the direction of the vortex at the current point of the body. You can use b2Body::GetLinearVelocityFromWorldPoint to find the velocity of the vortex at any point in the world. From Box2D source:
/// Get the world linear velocity of a world point attached to this body.
/// @param worldPoint a point in world coordinates.
/// @return the world velocity of a point.
b2Vec2 GetLinearVelocityFromWorldPoint(const b2Vec2& worldPoint) const;
So that would be:
b2Vec2 vortexVelocity = vortexBody->GetLinearVelocityFromWorldPoint( suckedInBody->GetPosition() );
Once you know the velocity you're aiming for, you can calculate how much force is needed to go from the current velocity, to the desired velocity. This might be helpful: http://www.iforce2d.net/b2dtut/constant-speed
The topic in that link only discusses a 1-dimensional situation. For your case it is also essentially 1-dimensional, if you project the current velocity of the sucked-in body onto the vortexVelocity vector:
b2Vec2 vortexVelNormal = vortexVelocity;
vortexVelNormal.Normalize();
b2Vec2 bodyVelocity = b2Dot( vortexVelNormal, suckedInBody->GetLinearVelocity() ) * vortexVelNormal;
Now bodyVelocity and vortexVelocity will be in the same direction and you can calculate how much force to apply. However, if you simply apply enough force to match the vortex velocity exactly, the sucked in body will probably go into orbit around the vortex and never actually get sucked in. I think you would want to make the force quite a bit less than that, and I would scale it down according to the gravity strength as well, otherwise the sucked-in body will be flung away sideways as soon as it contacts the outer edge of the vortex. It could take a lot of tweaking to get the effect you want.
EDIT:
The force you apply should be based on the difference between the current velocity (bodyVelocity) and the desired velocity (vortexVelocity), ie. if the body is already moving with the vortex then you don't need to apply any force. Take a look at the last code block in the sub-section titled 'Using forces' in the link I gave above. The last three lines there do pretty much what you need if you replace 'vel' and 'desiredVel' with the sizes of your bodyVelocity and vortexVelocity vectors:
float desiredVel = vortexVelocity.Length();
float currentVel = bodyVelocity.Length();
float velChange = desiredVel - currentVel;
float force = body->GetMass() * velChange / (1/60.0); //for a 1/60 sec timestep
body->ApplyForce( b2Vec2(force,0), body->GetWorldCenter() );
But remember this would probably put the body into orbit, so somewhere along the way you would want to reduce the size of the force you apply, eg. reduce 'desiredVel' by some percentage, reduce 'force' by some percentage etc. It would probably look better if you could also scale the force down so that it was zero at the outer edge of the vortex.
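To make that concrete, here is a small Python sketch of the per-step force magnitude along the vortex direction, combining the velocity-matching formula above with a linear taper that fades to zero at the vortex's outer edge (the function name, the taper shape, and the 0.5 strength factor are all made up; tune to taste):

```python
def vortex_correction_force(mass, body_speed, vortex_speed,
                            dist, vortex_radius,
                            dt=1 / 60.0, strength=0.5):
    # body_speed:   body velocity projected onto the vortex direction
    # vortex_speed: vortex velocity at the body's position
    # strength:     < 1.0 so the body spirals in instead of orbiting
    vel_change = vortex_speed - body_speed
    full_force = mass * vel_change / dt           # exact match in one step
    taper = max(0.0, 1.0 - dist / vortex_radius)  # zero at the outer edge
    return strength * taper * full_force
```

Apply the returned magnitude along the normalized vortexVelocity direction each step; if the body is already moving with the vortex, vel_change is zero and no force is applied.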
I had a project where I had asteroids swirling around a central point (there are things jumping between them...which is a different point).
They are connected to the "center" body via b2DistanceJoints.
You can control the joint length to make them slowly spiral inward (or outward). This gives you fine-grained control, instead of balancing forces, which may be difficult.
You also apply tangential force to make them circle the center.
By applying different (or randomly changing) tangential forces, you can make them crash into each other, etc.
I posted a more complete answer to this question here.