I am trying to render a globe (sphere with maps on it) with OpenGL ES 1.1 on iOS.
I am able to draw the sphere and the map borders, but with one problem: lines that face away from me in my view are also being drawn on the screen. Like this:
In the picture, you can see that America renders just fine, but Australia is also rendered on the back. It is not supposed to be shown, because it is on the far side of the globe, and the black and purple stripes of the globe are not transparent.
Any ideas on which parameters I should tweak in order to get a proper globe?
If it helps, I can post the relevant parts of the code. Just ask which part and I will update the question.
Thanks a lot in advance.
Update: This is what I am using for Sphere rendering:
glEnableClientState(GL_VERTEX_ARRAY);
glPolygonOffset(-1.0f, -1.0f);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
int x, y;
GLfloat curR, curG, curB;
curR = curG = curB = 0.15f;
for (y = 0; y < EARTH_LAT_RES; y++) {
    if (y % 10 == 0) {
        glColor4f(curR, curG, curB, 1.0f);
        curR = curR == 0.15f ? 0.6f : 0.15f;
        curB = curB == 0.15f ? 0.6f : 0.15f;
    }
    for (x = 0; x < EARTH_LON_RES; x++) {
        Vertex3D vs[4];
        vs[1] = vertices[x][y];
        vs[0] = vertices[x][y+1];
        vs[3] = vertices[x+1][y];
        vs[2] = vertices[x+1][y+1];
        glVertexPointer(3, GL_FLOAT, 0, vs);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }
}
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDisable(GL_POLYGON_OFFSET_FILL);
glDisableClientState(GL_VERTEX_ARRAY);
This is what I am using to render the border lines:
// vxp is a data structure with vertex arrays that represent
// border lines
int i;
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnableClientState(GL_VERTEX_ARRAY);
for (i = 0; i < vxp->nFeatures; i++) {
    glVertexPointer(3, GL_FLOAT, 0, vxp->pFeatures[i].pVerts);
    glDrawArrays(GL_LINE_STRIP, 0, vxp->pFeatures[i].nVerts);
}
glDisableClientState(GL_VERTEX_ARRAY);
glDisable(GL_BLEND);
These are the settings I am using before rendering any of the objects:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glEnable(GL_DEPTH_TEST); /* enable depth testing; required for z-buffer */
glEnable(GL_CULL_FACE); /* enable polygon face culling */
glCullFace(GL_BACK);
glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
glFrustumf (-1.0, 1.0, -1.5, 1.5, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
The obvious way, if it doesn't obstruct the rest of your code, is to draw the sphere as a solid but invisible object to prime the depth buffer, then let the depth test figure out which of the lines are visible. You can use glPolygonOffset to add an implementation-specific 'small amount' to the values used for depth calculations, so you can avoid depth-buffer fighting. So it'd be something like:
// add a small bit of offset, so that lines that should be visible aren't
// clipped due to depth rounding errors; note that ES allows GL_POLYGON_OFFSET_FILL
// but not GL_POLYGON_OFFSET_LINE, so we're going to push the polygons back a bit
// in terms of values written to the depth buffer, rather than the more normal
// approach of pulling the lines forward a bit
glPolygonOffset(-1.0, -1.0);
glEnable(GL_POLYGON_OFFSET_FILL);
// disable writes to the colour buffer
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
drawSolidPolygonalSphere();
// enable writing again
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
// disable the offset
glDisable(GL_POLYGON_OFFSET_FILL);
drawVectorMap();
So that'll leave values in your depth buffer as though the globe were solid. If that's not acceptable, the only alternative I can think of is to do the visibility calculations on the CPU. You can use glGet to read the current view matrix, determine the normal at each vertex directly from the way you map them to the sphere (it's just their position relative to the centre), then draw any line for which at least one vertex has a negative dot product between the vector from the camera to the point and the normal.
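A minimal sketch of that CPU-side test (plain Python rather than the GL-side C, with hypothetical vertex data; it assumes an orthographic camera looking down the negative z-axis, so the camera-to-point direction is effectively (0, 0, -1)):

```python
# CPU-side visibility test for line vertices on a sphere centred at the origin.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def vertex_visible(vertex, view_dir=(0.0, 0.0, -1.0)):
    # On a sphere centred at the origin, the normal is just the vertex
    # position (its length doesn't matter for the sign of the dot product).
    normal = vertex
    return dot(view_dir, normal) < 0.0

def line_visible(line):
    # Draw the line if at least one vertex faces the camera.
    return any(vertex_visible(v) for v in line)

front = (0.0, 0.0, 1.0)   # facing the camera
back = (0.0, 0.0, -1.0)   # far side of the globe
print(line_visible([front, back]))  # visible: the front vertex faces us
print(line_visible([back]))         # hidden: every vertex faces away
```

With a perspective camera you would use the normalized vector from the eye position to each vertex instead of a fixed view direction, but the sign test is the same.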
Related
I am writing a Metal shader.
All I want is to access the current color of the fragment.
For example, at the end of my fragment shader, if I put
return currentColor + 0.1
the result should be the screen fading from black to white at the frame rate.
This is a basic program that draws a triangle strip that fills the screen. The ultimate aim is to write a path tracer inside the shader; I have done this with OpenGL + GLSL.
I am having trouble with the buffers. I thought an easy solution would be to just pass the current output color back into the shader and average it in there.
These are the shaders:
#include <metal_stdlib>
using namespace metal;

#import "AAPLShaderTypes.h"

vertex float4 vertexShader(uint vertexID [[vertex_id]],
                           constant AAPLVertex *vertices [[buffer(AAPLVertexInputIndexVertices)]],
                           constant vector_uint2 *viewportSizePointer [[buffer(AAPLVertexInputIndexViewportSize)]],
                           constant vector_float4 *seeds)
{
    float4 clipSpacePosition = vector_float4(0.0, 0.0, 0.0, 1.0);
    float2 pixelSpacePosition = vertices[vertexID].position.xy;
    vector_float2 viewportSize = vector_float2(*viewportSizePointer);
    clipSpacePosition.xy = pixelSpacePosition / (viewportSize / 2.0);
    return clipSpacePosition;
}

fragment float4 fragmentShader()
{
    // I want to do this
    // return currentColor + 0.1;
    return float4(1.0, 1.0, 1.0, 1.0);
}
No worries, the answer was staring me in the face the whole time.
Pass the current color into the fragment shader as an argument declared float4 color0 [[color(0)]],
with these two options set on the render pass descriptor:
renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreActionStore;
renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionLoad;
My problem was not reading the color, it was calculating the average.
In my case I could not do a regular average, as in add up all the colors so far and divide by the number of samples.
What I actually needed to do was calculate a cumulative moving average:
https://en.wikipedia.org/wiki/Moving_average#Cumulative_moving_average
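As a sketch (plain Python rather than Metal), the cumulative moving average folds each new sample into the running value without storing the whole history, which is what makes it usable inside a shader:

```python
# Cumulative moving average: after n samples, cma holds their mean.
# Recurrence: cma_{n+1} = cma_n + (x_{n+1} - cma_n) / (n + 1)

def update_cma(cma, n, sample):
    """Fold one new sample into the running average of n samples."""
    return cma + (sample - cma) / (n + 1)

samples = [0.25, 0.75, 0.5, 0.5]
cma = 0.0
for n, s in enumerate(samples):
    cma = update_cma(cma, n, s)

print(cma)  # 0.5, the same as sum(samples) / len(samples)
```

In the shader, color0 plays the role of cma and the sample count n would come in via a uniform that the app increments each frame.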
I have a terrain in OpenGL. I want to dynamically change the spacing between points.
But once the vertex data has been sent to the vertex buffer object, I cannot modify anything.
The only thing I can do is delete the VBO and create a replacement VBO with the new position of each point.
Is there a better way to do this?
As mentioned in the comments, it sounds like you want glBufferSubData.
If you plan to modify the data often, first set up your VBO's initial state:
float[] positions = { 0, 0, 0, 0, 0, 0 };
int vbo = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, positions, GL_DYNAMIC_DRAW);
Then later, say you want to change the last two values to 1, you would do this:
float[] update = { 1, 1 };
int offset = 4; // index of the first float to overwrite
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER, Float.BYTES * offset, update);
Check out the docs.gl page on glBufferSubData for more information.
I'm starting to learn OpenGL and have run into a problem with my texture. I have a clean texture in PNG format, which I map onto a quad. After testing I found some strange lines.
How can I remove those lines?
Image with bug
Draw scene:
GL.BindTexture(TextureTarget.Texture2D, TextureId);
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
GL.Enable(EnableCap.Blend);
GL.Begin(BeginMode.Quads);
GL.TexCoord2(new Vector2(0, 0)); GL.Vertex2(new Vector2(0, 0));
GL.TexCoord2(new Vector2(0.125F, 0)); GL.Vertex2(new Vector2(size.Width, 0));
GL.TexCoord2(new Vector2(0.125F, -1)); GL.Vertex2(new Vector2(size.Width, size.Height));
GL.TexCoord2(new Vector2(0, -1)); GL.Vertex2(new Vector2(0, size.Height));
GL.End();
GL.Disable(EnableCap.Blend);
Register texture:
Bitmap bitmap = new Bitmap(path);
int texture = 0;
GL.Enable(EnableCap.Texture2D);
GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest);
GL.GenTextures(1, out texture);
GL.BindTexture(TextureTarget.Texture2D, texture);
GL.TexEnv(TextureEnvTarget.TextureEnv, TextureEnvParameter.TextureEnvMode, (float)TextureEnvMode.Modulate);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
BitmapData data = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height),
ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0,
OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);
bitmap.UnlockBits(data);
The problem seems to be that the texture gets repeated. If a texture coordinate is not in the range 0 to 1, OpenGL "takes" the color of the "opposite side". The artifacts then appear due to calculation imprecision.
You might try setting the two texture parameters
TextureParameterName.TextureWrapS
TextureParameterName.TextureWrapT
to
TextureWrapMode.ClampToEdge
Using GL.TexParameter()
Also consider replacing the -1 texture coordinates with 1.
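To see why, here is a small sketch (plain Python, not OpenTK) of how the two wrap modes fold a texture coordinate into the 0..1 range. With GL_REPEAT, a coordinate like -1.0000001 produced by rounding error samples near the opposite edge of the texture, while clamp-to-edge pins it in place:

```python
import math

# Simplified models of the texture-coordinate wrap modes.

def repeat(t):
    # GL_REPEAT keeps only the fractional part, so a value just past an
    # integer boundary jumps to the other side of the texture.
    return t - math.floor(t)

def clamp_to_edge(t):
    # GL_CLAMP_TO_EDGE pins out-of-range values to the nearest edge.
    return min(max(t, 0.0), 1.0)

t = -1.0000001  # the intended -1, plus a little rounding error
print(repeat(t))         # lands near 1.0, so the opposite edge bleeds in
print(clamp_to_edge(t))  # stays at 0.0
```

This is a simplification of the real sampling rules (clamp-to-edge actually clamps the sample location by half a texel), but it shows where the stray lines come from.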
I am trying to initialize an SKPhysicsBody with a polygon from a CGPath.
It is meant to look like this:
My CGPath is configured like this:
- (CGPathRef)GetStarPath {
    // Draw Object 1
    {
        // Create Path
        CGMutablePathRef path = CGPathCreateMutable();
        CGPoint pos = CGPointMake(177, 184.42); // Center position
        CGAffineTransform trans = CGAffineTransformMake(1, 0, 0, 1, pos.x, pos.y); // Transform of object
        { // SubPath 0
            CGFloat d[] = {-3.0518e-05,-14.924,-3.0518e-05,-14.924,4.8492,-5.0988,4.8492,-5.0988, 4.8492,-5.0988,4.8492,-5.0988,15.692,-3.5232,15.692,-3.5232, 15.692,-3.5232,15.692,-3.5232,7.8462,4.125,7.8462,4.125, 7.8462,4.125,7.8462,4.125,9.6983,14.924,9.6983,14.924, 9.6983,14.924,9.6983,14.924,-9.1553e-05,9.8256,-9.1553e-05,9.8256, -9.1553e-05,9.8256,-9.1553e-05,9.8256,-9.6986,14.924,-9.6986,14.924, -9.6986,14.924,-9.6986,14.924,-7.8463,4.1249,-7.8463,4.1249, -7.8463,4.1249,-7.8463,4.1249,-15.692,-3.5234,-15.692,-3.5234, -15.692,-3.5234,-15.692,-3.5234,-4.8492,-5.0989,-4.8492,-5.0989, -4.8492,-5.0989,-4.8492,-5.0989,-3.0518e-05,-14.924,-3.0518e-05,-14.924 };
            CGPathMoveToPoint(path, &trans, d[0], d[1]);
            for (int i = 0; i < 10; i++) {
                CGPathAddCurveToPoint(path, &trans, d[i*8+2], d[i*8+3], d[i*8+4], d[i*8+5], d[i*8+6], d[i*8+7]);
            }
            CGPathCloseSubpath(path);
        }
        return path;
    }
}
But Xcode throws a weird exception:
Assertion failed: (edge.LengthSquared() > 1.19209290e-7F *
1.19209290e-7F), function Set, file /SourceCache/PhysicsKit/PhysicsKit-4.6/PhysicsKit/Box2D/Collision/Shapes/b2PolygonShape.cpp,
line 176.
(exception breakpoint stops at [SKPhysicsBody bodyWithPolygonFromPath:star_path];)
Any help appreciated.
First of all, the underlying Box2D error message states that one of your polygon's edges is simply too short for it to handle.
Second, creating a star-shaped body will result in unexpected behaviour: as it states in the documentation, bodyWithPolygonFromPath: will only accept convex paths for polygons (no angles over 180 degrees inside the polygon; your star has 5 of them). The path should also have no self-intersections, and the winding is expected to be counter-clockwise.
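As a quick sanity check before handing a path to bodyWithPolygonFromPath:, you can test convexity and winding from the cross products of consecutive edges. This is a plain Python sketch (the point lists are hypothetical, not the star path from the question):

```python
# A polygon is convex with counter-clockwise winding when the z-component of
# the cross product of every pair of consecutive edges is non-negative,
# and positive for at least one pair.

def is_convex_ccw(points):
    n = len(points)
    if n < 3:
        return False
    positive = False
    for i in range(n):
        ax, ay = points[i]
        bx, by = points[(i + 1) % n]
        cx, cy = points[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross < 0:
            return False  # a right turn: concave vertex or clockwise winding
        if cross > 0:
            positive = True
    return positive

square_ccw = [(0, 0), (1, 0), (1, 1), (0, 1)]
star = [(0, 3), (1, 1), (3, 1), (1.5, 0), (2, -2),
        (0, -1), (-2, -2), (-1.5, 0), (-3, 1), (-1, 1)]
print(is_convex_ccw(square_ccw))  # True
print(is_convex_ccw(star))        # False: the inner vertices turn the wrong way
```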
If the stars are small enough, you can try using a circular physics body (bodyWithCircleOfRadius:) underneath as an approximation. If you insist on having a star object, you can try adding several physics bodies as children to a single node to have a star-shaped body: one pentagon and 5 triangles attached to it.
I'm trying to gradually fill a circle (transparent other than the outline of the circle) in an ImageView.
I have the code working:
public void setPercentage(int p) {
    if (this.percentage != p) {
        this.percentage = p;
        this.invalidate();
    }
}

@Override
public void onDraw(Canvas canvas) {
    Canvas tempCanvas;
    Paint paint;
    Bitmap bmCircle = null;
    if (this.getWidth() == 0 || this.getHeight() == 0)
        return; // nothing to do
    mergedLayersBitmap = Bitmap.createBitmap(this.getWidth(), this.getHeight(), Bitmap.Config.ARGB_8888);
    tempCanvas = new Canvas(mergedLayersBitmap);
    paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setStyle(Paint.Style.FILL_AND_STROKE);
    paint.setFilterBitmap(false);
    bmCircle = drawCircle(this.getWidth(), this.getHeight());
    tempCanvas.drawBitmap(bmCircle, 0, 0, paint);
    paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));
    tempCanvas.clipRect(0, 0, this.getWidth(), (int) FloatMath.floor(this.getHeight() - this.getHeight() * (percentage / 100)));
    tempCanvas.drawColor(0xFF660000, PorterDuff.Mode.CLEAR);
    canvas.drawBitmap(mergedLayersBitmap, null, new RectF(0, 0, this.getWidth(), this.getHeight()), new Paint());
    canvas.drawBitmap(mergedLayersBitmap, 0, 0, new Paint());
}

static Bitmap drawCircle(int w, int h) {
    Bitmap bm = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    Canvas c = new Canvas(bm);
    Paint p = new Paint(Paint.ANTI_ALIAS_FLAG);
    p.setColor(drawColor);
    c.drawOval(new RectF(0, 0, w, h), p);
    return bm;
}
It kind of works. However, I have two issues: I run out of memory quickly and the GC goes crazy. How can I use the least amount of memory for this operation?
I know I shouldn't be instantiating objects in onDraw, but I'm not sure where else to create them. Thank you.
Pseudocode would look something like this:
for each pixel inside CircleBitmap {
    if (pixel.y < Yboundary && pixelIsInCircle(pixel.x, pixel.y)) {
        CircleBitmap.setPixel(x, y, Color.rgb(45, 127, 0));
    }
}
That may be slow, but it would work, and the smaller the circle, the faster it would go.
You just need the basics: the bitmap width and height (for example 256x256) and the circle's radius; to make things easy, make the circle centered at (128, 128). Then, as you go pixel by pixel, check the pixel's X and Y to see if it falls inside the circle and below the Y limit line.
Then just use:
CircleBitmap.setPixel(x, y, Color.rgb(45, 127, 0));
Edit: to speed things up, don't even bother looking at the pixels above the Y limit.
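The loop above can be sketched like this (plain Python standing in for the Android Bitmap calls; pixelIsInCircle becomes a squared-distance test, and y grows downward as in Android bitmaps, so "below the limit" means y >= the boundary):

```python
# Per-pixel bottom-up circle fill, with a set of coordinates standing in
# for an Android Bitmap. The circle is centred in a 256x256 grid.

WIDTH, HEIGHT = 256, 256
CX, CY, RADIUS = 128, 128, 100

def pixel_in_circle(x, y):
    return (x - CX) ** 2 + (y - CY) ** 2 <= RADIUS ** 2

def fill_below(y_boundary):
    """Collect every circle pixel at or below y_boundary (y grows downward)."""
    filled = set()
    # Per the edit above, skip the rows above the limit entirely.
    for y in range(y_boundary, HEIGHT):
        for x in range(WIDTH):
            if pixel_in_circle(x, y):
                filled.add((x, y))  # setPixel(x, y, color) in the real code
    return filled

bottom_half = fill_below(128)
print((128, 200) in bottom_half)  # inside the circle, below the boundary
print((128, 50) in bottom_half)   # above the boundary: untouched
```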
In case you want to see another (perhaps cleaner) solution, look at this link: filling a circle gradually from bottom to top android