OpenTK C#: artifact on the texture

I'm starting to learn OpenGL and ran into a problem with my texture. I have a clean PNG texture which I map onto a quad. After testing I found some strange lines.
How can I remove those lines?
Image with bug
Draw scene:
GL.BindTexture(TextureTarget.Texture2D, TextureId);
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
GL.Enable(EnableCap.Blend);
GL.Begin(BeginMode.Quads);
GL.TexCoord2(new Vector2(0, 0)); GL.Vertex2(new Vector2(0, 0));
GL.TexCoord2(new Vector2(0.125F, 0)); GL.Vertex2(new Vector2(size.Width, 0));
GL.TexCoord2(new Vector2(0.125F, -1)); GL.Vertex2(new Vector2(size.Width, size.Height));
GL.TexCoord2(new Vector2(0, -1)); GL.Vertex2(new Vector2(0, size.Height));
GL.End();
GL.Disable(EnableCap.Blend);
Register texture:
Bitmap bitmap = new Bitmap(path);
int texture = 0;
GL.Enable(EnableCap.Texture2D);
GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest);
GL.GenTextures(1, out texture);
GL.BindTexture(TextureTarget.Texture2D, texture);
GL.TexEnv(TextureEnvTarget.TextureEnv, TextureEnvParameter.TextureEnvMode, (float)TextureEnvMode.Modulate);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
BitmapData data = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height),
ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0,
OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);
bitmap.UnlockBits(data);

The problem seems to be that the texture gets repeated: the default wrap mode is GL_REPEAT, so if a texture coordinate is not in the range 0 to 1, OpenGL "takes" the color from the opposite side of the texture. The artifacts then appear due to calculation imprecision at the edges.
You might try setting the two texture parameters
TextureParameterName.TextureWrapS
TextureParameterName.TextureWrapT
to
TextureWrapMode.ClampToEdge
using GL.TexParameter().
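For example, a minimal sketch against the OpenTK enums used above, placed with the other GL.TexParameter calls during texture registration:
// clamp coordinates outside [0, 1] to the edge texels instead of repeating
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)TextureWrapMode.ClampToEdge);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)TextureWrapMode.ClampToEdge);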
Also consider replacing the -1 with 1, so the coordinates stay in the 0 to 1 range.

Related

<Vulkan> Use rendered vkImage as Texture

I want to use a vkImage rendered in a previous render pass as a texture for a composite operation in a fragment shader. From here I learned that vkCmdPipelineBarrier is used to wait for the GPU to finish a rendering operation, and I wrote this code. It works well on Snapdragon devices, but not on a Mali-G52: the write-after-write error partly happens. Is this code not enough? Any suggestions?
vkCmdEndRenderPass(cb);
vkCmdBeginRenderPass(cb, &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE);
VkViewport viewport = vks::initializers::viewport((float)offscreenPass.width, (float)offscreenPass.height, 0.0f, 1.0f);
vkCmdSetViewport(cb, 0, 1, &viewport);
VkRect2D scissor = vks::initializers::rect2D(offscreenPass.width, offscreenPass.height, 0, 0);
vkCmdSetScissor(cb, 0, 1, &scissor);
// https://github.com/KhronosGroup/Vulkan-Samples/blob/master/samples/performance/pipeline_barriers/pipeline_barriers.cpp
VkImageMemoryBarrier imageMemoryBarrier = vks::initializers::imageMemoryBarrier();
imageMemoryBarrier.oldLayout = VK_IMAGE_LAYOUT_UNDEFINED;
imageMemoryBarrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
imageMemoryBarrier.srcAccessMask = 0;
imageMemoryBarrier.dstAccessMask = 0;
imageMemoryBarrier.image = offscreenPass.color[drawframe].image;
imageMemoryBarrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
imageMemoryBarrier.subresourceRange.baseMipLevel = 0;
imageMemoryBarrier.subresourceRange.levelCount = 1;
imageMemoryBarrier.subresourceRange.baseArrayLayer = 0;
imageMemoryBarrier.subresourceRange.layerCount = 1;
vkCmdPipelineBarrier(
cb,
VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
0, 0, nullptr, 0, nullptr, 1, &imageMemoryBarrier);
imageMemoryBarrier.oldLayout = VK_IMAGE_LAYOUT_UNDEFINED;
imageMemoryBarrier.newLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL;
imageMemoryBarrier.image = offscreenPass.depth.image;
imageMemoryBarrier.srcAccessMask = 0;
imageMemoryBarrier.dstAccessMask = 0;
vkCmdPipelineBarrier(
cb,
VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
0, 0, nullptr, 0, nullptr, 1, &imageMemoryBarrier);
I have tried every pattern written here.
If you want to synchronize render passes then your pipeline barrier must be outside of the render pass in the command stream. I.e. it must be after the vkCmdEndRenderPass() of the first pass, and before the vkCmdBeginRenderPass() of the second pass. Pipeline barriers issued inside a render pass, as you are currently doing, are used for synchronization only within the current subpass.
Also, try to avoid:
srcStage=VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT
dstStage=VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT
... for pipeline barriers when you only consume the output of the first pass as a fragment shader input in the second. This is overly conservative and needlessly serializes execution of the geometry processing too. In this case, you should use:
srcStage=VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT
dstStage=VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT
... which allows the non-dependent vertex shading and binning for the second pass to run in parallel to the first pass.
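Putting both points together, a minimal sketch of how the corrected color barrier might look, placed between the two passes (assumptions: the first pass leaves the image in VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, and VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT is used as the precise producer stage, with access masks that make the color writes visible to sampling):
vkCmdEndRenderPass(cb);
// the barrier goes here, outside any render pass
VkImageMemoryBarrier barrier = vks::initializers::imageMemoryBarrier();
barrier.oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL; // not UNDEFINED, which may discard the contents
barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
barrier.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT; // make the attachment writes available
barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT; // ... and visible to shader reads
barrier.image = offscreenPass.color[drawframe].image;
barrier.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };
vkCmdPipelineBarrier(
cb,
VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, // producer: color attachment writes
VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, // consumer: fragment shader sampling
0, 0, nullptr, 0, nullptr, 1, &barrier);
vkCmdBeginRenderPass(cb, &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE);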
Self-solved.
The difference in the default precision of sampler2D between Adreno and Mali causes this issue. I can read the correct data using "precision highp sampler2D".
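For reference, a sketch of where that statement goes, at the top of the fragment shader (the uniform name is just an example):
precision highp float;
precision highp sampler2D; // raise the default sampler2D precision, which differs between Adreno and Mali
uniform sampler2D compositeInput; // hypothetical name for the sampled pass output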

Android: how to scale down a bitmap and keep its aspect ratio

I'm using a 3rd-party library which returns a bitmap. In the app I would like to scale down the bitmap.
static public Bitmap getResizedBitmap(Bitmap bm, int newWidth, int newHeight) {
int width = bm.getWidth();
int height = bm.getHeight();
float scaleWidth = ((float) newWidth) / width;
float scaleHeight = ((float) newHeight) / height;
Matrix matrix = new Matrix();
matrix.postScale(scaleWidth, scaleHeight);
Bitmap resizedBitmap = Bitmap.createBitmap(bm, 0, 0, width, height,
matrix, false);
return resizedBitmap;
}
===
Bitmap doScaleDownBitmap() {
Bitmap bitmap = libGetBitmap(); // got the bitmap from the lib
int width = bitmap.getWidth();
int height = bitmap.getHeight();
if (width > 320 || height > 160) {
bitmap = getResizedBitmap(bitmap, 320, 160);
}
System.out.println("+++ width;"+width+", height:"+height+ ", return bmp.w :"+bitmap.getWidth()+", bmp.h:"+bitmap.getHeight());
return bitmap;
}
the log for a test bitmap (348x96):
+++ width;348, height:96, return bmp.w :320, bmp.h:160
It looks like the resized bitmap does not scale properly; shouldn't it be 320 x 88 to maintain the aspect ratio (96 × 320 / 348 ≈ 88)?
(Instead it went from (348x96) ==> (320x160).)
I saw this Android sample:
public static Bitmap decodeSampledBitmapFromResource(Resources res, int resId,
int reqWidth, int reqHeight) {
// First decode with inJustDecodeBounds=true to check dimensions
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
BitmapFactory.decodeResource(res, resId, options);
// Calculate inSampleSize
options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);
// Decode bitmap with inSampleSize set
options.inJustDecodeBounds = false;
return BitmapFactory.decodeResource(res, resId, options);
}
How do I apply it if I already have the bitmap?
Or what is the correct way to scale down a bitmap?
EDIT:
This one keeps the aspect ratio, and one of the desired dimensions (either width or height) is used for the generated bitmap; basically CENTER_FIT.
However, it does not generate the bitmap with both the desired width and height.
E.g. asking for a new bitmap of (w:240 x h:120) from a source bitmap of (w:300 x h:600) maps it to (w:60 x h:120).
I guess it needs an extra operation on top of this new bitmap if I want it to be (w:240 x h:120).
Is there a simpler way to do it?
public static Bitmap scaleBitmapAndKeepRation(Bitmap srcBmp, int dstWidth, int dstHeight) {
Matrix matrix = new Matrix();
matrix.setRectToRect(new RectF(0, 0, srcBmp.getWidth(), srcBmp.getHeight()),
new RectF(0, 0, dstWidth, dstHeight),
Matrix.ScaleToFit.CENTER);
Bitmap scaledBitmap = Bitmap.createBitmap(srcBmp, 0, 0, srcBmp.getWidth(), srcBmp.getHeight(), matrix, true);
return scaledBitmap;
}
When you scale down a bitmap, if the width and height are not divisible by the scale factor, you should expect a tiny change in the ratio. If you don't want that, first crop the image so its dimensions are divisible, then scale; see the sketch after the snippet below.
float scale = 0.5f;
scaledBitmap = Bitmap.createScaledBitmap(bitmap,
(int) (bitmap.getWidth() * scale),
(int) (bitmap.getHeight() * scale),
true); // bilinear filtering
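A minimal sketch of the crop-then-scale idea, assuming scale = 0.5f (so both dimensions are first made even):
// crop to even dimensions so the 0.5 scale divides them exactly
Bitmap cropped = Bitmap.createBitmap(bitmap, 0, 0,
bitmap.getWidth() - (bitmap.getWidth() % 2),
bitmap.getHeight() - (bitmap.getHeight() % 2));
Bitmap scaledBitmap = Bitmap.createScaledBitmap(cropped,
cropped.getWidth() / 2, cropped.getHeight() / 2, true);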
I found a way; I am sure there is a better one:
public static Bitmap updated_scaleBitmapAndKeepRation(Bitmap srcBitmap, int targetBmpWidth,
int targetBmpHeight) {
int width = srcBitmap.getWidth();
int height = srcBitmap.getHeight();
if (targetBmpHeight > 0 && targetBmpWidth > 0 && (width != targetBmpWidth || height != targetBmpHeight)) {
// create a canvas with the specified bitmap to draw into.
Bitmap scaledImage = Bitmap.createBitmap(targetBmpWidth, targetBmpHeight, Bitmap.Config.ARGB_4444);
Canvas canvas = new Canvas(scaledImage);
// draw transparent background color
Paint paint = new Paint();
paint.setColor(Color.TRANSPARENT);
paint.setStyle(Paint.Style.FILL);
canvas.drawRect(0, 0, canvas.getWidth(), canvas.getHeight(), paint);
// draw the source bitmap on the canvas and scale the image with center-fit
// (the source image's larger side is mapped to the corresponding desired
// dimension, and the other side is scaled keeping the aspect ratio)
Matrix matrix = new Matrix();
matrix.setRectToRect(new RectF(0, 0, srcBitmap.getWidth(), srcBitmap.getHeight()),
new RectF(0, 0, targetBmpWidth, targetBmpHeight),
Matrix.ScaleToFit.CENTER);
canvas.drawBitmap(srcBitmap, matrix, null);
return scaledImage;
} else {
return srcBitmap;
}
}
The result screenshot:
The 1st image is the source (w:1680 x h:780).
The 2nd is from the scaleBitmapAndKeepRation() in the question, which scales the image but to dimensions (w:60 x h:120), not the desired (w:240 x h:120).
The 3rd does not keep the aspect ratio, although it gets the dimensions right.
The 4th is from updated_scaleBitmapAndKeepRation(), which has the desired dimensions; the image is center-fit and keeps the aspect ratio.

iOS OpenGL ES2: making changes in a vertex buffer object

I have a terrain in OpenGL. I want to dynamically change the spacing between points.
But once the vertex data is sent to the vertex buffer object, I cannot modify anything.
The only thing I can do is delete the VBO and create a replacement VBO with the new position of each point.
Is there a better way to do this?
As mentioned in the comments, it sounds like you want glBufferSubData.
If you plan to modify the data often, first set up your VBO's initial state:
GLfloat positions[] = { 0, 0, 0, 0, 0, 0 };
int numberOfPositions = 6;
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * numberOfPositions, positions, GL_DYNAMIC_DRAW);
Then later, say you want to change the last two values to 1, you would do this:
GLfloat update[] = { 1, 1 };
int offset = 4; // skip the first four floats in the buffer
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(GLfloat) * offset, sizeof(update), update);
Check out the docs.gl page on glBufferSubData for more information.

How to make a simple screenshot method using LWJGL?

So basically I was messing about with LWJGL for a while now, and I came to a sudden stop with annoyances surrounding glReadPixels().
And why it will only read from bottom-left -> top-right.
So I am here to answer my own question since I figured all this stuff out, and I am hoping my discoveries might be of some use to someone else.
As a side-note I am using:
glOrtho(0, WIDTH, 0 , HEIGHT, 1, -1);
So here is my screen-capture code, which can be implemented in any LWJGL application C:
//=========================getScreenImage==================================//
private void screenShot(){
//Creating an RGB array of total pixels
int[] pixels = new int[WIDTH * HEIGHT];
int bindex;
// allocate space for RBG pixels
ByteBuffer fb = ByteBuffer.allocateDirect(WIDTH * HEIGHT * 3);
// grab a copy of the current frame contents as RGB
glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, fb);
BufferedImage imageIn = new BufferedImage(WIDTH, HEIGHT,BufferedImage.TYPE_INT_RGB);
// convert RGB data in ByteBuffer to integer array
for (int i=0; i < pixels.length; i++) {
bindex = i * 3;
// mask each byte with 0xFF to avoid sign extension of negative byte values
pixels[i] =
((fb.get(bindex) & 0xFF) << 16) |
((fb.get(bindex+1) & 0xFF) << 8) |
(fb.get(bindex+2) & 0xFF);
}
//Copy the colored pixels into the buffered image
imageIn.setRGB(0, 0, WIDTH, HEIGHT, pixels, 0 , WIDTH);
//Creating the vertical flip transformation (glReadPixels rows run bottom-to-top)
AffineTransform at = AffineTransform.getScaleInstance(1, -1);
at.translate(0, -imageIn.getHeight(null));
//Applying transformation
AffineTransformOp opRotated = new AffineTransformOp(at, AffineTransformOp.TYPE_BILINEAR);
BufferedImage imageOut = opRotated.filter(imageIn, null);
try {//Try to create the image, else show exception.
ImageIO.write(imageOut, format , fileLoc);
}
catch (Exception e) {
System.out.println("ScreenShot() exception: " +e);
}
}
I hope this has been useful.
For any questions or comments on the code, ask/suggest as you like. C:
Hugs,
Rose.
Sorry for the late reply, but this is for anybody still looking for a solution.
public static void saveScreenshot() throws Exception {
System.out.println("Saving screenshot!");
Rectangle screenRect = new Rectangle(Display.getX(), Display.getY(), Display.getWidth(), Display.getHeight());
BufferedImage capture = new Robot().createScreenCapture(screenRect);
ImageIO.write(capture, "png", new File("doc/saved/screenshot.png"));
}

Drawing a Globe with OpenGL ES

I am trying to render a globe (sphere with maps on it) with OpenGL ES 1.1 on iOS.
I am able to draw the sphere and the map borders, but with one problem: lines that are not facing front in my view are also being drawn on the screen. Like this:
In the picture, you can see that America renders just fine, but you can also see Australia rendered on the back. It is not supposed to be shown, because it's on the back of the globe, and the BLACK and PURPLE stripes on the globe are not transparent.
Any ideas on what parameters I should be tweaking in order to get a proper globe?
If it helps, I can post the relevant parts of the code. Just ask which part and I will update the question.
Thanks a lot in advance.
Update: This is what I am using for Sphere rendering:
glEnableClientState(GL_VERTEX_ARRAY);
glPolygonOffset(-1.0f, -1.0f);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
int x, y;
GLfloat curR, curG, curB;
curR = curG = curB = 0.15f;
for (y=0; y<EARTH_LAT_RES; y++) {
if (y%10 == 0) {
glColor4f(curR, curG, curB, 1.0f);
curR = curR == 0.15f ? 0.6f : 0.15f;
curB = curB == 0.15f ? 0.6f : 0.15f;
}
for (x=0; x<EARTH_LON_RES; x++) {
Vertex3D vs[4];
vs[1] = vertices[x][y];
vs[0] = vertices[x][y+1];
vs[3] = vertices[x+1][y];
vs[2] = vertices[x+1][y+1];
glVertexPointer(3, GL_FLOAT, 0, vs);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
}
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDisable(GL_POLYGON_OFFSET_FILL);
glDisableClientState(GL_VERTEX_ARRAY);
This is what I am using to render the border lines:
// vxp is a data structure with vertex arrays that represent
// border lines
int i;
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnableClientState(GL_VERTEX_ARRAY);
for (i=0; i<vxp->nFeatures; i++)
{
glVertexPointer(3, GL_FLOAT, 0, vxp->pFeatures[i].pVerts);
glDrawArrays(GL_LINE_STRIP, 0, vxp->pFeatures[i].nVerts);
}
glDisableClientState(GL_VERTEX_ARRAY);
glDisable(GL_BLEND);
These are the settings I am using before rendering any of the objects:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glEnable(GL_DEPTH_TEST); /* enable depth testing; required for z-buffer */
glEnable(GL_CULL_FACE); /* enable polygon face culling */
glCullFace(GL_BACK);
glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
glFrustumf (-1.0, 1.0, -1.5, 1.5, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
The obvious way, if it doesn't obstruct the rest of your code, is to draw the sphere as a solid object in an invisible way to prime the depth buffer, then let the depth test figure out which of the lines is visible. You can use glPolygonOffset to add an implementation-specific 'small amount' to values that are used for depth calculations, so you can avoid depth-buffer fighting. So it'd be something like:
// add a small bit of offset, so that lines that should be visible aren't
// clipped due to depth rounding errors; note that ES allows GL_POLYGON_OFFSET_FILL
// but not GL_POLYGON_OFFSET_LINE, so we're going to push the polygons back a bit
// in terms of values written to the depth buffer, rather than the more normal
// approach of pulling the lines forward a bit
glPolygonOffset(-1.0, -1.0);
glEnable(GL_POLYGON_OFFSET_FILL);
// disable writes to the colour buffer
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
drawSolidPolygonalSphere();
// enable writing again
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
// disable the offset
glDisable(GL_POLYGON_OFFSET_FILL);
drawVectorMap();
So that'll leave values in your depth buffer as though the globe were solid. If that's not acceptable, then the only alternative I can think of is to do visibility calculations on the CPU. You can use glGet to get the current view matrix, determine the normal at each vertex directly from the way you map them to the sphere (it'll just be their location relative to the centre), then draw any line for which at least one vertex returns a negative value for the dot product of the vector from the camera to the point and the normal.