How to move an image in a GLControl in OpenTK

I am new to OpenTK and I want to move an image from left to right smoothly. How can I do this?
I have tried moving the viewport, but the rendering does not look smooth.
int cnt = 1920;

public void DrawImage(int image)
{
    GL.Viewport(new Rectangle(cnt, 0, ScreenWidth, ScreenHeight));
    cnt--;
    GL.MatrixMode(MatrixMode.Projection);
    GL.PushMatrix();
    GL.LoadIdentity();
    //GL.Ortho(0, 1920, 0, 1080, 0, 1);
    GL.MatrixMode(MatrixMode.Modelview);
    GL.PushMatrix();
    GL.LoadIdentity();
    GL.Disable(EnableCap.Lighting);
    GL.Enable(EnableCap.Texture2D);
    GL.ActiveTexture(TextureUnit.Texture0);
    GL.BindTexture(TextureTarget.Texture2D, image);
    RunShaders();
    GL.Disable(EnableCap.Texture2D);
    GL.PopMatrix();
    GL.MatrixMode(MatrixMode.Projection);
    GL.PopMatrix();
    GL.MatrixMode(MatrixMode.Modelview);
    //ErrorCode ec = GL.GetError();
    //if (ec != 0)
    //    System.Console.WriteLine(ec.ToString());
    //Console.Read();
    glControl1.SwapBuffers();
}

Related

Android: how to scale down a bitmap and keep its aspect ratio

I am using a 3rd-party library which returns a bitmap. In the app I would like to scale down the bitmap.
static public Bitmap getResizedBitmap(Bitmap bm, int newWidth, int newHeight) {
    int width = bm.getWidth();
    int height = bm.getHeight();
    float scaleWidth = ((float) newWidth) / width;
    float scaleHeight = ((float) newHeight) / height;
    Matrix matrix = new Matrix();
    matrix.postScale(scaleWidth, scaleHeight);
    Bitmap resizedBitmap = Bitmap.createBitmap(bm, 0, 0, width, height, matrix, false);
    return resizedBitmap;
}
===
Bitmap doScaleDownBitmap() {
    Bitmap bitmap = libGetBitmap(); // got the bitmap from the lib
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    if (width > 320 || height > 160) {
        bitmap = getResizedBitmap(bitmap, 320, 160);
    }
    System.out.println("+++ width;" + width + ", height:" + height + ", return bmp.w :" + bitmap.getWidth() + ", bmp.h:" + bitmap.getHeight());
    return bitmap;
}
The log for a test bitmap (348x96):
+++ width;348, height:96, return bmp.w :320, bmp.h:160
It looks like the resized bitmap does not scale proportionally. Shouldn't it be 320 x 88 to maintain the aspect ratio?
(It went from 348x96 to 320x160.)
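For reference, a minimal sketch of the expected arithmetic (the helper name is made up for illustration): fix the width at 320 and derive the height from the same scale factor, which gives 96 * 320 / 348 ≈ 88.
// Illustrative helper (not from the library): scale so the width fits,
// and compute the height from the same factor to keep the aspect ratio.
static Bitmap scaleToWidthKeepingAspect(Bitmap src, int dstWidth) {
    float factor = (float) dstWidth / src.getWidth();       // 320 / 348 ≈ 0.92
    int dstHeight = Math.round(src.getHeight() * factor);   // 96 * 0.92 ≈ 88
    return Bitmap.createScaledBitmap(src, dstWidth, dstHeight, true);
}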
I saw this Android sample:
public static Bitmap decodeSampledBitmapFromResource(Resources res, int resId,
        int reqWidth, int reqHeight) {
    // First decode with inJustDecodeBounds=true to check dimensions
    final BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeResource(res, resId, options);
    // Calculate inSampleSize
    options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);
    // Decode bitmap with inSampleSize set
    options.inJustDecodeBounds = false;
    return BitmapFactory.decodeResource(res, resId, options);
}
How do I apply it if I already have the bitmap?
Or what is the correct way to scale down a bitmap?
EDIT:
This one keeps the aspect ratio, and one of the desired dimensions (either width or height) is used for the generated bitmap, basically CENTER_FIT.
However, it does not generate a bitmap with both the desired width and height.
E.g. if I want a new bitmap of (w:240 x h:120) from a source bitmap of (w:300 x h:600), it maps to (w:60 x h:120).
I guess it needs an extra operation on top of this new bitmap if I want the result to be (w:240 x h:120).
Is there a simpler way to do it?
public static Bitmap scaleBitmapAndKeepRation(Bitmap srcBmp, int dstWidth, int dstHeight) {
    Matrix matrix = new Matrix();
    matrix.setRectToRect(new RectF(0, 0, srcBmp.getWidth(), srcBmp.getHeight()),
            new RectF(0, 0, dstWidth, dstHeight),
            Matrix.ScaleToFit.CENTER);
    Bitmap scaledBitmap = Bitmap.createBitmap(srcBmp, 0, 0, srcBmp.getWidth(), srcBmp.getHeight(), matrix, true);
    return scaledBitmap;
}
When you scale down the bitmap, if the width and height are not divisible by the scale, you should expect a tiny change in ratio. If you don't want that, first crop the image so its dimensions are divisible, and then scale:
float scale = 0.5f;
Bitmap scaledBitmap = Bitmap.createScaledBitmap(bitmap,
        (int) (bitmap.getWidth() * scale),
        (int) (bitmap.getHeight() * scale),
        true); // bilinear filtering
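A minimal sketch of that crop-then-scale idea for the 0.5 scale above (the helper name is made up; the crop uses Bitmap.createBitmap):
// Crop width/height down to the nearest even number so a 0.5 scale is exact,
// then scale; the aspect ratio then does not drift at all.
static Bitmap cropThenHalve(Bitmap src) {
    int croppedW = src.getWidth() - (src.getWidth() % 2);
    int croppedH = src.getHeight() - (src.getHeight() % 2);
    Bitmap cropped = Bitmap.createBitmap(src, 0, 0, croppedW, croppedH);
    return Bitmap.createScaledBitmap(cropped, croppedW / 2, croppedH / 2, true); // bilinear filtering
}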
I found a way; I am sure there is a better one.
public static Bitmap updated_scaleBitmapAndKeepRation(Bitmap srcBitmap, int targetBmpWidth,
        int targetBmpHeight) {
    int width = srcBitmap.getWidth();
    int height = srcBitmap.getHeight();
    if (targetBmpHeight > 0 && targetBmpWidth > 0 && (width != targetBmpWidth || height != targetBmpHeight)) {
        // Create a canvas with the specified bitmap to draw into.
        Bitmap scaledImage = Bitmap.createBitmap(targetBmpWidth, targetBmpHeight, Bitmap.Config.ARGB_4444);
        Canvas canvas = new Canvas(scaledImage);
        // Draw a transparent background color.
        Paint paint = new Paint();
        paint.setColor(Color.TRANSPARENT);
        paint.setStyle(Paint.Style.FILL);
        canvas.drawRect(0, 0, canvas.getWidth(), canvas.getHeight(), paint);
        // Draw the source bitmap on the canvas and scale it with CENTER fit
        // (the source image's larger side is mapped to the corresponding desired
        // dimension, and the other side is scaled keeping the aspect ratio).
        Matrix matrix = new Matrix();
        matrix.setRectToRect(new RectF(0, 0, srcBitmap.getWidth(), srcBitmap.getHeight()),
                new RectF(0, 0, targetBmpWidth, targetBmpHeight),
                Matrix.ScaleToFit.CENTER);
        canvas.drawBitmap(srcBitmap, matrix, null);
        return scaledImage;
    } else {
        return srcBitmap;
    }
}
The result screenshot:
The 1st image is the source (w:1680 x h:780).
The 2nd is from scaleBitmapAndKeepRation() in the question part, which scales the image but to (w:60 x h:120), not the desired dimensions (w:240 x h:120).
The 3rd is the one that does not keep the aspect ratio, although it has the right dimensions.
The 4th is from updated_scaleBitmapAndKeepRation(), which has the desired dimensions; the image is center-fit and keeps the aspect ratio.

How to read an R32G32_SFLOAT image from the GPU in Vulkan

I am able to dump stuff from an R32G32B32A32 image for a screenshot. I would like to read out a pixel from an R32G32_SFLOAT image as well, but the result looks weird.
Below is my working image dump code (no validation errors):
void DumpImageToFile(VkTool::VulkanDevice &device, VkQueue graphics_queue, VkTool::Wrapper::CommandBuffers &command_buffer, VkImage image, uint32_t width, uint32_t height, const char *filename)
{
    auto image_create_info = VkTool::Initializer::GenerateImageCreateInfo(VK_IMAGE_TYPE_2D, VK_FORMAT_R8G8B8A8_UNORM, {width, height, 1},
        VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT, VK_SAMPLE_COUNT_1_BIT);
    VkTool::Wrapper::Image staging_image(device, image_create_info, VK_MEMORY_HEAP_DEVICE_LOCAL_BIT);

    auto buffer_create_info = VkTool::Initializer::GenerateBufferCreateInfo(width * height * 4, VK_BUFFER_USAGE_TRANSFER_DST_BIT);
    VkTool::Wrapper::Buffer staging_buffer(device, buffer_create_info, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT);

    // Copy texture to buffer
    command_buffer.Begin();
    auto image_memory_barrier = VkTool::Initializer::GenerateImageMemoryBarrier(VK_IMAGE_LAYOUT_UNDEFINED, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
        { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 }, staging_image.Get());
    device.vkCmdPipelineBarrier(command_buffer.Get(), VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT, 0,
        0, nullptr, 0, nullptr, 1, &image_memory_barrier);

    image_memory_barrier = VkTool::Initializer::GenerateImageMemoryBarrier(VK_IMAGE_LAYOUT_UNDEFINED, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
        { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 }, image);
    device.vkCmdPipelineBarrier(command_buffer.Get(), VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT, 0,
        0, nullptr, 0, nullptr, 1, &image_memory_barrier);

    // Copy!!
    VkImageBlit region = {};
    region.srcSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 };
    region.srcOffsets[0] = { 0, 0, 0 };
    region.srcOffsets[1] = { static_cast<int32_t>(width), static_cast<int32_t>(height), 1 };
    region.dstSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 };
    region.dstOffsets[0] = { 0, 0, 0 };
    region.dstOffsets[1] = { static_cast<int32_t>(width), static_cast<int32_t>(height), 1 };
    device.vkCmdBlitImage(command_buffer.Get(), image, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, staging_image.Get(), VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &region, VK_FILTER_LINEAR);

    image_memory_barrier = VkTool::Initializer::GenerateImageMemoryBarrier(VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,
        { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 }, image);
    device.vkCmdPipelineBarrier(command_buffer.Get(), VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, 0,
        0, nullptr, 0, nullptr, 1, &image_memory_barrier);

    image_memory_barrier = VkTool::Initializer::GenerateImageMemoryBarrier(VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
        { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 }, staging_image.Get());
    device.vkCmdPipelineBarrier(command_buffer.Get(), VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT, 0,
        0, nullptr, 0, nullptr, 1, &image_memory_barrier);

    auto buffer_image_copy = VkTool::Initializer::GenerateBufferImageCopy({ VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 }, { width, height, 1 });
    device.vkCmdCopyImageToBuffer(command_buffer.Get(), staging_image.Get(), VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, staging_buffer.Get(), 1, &buffer_image_copy);
    command_buffer.End();

    std::vector<VkCommandBuffer> raw_command_buffers = command_buffer.GetAll();
    auto submit_info = VkTool::Initializer::GenerateSubmitInfo(raw_command_buffers);
    VkTool::Wrapper::Fence fence(device);
    device.vkQueueSubmit(graphics_queue, 1, &submit_info, fence.Get());
    fence.Wait();
    fence.Destroy();

    const uint8_t *mapped_address = reinterpret_cast<const uint8_t *>(staging_buffer.MapMemory());
    lodepng::encode(filename, mapped_address, width, height);
    staging_buffer.UnmapMemory();

    staging_image.Destroy();
    staging_buffer.Destroy();
}
Sorry for the ugly self-made wrapper; there was no official wrapper. Basically, it creates a staging image and a staging buffer, first copies from the source image to the staging image with vkCmdBlitImage, then uses vkCmdCopyImageToBuffer and maps the buffer to host memory. This method works on multiple GPUs, and it does not need to worry about padding (I guess, correct me if I am wrong).
However, I have had no luck using this method to read R32G32_SFLOAT. At first I thought it was because of endianness, until I dumped the whole image out.
The image above directly converts R32G32_SFLOAT to R8G8B8A8_UNORM; I know that does not make sense. But even without changing the format, there are still a lot of "holes" in the image and the values are badly wrong.
I am not really sure if it is THE problem, but if I understand your code, you want to dump image into filename.
So you want to read from this image. However, you said that the old layout for this image (not the staging one) is the UNDEFINED layout. The implementation is then free to assume you do not care about the data stored in it. Use the real layout instead (I think it is COLOR_ATTACHMENT or something like that).
Moreover, you are using one staging image and one staging buffer. I do not really understand why you are doing such a thing. Why not simply use the vkCmdCopyImageToBuffer function to copy image straight into staging_buffer?
BTW, with Vulkan, the fact that code works on some GPUs does not mean the code is correct.
Also, I think you must use a memory barrier after your transfer to the buffer that implies the HOST stage and HOST_READ access. In the specification it is written:
Signaling a fence and waiting on the host does not guarantee that the results of memory accesses will be visible to the host, as the access scope of a memory dependency defined by a fence only includes device access. A memory barrier or other memory dependency must be used to guarantee this. See the description of host access types for more information.
This part of your code seems weird:
image_memory_barrier = VkTool::Initializer::GenerateImageMemoryBarrier(VK_IMAGE_LAYOUT_UNDEFINED, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 }, image);
device.vkCmdPipelineBarrier(command_buffer.Get(), VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT, 0, 0, nullptr, 0, nullptr, 1, &image_memory_barrier);
This basically means that after the barrier your source image may not have any data. UNDEFINED value used as a source layout doesn't guarantee that the contents of an image are preserved.

How to make a simple screenshot method using LWJGL?

So basically I have been messing about with LWJGL for a while now, and I came to a sudden stop with annoyances surrounding glReadPixels(), and why it will only read from bottom-left -> top-right.
So I am here to answer my own question since I figured all this stuff out, and I am hoping my discoveries might be of some use to someone else.
As a side-note I am using:
glOrtho(0, WIDTH, 0, HEIGHT, 1, -1);
So here it is, my screen-capture code, which can be implemented in any LWJGL application C:
//=========================getScreenImage==================================//
private void screenShot(){
    //Creating an RGB array of total pixels
    int[] pixels = new int[WIDTH * HEIGHT];
    int bindex;
    // allocate space for RGB pixels
    ByteBuffer fb = ByteBuffer.allocateDirect(WIDTH * HEIGHT * 3);

    // grab a copy of the current frame contents as RGB
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, fb);

    BufferedImage imageIn = new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB);
    // convert RGB data in ByteBuffer to integer array
    // (mask with 0xFF so the signed bytes are not sign-extended)
    for (int i = 0; i < pixels.length; i++) {
        bindex = i * 3;
        pixels[i] =
            ((fb.get(bindex) & 0xFF) << 16) +
            ((fb.get(bindex + 1) & 0xFF) << 8) +
            ((fb.get(bindex + 2) & 0xFF));
    }
    //Copy the colored pixels into the buffered image
    imageIn.setRGB(0, 0, WIDTH, HEIGHT, pixels, 0, WIDTH);

    //Creating the transformation that flips the image vertically
    AffineTransform at = AffineTransform.getScaleInstance(1, -1);
    at.translate(0, -imageIn.getHeight(null));

    //Applying the transformation
    AffineTransformOp opRotated = new AffineTransformOp(at, AffineTransformOp.TYPE_BILINEAR);
    BufferedImage imageOut = opRotated.filter(imageIn, null);

    try {//Try to create the image, else show the exception.
        ImageIO.write(imageOut, format, fileLoc);
    }
    catch (Exception e) {
        System.out.println("ScreenShot() exception: " + e);
    }
}
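If you'd rather skip the AffineTransform step, here is a minimal sketch of flipping the rows while converting (same WIDTH, HEIGHT, fb and imageIn as above; this would replace the conversion loop and the transform):
// glReadPixels() returns the bottom row first, so read the source rows in reverse
// while writing the BufferedImage top-down.
for (int y = 0; y < HEIGHT; y++) {
    for (int x = 0; x < WIDTH; x++) {
        int i = ((HEIGHT - 1 - y) * WIDTH + x) * 3;   // index into the GL pixel buffer
        int r = fb.get(i) & 0xFF;
        int g = fb.get(i + 1) & 0xFF;
        int b = fb.get(i + 2) & 0xFF;
        imageIn.setRGB(x, y, (r << 16) | (g << 8) | b);
    }
}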
I hope this has been useful.
For any questions or comments on the code, ask/suggest as you like. C:
Hugs,
Rose.
Sorry for the late reply, but this is for anybody still looking for a solution.
public static void saveScreenshot() throws Exception {
    System.out.println("Saving screenshot!");
    Rectangle screenRect = new Rectangle(Display.getX(), Display.getY(), Display.getWidth(), Display.getHeight());
    BufferedImage capture = new Robot().createScreenCapture(screenRect);
    ImageIO.write(capture, "png", new File("doc/saved/screenshot.png"));
}

How to draw a texture as background in gtk?

I want to add a texture as the background of a GTK container, is that possible?
What I want is similar to the repeat-x / repeat-y properties in CSS, but that's not supported in GTK yet, so how can I do it without any ugly hacks? Another example is what Nautilus has, where you can change the background.
Thanks :)
PS: sorry for my English.
I did it this way:
private bool draw_background (Cairo.Context cr) {
    int width = this.get_allocated_width ();
    int height = this.get_allocated_height ();

    cr.set_operator (Cairo.Operator.CLEAR);
    cr.paint ();
    cr.set_operator (Cairo.Operator.OVER);

    var background_style = this.get_style_context ();
    background_style.render_background (cr, 0, 0, width, height);
    background_style.render_frame (cr, 0, 0, width, height);

    var pat = new Cairo.Pattern.for_surface (new Cairo.ImageSurface.from_png (Build.PKGDATADIR + "/files/texture.png"));
    pat.set_extend (Cairo.Extend.REPEAT);
    cr.set_source (pat);
    cr.paint_with_alpha (0.6);

    return false;
}

Android - Trying to gradually fill a circle bottom to top

I'm trying to gradually fill a circle (transparent other than the outline of the circle) in an ImageView.
I have the code working:
public void setPercentage(int p) {
    if (this.percentage != p) {
        this.percentage = p;
        this.invalidate();
    }
}

@Override
public void onDraw(Canvas canvas) {
    Canvas tempCanvas;
    Paint paint;
    Bitmap bmCircle = null;
    if (this.getWidth() == 0 || this.getHeight() == 0)
        return; // nothing to do
    mergedLayersBitmap = Bitmap.createBitmap(this.getWidth(), this.getHeight(), Bitmap.Config.ARGB_8888);
    tempCanvas = new Canvas(mergedLayersBitmap);
    paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setStyle(Paint.Style.FILL_AND_STROKE);
    paint.setFilterBitmap(false);
    bmCircle = drawCircle(this.getWidth(), this.getHeight());
    tempCanvas.drawBitmap(bmCircle, 0, 0, paint);
    paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));
    // use a float divisor so the integer division does not truncate to 0
    tempCanvas.clipRect(0, 0, this.getWidth(), (int) FloatMath.floor(this.getHeight() - this.getHeight() * (percentage / 100f)));
    tempCanvas.drawColor(0xFF660000, PorterDuff.Mode.CLEAR);
    canvas.drawBitmap(mergedLayersBitmap, null, new RectF(0, 0, this.getWidth(), this.getHeight()), new Paint());
    canvas.drawBitmap(mergedLayersBitmap, 0, 0, new Paint());
}

static Bitmap drawCircle(int w, int h) {
    Bitmap bm = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    Canvas c = new Canvas(bm);
    Paint p = new Paint(Paint.ANTI_ALIAS_FLAG);
    p.setColor(drawColor);
    c.drawOval(new RectF(0, 0, w, h), p);
    return bm;
}
It kind of works. However, I have two issues: I run out of memory quickly and the GC goes crazy. How can I use the least amount of memory for this operation?
I know I shouldn't be instantiating objects in onDraw(), but I'm not sure where else to create them. Thank you.
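On the allocation question, a minimal sketch of the usual pattern (assuming this is a custom View subclass; field and method names are illustrative): allocate the bitmaps and paints once when the view's size is known, and only draw with them in onDraw().
// Allocate once when the view gets its size, not on every frame.
private Bitmap mergedLayersBitmap;
private Bitmap circleBitmap;
private Paint bitmapPaint;

@Override
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
    super.onSizeChanged(w, h, oldw, oldh);
    mergedLayersBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    circleBitmap = drawCircle(w, h);
    bitmapPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
}

@Override
protected void onDraw(Canvas canvas) {
    // Reuse the pre-allocated bitmaps here; clear and redraw into them as needed.
    canvas.drawBitmap(mergedLayersBitmap, 0, 0, bitmapPaint);
}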
The pseudo-code would look something like this, written out as a plain nested loop over the bitmap:
for (int x = 0; x < circleBitmap.getWidth(); x++) {
    for (int y = 0; y < circleBitmap.getHeight(); y++) {
        if (y < yBoundary && pixelIsInCircle(x, y)) {
            circleBitmap.setPixel(x, y, Color.rgb(45, 127, 0));
        }
    }
}
That may be slow, but it would work, and the smaller the circle the faster it would go.
You just need the basics: the bitmap width and height (for example 256x256), the circle's radius, and, to make things easy, a circle centered at 128,128. Then, as you go pixel by pixel, check the pixel's X and Y to see whether it falls inside the circle and below the Y limit line.
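A minimal sketch of that inside-the-circle check (the pixelIsInCircle helper above is hypothetical; here the center and radius are passed in explicitly):
// True if (x, y) lies inside a circle centered at (cx, cy) with the given radius.
static boolean pixelIsInCircle(int x, int y, int cx, int cy, int radius) {
    int dx = x - cx;
    int dy = y - cy;
    return dx * dx + dy * dy <= radius * radius;
}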
Then just use:
circleBitmap.setPixel(x, y, Color.rgb(45, 127, 0));
Edit: to speed things up, don't even bother looking at the pixels above the Y limit.
In case you want to see another solution (perhaps cleaner), look at this link: filling a circle gradually from bottom to top android