wxWidgets screenshot leaves most of image blank (transparent)

I have an application that displays graphs, and because the results are often interesting (either due to bugs or intentionally) I want the ability to quickly save screenshots. So I made a screenshot button.
I used the code from the wxWidgets forum FAQ, but unfortunately this method only saves the images within the screenshot (the same applies to fullscreen screenshots). Everything else is left transparent.
For some reason, this only happens with PNG image export. Exporting as BMP or JPG is just fine.
There must be something wrong with:
screenshot.SaveFile("image.png", wxBITMAP_TYPE_PNG);
I have the PNG handler loaded in wxWidgets:
wxImage::AddHandler(new wxPNGHandler);
Code:
//Create a DC for the main window
wxClientDC dcScreen(GetParent());
//Get the size of the screen/DC
wxCoord screenWidth, screenHeight;
dcScreen.GetSize(&screenWidth, &screenHeight);
//Create a Bitmap that will later on hold the screenshot image
//Note that the Bitmap must have a size big enough to hold the screenshot
//-1 means using the current default colour depth
screenshot.Create(screenWidth, screenHeight,-1);
//Create a memory DC that will be used for actually taking the screenshot
wxMemoryDC memDC;
//Tell the memory DC to use our Bitmap
//all drawing action on the memory DC will go to the Bitmap now
memDC.SelectObject(screenshot);
//Blit (in this case copy) the actual screen on the memory DC
//and thus the Bitmap
memDC.Blit( 0,            //Copy to this X coordinate
            0,            //Copy to this Y coordinate
            screenWidth,  //Copy this width
            screenHeight, //Copy this height
            &dcScreen,    //From where do we copy?
            0,            //What's the X offset in the original DC?
            0             //What's the Y offset in the original DC?
            );
//Select the Bitmap out of the memory DC by selecting a new
//uninitialized Bitmap
memDC.SelectObject(wxNullBitmap);
Images:
[saved PNG: mostly transparent, with only the graph images visible]
Instead of (made with Alt+PrintScreen in Windows):
[expected screenshot showing the full window]
If the image comes out correctly in BMP but not PNG, the problem is probably due to the transparency, i.e. somehow all the rest of the image must have its alpha channel set to wxIMAGE_ALPHA_TRANSPARENT. If this is really the case, then using
wxImage image = bmp.ConvertToImage();
image.ClearAlpha();
image.SaveFile("foo.png", wxBITMAP_TYPE_PNG);
should help, but I still have no idea why it would be transparent in the first place.
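In other words, the save step from the original code becomes something like this (a sketch, assuming screenshot is the wxBitmap filled in by the Blit above):
wxImage image = screenshot.ConvertToImage();
//Drop the (incorrectly transparent) alpha channel before writing the PNG
if (image.HasAlpha())
    image.ClearAlpha();
image.SaveFile("image.png", wxBITMAP_TYPE_PNG);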
If this still happens with wxWidgets 3.0 (currently RC2 is available, the final release will be out next week) and if you can find a simple way of reproducing the problem, it would be worth reporting it as a bug.

You should assign the bitmap to some object, such as a wxStaticBitmap, before this line:
memDC.SelectObject(wxNullBitmap);
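For example (a sketch of that suggestion; myStaticBitmap stands for a hypothetical wxStaticBitmap you display the result in):
//Hand the finished bitmap to a control, then release it from the memory DC
myStaticBitmap->SetBitmap(screenshot);
memDC.SelectObject(wxNullBitmap);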

Related

Buffer Not Large enough for pixel

I am trying to get a bitmap from a byte array:
val bitmap_tmp =
Bitmap.createBitmap(height, width, Bitmap.Config.ARGB_8888)
val buffer = ByteBuffer.wrap(decryptedText)
bitmap_tmp.copyPixelsFromBuffer(buffer)
callback.bitmap(bitmap_tmp)
I am facing an error on the line below:
bitmap_tmp.copyPixelsFromBuffer(buffer)
The error reads as:
java.lang.RuntimeException: Buffer not large enough for pixels
I have tried different solutions found on Stack Overflow, like adding the following line before the failing call, but it still crashes:
buffer.rewind()
However, the weird part is that the same code in a different place, for the same image (same dimensions), works perfectly and I get the bitmap, but here it crashes.
How do I solve this?
Thanks in advance.
The error message makes it sound like the buffer you're copying from isn't large enough: it needs to contain at least as many bytes as are necessary to overwrite every pixel in your bitmap (which has a fixed size and pixel config).
The documentation for the method doesn't make it clear, but here's the source for the Bitmap class, and in that method:
if (bufferBytes < bitmapBytes) {
throw new RuntimeException("Buffer not large enough for pixels");
}
So yeah, you can't partially overwrite the bitmap, you need enough data to fill it. And if you check the source, that depends on the buffer's current position and limit (it's not just its capacity, it's how much data is remaining to be read).
If it works elsewhere, I'm guessing decryptedText is different there, or maybe you're creating your Bitmap with a different Bitmap.Config (ARGB_8888, for example, requires 4 bytes per pixel).
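As a quick sanity check, something along these lines could go before the copy (a sketch, not drop-in code: width, height and decryptedText are the names from the question, and it assumes the decrypted bytes are raw ARGB_8888 pixels):
import android.graphics.Bitmap
import java.nio.ByteBuffer

// Note: createBitmap takes width first; the question passes height, then width
val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
val buffer = ByteBuffer.wrap(decryptedText)
// ARGB_8888 uses 4 bytes per pixel, so byteCount == width * height * 4
check(buffer.remaining() >= bitmap.byteCount) {
    "Buffer has ${buffer.remaining()} bytes, bitmap needs ${bitmap.byteCount}"
}
bitmap.copyPixelsFromBuffer(buffer)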

cannot access width/height properties of PImage object in setup()

I'm working with the PImage class. Normally I make two PImage objects, load an image into one of them (my input picture), and create a blank image using createImage(), which will become the output. I then use the loadPixels() method to access the data on the input, do some manipulation, then set the respective output pixel to the result. I have not had any trouble with this so far.
The dimensions of the input and output PImage objects need to be the same to make the pixel-by-pixel manipulation as straightforward as possible.
So here is the pickle:
PImage myinput;
PImage myoutput;
void setup() {
size(350, 350);
myinput = loadImage("myfile.jpg");
// the pic is 300 x 300
//myoutput = createImage(myinput.width, myinput.height, RGB);
//I've hardcoded the width and height below
myoutput = createImage(300, 300, RGB);
}
void draw() {
image(myoutput, 0, 0);
}
The result of the above is a black square 300 x 300 which overlaps a grey canvas of 350 x 350. Given the code I've written, this is the result I would expect.
Now, in the above example, I've hardcoded the width and height of 'myoutput' with the line:
myoutput = createImage(300, 300, RGB);
My question relates to the bit that follows:
Instead of hardcoding the values, I would rather do something like this:
myoutput = createImage(myinput.width, myinput.height, RGB);
But it isn't working. I just get a big 350 x 350 grey box, and I'm not sure why, though I do have my suspicions. When I work with pictures in JavaScript, I've got to wait for the page to load (using an event listener like window.onload etc.) before I can access the width/height properties of an image.
UPDATE:
I saw another post which had the following:
/* @pjs preload="myfile.jpg"; */
So I just included this before I declared my PImage objects and now the following line works.
myoutput = createImage(myinput.width, myinput.height, RGB);
I'm quite confused by the new piece of code.
When you run your sketch in Java mode, you're running as Java. Java loads images synchronously, which means that the code won't continue running until the image is fully loaded. That's why it works in Java mode.
But when you're running using Processing.js, you're running as JavaScript. JavaScript loads images asynchronously, which means that the image is loaded in the background while your code continues. That means you aren't guaranteed that the image is done loading when the next line executes, which is why the image's width and height are unset.
The preload command tells Processing.js to load the images before the sketch starts executing, so that you're guaranteed that the image loads before you try to access its width and height.
From the Processing.js reference:
This directive regulates image preloading, which is required when using loadImage() or requestImage() in a sketch. Using this directive will preload all images indicated between quotes, and comma separated if multiple images are used, so that they will be ready for use when the sketch begins running. As resources are loaded via the AJAX approach, not using this directive will result in the sketch loading an image, and then immediately trying to use this image in some way, even though the browser has not finished downloading and caching it.
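Putting it together, the asker's sketch with the directive in place (using the original myfile.jpg and the sizes from the question) would be:
/* @pjs preload="myfile.jpg"; */
PImage myinput;
PImage myoutput;
void setup() {
  size(350, 350);
  // safe in Processing.js now: the image is fully loaded before setup() runs
  myinput = loadImage("myfile.jpg");
  myoutput = createImage(myinput.width, myinput.height, RGB);
}
void draw() {
  image(myoutput, 0, 0);
}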

JSX (Photoshop) - document resolution in dpi

I'm working with a jsx script in Photoshop that resizes images to a specific size. The resolution is set at 200 dpi. After running the script, I can check this under Image > Image Size.
Problem is, depending on the image, it initially tends to show the resolution in dots/cm instead of dots/inch. The number itself is correct either way, but I'd like it displayed as the latter. Is there a way to achieve this in JSX?
Thanks!
J
The easy way is to open your Info Panel by going to Window > Info, then click the x/y coordinates dropdown in the Info Panel and select inches. The dropdown is the + toward the lower left of the panel, with the little down arrow at the bottom right of the + symbol (the plus is actually an x axis and y axis representing a coordinate plane). After that, when you check under Image > Image Size, it should show all the information in inches instead of centimeters. It should also show inches everywhere else in Photoshop's interface, such as the rulers.
An exception would be that when using selection tools, such as the marquee tool with a setting like "fixed size" selected, you can override the units setting by typing in another unit in the Width and Height sections at the top of the window. You can even mix and match units, making a precise selection that is, for example, exactly 250 pixels (px in the Width setting) by 30 points (pt in the Height setting). And when you check your image size, it should still show you results in inches.
And finally, to answer your question as it was asked, the following code will change your rulerUnits preference without opening the Info Panel.
#target Photoshop
preferences.rulerUnits = Units.INCHES;
Note that if you want to write other scripts, you can change the rulerUnits to whatever units the script calls for, and then at the end of the script put your units back the way you had them.
#target Photoshop
// Save the original rulerUnits setting to a variable
var originalRulerUnits = preferences.rulerUnits;
// Change the rulerUnits to Inches
preferences.rulerUnits = Units.INCHES;
//
// Do magical scripty stuff here...
//
// Restore the original setting
preferences.rulerUnits = originalRulerUnits;
// List of rulerUnits settings available
// Units.CM
// Units.INCHES
// Units.MM
// Units.PERCENT
// Units.PICAS
// Units.PIXELS
// Units.POINTS
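As a usage example, a resize script in the spirit of the question might be wrapped like this (the 4 x 6 inch target size is a made-up value; the 200 dpi figure comes from the question):
#target Photoshop
// Save, set, and later restore the ruler units around the actual work
var originalRulerUnits = preferences.rulerUnits;
preferences.rulerUnits = Units.INCHES;
// Resize the active document: width, height, resolution (dpi), resample method
activeDocument.resizeImage(UnitValue(4, "in"), UnitValue(6, "in"), 200, ResampleMethod.BICUBIC);
preferences.rulerUnits = originalRulerUnits;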

DirectX 11.1 Disable the depth buffer

This question relates to a previous question I have asked.
I have a series of 48 textures on flat square meshes that I am rendering, and they all combine to form one "scene." They each have a large percentage of transparency with one or two smaller images, and when they are lined up, I should be able to see the full scene. I expected this to work without much issue, but when I went to test it, I see the topmost texture, and anywhere it has transparency, it shows just the clear color.
At first, I thought it was an issue with how I was loading the image and somehow was disabling the alpha, but after playing around with the clear color, I realized that there was some transparency.
The second thing I tried was to enable blending - this works if all the textures are combined on a single z plane.
I have posted my image loading and blending code on the question I linked to above.
Now I am starting to think it may be an issue with the depth buffer, so I added the following code to my window dependent resources:
Microsoft::WRL::ComPtr<ID3D11DepthStencilState> DepthDefault;
D3D11_DEPTH_STENCIL_DESC depthstencilDesc;
ZeroMemory(&depthstencilDesc, sizeof(depthstencilDesc));
depthstencilDesc.DepthEnable = FALSE;
depthstencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthstencilDesc.DepthFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.StencilEnable = FALSE;
depthstencilDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilState(&depthstencilDesc, DepthDefault.GetAddressOf() ) );
direct3d.context->OMSetDepthStencilState(DepthDefault.Get(), 0);
Even with this code, I am only seeing the topmost layer. Am I missing something, or am I setting something incorrectly?
Edit: To visualize the problem, it's as if I had 48 panes of glass, all the same size, arranged in a row. Each pane has one image somewhere on it. When you look through all the panes, you get one extra awesome image of all the smaller images combined. For me, DirectX or the pixel shader is only drawing the first pane and filling all of its transparency with the clear/background color.
Edit: The code I'm using to create the depthstencilview:
CD3D11_TEXTURE2D_DESC depthStencilDesc( DXGI_FORMAT_D24_UNORM_S8_UINT, backBufferDesc.Width, backBufferDesc.Height, 1, 1, D3D11_BIND_DEPTH_STENCIL );
ComPtr<ID3D11Texture2D> depthStencil;
DX::ThrowIfFailed( direct3d.device->CreateTexture2D( &depthStencilDesc, nullptr, &depthStencil ) );
auto viewDesc = CD3D11_DEPTH_STENCIL_VIEW_DESC(D3D11_DSV_DIMENSION_TEXTURE2D);
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilView( depthStencil.Get(), &viewDesc, &direct3d.depthStencil ) );
That code sits literally right above my depth test / D3D11_DEPTH_STENCIL_DESC code. I'm presuming that this is what creates the depth buffer.
I think you might need to sort the order in which you render your vertices if you want to render semi-transparencies with a depth buffer. If you don't want to use a depth buffer - perhaps just don't define/create/set it?
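If you do go the no-depth-buffer route, the minimal version of that (a sketch; renderTargetView stands in for whatever ComPtr holds your back-buffer RTV) is to bind the render target with a null depth-stencil view, since nothing is depth-tested when no DSV is bound, regardless of the depth-stencil state:
//Bind the back buffer only; passing nullptr for the DSV disables depth/stencil entirely
direct3d.context->OMSetRenderTargets(1, renderTargetView.GetAddressOf(), nullptr);
//Then draw the 48 quads back to front so alpha blending composites them correctly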

XNA "Texture2D" amalgamation

How can I join together multiple Texture2D's into one large Texture2D? I am trying to optimize an isometric tile game by splitting the map up into chunks.
I have tried googling it, and found articles regarding "RenderTarget2D", but am unsure how to implement this.
Thanks,
Sam.
Never mind - I worked it out.
For anyone who is also looking for this, you basically draw onto a "RenderTarget2D", as you would onto the screen, using the spriteBatch.
(helpful article)
RenderTarget2D render; //declare target
render = new RenderTarget2D(GraphicsDevice, (int)(tileSize.X * numberOfTiles.X), (int)(tileSize.Y * numberOfTiles.Y), 0, SurfaceFormat.Color); //assign target, where tileSize is the size of a tile and numberOfTiles is the number of tiles you are rendering
GraphicsDevice.SetRenderTarget(0, render); //Target the render instead of the backbuffer
batch.Begin();
//draw each tile
batch.End();
GraphicsDevice.SetRenderTarget(0, null); //target the backbuffer again
Texture2D myTexture = render.GetTexture(); //store texture in Texture2D variable
Sorry for the rather poor explanation - my first try at a tutorial.
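One caveat: the code above is the XNA 3.1 API. On XNA 4.0 the equivalent would be roughly this (same variable names as above; RenderTarget2D inherits from Texture2D in 4.0, so there is no GetTexture() call):
render = new RenderTarget2D(GraphicsDevice, (int)(tileSize.X * numberOfTiles.X), (int)(tileSize.Y * numberOfTiles.Y));
GraphicsDevice.SetRenderTarget(render); //no index argument in 4.0
batch.Begin();
//draw each tile
batch.End();
GraphicsDevice.SetRenderTarget(null); //target the backbuffer again
Texture2D myTexture = render; //a RenderTarget2D is already a Texture2D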