How can I combine multiple Texture2Ds into one large Texture2D? I am trying to optimize an isometric tile game by splitting the map into chunks.
I have tried googling it, and found articles regarding "RenderTarget2D", but am unsure how to implement this.
Thanks,
Sam.
Never mind - I worked it out.
For anyone who is also looking for this, you basically draw onto a "RenderTarget2D", as you would onto the screen, using the spriteBatch.
(helpful article)
RenderTarget2D render; //declare target
render = new RenderTarget2D(GraphicsDevice, (int)(tileSize.X * numberOfTiles.X), (int)(tileSize.Y * numberOfTiles.Y), 0, SurfaceFormat.Color); //assign target, where tileSize is the size of a tile and numberOfTiles is the number of tiles you are rendering
GraphicsDevice.SetRenderTarget(0, render); //Target the render instead of the backbuffer
batch.Begin();
//draw each tile
batch.End();
GraphicsDevice.SetRenderTarget(0, null); //target the backbuffer again
Texture2D myTexture = render.GetTexture(); //store texture in Texture2D variable
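The above is the XNA 3.1 API. If you are on XNA 4.0 the equivalent is slightly different; roughly (a sketch, untested):
render = new RenderTarget2D(GraphicsDevice, (int)(tileSize.X * numberOfTiles.X), (int)(tileSize.Y * numberOfTiles.Y)); //no level/format arguments needed
GraphicsDevice.SetRenderTarget(render); //SetRenderTarget no longer takes an index
batch.Begin();
//draw each tile
batch.End();
GraphicsDevice.SetRenderTarget(null); //target the backbuffer again
Texture2D myTexture = render; //RenderTarget2D now derives from Texture2D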
Sorry for the rather poor explanation - my first try at a tutorial.
I'm trying to add a B&W filter to the camera images of an ARSCNView, then render colored AR objects over it.
I'm almost there with the following code, added to the beginning of - (void)renderer:(id<SCNSceneRenderer>)aRenderer updateAtTime:(NSTimeInterval)time:
CVPixelBufferRef bg = self.sceneView.session.currentFrame.capturedImage;
if (bg) {
    CVPixelBufferLockBaseAddress(bg, 0); //lock before touching plane memory
    char *k1 = CVPixelBufferGetBaseAddressOfPlane(bg, 1);
    if (k1) {
        //plane 1 of the bi-planar YCbCr buffer is interleaved CbCr (2 bytes per pixel);
        //writing 128 everywhere neutralizes the chroma, leaving only luma (B&W)
        size_t x1 = CVPixelBufferGetWidthOfPlane(bg, 1);
        size_t y1 = CVPixelBufferGetHeightOfPlane(bg, 1);
        memset(k1, 128, x1 * y1 * 2);
    }
    CVPixelBufferUnlockBaseAddress(bg, 0);
}
This works really fast on mobile, but here's the thing: sometimes a colored frame is displayed.
I've checked, and my filtering code does execute, but I assume it runs too late: SceneKit's pipeline has already processed the camera input.
Calling the code earlier would help, but updateAtTime is the earliest point where one can add custom frame-by-frame code.
Getting notified of frame captures might help, but it looks like the whole AVCaptureSession is inaccessible.
The Metal ARKit example shows how to convert the camera image to RGB and that is the place where I would do filtering, but that shader is hidden when using SceneKit.
I've tried this possible answer but it's way too slow.
So how can I overcome the frame misses and convert the camera feed reliably to B&W?
Here's the key for this problem:
session:didUpdateFrame:
Provides a newly captured camera image and accompanying AR information to the delegate.
So I just moved the CVPixelBufferRef manipulation (the image-filtering code) from
- (void)renderer:(id<SCNSceneRenderer>)aRenderer updateAtTime:(NSTimeInterval)time
to
- (void)session:(ARSession *)session didUpdateFrame:(ARFrame *)frame
I also made sure to set self.sceneView.session.delegate = self so this delegate gets called.
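Putting it together, the moved code looks like this (a sketch; the lock/unlock calls are additions that the buffer access formally requires):
- (void)session:(ARSession *)session didUpdateFrame:(ARFrame *)frame {
    CVPixelBufferRef bg = frame.capturedImage; //use the frame we were just handed
    if (bg) {
        CVPixelBufferLockBaseAddress(bg, 0);
        char *k1 = CVPixelBufferGetBaseAddressOfPlane(bg, 1);
        if (k1) {
            //neutralize the CbCr plane so only luma (B&W) remains
            size_t x1 = CVPixelBufferGetWidthOfPlane(bg, 1);
            size_t y1 = CVPixelBufferGetHeightOfPlane(bg, 1);
            memset(k1, 128, x1 * y1 * 2);
        }
        CVPixelBufferUnlockBaseAddress(bg, 0);
    }
}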
I have a circle-shaped dynamic body and I need to resize it during the game (It appears like a point, then it grows to a circle and after that it starts moving). How should I do that?
One idea is to use an animation (the circle keeps the same radius, but the animation makes it look like it grows), but I'm not sure whether that's the right way. (Besides, I don't know how to implement it.)
To scale the circle: if you are using a sprite, just scale it with sprite.setScale(float); if your sprite is attached to a Box2D circle shape, get the body's shape and set its radius:
Shape shape = body.getFixtureList().first().getShape();
shape.setRadius(radiusValue);
and if you are drawing with a ShapeRenderer, just scale the radius you pass to it.
I assume that you are talking about a Box2D body.
It is not possible to change a circle-shaped fixture with Box2D; Box2D is a rigid-body simulator. What you would have to do is destroy the fixture and replace it with a smaller/bigger version of the circle. But this can cause a lot of problems, since, for example, you cannot destroy a fixture while it still has contacts.
It would be better to keep the circle the same size and just simulate a change in size with an animation of a texture on top.
If you cannot simulate that, then maybe try the following approach: have several versions of that circle in different sizes and keep them on top of each other. Implement a ContactFilter which only generates contacts for the one circle that is currently "active".
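A rough sketch of such a filter (libGDX; CircleLayer is a hypothetical user-data class that knows whether its circle is the active one):
world.setContactFilter(new ContactFilter() {
    @Override
    public boolean shouldCollide(Fixture fixtureA, Fixture fixtureB) {
        Object a = fixtureA.getUserData();
        Object b = fixtureB.getUserData();
        //suppress contacts for every stacked circle except the active one
        if (a instanceof CircleLayer && !((CircleLayer) a).isActive()) return false;
        if (b instanceof CircleLayer && !((CircleLayer) b).isActive()) return false;
        return true;
    }
});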
Inside my object classes with Box2D, I use the following for dynamic resizing:
public void resize(float newradius) {
    //remove the old fixture and recompute density so the body's mass stays constant
    this.body.destroyFixture(this.fixture);
    fixtureDef.density = (float) (this.mass / (Math.PI * newradius * newradius));
    this.radius = newradius;
    CircleShape circle = new CircleShape();
    circle.setRadius(newradius);
    this.fixtureDef.shape = circle;
    this.fixture = body.createFixture(fixtureDef);
    this.fixture.setUserData(this);
    //dispose of the shape only after the fixture has been created from it
    circle.dispose();
}
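One caveat: Box2D locks the world while World.step() is running, so calling resize() from inside a collision callback will fail. A common workaround is to remember the requested radius and apply it after the step; a minimal sketch, where queuedRadius is a hypothetical field set from the callback:
world.step(1 / 60f, 6, 2);
if (queuedRadius > 0f) {
    object.resize(queuedRadius); //safe now: the world is no longer locked
    queuedRadius = -1f; //mark as handled
}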
You can also see the following topic: How to change size after it has been created
I have an application that displays graphs, and because the results are often interesting (either due to bugs or intentionally) I want the ability to quickly save screenshots. So I made a screenshot button.
I used the code from the wxWidgets forum FAQ, but unfortunately this method only saves the images in the screenshot (the same applies to a fullscreen screenshot); everything else is left transparent.
For some reason, this only happens with PNG image export. Exporting as BMP or JPG is just fine.
There must be something wrong with:
screenshot.SaveFile("image.png", wxBITMAP_TYPE_PNG);
I have the PNG handler loaded in wxWidgets:
wxImage::AddHandler(new wxPNGHandler);
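(As an aside, a single call can register all the standard handlers, PNG included, which is equivalent for this purpose:)
wxInitAllImageHandlers();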
Code:
//Create a DC for the main window
wxClientDC dcScreen(GetParent());
//Get the size of the screen/DC
wxCoord screenWidth, screenHeight;
dcScreen.GetSize(&screenWidth, &screenHeight);
//Create a Bitmap that will later on hold the screenshot image
//Note that the Bitmap must have a size big enough to hold the screenshot
//-1 means using the current default colour depth
screenshot.Create(screenWidth, screenHeight,-1);
//Create a memory DC that will be used for actually taking the screenshot
wxMemoryDC memDC;
//Tell the memory DC to use our Bitmap
//all drawing action on the memory DC will go to the Bitmap now
memDC.SelectObject(screenshot);
//Blit (in this case copy) the actual screen on the memory DC
//and thus the Bitmap
memDC.Blit( 0, //Copy to this X coordinate
0, //Copy to this Y coordinate
screenWidth, //Copy this width
screenHeight, //Copy this height
&dcScreen, //From where do we copy?
0, //What's the X offset in the original DC?
0 //What's the Y offset in the original DC?
);
//Select the Bitmap out of the memory DC by selecting a new
//uninitialized Bitmap
memDC.SelectObject(wxNullBitmap);
(Screenshots omitted: the exported PNG shows only the drawn images over a transparent background, instead of the full window as captured with Alt+PrintScreen in Windows.)
If the image comes out correctly in BMP but not PNG, the problem is probably due to the transparency, i.e. somehow all the rest of the image must have its alpha channel set to wxIMAGE_ALPHA_TRANSPARENT. If this is really the case, then using
wxImage image = bmp.ConvertToImage();
image.ClearAlpha();
image.SaveFile("foo.png", wxBITMAP_TYPE_PNG);
should help, but I still have no idea why it would be transparent in the first place.
If this still happens with wxWidgets 3.0 (currently RC2 is available, the final will be out next week) and if you can find a simple way of reproducing the problem, it would be worth reporting it as a bug.
You should initialize the bitmap to some object, like a wxStaticBitmap, before this line:
memDC.SelectObject(wxNullBitmap);
This question relates to a previous question I have asked.
I have a series of 48 textures on flat square meshes that I am rendering, and they all combine to form one "scene." They each have a large percentage of transparency with one or two smaller images, and when they are lined up, I should be able to see the full scene. I expected this to work without much issue, but when I went to test it, I see only the top-most texture, and anywhere it would have transparency there is just the clear color.
At first, I thought it was an issue with how I was loading the image and somehow was disabling the alpha, but after playing around with the clear color, I realized that there was some transparency.
The second thing I tried was to enable blending; this works if all the textures are combined on a single z plane.
I have posted my image loading and blending code on the question I linked to above.
Now I am starting to think it may be an issue with the depth buffer, so I added the following code to my window dependent resources:
Microsoft::WRL::ComPtr<ID3D11DepthStencilState> DepthDefault;
D3D11_DEPTH_STENCIL_DESC depthstencilDesc;
ZeroMemory(&depthstencilDesc, sizeof(depthstencilDesc));
depthstencilDesc.DepthEnable = FALSE;
depthstencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthstencilDesc.DepthFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.StencilEnable = FALSE;
depthstencilDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilState(&depthstencilDesc, DepthDefault.GetAddressOf() ) );
direct3d.context->OMSetDepthStencilState(DepthDefault.Get(), 0);
Even with this code, I am only seeing the topmost layer. Am I missing something, or am I setting something incorrectly?
Edit: To visualize the problem, it's as if I had 48 panes of glass, all the same size, in a row. Each pane has one image somewhere on it. When you look through all the panes, you get one extra awesome image of all the smaller images combined. For me, DirectX or the pixel shader is only drawing the first pane and filling all of its transparency with the clear/background color.
Edit: The code I'm using to create the depthstencilview:
CD3D11_TEXTURE2D_DESC depthStencilDesc( DXGI_FORMAT_D24_UNORM_S8_UINT, backBufferDesc.Width, backBufferDesc.Height, 1, 1, D3D11_BIND_DEPTH_STENCIL );
ComPtr<ID3D11Texture2D> depthStencil;
DX::ThrowIfFailed( direct3d.device->CreateTexture2D( &depthStencilDesc, nullptr, &depthStencil ) );
auto viewDesc = CD3D11_DEPTH_STENCIL_VIEW_DESC(D3D11_DSV_DIMENSION_TEXTURE2D);
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilView( depthStencil.Get(), &viewDesc, &direct3d.depthStencil ) );
That code is literally right above my depth-test / D3D11_DEPTH_STENCIL_DESC code. I'm presuming this is what creates the depth buffer.
I think you might need to sort the order in which you render your vertices if you want to render semi-transparencies with a depth buffer: draw the farthest planes first and blend the nearer ones over them. If you don't want to use a depth buffer, perhaps just don't define/create/set it? A sketch of a typical alpha-blend setup follows.
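For reference, a standard source-over alpha-blend state in Direct3D 11 looks something like this (a sketch; the direct3d.device / direct3d.context names are assumed to match the question's code):
D3D11_BLEND_DESC blendDesc = {}; //zero everything first
blendDesc.RenderTarget[0].BlendEnable = TRUE;
blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
Microsoft::WRL::ComPtr<ID3D11BlendState> blendState;
DX::ThrowIfFailed( direct3d.device->CreateBlendState(&blendDesc, blendState.GetAddressOf()) );
float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
direct3d.context->OMSetBlendState(blendState.Get(), blendFactor, 0xffffffff);
//then draw the 48 quads back-to-front (farthest first)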
I'm fairly new to wxWidgets so please bear with me. Let's say I have a 10Kx10K image and my wxScrolledWindow has a size of 640x480. I load the whole image into a wxBitmap which I use in my paint function.
Now in my OnPaint function I just say
wxPaintDC dc(this);
dc.DrawBitmap(_Bitmap, 0, 0 );
This somewhat works for the first few paints, but soon the window content is out of order and artifacts appear. This happens very quickly when I move a scroll bar back and forth.
I use the latest wxWidgets on a Windows 7 machine.
So, how can I improve my painting code?
Many thanks,
Christian
Using a 10000x10000 wxBitmap is a bad idea on its own: it may simply fail to be created on an older system (that's 400MiB of video RAM!). Drawing it entirely is sheer madness.
I don't know where your data comes from, but in a typical case of e.g. a map to be shown on screen, you should break it into tiles, convert the tiles currently visible on screen to a wxBitmap (or several of them), and draw only those. Very roughly, the idea looks like the sketch below.
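(Sketch only; GetTile(row, col) is a hypothetical helper that loads or caches one tile as a wxBitmap, TILE is an assumed tile size, and dc is the paint DC inside the paint handler:)
const int TILE = 256; //tile size in pixels (assumed)
int xScroll, yScroll, xppu, yppu;
GetViewStart(&xScroll, &yScroll); //scroll position in scroll units
GetScrollPixelsPerUnit(&xppu, &yppu);
wxSize client = GetClientSize();
wxRect visible(xScroll * xppu, yScroll * yppu, client.GetWidth(), client.GetHeight());
//draw only the tiles that intersect the visible rectangle
for (int row = visible.y / TILE; row <= visible.GetBottom() / TILE; ++row)
    for (int col = visible.x / TILE; col <= visible.GetRight() / TILE; ++col)
        dc.DrawBitmap(GetTile(row, col), col * TILE - visible.x, row * TILE - visible.y);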
Then you may optimize your drawing by using double buffering (which is relatively useless under Windows 7, which already double-buffers everything on its own) and so on, but you should be using a reasonably-sized backing-store bitmap.
This sounds like something that might be helped by using double buffering.
The first thing to try is to replace wxPaintDC with wxBufferedPaintDC; a minimal sketch is below.
For more suggestions, here is a wiki article on the subject: http://wiki.wxwidgets.org/Flicker-Free_Drawing
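Something like this, assuming Canvas is the scrolled window class from the question (untested sketch):
#include <wx/dcbuffer.h>

void Canvas::OnPaint(wxPaintEvent& WXUNUSED(event))
{
    wxBufferedPaintDC dc(this); //paints to an off-screen bitmap, blits once on destruction
    dc.DrawBitmap(_Bitmap, 0, 0);
}
For this to avoid flicker, the default background erase also needs to be suppressed, e.g. with SetBackgroundStyle(wxBG_STYLE_PAINT) in the constructor.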
As Ravenspoint kindly pointed out, there is an article on the wxWidgets wiki. According to that article, two things need to happen. First, override EVT_ERASE_BACKGROUND with an empty function.
void Canvas::EraseBackground( wxEraseEvent& WXUNUSED(event))
{
}
And second, implement a basic double-buffering scheme. Here is how I did it.
void Canvas::OnPaint(wxPaintEvent& WXUNUSED(event))
{
    int x, y;
    GetViewStart(&x, &y); //scroll position in scroll units

    wxRect Client_Area = GetClientRect();
    int width = Client_Area.width;
    int height = Client_Area.height;

    //copy just the visible part out of the big bitmap
    wxBitmap Current = _Bitmap.GetSubBitmap(wxRect(x * 10, y * 10, width, height));

    wxPaintDC dc(this);
    dc.DrawBitmap(Current, 0, 0, false);
}
My scroll rate for both x and y is set to 10; that's why I multiply the view-start coordinates.
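(Rather than hard-coding the 10, the scroll rate can also be queried from the window; a small sketch:)
int xUnit, yUnit;
GetScrollPixelsPerUnit(&xUnit, &yUnit); //pixels per scroll unit, as set by SetScrollRate
wxBitmap Current = _Bitmap.GetSubBitmap(wxRect(x * xUnit, y * yUnit, width, height));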
Any more insight is very welcome.
Thanks,
Christian