This question relates to a previous question I have asked.
I have a series of 48 textures on flat square meshes that I am rendering, and together they combine to form one "scene." Each texture has a large percentage of transparency with one or two smaller images, and when the meshes are lined up, I should be able to see the full scene. I expected this to work without much issue, but when I went to test it, I only see the top-most texture, and anywhere it should be transparent, there is just the clear color.
At first, I thought it was an issue with how I was loading the image, and that the alpha was somehow being disabled, but after playing around with the clear color, I realized that there was some transparency.
Second, I tried enabling blending. This works, but only if all the textures get combined on a single z plane.
I have posted my image loading and blending code on the question I linked to above.
Now I am starting to think it may be an issue with the depth buffer, so I added the following code to my window-dependent resources:
Microsoft::WRL::ComPtr<ID3D11DepthStencilState> DepthDefault;
D3D11_DEPTH_STENCIL_DESC depthstencilDesc;
ZeroMemory(&depthstencilDesc, sizeof(depthstencilDesc));
depthstencilDesc.DepthEnable = FALSE;
depthstencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthstencilDesc.DepthFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.StencilEnable = FALSE;
depthstencilDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilState(&depthstencilDesc, DepthDefault.GetAddressOf() ) );
direct3d.context->OMSetDepthStencilState(DepthDefault.Get(), 0);
Even with this code, I am only seeing the topmost layer. Am I missing something, or am I setting something incorrectly?
Edit: To visualize the problem, it's as if I had 48 panes of glass that are all the same size and are all in a row. Each pane of glass has one image somewhere on it. When you look through all the glass panes, you get one awesome image of all the smaller images combined. For me, DirectX or the pixel shader is only drawing the first glass pane and filling all the transparency of that first pane with the clear/background color.
Edit: The code I'm using to create the depthstencilview:
CD3D11_TEXTURE2D_DESC depthStencilDesc( DXGI_FORMAT_D24_UNORM_S8_UINT, backBufferDesc.Width, backBufferDesc.Height, 1, 1, D3D11_BIND_DEPTH_STENCIL );
ComPtr<ID3D11Texture2D> depthStencil;
DX::ThrowIfFailed( direct3d.device->CreateTexture2D( &depthStencilDesc, nullptr, &depthStencil ) );
auto viewDesc = CD3D11_DEPTH_STENCIL_VIEW_DESC(D3D11_DSV_DIMENSION_TEXTURE2D);
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilView( depthStencil.Get(), &viewDesc, &direct3d.depthStencil ) );
That code is literally right above my depth test / D3D11_DEPTH_STENCIL_DESC code. I'm presuming this is what creates the depth buffer.
I think you might need to sort the order in which you render your vertices if you want to render semi-transparencies with a depth buffer. If you don't want to use a depth buffer - perhaps just don't define/create/set it?
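To make that concrete: a common pattern (a hedged sketch reusing the question's naming, not code from the question itself) is to keep depth testing enabled but disable depth writes for the transparent quads, then draw them back to front with blending on:
// Sketch only: depth test on, depth writes off for the alpha-blended quads.
D3D11_DEPTH_STENCIL_DESC dsDesc = {};
dsDesc.DepthEnable    = TRUE;                         // still test against any opaque geometry
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;  // transparent quads don't occlude each other
dsDesc.DepthFunc      = D3D11_COMPARISON_LESS_EQUAL;
Microsoft::WRL::ComPtr<ID3D11DepthStencilState> transparentDepth;
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilState(&dsDesc, transparentDepth.GetAddressOf()) );
direct3d.context->OMSetDepthStencilState(transparentDepth.Get(), 0);
// ...then sort the 48 quads by z, furthest first, and draw them with the blend state bound.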
I have a chart with a legend whose symbol I replaced per the example in the docs. It looks like this:
var marker = chart.legend.markers.template;
marker.disposeChildren();
let dollar = marker.createChild(am4core.Image);
dollar.width = 40;
dollar.height = 40;
dollar.verticalCenter = "top";
dollar.horizontalCenter = "left";
dollar.strokeWidth = 2;
dollar.strokeOpacity = 1;
dollar.adapter.add("href", function (href: any, target: any) {
return `http://host.com?id=${target.dataItem.dataContext.dummyData.value}`;
});
And this works; my images are displayed, little faces :). Now I would like to add a border around the image in the same color as the series, so that you can match the marker in the legend with its series. But I can't find the right set of settings to make this happen.
Is this possible?
EDIT -
So, I tried the following change to the above and got a decent result. It's a bit hacky, so there might be a better way. If not, I guess this works.
//marker.disposeChildren(); <= don't do this
marker.width = "50px";
marker.height = "50px";
Basically the original marker remains and is behind the image. The marker has to be made larger so that it sticks out and creates a pseudo border.
I'm going to answer this one myself, since I have a working solution and no one else answered :)
The edit above does what is needed. It doesn't seem like a great solution - a border around an image should be doable - but this gets us what we want.
Solution:
Make the marker bigger than the image
Place the image above the marker
In this case, we do not remove the marker's child elements, as the sample code in the amCharts 4 docs does, since we need them (see the sketch below).
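Putting the pieces together, a hedged sketch of the whole setup (amCharts 4, reusing the names from the question; the numeric sizes stand in for the "50px" strings above):
// Keep the default marker so it acts as a pseudo border behind the image.
let marker = chart.legend.markers.template;
// note: no marker.disposeChildren() here - the original marker must stay visible
marker.width = 50;    // larger than the image so it sticks out around the edges
marker.height = 50;
let dollar = marker.createChild(am4core.Image);
dollar.width = 40;    // 40px image on a 50px marker leaves the pseudo border showing
dollar.height = 40;
dollar.verticalCenter = "top";
dollar.horizontalCenter = "left";
dollar.adapter.add("href", (href: any, target: any) =>
    `http://host.com?id=${target.dataItem.dataContext.dummyData.value}`);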
I am a rather experienced libGDX developer, but I have been struggling with one issue for some time, so I decided to ask here.
I use FillViewport, TiledMap, Scene2d and an OrthographicCamera. I want the camera to follow my player instance, but with defined bounds (equal to the map size). That means the camera never shows anything outside the map, so when the player comes near an edge of the map, the camera stops following and the player moves toward the edge of the screen. It may sound complicated, but it's simple, and I am sure you know what I mean; it's used in every game.
I calculated 4 values:
minCameraX = camera.viewportWidth / 2;
minCameraY = camera.viewportHeight / 2;
maxCameraX = mapSize.x - camera.viewportWidth / 2;
maxCameraY = mapSize.y - camera.viewportHeight / 2;
I removed unnecessary stuff like unit conversion, camera.zoom, etc. Then I set the camera position like this:
camera.position.set(Math.min(maxCameraX, Math.max(posX, minCameraX)), Math.min(maxCameraY, Math.max(posY, minCameraY)), 0);
(posX, posY is the player position.) This basically sets the camera to the player position, but if that is too high or too low, it clamps to the max or min defined above for the respective axis. (I also tried MathUtils.clamp() and it works the same.)
Everything is perfect until now. The problem occurs when the aspect ratio changes. By default I use 1280x768, but my phone has 1280x720. Because of the way FillViewport works, the top and bottom edges of the screen are cut off, and with them part of my map.
I tried to modify the maximums and minimums, calculate the differences in ratio and add them to the calculations, change the camera size, use different viewports, and some other things, but without success.
Can you guys help?
Thanks
I tried the solutions from noone and Tenfour04 in the comments above. Neither is perfect, but I am satisfied enough, I guess:
noone:
camera.position.x = MathUtils.clamp(camera.position.x, screenWidth/2 + leftGutter, UnitConverter.toBox2dUnits(mapSize.x) - screenWidth/2 + rightGutter);
camera.position.y = MathUtils.clamp(camera.position.y, screenHeight/2 + bottomGutter, UnitConverter.toBox2dUnits(mapSize.y) - screenHeight/2 - topGutter);
It worked, however, only for a small range of resolutions. For odd resolutions where the aspect ratio is much different from the default one, I saw white stripes beyond the border. That means the whole border was drawn, plus some part of the world outside the border. I don't know why.
Tenfour04:
I changed the viewport to ExtendViewport. Nothing is cut off, but at different aspect ratios I can also see the world outside the borders.
The solution for both is to clear the screen with the same color as the border and to draw the background of the level separately, which gives a satisfying effect in both cases.
It still has some limitations. As the border is part of the world (tiled blocks), it's fine when it is all one color. If the border had different colors, rendering one color outside the borders wouldn't be a solution.
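For completeness, the clear-with-border-color trick is just this (a sketch; borderColor is a stand-in for whatever color your tiled border uses, not a name from the code above):
// In render(): clear to the border color, then draw the level background and world on top.
Gdx.gl.glClearColor(borderColor.r, borderColor.g, borderColor.b, 1f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);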
Thanks noone and Tenfour04, and I am still open to suggestions :)
Here are some screenshots:
https://www.dropbox.com/sh/00h947wkzo73zxa/AAADHehAF4WI8aJ8bu4YzB9Va?dl=0
Why don't you use FitViewport instead of FillViewport? That way it won't cut off your screen, right?
It is a little bit late, but I have a solution for you without compromises!
Here, width and height are the world size in pixels. I use this code with FillViewport and everything works excellently!
float playerX = player.getBody().getPosition().x*PPM;
float playerY = player.getBody().getPosition().y*PPM;
float visibleW = viewport.getWorldWidth()/2 + (float)viewport.getScreenX()/(float)viewport.getScreenWidth()*viewport.getWorldWidth();//half of world visible
float visibleH = viewport.getWorldHeight()/2 + (float)viewport.getScreenY()/(float)viewport.getScreenHeight()*viewport.getWorldHeight();
float cameraPosx;
float cameraPosy;
if (playerX < visibleW) {
    cameraPosx = visibleW;
} else if (playerX > width - visibleW) {
    cameraPosx = width - visibleW;
} else {
    cameraPosx = playerX;
}
if (playerY < visibleH) {
    cameraPosy = visibleH;
} else if (playerY > height - visibleH) {
    cameraPosy = height - visibleH;
} else {
    cameraPosy = playerY;
}
camera.position.set(cameraPosx, cameraPosy, 0);
camera.update();
How can I join multiple Texture2Ds together into one large Texture2D? I am trying to optimize an isometric tile game by splitting the map up into chunks.
I have tried googling it and found articles about "RenderTarget2D", but I am unsure how to implement this.
Thanks,
Sam.
Never mind - I worked it out.
For anyone else looking for this: you basically draw onto a "RenderTarget2D", as you would onto the screen, using the spriteBatch.
(helpful article)
RenderTarget2D render; //declare target
render = new RenderTarget2D(GraphicsDevice, (int)(tileSize.X * numberOfTiles.X), (int)(tileSize.Y * numberOfTiles.Y), 0, SurfaceFormat.Color); //assign target, where tileSize is the size of a tile and numberOfTiles is the number of tiles you are rendering
GraphicsDevice.SetRenderTarget(0, render); //Target the render instead of the backbuffer
batch.Begin();
//draw each tile
batch.End();
GraphicsDevice.SetRenderTarget(0, null); //target the backbuffer again
Texture2D myTexture = render.GetTexture(); //store texture in Texture2D variable
Sorry for the rather poor explanation - my first try at a tutorial.
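One caveat, hedged: the snippet above uses the XNA 3.1 API. On XNA 4.0 the calls changed slightly; a rough equivalent would be:
RenderTarget2D render = new RenderTarget2D(GraphicsDevice,
    (int)(tileSize.X * numberOfTiles.X),
    (int)(tileSize.Y * numberOfTiles.Y));  // 4.0 constructor: no level/format arguments needed
GraphicsDevice.SetRenderTarget(render);    // 4.0 takes the target directly, no index
batch.Begin();
// draw each tile
batch.End();
GraphicsDevice.SetRenderTarget(null);      // target the backbuffer again
Texture2D myTexture = render;              // in 4.0, RenderTarget2D derives from Texture2D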
I'm fairly new to wxWidgets so please bear with me. Let's say I have a 10Kx10K image and my wxScrolledWindow has a size of 640x480. I load the whole image into a wxBitmap which I use in my paint function.
Now in my OnPaint function I just say
wxPaintDC dc(this);
dc.DrawBitmap(_Bitmap, 0, 0 );
This somewhat works for the first few paints, but soon the window content is out of order and artifacts appear. This happens very quickly when I move a scroll bar back and forth.
I use the latest wxWidgets on a Windows 7 machine.
So, how can I improve my painting code?
Many thanks,
Christian
Using a 10000x10000 wxBitmap is a bad idea on its own; it may simply fail to be created on an older system (that's about 400 MB of video RAM!). Drawing it entirely is sheer madness.
I don't know where your data comes from, but in a typical case of e.g. a map to be shown on screen, you should break it into tiles, convert the tiles that are currently visible on screen to a wxBitmap (or several of them), and draw only those.
Then you may optimize your drawing by using double buffering (which is relatively useless under Windows 7, which double-buffers everything on its own) and so on, but the key point is to use a reasonably-sized backing store bitmap. A sketch of the tiling idea follows.
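To illustrate, a minimal sketch (TILE and GetTileBitmap are hypothetical names, assuming a wxScrolledWindow like the question's):
const int TILE = 256;
void Canvas::OnPaint(wxPaintEvent& WXUNUSED(event))
{
    wxPaintDC dc(this);
    int x0, y0;
    CalcUnscrolledPosition(0, 0, &x0, &y0);   // visible top-left corner in image coordinates
    wxRect client = GetClientRect();
    // Draw only the tiles that intersect the visible area.
    for (int ty = (y0 / TILE) * TILE; ty < y0 + client.height; ty += TILE)
        for (int tx = (x0 / TILE) * TILE; tx < x0 + client.width; tx += TILE)
            dc.DrawBitmap(GetTileBitmap(tx, ty), tx - x0, ty - y0);
}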
This sounds like something that might be helped by using double buffering.
The first thing to try is to replace wxPaintDC with wxBufferedPaintDC.
For more suggestions, here is a wiki article on the subject: http://wiki.wxwidgets.org/Flicker-Free_Drawing
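A minimal sketch of that swap (hedged; wxBufferedPaintDC lives in <wx/dcbuffer.h>, and the background style usually has to be set so the default erase doesn't fight the buffer):
#include <wx/dcbuffer.h>
// In the window's constructor:
SetBackgroundStyle(wxBG_STYLE_PAINT);
void Canvas::OnPaint(wxPaintEvent& WXUNUSED(event))
{
    wxBufferedPaintDC dc(this);   // draws into a backing bitmap, blits once on destruction
    dc.DrawBitmap(_Bitmap, 0, 0);
}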
As Ravenspoint kindly pointed out, there is an article on the wxWidgets wiki. According to that article, two things need to happen. First, override EVT_ERASE_BACKGROUND with an empty function.
void Canvas::EraseBackground( wxEraseEvent& WXUNUSED(event))
{
}
And second, implement a basic double-buffering scheme. Here is how I did it.
void Canvas::OnPaint(wxPaintEvent& WXUNUSED(event))
{
int x, y;
GetViewStart(&x, &y);
wxRect Client_Area = GetClientRect();
int width = Client_Area.width;
int height = Client_Area.height;
wxBitmap Current = _Bitmap.GetSubBitmap(wxRect( x * 10, y * 10, width, height ));
wxPaintDC dc(this);
dc.DrawBitmap(Current, 0, 0, false );
}
My scroll rate for both x and y is set to 10; that's why I multiply the view start coordinates. A variant without the magic number is sketched below.
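As a side note (my assumption, not something from the thread): wxScrolledWindow can do the scroll-rate math for you via CalcUnscrolledPosition, which avoids hard-coding the 10:
int x0, y0;
CalcUnscrolledPosition(0, 0, &x0, &y0);   // accounts for the scroll rate automatically
wxBitmap Current = _Bitmap.GetSubBitmap(wxRect(x0, y0, width, height));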
Any more insight is very welcome.
Thanks,
Christian
I have a ZedGraph TextObj which I want to always appear at the same x, y position (in an ASP.NET image). I noticed that the text doesn't always start at the same x position; it shifts depending on the text's length. I tried to give the text a constant length by padding it with spaces. That helped a little, but the result is not always consistent. I am using PaneFraction for the coordType.
What's the proper method to make a piece of text always appear at the same x position? I am using a TextObj as a title because the native title property always shows up centered, and I need my title left-aligned to the graph.
No, it does not depend on the text length, however...
It depends on various other things:
Horizontal and vertical align of the text box (see: Location )
Current size of the pane. The font size is scaled dynamically to fit the changing size of the chart.
Calculating the proper position to keep a TextObj (or any other object) always at the same place is quite hard, so you should avoid, as much as you can, any magic numbers/fractions in your location coordinates. ZedGraph sometimes calculates the true position in quite an odd way.
You haven't provided any code, so it's hard to tell if and where you made a mistake (if any). But if I were you, I would do something like this:
TextObj fakeTitle = new TextObj("some title\n ", 0.0, 0.0); // I'm using \n to have additional line - this would give me some space, margin.
fakeTitle.Location.CoordinateFrame = CoordType.ChartFraction;
fakeTitle.Location.AlignH = AlignH.Left; // Left align - that's what you need
fakeTitle.Location.AlignV = AlignV.Bottom; // Bottom - it means, that left bottom corner of your object would be located at the left top corner of the chart (point (0,0))
fakeTitle.FontSpec.Border.IsVisible = false; // Disable the border
fakeTitle.FontSpec.Fill.IsVisible = false; // ... and the fill. You don't need it.
zg1.MasterPane[0].GraphObjList.Add(fakeTitle);
I'm using ChartFraction coordinates instead of PaneFraction (as drharris suggests) to have the title nicely aligned with the left border of the chart. Otherwise it would be flushed totally to the left side (no margin, etc.); it looks better this way.
But make sure you don't set too big a font size - it could get clipped at the top.
Are you using this constructor?
TextObj(text, x, y, coordType, alignH, alignV)
If not, be sure you're setting alignH to AlignH.Left and alignV to AlignV.Top. Then X and Y should be 0, 0. PaneFraction should be the correct coordType here, unless I'm missing your intent.
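For illustration, a hedged sketch using that constructor with the values suggested above (the title string is a placeholder):
TextObj title = new TextObj("My Title", 0.0, 0.0,
    CoordType.PaneFraction, AlignH.Left, AlignV.Top);
zg1.MasterPane[0].GraphObjList.Add(title);   // same list as in the other answer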
Alternatively, you can simply download the ZedGraph code, edit it to left-align the title (or even better, provide an option for this, which should have been done originally), and then use that in production. The beauty of open source.