How to correctly paint a huge image - wxWidgets

I'm fairly new to wxWidgets so please bear with me. Let's say I have a 10Kx10K image and my wxScrolledWindow has a size of 640x480. I load the whole image into a wxBitmap which I use in my paint function.
Now in my OnPaint function I just say
wxPaintDC dc(this);
dc.DrawBitmap(_Bitmap, 0, 0 );
This somewhat works for the first few paints, but soon the window content is out of order and artifacts appear. This happens very quickly when I move a scroll bar back and forth.
I use the latest wxWidgets on a Windows 7 machine.
So, how can I improve my painting code?
Many thanks,
Christian

Using a 10000x10000 wxBitmap is a bad idea on its own; it may simply fail to be created on an older system (that's nearly 400 MB of video RAM!). Drawing it in its entirety is sheer madness.
I don't know where your data comes from, but in a typical case of e.g. a map to be shown on screen, you should break it into tiles, convert the tiles that are currently visible on screen to wxBitmap (or several of them), and draw only those.
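As a rough sketch of what that can look like (the 256-pixel tile size and the LoadTile() helper are hypothetical; they stand for however you cut up and cache your source data):
void Canvas::OnPaint(wxPaintEvent& WXUNUSED(event))
{
    wxPaintDC dc(this);
    DoPrepareDC(dc);  // shift the DC origin by the scroll offset

    // Visible area in unscrolled (image) coordinates.
    wxPoint origin = CalcUnscrolledPosition(wxPoint(0, 0));
    wxSize client = GetClientSize();

    const int TILE = 256;  // hypothetical tile edge, in pixels
    for ( int ty = origin.y / TILE; ty * TILE < origin.y + client.GetHeight(); ty++ )
    {
        for ( int tx = origin.x / TILE; tx * TILE < origin.x + client.GetWidth(); tx++ )
        {
            // LoadTile() is assumed to return (and cache) a TILE x TILE
            // wxBitmap cut out of the big image.
            dc.DrawBitmap(LoadTile(tx, ty), tx * TILE, ty * TILE);
        }
    }
}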
You may then optimize your drawing further, e.g. by using double buffering (which is relatively useless under Windows 7, which already double buffers everything on its own), but in any case you should be working with a reasonably sized backing-store bitmap.

This sounds like something that might be helped by using double buffering.
The first thing to try is replacing wxPaintDC with wxBufferedPaintDC.
For more suggestions, here is a wiki article on the subject: http://wiki.wxwidgets.org/Flicker-Free_Drawing
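The swap itself is a one-line change. A minimal sketch (assuming the window's background style is set to wxBG_STYLE_PAINT, or the erase event is handled, so the system doesn't erase the window behind the buffer):
#include <wx/dcbuffer.h>

void Canvas::OnPaint(wxPaintEvent& WXUNUSED(event))
{
    // Draws into an off-screen bitmap and blits it to the window in one go
    // when the DC is destroyed, which avoids flicker.
    wxBufferedPaintDC dc(this);
    dc.DrawBitmap(_Bitmap, 0, 0);
}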

As Ravenspoint kindly pointed out, there is an article on the wxWidgets wiki. According to that article, two things need to happen. First, override EVT_ERASE_BACKGROUND with an empty handler:
void Canvas::EraseBackground( wxEraseEvent& WXUNUSED(event))
{
}
And second, implement a basic double-buffering scheme. Here is how I did it:
void Canvas::OnPaint(wxPaintEvent& WXUNUSED(event))
{
    int x, y;
    GetViewStart(&x, &y);  // scroll position, in scroll units

    wxRect Client_Area = GetClientRect();
    int width = Client_Area.width;
    int height = Client_Area.height;

    // The view start is in scroll units; with a scroll rate of 10 pixels per
    // unit, multiply by 10 to get pixels. Clamp the rectangle to the bitmap,
    // since GetSubBitmap() must not be called with an out-of-bounds rect.
    wxRect Sub(x * 10, y * 10, width, height);
    Sub.Intersect(wxRect(_Bitmap.GetSize()));
    wxBitmap Current = _Bitmap.GetSubBitmap(Sub);

    wxPaintDC dc(this);
    dc.DrawBitmap(Current, 0, 0, false);
}
My scroll rate for both x and y is set to 10. That's why I multiply the view start coordinates.
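One small refinement on top of this (my own suggestion, not from the wiki article): instead of hard-coding the factor of 10, ask the window for its scroll rate, so the paint code survives a later change to SetScrollRate():
int xUnit, yUnit;
GetScrollPixelsPerUnit(&xUnit, &yUnit);  // the values passed to SetScrollRate()
wxRect Sub(x * xUnit, y * yUnit, width, height);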
Any more insight is very welcome.
Thanks,
Christian

Related

libgdx camera position using viewport

I am a rather experienced libgdx developer, but I have been struggling with one issue for some time, so I decided to ask here.
I use FillViewport, TiledMap, Scene2d and an OrthographicCamera. I want the camera to follow my player instance, but within defined bounds (equal to the map size). That means the camera never leaves the map: when the player gets near an edge of the map, the camera stops following and the player walks to the edge of the screen itself. It may sound complicated, but it's simple and I am sure you know what I mean; it's used in every game.
I calculated 4 values:
minCameraX = camera.viewportWidth / 2;
minCameraY = camera.viewportHeight / 2;
maxCameraX = mapSize.x - camera.viewportWidth / 2;
maxCameraY = mapSize.y - camera.viewportHeight / 2;
I removed unnecessary stuff like unit conversion, camera.zoom, etc. Then I set the camera position like this:
camera.position.set(Math.min(maxCameraX, Math.max(posX, minCameraX)), Math.min(maxCameraY, Math.max(posY, minCameraY)), 0);
(posX, posY is the player position.) This basically sets the camera to the player position, but if it's too high or too low it is clamped to the max or min defined above for that axis. (I also tried MathUtils.clamp() and it works the same.)
Everything is perfect up to this point. The problem occurs when the aspect ratio changes. By default I use 1280x768, but my phone has 1280x720. Because of how FillViewport works, the top and bottom edges of the screen are cut off, and with them part of my map.
I tried modifying the maximums and minimums, calculating the difference in ratios and adding it to the calculations, changing the camera size, different viewports and some other things, but without success.
Can you guys help?
Thanks
I tried the solutions of noone and Tenfour04 from the comments above. Neither is perfect, but I am satisfied enough, I guess:
noone:
camera.position.x = MathUtils.clamp(camera.position.x, screenWidth/2 + leftGutter, UnitConverter.toBox2dUnits(mapSize.x) - screenWidth/2 + rightGutter);
camera.position.y = MathUtils.clamp(camera.position.y, screenHeight/2 + bottomGutter, UnitConverter.toBox2dUnits(mapSize.y) - screenHeight/2 - topGutter);
However, it worked only for a small range of resolutions. For unusual resolutions whose aspect ratio differs a lot from the default one, I saw white stripes past the border. That means the whole border was drawn, plus some part of the world outside the border. I don't know why.
Tenfour04:
I changed the viewport to ExtendViewport. Nothing is cut off, but at different aspect ratios I can also see the world outside the borders.
The solution for both is to clear the screen with the same color as the border and to clear the level background separately, which gave a satisfying effect in both cases.
It still has some limitations. As the border is part of the world (tiled blocks), it's fine when it has a single color. If the border had different colors, rendering one color outside the borders wouldn't be a solution.
Thanks noone and Tenfour04, and I am still open to suggestions :)
Here are some screenshots:
https://www.dropbox.com/sh/00h947wkzo73zxa/AAADHehAF4WI8aJ8bu4YzB9Va?dl=0
Why don't you use FitViewport instead of FillViewport? That way it won't cut off your screen, right?
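For reference, a minimal sketch of that suggestion (the 1280x768 virtual size is taken from the question; FitViewport letterboxes instead of cropping, so nothing is cut off):
// In create():
OrthographicCamera camera = new OrthographicCamera();
Viewport viewport = new FitViewport(1280, 768, camera);  // adds black bars instead of cropping

// In resize(int width, int height):
viewport.update(width, height);  // recompute the letterboxing for the new window size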
It is a little bit late, but I have a solution for you without compromises!
Here width and height are the world size in pixels. I use this code with FillViewport and everything works excellently!
float playerX = player.getBody().getPosition().x * PPM;
float playerY = player.getBody().getPosition().y * PPM;

// Half of the world that is actually visible: half the viewport's world size
// plus its screen offset converted into world units (the offset is negative
// when FillViewport crops an edge off-screen).
float visibleW = viewport.getWorldWidth() / 2 + (float) viewport.getScreenX() / (float) viewport.getScreenWidth() * viewport.getWorldWidth();
float visibleH = viewport.getWorldHeight() / 2 + (float) viewport.getScreenY() / (float) viewport.getScreenHeight() * viewport.getWorldHeight();

// Clamp the camera so it never shows anything outside the world.
float cameraPosX;
float cameraPosY;
if (playerX < visibleW) {
    cameraPosX = visibleW;
} else if (playerX > width - visibleW) {
    cameraPosX = width - visibleW;
} else {
    cameraPosX = playerX;
}
if (playerY < visibleH) {
    cameraPosY = visibleH;
} else if (playerY > height - visibleH) {
    cameraPosY = height - visibleH;
} else {
    cameraPosY = playerY;
}
camera.position.set(cameraPosX, cameraPosY, 0);
camera.update();

How to zoom a PDF to the mouse position in JavaFX 2

I have to zoom a PDF file that's inside a ScrollPane.
The ScrollPane itself is inside a StackPane.
Initially I scale my PDF to fit the width of my ScrollPane. As a result, the PDF's height doesn't fit the ScrollPane's height.
I already managed to zoom by changing my scaleFactor on mouse-wheel input. Unfortunately I can't zoom in on a specific point.
I guess I have to change the ScrollPane's values depending on the mouse coordinates, but I just can't find the correct calculation. Can somebody please help me?
For example I tried
scrollPane.setVvalue(e.getY() / scrollPane.getHeight())
With this line of code my view just jumps up or down, depending on whether I click on the upper bound or the lower bound of my viewport.
I understand that it has to behave like that, but I can't figure out what has to be added/changed.
I use JPedal to display my PDF.
Hope you understand what I am looking for.
Tell me if you need more information.
Edit:
Here is a snippet of how I managed dragging.
eventRegion.addEventFilter(MouseEvent.MOUSE_PRESSED, e -> {
    dragStartX = e.getX();
    dragStartY = e.getY();
});
eventRegion.addEventFilter(MouseEvent.MOUSE_DRAGGED, e -> {
    double deltaX = dragStartX - e.getX();
    double deltaY = dragStartY - e.getY();
    scrollPane.setHvalue(Math.min(scrollPane.getHvalue() + deltaX / scrollPane.getWidth(), scrollPane.getHmax()));
    scrollPane.setVvalue(Math.min(scrollPane.getVvalue() + deltaY / scrollPane.getHeight(), scrollPane.getVmax()));
    e.consume();
});
I think zooming to the mouse position could be done in a similar way, by just setting the Hvalue and Vvalue.
Any ideas how I can calculate these values?
This example has JavaFX 8 code for a zoomable, pannable ScrollPane with zoom to mouse pointer, reset zoom and fit to width of a rectangle which can really be any Node. Be sure to check out the answer to the question to get fitWidth() to work correctly. I am using this solution for an ImageView now, and it is slick.
Just for all the related questions about "zooming where the mouse is": I had the same problem, and I came up with the following code snippet.
public void setZoom(final double x, final double y, final double factor) {
    // save the point before scaling
    final Point2D sceneToLocalPointBefore = this.sceneToLocal(x, y);
    // do scale
    this.setScaleX(factor);
    this.setScaleY(factor);
    // save the point after scaling
    final Point2D sceneToLocalPointAfter = this.sceneToLocal(x, y);
    // calculate the difference of before and after the scale
    final Point2D diffMousePoint = sceneToLocalPointBefore.subtract(sceneToLocalPointAfter);
    // translate the pane in order to point where the mouse is
    this.setTranslateX(this.getTranslateX() - diffMousePoint.getX() * this.getScaleX());
    this.setTranslateY(this.getTranslateY() - diffMousePoint.getY() * this.getScaleY());
}
The basic idea is to move the underlying Pane back to the point where it was before scaling. The important part is that we convert the mouse position into the local coordinate system of the Pane. After scaling we do this a second time and calculate the difference. Once we know the difference, we can move the Pane back. I think this solution is very easy and straightforward.
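As a usage sketch (the 1.1-per-notch zoom step is my own choice, and zoomPane stands for an instance of whatever Pane subclass defines setZoom(); note that setZoom() expects scene coordinates, since it calls sceneToLocal()):
zoomPane.addEventFilter(ScrollEvent.SCROLL, e -> {
    // Wheel up zooms in, wheel down zooms out, centred on the mouse position.
    double factor = zoomPane.getScaleX() * (e.getDeltaY() > 0 ? 1.1 : 1 / 1.1);
    zoomPane.setZoom(e.getSceneX(), e.getSceneY(), factor);
    e.consume();
});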
My setup in JavaFX is as follows: I have a javafx.scene.layout.BorderPane as the root of my javafx.scene.Scene. In the center I put a Pane. This is the Pane I act on (i.e. put other Nodes in, zoom, move, etc.). If anyone is interested in how I actually did it, just mail me.
Good programming!

DirectX 11.1 Disable the depth buffer

This question relates to a previous question I have asked.
I have a series of 48 textures on flat square meshes that I am rendering, and they all combine to form one "scene." Each has a large percentage of transparency with one or two smaller images, and when they are lined up, I should be able to see the full scene. I expected this to work without much issue, but when I went to test it, I see only the top-most texture, and anywhere it has transparency, there is just the clear color.
At first, I thought it was an issue with how I was loading the image and somehow was disabling the alpha, but after playing around with the clear color, I realized that there was some transparency.
The second thing I tried was enabling blending - this works if all the textures are combined on a single z plane.
I have posted my image loading and blending code on the question I linked to above.
Now I am starting to think it may be an issue with the depth buffer, so I added the following code to my window dependent resources:
Microsoft::WRL::ComPtr<ID3D11DepthStencilState> DepthDefault;
D3D11_DEPTH_STENCIL_DESC depthstencilDesc;
ZeroMemory(&depthstencilDesc, sizeof(depthstencilDesc));
depthstencilDesc.DepthEnable = FALSE;
depthstencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthstencilDesc.DepthFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.StencilEnable = FALSE;
depthstencilDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilState(&depthstencilDesc, DepthDefault.GetAddressOf() ) );
direct3d.context->OMSetDepthStencilState(DepthDefault.Get(), 0);
Even with this code, I am only seeing the topmost layer. Am I missing something, or am I setting something incorrectly?
Edit: To visualize the problem, it's as if I had 48 panes of glass that are all the same size and are lined up in a row. Each pane has one image somewhere on it. When you look through all the glass panes, you get one awesome image of all the smaller images combined. For me, DirectX or the pixel shader is only drawing the first glass pane and filling all the transparency of the first pane with the clear/background color.
Edit: The code I'm using to create the depthstencilview:
CD3D11_TEXTURE2D_DESC depthStencilDesc( DXGI_FORMAT_D24_UNORM_S8_UINT, backBufferDesc.Width, backBufferDesc.Height, 1, 1, D3D11_BIND_DEPTH_STENCIL );
ComPtr<ID3D11Texture2D> depthStencil;
DX::ThrowIfFailed( direct3d.device->CreateTexture2D( &depthStencilDesc, nullptr, &depthStencil ) );
auto viewDesc = CD3D11_DEPTH_STENCIL_VIEW_DESC(D3D11_DSV_DIMENSION_TEXTURE2D);
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilView( depthStencil.Get(), &viewDesc, &direct3d.depthStencil ) );
That code is literally right above my depth test / D3D11_DEPTH_STENCIL_DESC code. I'm presuming that this is what creates the depth buffer.
I think you might need to sort the order in which you render your vertices if you want to render semi-transparent geometry with a depth buffer. If you don't want to use a depth buffer, perhaps just don't define/create/set it?
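For the sorting route, a minimal sketch (my own illustration, with a hypothetical Quad struct; the draw call itself is elided): recompute each pane's view-space depth every frame, then draw back to front with blending on:
#include <algorithm>
#include <vector>

struct Quad
{
    float viewSpaceZ;  // distance from the camera, recomputed each frame
    // ... vertex buffer, shader resource view, etc.
};

void DrawTransparentQuads(std::vector<Quad>& quads)
{
    // Painter's algorithm: farthest first, so nearer panes blend over them.
    std::sort(quads.begin(), quads.end(),
              [](const Quad& a, const Quad& b) { return a.viewSpaceZ > b.viewSpaceZ; });
    for (const Quad& q : quads)
    {
        // issue the draw call for q here (e.g. context->DrawIndexed(...))
    }
}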

How do I analyze a video stream on iOS?

For example, there are QR scanners which scan a video stream in real time and extract QR code info.
I would like to check a light source in the video, whether it is on or off; it is quite powerful, so that is no problem.
I will probably take a video stream as input, maybe make images of it, and analyze the images or the stream in real time for the presence of the light source (maybe the number of pixels of a certain color in the image?).
How do I approach this problem? Maybe there is some sort of library for this?
It sounds like you are asking for information about several discrete steps. There are a multitude of ways to do each of them, and if you get stuck on any individual step, it would be a good idea to post a question about it individually.
1: Get a video frame
Like chaitanya.varanasi said, the AVFoundation framework is the best way of getting access to a video frame on iOS. If you want something less flexible and quicker, try looking at OpenCV's video capture. The goal of this step is to get access to a pixel buffer from the camera. If you have trouble with this, ask about it specifically.
2: Put pixel buffer into OpenCV
This part is really easy. If you get it from OpenCV's video capture, you are already done. If you get it from AVFoundation, you will need to put it into OpenCV like this:
// Buffer is of type CVImageBufferRef, which is what AVFoundation should be giving you.
// I assume it is BGRA or RGBA formatted; if it isn't, change CV_8UC4 to the appropriate format.
CVPixelBufferLockBaseAddress( Buffer, 0 );
size_t bufferWidth = CVPixelBufferGetWidth( Buffer );
size_t bufferHeight = CVPixelBufferGetHeight( Buffer );
size_t bytesPerRow = CVPixelBufferGetBytesPerRow( Buffer );  // rows may be padded past width*4
unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress( Buffer );
// Wrap the buffer in a cv::Mat; no memory is copied.
cv::Mat image = cv::Mat( (int)bufferHeight, (int)bufferWidth, CV_8UC4, pixel, bytesPerRow );
// Process image here
// End processing
CVPixelBufferUnlockBaseAddress( Buffer, 0 );
Note: I am assuming you plan to do this in OpenCV since you used its tag. I also assume you can get the OpenCV framework to link into your project. If that is an issue, ask a specific question about it.
3: Process Image
This part is by far the most open-ended. All you have said about your problem is that you are trying to detect a strong light source. One very quick and easy way of doing that would be to compute the mean pixel value of a greyscale image. If you get the image in colour, you can convert it with cvtColor. Then just call cv::mean() on it to get the mean value. Hopefully you can tell whether the light is on by how that value fluctuates.
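A minimal sketch of that check, assuming image is the BGRA cv::Mat from step 2 and that a mean brightness of 200 (a guess, to be tuned) separates light-on from light-off:
cv::Mat grey;
cv::cvtColor(image, grey, cv::COLOR_BGRA2GRAY);  // drop colour, keep brightness

double meanBrightness = cv::mean(grey)[0];  // average over all pixels, 0..255
bool lightIsOn = meanBrightness > 200.0;    // threshold is a guess; tune it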
chaitanya.varanasi suggested another option, you should check it out too.
OpenCV is a very large library that can do a wide variety of things. Without knowing more about your problem, I don't know what else to tell you.
Look at the AVFoundation Framework from Apple.
Hope it helps!
You can try this method: start by sending all frames to an AVCaptureVideoDataOutput. From the delegate method captureOutput:didOutputSampleBuffer:fromConnection: you can sample/calculate every pixel. Source: answer
Also, you can take a look at this SO question where they check if a pixel is black. If it's such a powerful light source, you can take the inverse of the pixel and then determine it using a set threshold for black.
The above sample code only provides access to the pixel values stored in the buffer; you cannot run any commands other than those that change those values on a pixel-by-pixel basis:
for ( uint32_t y = 0; y < height; y++ )
{
    for ( uint32_t x = 0; x < width; x++ )
    {
        bgraImage.at<cv::Vec<uint8_t,4> >(y, x)[1] = 0;  // zero channel 1 (green, in BGRA order)
    }
}
So, to use your example, this will not work with the code you provided:
cv::Mat bgraImage = cv::Mat( (int)height, (int)extendedWidth, CV_8UC4, base );
cv::Mat grey = bgraImage.clone();
cv::cvtColor(grey, grey, 44);

Unity 3D Physics

I'm having trouble with physics in Unity 3D. I want my ball to bounce off walls and go in another direction, but when the ball hits a wall it just bounces straight back. I have tried changing the direction to be orthogonal to the direction it hits the wall, but it doesn't change direction. Because of this, the ball just keeps hitting the wall and bouncing straight back.
Secondly, sometimes the ball goes through the wall. The walls have box colliders while the ball has a sphere collider. They all have continuous dynamic as the collision detection mode.
Here's a link to a similar thread:
http://forum.unity3d.com/threads/22063-I-shot-an-arrow-up-in-the-air...?highlight=shooting+arrow
Personally, I would code the rotation using LookAt as GargarathSunman suggests in this link, but if you want to do it with physics, you'll probably need to build the javelin in at least a couple of parts, as the others suggest in the link, and add different drag and angular drag values to each part, perhaps density as well. If you threw a javelin in a vacuum, it would never land point down, because air drag plays such an important part (all things fall at the same rate regardless of mass, thank you Sir Isaac Newton). It's a difficult simulation for the physics engine to get right.
Maybe try getting the contact point between your sphere and your wall, then take your rigidbody's velocity and reflect it about the contact point's normal.
An example of a script to do that (put this script on a wall with a collider):
C# script:
public class WallBumper : MonoBehaviour
{
    private Vector3 _revertDirection;
    public int speedReflectionVector = 2;

    /***********************************************
     * name : OnCollisionEnter
     * return type : void
     * Makes every GameObject with a Rigidbody bounce against this platform
     ***********************************************/
    void OnCollisionEnter(Collision e)
    {
        ContactPoint cp = e.contacts[0];
        // Reflect the incoming velocity about the contact normal.
        _revertDirection = Vector3.Reflect(e.rigidbody.velocity, cp.normal);
        e.rigidbody.velocity = _revertDirection.normalized * speedReflectionVector;
    }
}
I recently had an issue with a rocket going through targets due to its speed; even with continuous dynamic collision detection, I couldn't keep this from happening a lot.
I solved this using the "DontGoThroughThings" script posted on wiki.unity3d.com. It raycasts between the current and previous positions and then makes sure the frame ends with the colliders in contact, so the OnTrigger messages fire. It has worked every time since, and it's just a matter of attaching the script, so it's super easy to use.
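The core of that technique fits in a few lines. A minimal sketch (my own reduction, not the wiki script itself; attach it to the fast-moving body):
using UnityEngine;

// Raycasts over the distance travelled each physics step, so a collider
// can't be skipped when the body moves farther than its own size per frame.
public class DontSkipCollisions : MonoBehaviour
{
    private Vector3 _previousPosition;

    void Start()
    {
        _previousPosition = transform.position;
    }

    void FixedUpdate()
    {
        Vector3 movement = transform.position - _previousPosition;
        RaycastHit hit;
        if (movement.sqrMagnitude > 0f &&
            Physics.Raycast(_previousPosition, movement.normalized, out hit, movement.magnitude))
        {
            // Something was crossed this step: snap back to the hit point so
            // the usual OnTrigger/OnCollision logic can run there.
            transform.position = hit.point;
        }
        _previousPosition = transform.position;
    }
}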
I think the physics answer is, as others have suggested, to use multiple components with different drag, although typically you only want a single Rigidbody, on the parent. Instead of setting the direction with transform.LookAt, you could calculate it using Quaternion.LookRotation from the rigidbody's velocity, then use Vector3.Angle to find out how far off you are. The greater the angle difference, the more force should be experienced; then use Rigidbody.AddTorque. Maybe use Sin(angleDifference) * a constant, so less torque is applied as you approach the proper rotation.
Here is some code I used on my rocket, although you'll have to substitute some things, as I was pointing toward a fixed target and you'll want to use your velocity.
var rotationDirection = Quaternion.LookRotation(lockedTarget.transform.position - this.transform.position);
var anglesToGo = Vector3.Angle(this.transform.rotation.eulerAngles, rotationDirection.eulerAngles);
if (anglesToGo > RotationVelocity)
{
    var rotationDirectionToMake = (rotationDirection * Quaternion.Inverse(this.transform.rotation)).eulerAngles.normalized * RotationVelocity;
    transform.Rotate(rotationDirectionToMake);
}