How to get mouse position on mouse wheel event

Unlike SDL_MouseMotionEvent, SDL_MouseWheelEvent does not provide the mouse location (its x and y fields hold the horizontal and vertical scroll amounts instead).
SDL provides SDL_GetMouseState() to retrieve the current mouse position, but it is not expressed in the same coordinate system:
SDL_Event event;
while (SDL_WaitEvent(&event)) {
    switch (event.type) {
        case SDL_MOUSEMOTION: {
            int x, y;
            SDL_GetMouseState(&x, &y);
            printf("event=(%d, %d) state=(%d, %d)\n",
                   event.motion.x, event.motion.y, x, y);
            break;
        }
    }
}
When I move the mouse, it prints something like:
event=(700, 184) state=(479, 126)
event=(702, 175) state=(480, 120)
event=(706, 168) state=(485, 111)
It seems that the motion event is expressed relative to the texture or renderer (which is scaled and centered in the window), while the state is expressed relative to the window, in pixels.
Is there a way to get the current mouse state expressed in the same coordinate system as the positions filled in mouse events?

I ended up converting the coordinates manually:
void convert_to_renderer_coordinates(SDL_Renderer *renderer, int *x, int *y) {
    SDL_Rect viewport;
    float scale_x, scale_y;
    SDL_RenderGetViewport(renderer, &viewport);
    SDL_RenderGetScale(renderer, &scale_x, &scale_y);
    *x = (int) (*x / scale_x) - viewport.x;
    *y = (int) (*y / scale_y) - viewport.y;
}
I use this function to convert the mouse state coordinates:
SDL_Event event;
while (SDL_WaitEvent(&event)) {
    switch (event.type) {
        case SDL_MOUSEMOTION: {
            int x, y;
            SDL_GetMouseState(&x, &y);
            convert_to_renderer_coordinates(renderer, &x, &y);
            printf("event=(%d, %d) state=(%d, %d)\n",
                   event.motion.x, event.motion.y, x, y);
            break;
        }
    }
}
Now, they match:
event=(1033, 14) state=(1033, 14)
event=(1034, 13) state=(1034, 13)
event=(1034, 11) state=(1036, 10) // this is racy, state already has a new position
event=(1036, 10) state=(1036, 10)
The mouse position is captured when the wheel event is handled (instead of when it is generated), so it is racy. But I think we can't do better with the current API.
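For the wheel event itself, the same conversion applies. A minimal sketch, assuming renderer is the SDL_Renderer the scene is drawn with:
SDL_Event event;
while (SDL_WaitEvent(&event)) {
    switch (event.type) {
        case SDL_MOUSEWHEEL: {
            int x, y;
            /* Queried at handling time, so subject to the race described above. */
            SDL_GetMouseState(&x, &y);
            convert_to_renderer_coordinates(renderer, &x, &y);
            printf("scroll=(%d, %d) at position=(%d, %d)\n",
                   event.wheel.x, event.wheel.y, x, y);
            break;
        }
    }
}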


Can I use a QPainter to draw a line with a per-vertex color?

I have a custom Qt 5 widget that renders itself using QPainter. I would like to be able to draw a line where each vertex is associated with a different color, and the color is interpolated accordingly along the lines joining the points. Is this possible?
I think you'll need to perform the drawing on a line-by-line basis. Assuming that's acceptable, a QPen initialized with a suitable QLinearGradient should work...
#include <QWidget>
#include <QPainter>
#include <QPaintEvent>
#include <QLinearGradient>
#include <QPen>
#include <QVector>
#include <QPair>

class widget: public QWidget {
    using super = QWidget;
public:
    explicit widget (QWidget *parent = nullptr)
        : super(parent)
    {
    }
protected:
    virtual void paintEvent (QPaintEvent *event) override
    {
        super::paintEvent(event);
        QPainter painter(this);
        /*
         * Define the corners of a rectangle lying 10 pixels inside
         * the current bounding rect.
         */
        int left = 10, right = width() - 10;
        int top = 10, bottom = height() - 10;
        QPoint top_left(left, top);
        QPoint top_right(right, top);
        QPoint bottom_right(right, bottom);
        QPoint bottom_left(left, bottom);
        /*
         * Insert the points along with their required colours into
         * a suitable container.
         */
        QVector<QPair<QPoint, QColor>> points;
        points << qMakePair(top_left, QColor(Qt::red));
        points << qMakePair(top_right, QColor(Qt::green));
        points << qMakePair(bottom_right, QColor(Qt::blue));
        points << qMakePair(bottom_left, QColor(Qt::black));
        for (int i = 0; i < points.size(); ++i) {
            int e = (i + 1) % points.size();
            /*
             * Create a suitable linear gradient based on the colours
             * required for vertices indexed by i and e.
             */
            QLinearGradient gradient;
            gradient.setColorAt(0, points[i].second);
            gradient.setColorAt(1, points[e].second);
            gradient.setStart(points[i].first);
            gradient.setFinalStop(points[e].first);
            /*
             * Set the pen and draw the line.
             */
            painter.setPen(QPen(QBrush(gradient), 10.0f));
            painter.drawLine(points[i].first, points[e].first);
        }
    }
};
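For completeness, a hypothetical way to show the widget (not part of the original answer):
#include <QApplication>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    widget w;
    w.resize(400, 300);
    w.show();
    return app.exec();
}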
The above results in a rectangle whose edges blend from one vertex colour to the next (screenshot omitted).
(Note: there may be a better way to achieve this using QPainterPath and QPainterPathStroker, but I'm not sure based on the docs I've looked at.)

Sprite Smooth movement and facing position according to movement

I'm trying to make keyboard-controlled movement work with some sprites, and I'm stuck on two problems.
1) The character's movement does not match its animation: it only starts moving after a second or so, while the animation is already playing. What I really want is for it to move without the "initial acceleration" feeling this delay produces.
2) I can't think of a way to make the character face the direction it should be facing when the key is released. I'll post the code here, but since it needs images to work correctly and is not so small, I made a sketch available at this link if you want to check it out: https://www.openprocessing.org/sketch/439572
PImage[] reverseRun = new PImage[16];
PImage[] zeroArray = new PImage[16];

void setup() {
  size(800, 600);
  // Right facing
  for (int i = 0; i < zeroArray.length; i++) {
    zeroArray[i] = loadImage(i + ".png");
    zeroArray[i].resize(155, 155);
  }
  // Left facing
  for (int z = 0; z < reverseRun.length; z++) {
    reverseRun[z] = loadImage("mirror" + z + ".png");
    reverseRun[z].resize(155, 155);
  }
}

void draw() {
  frameRate(15);
  background(255);
  imageMode(CENTER);
  if (x > width + 10) {
    x = 0;
  } else if (x < -10) {
    x = width;
  }
  if (i >= zeroArray.length) {
    i = 3; // loop to generate constant motion
  }
  if (z >= reverseRun.length) {
    z = 3; // loop to generate constant motion
  }
  if (isRight) {
    image(zeroArray[i], x, 300); // step through the right-facing frames
    i++;
  } else if (isLeft) {
    image(reverseRun[z], x, 300); // step through the left-facing frames
    z++;
  } else if (!isRight) {
    image(zeroArray[i], x, 300); // "stopped" sprite
    i = 0;
  }
}
// Movement state
float x = 300;
float y = 300;
int i = 0;
int z = 0;
float speed = 25;
boolean isLeft, isRight, isUp, isDown;

void keyPressed() {
  setMove(keyCode, true);
  if (isLeft) {
    x -= speed;
  }
  if (isRight) {
    x += speed;
  }
}

void keyReleased() {
  setMove(keyCode, false);
}

boolean setMove(int k, boolean b) {
  switch (k) {
    case UP:
      return isUp = b;
    case DOWN:
      return isDown = b;
    case LEFT:
      return isLeft = b;
    case RIGHT:
      return isRight = b;
    default:
      return b;
  }
}
The movement problem is caused by your operating system setting a delay between key presses. Try this out by going to a text editor and holding down a key. You'll notice that a character shows up immediately, followed by a delay, followed by the character repeating until you release the key.
That delay is also happening between calls to the keyPressed() function. And since you're moving the character (by modifying the x variable) inside the keyPressed() function, you're seeing a delay in the movement.
The solution to this problem is to check which key is pressed instead of relying solely on the keyPressed() function. You could use the keyCode variable inside the draw() function, or you could keep track of which key is pressed using a set of boolean variables.
Note that you're actually already doing that with the isLeft and isRight variables. But you're only checking them in the keyPressed() function, which defeats the purpose of them because of the problem I outlined above.
In other words, move this block from the keyPressed() function so it's inside the draw() function instead:
if (isLeft) {
  x -= speed;
}
if (isRight) {
  x += speed;
}
As for knowing which way to face when the character is not moving, you could do that using another boolean value that keeps track of which direction you're facing.
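For example, a minimal sketch of that idea (the facingRight field and the idle frames chosen are hypothetical, not from the original code):
boolean facingRight = true; // remembers the last direction moved

void draw() {
  // ... animation and movement code as above ...
  if (isLeft) {
    facingRight = false;
  }
  if (isRight) {
    facingRight = true;
  }
  if (!isLeft && !isRight) {
    // Standing still: draw the idle frame for the last direction faced.
    image(facingRight ? zeroArray[0] : reverseRun[0], x, 300);
  }
}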
Side note: you should really try to properly indent your code, as right now it's pretty hard to read.
Shameless self-promotion: I wrote a tutorial on user input in Processing available here.

In Kinect SDK v2.0 how do I map a pixel in the color image to a voxel in the depth image?

I know how to go the other way around. What I am looking for is: given an (x, y) coordinate in the pixel space of the 1920x1080 color image, how do I get the corresponding (x, y, z) (in meters) in the depth image, if one is available? I realize that there are more color pixels than depth voxels, so it is possible not to find a match. Microsoft's SDK has a CoordinateMapper class, which exposes the MapColorFrameToCameraSpace function. If I use that, I can get an array of points in camera space (x, y, z), but I am unable to figure out how to extract the mapping for a specific pixel.
You probably need to use CoordinateMapper.MapDepthFrameToColorSpace to find the color-space coordinates of all depth points.
Then compare your pixel coordinate (x, y) with those color-space coordinates. My solution is to find the closest point (there might be a better way), because the mapped coordinates are floats.
If you use C#, here is the code. Hope it helps!
private ushort GetDepthValueFromPixelPoint(KinectSensor kinectSensor, ushort[] depthData, float pixelX, float pixelY)
{
    ushort depthValue = 0;
    if (null != depthData)
    {
        // Map every depth pixel (512x424) to its color-space coordinates.
        ColorSpacePoint[] depP = new ColorSpacePoint[512 * 424];
        kinectSensor.CoordinateMapper.MapDepthFrameToColorSpace(depthData, depP);
        int depthIndex = FindClosestIndex(depP, pixelX, pixelY);
        if (depthIndex < 0)
            Console.WriteLine("-1");
        else
        {
            depthValue = depthData[depthIndex];
            Console.WriteLine(depthValue);
        }
    }
    return depthValue;
}
private int FindClosestIndex(ColorSpacePoint[] depP, float pixelX, float pixelY)
{
    int depthIndex = -1;
    float closestPoint = float.MaxValue;
    for (int j = 0; j < depP.Length; ++j)
    {
        float dis = DistanceOfTwoPoints(depP[j], pixelX, pixelY);
        if (dis < closestPoint)
        {
            closestPoint = dis;
            depthIndex = j;
        }
    }
    return depthIndex;
}

private float DistanceOfTwoPoints(ColorSpacePoint colorSpacePoint, float pixelX, float pixelY)
{
    float x = colorSpacePoint.X - pixelX;
    float y = colorSpacePoint.Y - pixelY;
    return (float)Math.Sqrt(x * x + y * y);
}
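A hypothetical usage example (assuming the raw depth frame was copied into depthData with DepthFrame.CopyFrameDataToArray, and sensor is the open KinectSensor):
// Depth value (in millimetres) under color pixel (960, 540); 0 if none was found.
ushort depth = GetDepthValueFromPixelPoint(sensor, depthData, 960f, 540f);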

Microsoft Kinect SDK depth data to real world coordinates

I'm using the Microsoft Kinect SDK to get the depth and color information from a Kinect and then convert that information into a point cloud. I need the depth information to be in real world coordinates with the centre of the camera as the origin.
I've seen a number of conversion functions, but these are apparently for OpenNI and non-Microsoft drivers. I've read that the depth information coming from the Kinect is already in millimetres, and is contained in 11 bits... or something.
How do I convert this bit information into real world coordinates that I can use?
Thanks in advance!
This is catered for within the Kinect for Windows library using the Microsoft.Research.Kinect.Nui.SkeletonEngine class, and the following method:
public Vector DepthImageToSkeleton(
    float depthX,
    float depthY,
    short depthValue
)
This method maps a point in the depth image to a vector in skeleton space, which is based on real-world measurements.
From there (when I've created a mesh in the past), after enumerating the byte array in the bitmap created by the Kinect depth image, you create a new list of Vector points similar to the following:
var width = image.Image.Width;
var height = image.Image.Height;
var greyIndex = 0;
var points = new List<Vector>();

for (var y = 0; y < height; y++)
{
    for (var x = 0; x < width; x++)
    {
        short depth;
        switch (image.Type)
        {
            case ImageType.DepthAndPlayerIndex:
                // The low 3 bits hold the player index; shift them out.
                depth = (short)((image.Image.Bits[greyIndex] >> 3) | (image.Image.Bits[greyIndex + 1] << 5));
                if (depth <= maximumDepth)
                {
                    points.Add(nui.SkeletonEngine.DepthImageToSkeleton((float)x / image.Image.Width, (float)y / image.Image.Height, (short)(depth << 3)));
                }
                break;
            case ImageType.Depth: // depth comes back mirrored
                depth = (short)(image.Image.Bits[greyIndex] | (image.Image.Bits[greyIndex + 1] << 8));
                if (depth <= maximumDepth)
                {
                    points.Add(nui.SkeletonEngine.DepthImageToSkeleton((float)(width - x - 1) / image.Image.Width, (float)y / image.Image.Height, (short)(depth << 3)));
                }
                break;
        }
        greyIndex += 2;
    }
}
By doing so, the end result is a list of vectors in skeleton space, which is measured in metres; multiply by 100 if you want centimetres (etc.).

Rendering painted lines as nodes in Cocos

I'm working on a drawing app for iPad using Cocos-iOS, and I'm having performance issues with drawing lines as a type of CCNode. I understand that using draw in a node causes it to be called every time the canvas is repainted, and the current code is far too heavy to run on every repaint:
for (LineNodePoint *point in self.points) {
    start = end;
    end = point;
    if (start && end) {
        float distance = ccpDistance(start.point, end.point);
        if (distance > 1) {
            int d = (int)distance;
            float difx = end.point.x - start.point.x;
            float dify = end.point.y - start.point.y;
            for (int i = 0; i < d; i++) {
                float delta = i / distance;
                [[self.brush sprite] setPosition:ccp(start.point.x + (difx * delta), start.point.y + (dify * delta))];
                [[self.brush sprite] visit];
            }
        }
    }
}
Very heavy...
I either need a better way to draw the lines or to be able to cache the drawing as a raster.
Thanks in advance for any help.
How about ccDrawLine or CCMutableTexture? CCMutableTexture lets you manipulate pixels and uses CCRenderTexture internally, as you said, so it would give you the cached raster you're after.
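As a rough sketch of the caching idea (the canvas name and sizes here are made up, not from the original code): stamp each new segment into a CCRenderTexture once, so repaints only draw the cached texture.
// Create the canvas once and add it to the scene.
CCRenderTexture *canvas = [CCRenderTexture renderTextureWithWidth:1024 height:768];
canvas.position = ccp(512, 384);
[self addChild:canvas];

// When a new segment arrives, stamp the brush along it a single time.
[canvas begin];
for (int i = 0; i < d; i++) {
    float delta = i / distance;
    [[self.brush sprite] setPosition:ccp(start.point.x + (difx * delta),
                                         start.point.y + (dify * delta))];
    [[self.brush sprite] visit];
}
[canvas end];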
References:
- ccDrawLine: cocos2d for iPhone 1.0.0 API reference
- CCMutableTexture: the "Fast set/getPixel for an opengl texture?" and "[render texture] pixel manipulation (integrated CCMutableTexture functionality)" forum threads