I have shapes (rectangles) in my game and want to implement something like this:
when a shape is pressed briefly and pushed in some direction, it should move a small distance, but when it is pressed for a longer time it should move a larger distance. In other words, the distance the shape travels when thrown should be relative to the pressure applied to it.
Regards
You can break the problem into two pieces:
While the object is being pressed, it accelerates (so the longer it is pressed, the greater the speed it reaches).
As it travels, it decelerates at a constant rate (so the faster it is going at the start, the longer it keeps moving, and the farther it moves before it stops).
Now all you have to do is implement velocity and acceleration, then pressure and drag.
If this approach doesn't give the appearance you want, there are ways to modify it.
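A minimal sketch of the two pieces above, in Python. The constants `PRESS_ACCEL` and `DRAG_DECEL` are made-up tuning values, not anything from your engine; the point is only the relationship between press time and travel distance:

```python
# Hypothetical tuning constants -- adjust for the feel you want.
PRESS_ACCEL = 200.0   # speed gained per second while the shape is pressed
DRAG_DECEL = 100.0    # speed lost per second while the shape coasts

def speed_after_press(press_time):
    """Speed the shape has at the moment of release (piece 1: accelerate while pressed)."""
    return PRESS_ACCEL * press_time

def coast_distance(initial_speed):
    """Distance travelled while decelerating at a constant rate (piece 2): v^2 / (2a)."""
    return initial_speed ** 2 / (2 * DRAG_DECEL)

# A short press moves the shape a little; a longer press moves it much farther.
short_throw = coast_distance(speed_after_press(0.2))  # brief press
long_throw = coast_distance(speed_after_press(1.0))   # long press
```

Because coasting distance grows with the square of the release speed, even modest differences in press time produce clearly different throw distances, which is usually the feel you want.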
I need to change the mouse speed on a Picture Box in my VB paint program.
Is there a way to do this specifically for the object without changing the system's settings?
Rather than change the speed, you could modify the distance the pointer moves for each mouse event. This may not be a permanent solution, and if the UI thread is otherwise busy it could lead to jerky movement, but the technique is simple enough that you could code it in less time than it took me to write this paragraph.
In your event handler for MouseMove, get the distance (X,Y) the mouse pointer has moved.
Set Cursor.Position using some scaled value (nX, nY), where n determines the distance (and hence speed) the mouse pointer moves.
In other words, you'll be treating the mouse move increment as a vector, and you'll be multiplying that vector by a scalar value to modify the speed.
http://msdn.microsoft.com/en-us/library/system.windows.forms.cursor.position.aspx
As needed, use Boolean flags within the event handler to ensure that setting Cursor.Position does not trigger another event simply because you set Cursor.Position to a new point.
You might find that using the same scaling factor for all distances is inappropriate. For example, multiplying the move by a factor of 2 may be okay for short distances, but much too fast for large increments, in which case a lookup table or function would calculate the desired factor.
This technique would probably perform poorly if you need to decrease the speed of a move, as it could cause the mouse pointer to jump backward some fraction of the distance of its most recent move.
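The vector-scaling step itself is just arithmetic. Here is a sketch in Python (the function name is made up, and the WinForms event-handler plumbing and re-entrancy flag are omitted; in VB you would do the same math inside MouseMove and assign the result to Cursor.Position):

```python
def scaled_position(old_pos, new_pos, n):
    """Treat the mouse move as a vector (dx, dy) and multiply it by the
    scalar n. Returns the point you would assign to Cursor.Position."""
    dx = new_pos[0] - old_pos[0]
    dy = new_pos[1] - old_pos[1]
    return (old_pos[0] + n * dx, old_pos[1] + n * dy)
```

With n = 2, a 5-pixel move to the right becomes a 10-pixel move; with a fractional n you would slow the pointer down, subject to the backward-jump caveat above.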
I'm looking for a simple Kinect app which allows me to a) detect and b) track a single moving object in an otherwise static background.
I don't need any fancy skeleton or other features, just the center of mass of the moving object will do it.
Any pointers?
I would look at Comparing a saved movement with other movement with Kinect for tracking the entire body. The answer there links to code showing how to save skeleton data. See Mapping an ellipse to a joint in Kinect SDK 1.5 if you want to track individual joints rather than the entire body (joint tracking currently works better, but when tracking the entire body works, use that, because it is more effective and efficient).
Your case is pretty simple, but it still requires initialization for the object, since in general the term "object" is ill-defined. It can be the closest object, a moving object, or even an object that was touched, or one with a certain color, size, or shape.
Let's assume that you define the object by motion, that is, whatever moves in your point cloud is an object. I suggest the following:
Object detection is easy if the object moves by more than its own size, since then you can just subtract depth maps and end up with your object: depth1 - depth2 > T. But if the object moves slowly and shifts only by a fraction of its size, you have to use whatever high-frequency information you have, which can be depth or colour or both. It is going to be noisy, as the figure below shows.
As soon as you have your object selected, you may want to clean it up by running some morphological filters (erode + dilate) to erase noise and get a single blob. After that you just need to find some features of the blob, such as average depth or mean color, and look for them in a small window around the object's previous location in order to rediscover the object.
Finally, don't forget to update these features as the object moves.
Some other ideas you may want to use are: depth gradient, connected components in depth, pre-recording background depth for cleaner subtraction, running grabCut on depth area selected by mouse click, etc.
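The depth-subtraction and centroid steps can be sketched as follows. This is a toy illustration on plain nested lists (real depth maps would be NumPy arrays, and the morphological cleanup would use OpenCV's erode/dilate, both omitted here); the function name and threshold are made up:

```python
def moving_blob_centroid(depth1, depth2, threshold):
    """Subtract two depth maps, keep pixels whose change exceeds the
    threshold (depth1 - depth2 > T from the answer above), and return
    the centroid (row, col) of the changed region, or None if nothing
    moved."""
    rows, cols = [], []
    for r, (row1, row2) in enumerate(zip(depth1, depth2)):
        for c, (d1, d2) in enumerate(zip(row1, row2)):
            if abs(d1 - d2) > threshold:
                rows.append(r)
                cols.append(c)
    if not rows:
        return None
    # Center of mass of the changed pixels.
    return (sum(rows) / len(rows), sum(cols) / len(cols))
```

The returned centroid is the "center of mass of the moving object" the question asks for; on noisy real data you would clean the changed-pixel mask into a single blob before taking the centroid.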
I'm trying to make a little archer game, and the problem I'm having has to do with 2 pixels in particular, I'll call them _arm and _arrow. When a real archer is pulling back an arrow, he doesn't immediately pull the arrow back as far as his strength allows him, the arrow takes a little bit of time to be pulled back.
The _arm's angle is set from the vector between a fixed point and where the user touched the screen. The rotation is perfect, so the _arm is good. The _arrow needs to be on the same line as the _arm; they are each 1 pixel wide, so it looks as though the _arrow is exactly on top of the _arm.
I tried to decrement the x/y coordinates based on a variable that changes with time: I set the _arrow's location equal to the _arm's location and tried to make it look like the _arrow was being pulled back. However, under rotation the x/y offsets break, because the offset is not proportional on the x and y axes, so the _arrow ends up slightly above or slightly below the arm depending on the angle of the touch vector.
How could I use the x/y position of the _arm and the touch vector to make the arrow appear as though it is being pulled back by a small amount, while keeping the arrow on top of the _arm sprite at all times? If you need any more info, just leave a comment.
I'm not sure I've fully understood, but I'll have a go at answering anyway:
To make the arrow move and rotate along with the arm, consider adding the arrow as a child of the arm. You can still render it behind the arm if you like by giving it a z less than zero: [arm addChild:arrow z:-1]
To then make the arrow move away from the arm as the bow is drawn, you then just set the position of the arrow with respect to the arm.
The problem I do see with this solution, however, is that this grouping of the sprites may be a little unusual after the arrow leaves the bow. At that point you probably don't want the arrow to be a child of the arm, as the coordinate systems are no longer related.
Even though they were sure what I "suggested would have solved [the] problem", here is the poster's solution:
I had to get the x and y coordinates of the arm based on its angle, then I got the sin/cos of a number based on that same angle and subtracted from that.
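That sin/cos approach can be sketched like this (the function name and parameters are illustrative, not the poster's actual code). Offsetting by cos/sin of the arm's angle keeps the pull-back proportional on both axes, which is exactly what the raw x/y decrement was missing:

```python
import math

def arrow_position(arm_x, arm_y, angle, pull_back):
    """Place the arrow on the arm's line, pulled back by pull_back
    pixels along the arm's angle. Because the offset is (cos, sin)
    scaled by the same amount, the arrow stays exactly on top of the
    arm at any rotation."""
    return (arm_x - pull_back * math.cos(angle),
            arm_y - pull_back * math.sin(angle))
```

At angle 0 the arrow sits 10 pixels to the left of the arm's origin; at 90 degrees it sits 10 pixels below it, always along the arm's own line.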
I have a limited area (screen) populated with a few moving objects (3-20 of them, so it's not like 10,000 :). The objects should move with a constant speed in random directions. But there are a few limitations:
objects shouldn't exit the area - so if it's close to the edge, it should move away from it
objects shouldn't bump into each other - so when one is close to another, it should move away (but without getting too close to a different one).
On the image below I have marked the allowed moves in this situation - for example object D shouldn't move straight up, as it would bring it to the "wall".
What I would like to have is a way to move them (one by one). Is there any simple way to achieve it, without too much calculations?
The density of objects in the area would be rather low.
There are a number of ways you might programmatically enforce your desired behavior, given that you have such a small number of objects. However, I'm going to suggest something slightly different.
What if you ran the whole thing as a physics simulation? For instance, you could set up a Box2D world with no gravity, no friction, and perfectly elastic collisions. You could model your enclosed region and populate it with objects that are proportionally larger than their on-screen counterparts so that the on-screen versions never get too close to each other (because the underlying objects in the physics simulation will collide and change direction before that can happen), and assign each object a random initial position and velocity.
Then all you have to do is step the physics simulation, and map its current state into your UI. All the tricky stuff is handled for you, and the result will probably be more believable/realistic than what you would get by trying to come up with your own movement algorithm (or if you wanted it to appear more random and less believable, you could also just periodically apply a random impulse to a random object to keep things changing unpredictably).
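The wall behavior you would get from such a world can be sketched without any physics library at all. This is a toy single-object step function (names and units are made up; Box2D's b2World would handle the object-object collisions this sketch omits):

```python
def step(pos, vel, bounds, dt):
    """Advance one object by dt with constant speed and perfectly
    elastic wall bounces (no gravity, no friction), mirroring the
    overshoot back inside the area and reversing that velocity axis."""
    x, y = pos
    vx, vy = vel
    x += vx * dt
    y += vy * dt
    w, h = bounds
    if x < 0:
        x, vx = -x, -vx
    elif x > w:
        x, vx = 2 * w - x, -vx
    if y < 0:
        y, vy = -y, -vy
    elif y > h:
        y, vy = 2 * h - y, -vy
    return (x, y), (vx, vy)
```

Stepping this for each object each frame and drawing the results gives the "move away from the wall" behavior automatically; a real Box2D world adds the inter-object bounces and lets you oversize the bodies to keep on-screen objects apart.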
You can use the hitTest: method of UIView
UIView *touchedView = [self.superview hitTest:currentOrigin withEvent:nil];
To this method you pass the current origin of the ball; for the second argument you can pass nil.
The method returns the view that the ball has hit.
If there is a hit view, just change the direction of the ball.
For the border, check against the ball's frame: if the ball goes outside the boundary, change its direction.
I am implementing a view similar to a Table View which contains rows of data. What I am trying to do is that after scrolling, each row snaps to a set of correct positions so the boundaries of the top and bottom row are completely visible - and not trimmed as it normally happens. Is there a way to get the scroll destination before the scrolling starts? This way I will be able to correct the final y-position, for example, in multiples of row height.
I asked the same question a couple of weeks ago.
There is definitely no public API to determine the final resting Y offset of a scroll deceleration. After researching it further, I wasn't able to figure out Apple's formula for how they manage deceleration. I gathered a bunch of data from scrolling events, recording the beginning velocity and how far the deceleration traveled, and from that made some rough estimates of where it was likely to stop.
My goal was to predict well in advance where it would stop, and to convert the deceleration into a specific move to an offset. The problem with this technique is that scrollRectToVisible:animated: always occurs over a set period of time, so instead of the velocity the user expects from a flick gesture, it's either much faster or much slower, depending on the strength of the flick.
Another choice is to observe the deceleration and wait until it slows down to some threshold, then call scrollRectToVisible:animated:, but even this is difficult to get "just right."
A third choice is to wait until the deceleration completes on its own, check to see if it happened to stop at your desired offset multiple, and then adjust if not. I don't care for this personally, as you either coast to a stop and then speed up or coast to a stop and reverse direction.
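The adjustment step in that third choice is just rounding the resting offset to the nearest row boundary. A sketch of the math (the function name is made up; on iOS you would compute this in the scroll view's deceleration-ended delegate callback and animate the content offset to the result):

```python
def snapped_offset(final_y, row_height):
    """Round a scroll view's resting content offset to the nearest
    multiple of the row height, so the top visible row's boundary
    lands flush with the top edge instead of being trimmed."""
    return round(final_y / row_height) * row_height
```

The closer the natural stopping point is to a row boundary, the smaller the visible correction, which is why this choice looks worst when the coast happens to end near the middle of a row.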