How do you create land (hills) like in the iOS game Contre Jour? (Using Box2D and OpenGL)
My ideas:
Physics (Box2d)
I think we keep an array of bodies or fixtures.
When the screen is touched, we determine the touch location.
If the touch location is not far from the land, we scan the array of bodies, looking for the body whose coordinates are closest to the touch location.
On a touch-move event, we move that body to the new coordinate (body->SetTransform(...)).
Do you think it is efficient to use a large number of bodies, and to search for the right body by coordinates?
Graphics (OpenGL)
Is the land (hills) drawn from an array of vertices and triangles?
Is that the right approach?
You can use the function b2World::QueryAABB to get a list of the fixtures in a given area, then check those for the best option. The Box2D testbed does this to find out which fixture to grab with the mouse so you could check out that source code. See also: http://www.iforce2d.net/b2dtut/world-querying
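In case a concrete example helps, here is a rough sketch of that pattern using the Box2D 2.x C++ API (the header path and minor details may differ between versions); the helper name FindBodyNearTouch and the 0.1 m query box are just illustrative choices:

    // Collect fixtures near the touch point and keep the body whose
    // position is closest to it.
    #include <Box2D/Box2D.h>
    #include <cfloat>

    class NearestBodyQuery : public b2QueryCallback {
    public:
        explicit NearestBodyQuery(const b2Vec2& point)
            : m_point(point), m_best(nullptr), m_bestDistSq(FLT_MAX) {}

        bool ReportFixture(b2Fixture* fixture) override {
            b2Body* body = fixture->GetBody();
            float distSq = (body->GetPosition() - m_point).LengthSquared();
            if (distSq < m_bestDistSq) {
                m_bestDistSq = distSq;
                m_best = body;
            }
            return true; // keep checking the remaining fixtures in the AABB
        }

        b2Body* Best() const { return m_best; }

    private:
        b2Vec2 m_point;
        b2Body* m_best;
        float m_bestDistSq;
    };

    b2Body* FindBodyNearTouch(b2World& world, const b2Vec2& touch) {
        b2AABB aabb;
        aabb.lowerBound = touch - b2Vec2(0.1f, 0.1f);
        aabb.upperBound = touch + b2Vec2(0.1f, 0.1f);

        NearestBodyQuery query(touch);
        world.QueryAABB(&query, aabb);
        return query.Best(); // may be nullptr if nothing was close enough
    }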
To move the body you can indeed use SetTransform, which would be good if the object does not need to interact with anything along the way. Another option is to use SetLinearVelocity with a velocity that will move the body to the dragged-to point in one time step. This is a better method if you want a continuous drag with the object able to bump into things as it moves, because it does not teleport the body instantly to the finger position. If the body is a bullet body, this also prevents the user from dragging things through other objects, e.g. a static wall. Remember to set the velocity to zero when the finger is lifted :)
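A minimal sketch of the velocity-based drag, again assuming Box2D 2.x; the function name and the idea of passing in your fixed time step are my own:

    // Drive the grabbed body toward the finger position in one time step,
    // instead of teleporting it there with SetTransform.
    void DragBodyTowards(b2Body* body, const b2Vec2& target, float timeStep) {
        b2Vec2 toTarget = target - body->GetPosition();
        // Velocity that covers the remaining distance in exactly one step.
        body->SetLinearVelocity((1.0f / timeStep) * toTarget);
    }

    // When the finger is lifted:
    // body->SetLinearVelocity(b2Vec2(0.0f, 0.0f));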
There is a big scene, e.g. a house, and I want to use a tap gesture to move the camera to see the different rooms. Now I have two questions:
1. I cannot get the 3D point from the tap point in the scene.
Is there any other way?
Here are some examples that may help.
57018359 - this post tells you how to take a tap on the 2D screen and translate it to 3D coordinates, with you deciding the depth (z), for example if you wanted to tap the screen and place an object in 3D space.
57003908 - this post tells you how to select an object with a hitTest (tap). For example, if you showed the front of a house with a door and tapped it, the function would return your door node, provided you named the node "door" and took some kind of action when it's touched. Then you could reposition your camera based on that position. You'll want to iterate through all of the results because there might be overlapping nodes or nodes further along the Z axis.
55129224 - this post gives you a quick example of creating a camera class. You can use this to reposition your camera or move it forward and back, etc.
For the past few months I've been looking into developing a Kinect based multitouch interface for a variety of software music synthesizers.
The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth. These should have (see the sketch after this list):
X position
Y position
Height
Width
MIDI output channel
MIDI data scaler (converts x/y coords to MIDI values)
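For what it's worth, here is a minimal C++ sketch of such an object; all names, the screen-coordinate layout, and the mapping of the horizontal position to a 0-127 MIDI value are my assumptions:

    // One soft-synth control, described by the fields listed above.
    struct SynthControl {
        float x, y;            // top-left corner in screen coordinates (assumed)
        float width, height;
        int   midiChannel;     // MIDI output channel

        // Scale a point inside the control to a 7-bit MIDI value (0-127).
        // A horizontal fader is assumed; a vertical one would use py instead.
        int ScaleToMidi(float px, float py) const {
            (void)py;
            float t = (px - x) / width;
            if (t < 0.0f) t = 0.0f;
            if (t > 1.0f) t = 1.0f;
            return static_cast<int>(t * 127.0f + 0.5f);
        }

        bool Contains(float px, float py) const {
            return px >= x && px <= x + width && py >= y && py <= y + height;
        }
    };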
Two strategies I've considered for algorithmic creation are an XML description and somehow pulling stuff right off the screen (i.e. given a running program, find the x/y coords of all controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, which I've seen frequently in Flash apps, is to put the screen size into a variable and use math to build all interface objects in terms of screen size. Note that it isn't strictly necessary to make the objects the same size as the onscreen controls, or to represent all onscreen objects (some are just indicators, not interactive controls).
Other considerations:
Given (for now) two sets of x/y coords as input (left and right hands), what is my best option for using them? My first instinct is/was to create some kind of focus test, where if the x/y coords fall within an interface object's bounds that object becomes active, and then becomes inactive if they fall outside some other bounds for some period of time. The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I can do more. I have a few gesture solutions (hidden Markov models) I could screw around with. Not that they'd be easy to get to work, exactly, but it's something I could see myself doing given sufficient incentive.
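A rough sketch of that focus test, building on the hypothetical SynthControl above; I've read the deactivation test as a slightly larger "release" rectangle plus a timeout so focus doesn't flicker (flip the margin if that's not what you meant). All names and values are illustrative:

    #include <vector>

    struct FocusTracker {
        SynthControl* focused = nullptr;
        float timeOutside = 0.0f;      // seconds the hand has been outside the release zone
        float releaseMargin = 40.0f;   // extra pixels around the control before focus can drop
        float releaseTimeout = 0.5f;   // seconds outside before focus is dropped

        void Update(std::vector<SynthControl>& controls, float handX, float handY, float dt) {
            if (focused) {
                bool insideRelease =
                    handX >= focused->x - releaseMargin &&
                    handX <= focused->x + focused->width + releaseMargin &&
                    handY >= focused->y - releaseMargin &&
                    handY <= focused->y + focused->height + releaseMargin;
                if (insideRelease) { timeOutside = 0.0f; return; }   // keep focus
                timeOutside += dt;
                if (timeOutside < releaseTimeout) return;            // not outside long enough yet
                focused = nullptr;                                   // drop focus
            }
            for (auto& c : controls) {   // acquire focus on the first control under the hand
                if (c.Contains(handX, handY)) {
                    focused = &c;
                    timeOutside = 0.0f;
                    break;
                }
            }
        }
    };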
So, to summarize, the problem is to:
represent the interface (necessary because the default interface always expects mouse input)
select a control
manipulate it using two sets of x/y coords (rotary/continuous controller) or, in the case of switches, preferably use a gesture to switch it without giving/taking focus.
Any comments, especially from people who have worked/are working in multitouch io/NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.
Woah, lots of stuff here. I worked on lots of NUI stuff during my time at Microsoft, so let's see what we can do...
But first, I need to get this pet peeve out of the way: you say "Kinect based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect, though, since you aren't directly touching anything - accuracy is dictated not by finger size but by sensor accuracy and the user's ability to precisely control finicky & lagging virtual cursors.
If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. User strikes and holds a pose to invoke some application-wide action or command (usually bringing up a menu)
2) Hover buttons. User moves a virtual cursor over a button and holds still for a certain period of time to select the button (see the dwell-timer sketch after this list)
3) Swipe-based navigation and selection. User waves their hands in one direction to scroll a list and in another direction to select from the list
4) Voice commands. User just speaks a command.
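Here's a minimal dwell-timer sketch for paradigm 2 (hover buttons); the 1.5 second threshold and all names are assumptions, not anything Kinect-specific:

    // A hover button that "clicks" once the cursor has dwelt inside it long enough.
    struct HoverButton {
        float x, y, width, height;
        float hoverTime = 0.0f;
        static constexpr float kDwellSeconds = 1.5f;   // assumed dwell threshold

        // Returns true on the frame the dwell completes (the "click").
        bool Update(float cursorX, float cursorY, float dt) {
            bool inside = cursorX >= x && cursorX <= x + width &&
                          cursorY >= y && cursorY <= y + height;
            hoverTime = inside ? hoverTime + dt : 0.0f;
            if (hoverTime >= kDwellSeconds) {
                hoverTime = 0.0f;   // reset so it doesn't fire every frame
                return true;
            }
            return false;
        }
    };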
There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game) but frankly they suck: 1) using one hand for the cursor and the other hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click".
It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.
If I were building a music synthesizer, I would focus on approach #3 - swiping. Something like Dance Central. On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
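If you go the swiping route, a bare-bones way to turn tracked hand positions into swipe events is a velocity threshold with a debounce; everything here (one position sample per frame, the thresholds, the names) is assumed rather than taken from any Kinect SDK:

    #include <cmath>

    // Reports a swipe when the hand's horizontal speed exceeds a threshold.
    struct SwipeDetector {
        float lastX = 0.0f;
        bool  hasLast = false;
        float cooldown = 0.0f;                          // debounce timer, in seconds
        static constexpr float kMinSpeed = 1.2f;        // assumed threshold, tune to taste
        static constexpr float kCooldownSeconds = 0.4f;

        // Returns -1 for a left swipe, +1 for a right swipe, 0 for none.
        int Update(float handX, float dt) {
            int result = 0;
            if (hasLast && cooldown <= 0.0f) {
                float speed = (handX - lastX) / dt;
                if (std::fabs(speed) > kMinSpeed) {
                    result = speed > 0.0f ? 1 : -1;
                    cooldown = kCooldownSeconds;        // one swipe fires only once
                }
            }
            cooldown -= dt;
            lastX = handX;
            hasLast = true;
            return result;
        }
    };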
ps... I'd also like to give a shout out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/
I have a limited area (screen) populated with a few moving objects (3-20 of them, so it's not like 10,000 :). Those objects should be moving at a constant speed and in a random direction. But there are a few limitations to it:
objects shouldn't exit the area - so if an object is close to the edge, it should move away from it
objects shouldn't bump into each other - so when one is close to another, it should move away (but not get too close to a different one).
In the image below I have marked the allowed moves in this situation - for example, object D shouldn't move straight up, as that would bring it to the "wall".
What I would like to have is a way to move them (one by one). Is there any simple way to achieve it, without too many calculations?
The density of objects in the area would be rather low.
There are a number of ways you might programmatically enforce your desired behavior, given that you have such a small number of objects. However, I'm going to suggest something slightly different.
What if you ran the whole thing as a physics simulation? For instance, you could set up a Box2D world with no gravity, no friction, and perfectly elastic collisions. You could model your enclosed region and populate it with objects that are proportionally larger than their on-screen counterparts so that the on-screen versions never get too close to each other (because the underlying objects in the physics simulation will collide and change direction before that can happen), and assign each object a random initial position and velocity.
Then all you have to do is step the physics simulation, and map its current state into your UI. All the tricky stuff is handled for you, and the result will probably be more believable/realistic than what you would get by trying to come up with your own movement algorithm (or if you wanted it to appear more random and less believable, you could also just periodically apply a random impulse to a random object to keep things changing unpredictably).
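For concreteness, here is a rough sketch of that setup with the Box2D 2.3-style C++ API; the 10 x 10 region, radii, and speeds are placeholders you'd replace with values that match your screen:

    #include <Box2D/Box2D.h>
    #include <cmath>
    #include <cstdlib>

    b2World* CreateBouncingWorld(int objectCount) {
        b2World* world = new b2World(b2Vec2(0.0f, 0.0f));   // no gravity

        // Static walls around a 10 x 10 region, built from edge shapes.
        b2BodyDef wallDef;
        b2Body* walls = world->CreateBody(&wallDef);
        b2Vec2 corners[4] = { b2Vec2(0, 0), b2Vec2(10, 0), b2Vec2(10, 10), b2Vec2(0, 10) };
        for (int i = 0; i < 4; ++i) {
            b2EdgeShape edge;
            edge.Set(corners[i], corners[(i + 1) % 4]);
            walls->CreateFixture(&edge, 0.0f);
        }

        for (int i = 0; i < objectCount; ++i) {
            b2BodyDef bd;
            bd.type = b2_dynamicBody;
            bd.position.Set(1.0f + std::rand() % 8, 1.0f + std::rand() % 8);
            b2Body* body = world->CreateBody(&bd);

            b2CircleShape circle;
            circle.m_radius = 0.5f;             // larger than the on-screen object

            b2FixtureDef fd;
            fd.shape = &circle;
            fd.density = 1.0f;
            fd.friction = 0.0f;                 // no friction
            fd.restitution = 1.0f;              // perfectly elastic collisions
            body->CreateFixture(&fd);

            // Random initial direction at a constant speed.
            float angle = (std::rand() % 360) * b2_pi / 180.0f;
            body->SetLinearVelocity(b2Vec2(2.0f * std::cos(angle), 2.0f * std::sin(angle)));
        }
        return world;
    }

    // Each frame: world->Step(1.0f / 60.0f, 8, 3); then copy body positions into the UI.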
You can use the hitTest: method of UIView
    UIView *touchedView = [self.superview hitTest:currentOrigin withEvent:nil];
To this method you pass the current origin of the ball; for the second argument you can pass nil.
The method will return the view that the ball has hit.
If there is a hit view, you just change the direction of the ball.
For the borders, you can set a condition on the frame of the ball: if the ball goes out of the boundary, just change the direction of the ball.
I'm fairly new to game programming (but not to programming) and I want to create a space ship which leaves a trail on the screen. Now my problem is to come up with a solution for how to detect whether the trail left by the ship forms a closed shape - e.g. if the ship left a trail around an object, the object is caught inside its trail, so to speak.
The direction I'm thinking in is to draw the path of the trail on an image not visible on the screen, every now and then try to fill it with a certain color, and then check whether the fill is caught within the trail path. However, it seems like a lot of overhead.
Any ideas on how to do that? I'm using cocos2d, if that's of any help.
In game programming you often need to think more mathematically than visually.
First, does your ship continuously leave a trail on the screen? If yes, then it will be easier to know when the shape closes: you just have to remember the coordinate where your ship started to leave a trail, then wait for the trail to approach this coordinate again (for example within a radius of 10 pixels; otherwise the user would need to be really accurate and hit exactly the same pixel to close the shape).
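As a tiny sketch of that closure check (the Point struct and the 10-pixel radius are just placeholders):

    struct Point { float x, y; };

    // The trail is considered closed when a new point comes back within
    // 'radius' pixels of where the trail started.
    bool TrailClosed(const Point& start, const Point& current, float radius = 10.0f) {
        float dx = current.x - start.x;
        float dy = current.y - start.y;
        return (dx * dx + dy * dy) <= radius * radius;   // compare squared distances, no sqrt needed
    }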
The visual representation of the trail is only there for the user; you'll never use it to compute anything. What you will do is keep in memory the path followed by the ship's trail: a polygon, which is nothing more than the list of coordinates it followed.
Then, after you know that your shape is closed, you have to determine whether an object is inside your polygon or not. It's possible that Objective-C or cocos2d (I don't know much about it) already contains a built-in function to check whether a point is inside a polygon. In Java there is the Polygon class, which makes this really easy. If you don't find anything, you can do it yourself; there are already great answers about this subject on SO, and here is a nice one: How can I determine whether a 2D Point is within a Polygon?
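If you do end up rolling your own, the usual ray-casting test is short; this sketch reuses the hypothetical Point struct from above and treats the trail's coordinate list as the polygon:

    #include <vector>

    // Standard even-odd (ray casting) point-in-polygon test.
    bool PointInPolygon(const std::vector<Point>& polygon, const Point& p) {
        bool inside = false;
        const size_t n = polygon.size();
        for (size_t i = 0, j = n - 1; i < n; j = i++) {
            // Toggle 'inside' each time a horizontal ray from p crosses an edge.
            bool crosses = (polygon[i].y > p.y) != (polygon[j].y > p.y) &&
                           p.x < (polygon[j].x - polygon[i].x) * (p.y - polygon[i].y) /
                                     (polygon[j].y - polygon[i].y) + polygon[i].x;
            if (crosses) inside = !inside;
        }
        return inside;
    }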
For those of you who have used bullet physics...
I read and ran the hello world example http://www.bulletphysics.org/mediawiki-1.5.8/index.php/Hello_World,
and I am confused where to go next.
The hello world tutorial consisted of a btStaticPlaneShape and a btSphereShape, both rigid bodies. The sphere bounced on the static plane shape no problem.
However, when I make another sphere at a different position, Bullet does not record collisions between the two sphere shapes, yet both automatically bounce off of the btStaticPlaneShape. What kind of internal magic causes the btStaticPlaneShape to automatically bounce objects that collide with it?
Is there a setting in Bullet that automatically bounces objects off of each other after colliding? Or do you have to manually test for collisions and apply the resulting forces yourself?
Thanks.
You may have inadvertently created the spheres in a state where Bullet doesn't think they're supposed to be able to collide with each other. If you stick mostly to the defaults, and just add another sphere to the Hello World program, Bullet should notice and react to their collisions. They won't actually bounce unless you also modify Hello World to set their restitution to something greater than zero, but they will collide. For example, I added a second sphere directly above the first (by putting a for loop around the code block that creates the sphere, and using the loop variable to determine the origin y value) and extended the simulation so it runs long enough for them both to reach the plane. The first lands on the plane and rests there, the second lands on the first and rests there.
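In case it helps, here is roughly what that modification looks like, adapted from the Hello World sphere block (the world/shape setup and the existing dynamicsWorld pointer are assumed to be exactly as in the tutorial; the heights and the 0.8 restitution are arbitrary):

    btCollisionShape* fallShape = new btSphereShape(1);

    for (int i = 0; i < 2; ++i) {
        btTransform startTransform;
        startTransform.setIdentity();
        // Stack the spheres: the second starts well above the first.
        startTransform.setOrigin(btVector3(0, 50 + i * 20, 0));

        btScalar mass = 1;
        btVector3 localInertia(0, 0, 0);
        fallShape->calculateLocalInertia(mass, localInertia);

        btDefaultMotionState* motionState = new btDefaultMotionState(startTransform);
        btRigidBody::btRigidBodyConstructionInfo rbInfo(mass, motionState, fallShape, localInertia);
        btRigidBody* body = new btRigidBody(rbInfo);

        body->setRestitution(btScalar(0.8));   // > 0 so the spheres actually bounce
        dynamicsWorld->addRigidBody(body);
    }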
If this doesn't help, then posting some of your code is probably a good next step.