What pixel-to-meter ratio is best and preferable? - objective-c

I'm using the following ratio for pixel-to-meter conversion:
PTM_RATIO=32;
v3BodyDef.position.Set(2848/PTM_RATIO, 102/PTM_RATIO);
This produces weird output on screen many times. I don't know whether setting the position (v3BodyDef.position.Set) takes floating-point values or not, but I think this conversion is causing the trouble.
Please help me with this.
Thank you.

There isn't a recommendable ratio for that (though some will try and convince you there is).
The scale of objects in your physics engine should depend on the average scale of your dynamic objects. What I mean is that if your player interacts with a lot of objects "slightly larger" and "slightly smaller" than itself, it's probably best to make the player an average size in the optimal range (for example, Box2D is optimized for objects between 0.1m and 10m in size, so make the player 1m, or 1.5m).
As for your pixel size, that all depends on how large you want your world to be on the screen.
If you want your hero to be 1/10th of the screen in height, and 2 meters away from the camera, then do the math :-p Others may want their hero to be 1/8th of the screen height, or 1/12th... that really depends on how the game will look in the end. If the camera zooms in, the pixel-to-physics ratio would change. If the screen resolution changes (like a retina display), your pixel-to-physics ratio will have to change accordingly.
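For a concrete (purely hypothetical) example: if your hero is 1.5 m tall in physics units and should fill 1/8th of a 640 px tall screen, it occupies 80 px, so the ratio would be 80 px / 1.5 m ≈ 53 pixels per meter; on a 2x retina screen the same hero covers 160 px, so the ratio doubles to roughly 107.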
So in practice: there is no set value. It really depends on the game, and depends on what feels best for the hardware you're on.

It's most likely an integer division problem; change PTM_RATIO to a float (or, if you are defining it, use #define PTM_RATIO 16.0f).
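To illustrate why integer division bites here, a minimal sketch keeping the question's value of 32 (the surrounding function is made up):

```cpp
#include <Box2D/Box2D.h>  // classic Box2D header; newer releases use <box2d/box2d.h>

// With an integer ratio, 102 / 32 truncates to 3, so bodies snap to whole
// physics units and positions look "weird". Making the ratio a float keeps
// the fractional part.
#define PTM_RATIO 32.0f

void placeBody(b2BodyDef &v3BodyDef)
{
    // 2848 / 32.0f = 89.0f and 102 / 32.0f = 3.1875f -- no truncation
    v3BodyDef.position.Set(2848 / PTM_RATIO, 102 / PTM_RATIO);
}
```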

Related

Special Kind of ScrollView

So I have my game, made with SpriteKit and Obj-C. I want to know a couple things.
1) What is the best way to make scroll-views in SpriteKit?
2) How do I get this special kind of scroll-view to work?
The kind of scroll-view I'd like to use is one that, without prior knowledge, seems like it could be pretty complicated. You're scrolling through the objects in it, and as they get close to the center of the screen, they get larger. As they're scrolled away from the center of the screen, they get smaller and smaller until they hit a lower limit and stop shrinking. The same kind of limit applies to growing as they get closer to the center of the screen, too.
Also, I should probably note that I have tried a few different solutions for cheap remakes of scroll views, like merely adding the objects to an SKNode and moving the SKNode's position along with the finger's movement . . . but that is not what I want. Now, if there is no real way to add a scroll-view to my game, this is what I'm asking: will I simply have to use some sort of formula? Make the images bigger when they get closer to a certain spot, and maybe run that formula each time -touchesMoved is called? If so, what sort of formula would that be? Some math equation subtracting the node's position from the center of the screen and sizing it accordingly? Something like that? If that's the case, will you please give me some smart math formula to do that, and give it to me in code (possibly a full-out function) format?
If ALL else fails, and there is no good way to do this, what would some other way be?
It is possible to use UIScrollViews with your SpriteKit scenes, but there's a bit of a workaround involved. My recommendation is to take a look at this github project; it is what I based my UIScrollView off of in my own projects. From the looks of it, most of the stuff you'd want has now been converted to Swift, rather than the Objective-C it used when I first looked at the project, so I don't know how that'll fare with you.
The project linked above would result in your SKScene being larger than the screen (I assume that is why it would need to be scrolled), so determining what is and is not close to the center of the scene won't be difficult. One thing you can do is use the update loop in SpriteKit to constantly update the size of sprites (perhaps just those on-screen) based on their distance from a fixed, known center point. For instance, if you have a screen of width and height 10, then the midpoint would be x,y = 5,5. You could then say that size = 1.0 - (2 * distance_from_midpoint). At the midpoint the size will be 1.0 (1.0 - (2 * 0)), and the farther away you get, the smaller your scale will be. This is a crude example that does not account for a fixed max or min size, so you will need to work with it.
Good luck with your project.
Edit:
Alright, I'll go a bit out of my way here and help you out with the equation, although mine still isn't perfect.
Now, this doesn't really give you a minimum scale, but it will give you a maximum one (Basically at the midpoint). This equation here does have some flaws though. For one, you might use this to find the x and y scale of your objects based on their distance from a midpoint. However, you don't really want two different components to your scale. What if your Sprite is right next to the x midpoint, and the x_scale spits out 0.95? Well, that's almost full-sized. But if it is far away from the midpoint on the y axis, and it gives you a y scale of, say 0.20, then you have a problem.
To solve that, I just take the magnitude (hypotenuse) of the vector between the midpoint and the current sprite's coordinate. That hypotenuse gives me a number that represents the true distance, which eliminates the problem of clashing scale values.
I've made an example of how to calculate this inside Google's Go Playground, so you can run the code and see what scales you get for the coordinates you plug in. Also, the equation used there is slightly modified: it's basically the same thing as above, but without the "maxscale -" part at the front of the equation.
Hope this helps out!
Embedding Attempt:
see this code in play.golang.org
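For anyone who doesn't want to open the link, here is a rough sketch of the same distance-based scaling in C++ (the midpoint, falloff and min/max values are illustrative, not taken from the linked example):

```cpp
#include <algorithm>
#include <cmath>

// Scale a sprite by its distance from the screen midpoint: maxScale at the
// midpoint, shrinking linearly with distance, clamped so it never drops
// below minScale.
float scaleForPosition(float x, float y,
                       float midX, float midY,
                       float maxScale, float minScale, float falloff)
{
    float dx = x - midX;
    float dy = y - midY;
    float distance = std::sqrt(dx * dx + dy * dy);  // true (hypotenuse) distance
    float scale = maxScale - falloff * distance;
    return std::max(minScale, std::min(maxScale, scale));
}

// Example: a 10x10 screen with midpoint (5,5), full size at the center,
// never smaller than 0.2 towards the edges:
//   float s = scaleForPosition(spriteX, spriteY, 5.0f, 5.0f, 1.0f, 0.2f, 0.15f);
```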

Make rectangle fall when being hit by ball (different outcomes depending on properties)

I've just gotten started with physics. I'm using Java, though the language obviously does not matter. Now I thought I'd do something like this:
A ball with a certain speed, radius and mass hits a rectangle with a certain mass, width and height. Depending on where the ball hits the rectangle (how high up), and all the properties of the ball and the rectangle that I just mentioned, there will be different outcomes of the situation.
These are the four possible outcomes:
The ball bounces back because the rectangle was too heavy
The rectangle starts to wobble, but then goes back to normal
The rectangle falls to the right
The ball strikes through making the rectangle fall to the left
Please note, I don't expect you to write a program for me. I understand it is a lot to think of. But I have no idea how to start. I would really appreciate some guidelines and links to further reading about this (I was not sure what to google to find info about this).
And also, I'm doing this to learn, so don't tell me to use an engine or anything like that.
You are trying to build a simple physics simulator. This is a pretty involved problem, and you'll have to learn a certain amount of physics along the way.
I suggest you develop the simulator to handle these situations, roughly in this order:
An object moves through space (constant velocity, no gravity).
An object moves under the influence of a constant force (such as gravity).
An object moves with a constraint (e.g. a pendulum, a rolling square).
An object slides across a surface, with friction (both static and kinetic).
Two objects collide inelastically (they stick).
Two objects collide elastically (they bounce).
Once you have all of these, you will be able to simulate your ball and rectangle.
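As a starting point for the first two items, here is a minimal sketch (in C++ rather than Java, purely for illustration; all names and numbers are made up) of a body stepped forward in time under a constant force such as gravity:

```cpp
#include <cstdio>

struct Body {
    double x, y;    // position (m)
    double vx, vy;  // velocity (m/s)
    double mass;    // kg
};

// One small time step: update velocity from the force (a = F/m), then the
// position from the new velocity (semi-implicit Euler integration).
void step(Body &b, double fx, double fy, double dt)
{
    b.vx += (fx / b.mass) * dt;
    b.vy += (fy / b.mass) * dt;
    b.x  += b.vx * dt;
    b.y  += b.vy * dt;
}

int main()
{
    Body ball{0.0, 10.0, 3.0, 0.0, 1.0};            // thrown sideways from 10 m up
    for (int i = 0; i < 100; ++i)
        step(ball, 0.0, -9.81 * ball.mass, 0.01);   // gravity as the only force
    std::printf("after 1 s: x=%.2f  y=%.2f\n", ball.x, ball.y);
    return 0;
}
```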

Sand Physics for iOS

What is the best way to make sand particles animate in a view?
Essentially, I would like to half fill the iOS device's screen with small sand-like particles, then allow a user to rotate and shake the device to dictate the sand's position.
Assuming I have never done any physics programming before, can anyone recommend a tutorial or show me how it's done?
Thank you,
Query.
UPDATE:
I have now come across this (mine should be 2D though) - how can I bring something similar into my app?
Using spatial indexing to find the nearest particles for collision checks, an integration technique to go from force (acceleration) to velocity to position, and gravity as the only external force would give you your sand-box.
You will need to select a good exclusion force derived from a particle potential if you use post-collision detection.
I advise you to use a truncated Lennard-Jones potential and a Verlet integrator: easier than Runge-Kutta and more precise than Euler, which is why it is used in molecular dynamics. You don't need any other forces; just use the exclusion force, gravity and wall forces.
If you have bullets in your simulator, you can use Euler integration for them. I think this is acceptable for free-falling, non-colliding sand particles; once they get close to each other, it would be better to use Verlet or Runge-Kutta.
Everything I mentioned above assumes your integration step is big enough that energy is not conserved and even decreases. If your integration is good enough to conserve energy, you will need to give your particles a friction force to slow the sand down, or your particles will explode everywhere.
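As a rough sketch of the suggested Verlet step with gravity as the only external force (all names and constants here are made up; the exclusion and wall forces would simply be accumulated into the acceleration before each step):

```cpp
#include <vector>

struct Particle {
    float x, y;    // current position
    float px, py;  // previous position (implicitly encodes the velocity)
    float ax, ay;  // acceleration accumulated for this step
};

// Position (Stormer-)Verlet: x_new = 2*x - x_prev + a*dt^2. Velocity never
// appears explicitly, which is part of what makes it cheap and stable.
void verletStep(std::vector<Particle> &particles, float dt)
{
    const float g = -9.81f;
    for (Particle &p : particles) {
        p.ay += g;  // gravity; exclusion/wall forces would be added here too
        float nx = 2.0f * p.x - p.px + p.ax * dt * dt;
        float ny = 2.0f * p.y - p.py + p.ay * dt * dt;
        p.px = p.x;  p.py = p.y;
        p.x  = nx;   p.y  = ny;
        p.ax = 0.0f; p.ay = 0.0f;  // clear the accumulators for the next step
    }
}
```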
If you'd like to make it on the iPhone, then you have to think about certain optimizations and tricks, as the iPhone can't really simulate water or sand.
The trick is that most of your work is in drawing the scene.
Create the scene in Box2D with balls 10-20 times bigger than a sand particle.
The iPhone will be able to simulate that.
Then draw 10-20 sand particles per ball.
Every frame you can check whether a ball collides with other balls or not.
If a ball is not colliding, then its sand particles are in the air and you should draw them at a certain distance from each other.
If a ball collides with other balls, then its particles should be rendered close together.
You may also detect the margin and render a smoother sand border on top.
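To make the "several grains per coarse ball" idea concrete, here is a rough, engine-agnostic sketch (the types, spread factors and jitter are all hypothetical):

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// For each coarse Box2D ball we draw several sand grains around its centre.
// When the ball is resting against other balls the grains are packed tightly;
// when it is flying through the air they are spread further apart.
std::vector<Vec2> grainPositions(Vec2 ballCenter, float ballRadius,
                                 int grainsPerBall, bool touchingOtherBalls)
{
    std::vector<Vec2> grains;
    float spread = touchingOtherBalls ? 0.5f * ballRadius : 0.9f * ballRadius;
    for (int i = 0; i < grainsPerBall; ++i) {
        float angle = 6.2831853f * i / grainsPerBall;                // spread around the ball
        float r = spread * (0.3f + 0.7f * ((i * 37) % 10) / 10.0f);  // cheap fixed jitter
        grains.push_back({ballCenter.x + r * std::cos(angle),
                          ballCenter.y + r * std::sin(angle)});
    }
    return grains;
}
```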

how to get faster rendering of 400+ polygons with SFML

I'm making a basic simulation of moving planets and the gravitational pull between them, and I'm displaying the gravity with a big field of green vectors that point in the direction gravity is pulling and whose magnitude shows the strength of the pull.
This means I have 400+ lines, which are really rectangles with a rotation, being redrawn each frame, and this is killing my frame-rate. Is there any way to optimize this other than drawing fewer lines? How do 2D OpenGL games today achieve such high frame-rates even with many complex polygons/colors?
EDIT:
SFML does the actual rendering each frame, but the way I create my lines is by making a rectangle-like sf::Shape. The generation function takes a width and sets point 1 as (0, width), point 2 as (0, -width), point 3 as (LineLength, -width), and point 4 as (LineLength, width). This forms a rectangle which extends along the positive x-axis. Finally I rotate the rectangle around (0,0) to get it to the right orientation, and set the shape's position to be wherever the start of the line is supposed to be.
How do 2d OpenGL games today achieve such high frame-rates even with many complex polygons/colors?
I imagine by not drawing 400+ 4-vertex objects that are each rotated and scaled with a matrix.
If you want to draw a lot of these things, you're going to have to stop relying on SFML's drawing classes. That introduces a lot of overhead. You're going to have to do it the right way: by drawing lines.
If you insist on each line having a separate width, then you can't use GL_LINES. You must instead compute the four positions of the "line" and stick them in a buffer object. Then, you draw them with a single GL_QUADS call. You will need to use proper buffer object streaming techniques to make this work reasonably fast.
Large batches and VBOs. Also double-check how much time you're spending in your simulation update code.
Quick check: If you have a glBegin() anywhere near your main render loop you are probably Doing It Wrong.
Calculate all your vertex positions, then stream them into the GPU via GL_STREAM_DRAW. If you can tolerate some latency use two VBOs and double-buffer.
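Here is a rough sketch of that batched approach (legacy-style GL calls for brevity; it assumes an OpenGL 1.5+ context with an extension loader such as GLEW already set up, and the quad math mirrors the line construction described in the question):

```cpp
#include <GL/glew.h>   // or any loader that exposes the buffer-object entry points
#include <cmath>
#include <vector>

struct Vert { float x, y; };

// Append the four corners of one "thick line" quad from (x0,y0) to (x1,y1).
void appendLineQuad(std::vector<Vert> &verts,
                    float x0, float y0, float x1, float y1, float halfWidth)
{
    float dx = x1 - x0, dy = y1 - y0;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0f) return;
    float nx = -dy / len * halfWidth;   // unit normal scaled by half the width
    float ny =  dx / len * halfWidth;
    verts.push_back({x0 + nx, y0 + ny});
    verts.push_back({x0 - nx, y0 - ny});
    verts.push_back({x1 - nx, y1 - ny});
    verts.push_back({x1 + nx, y1 + ny});
}

// Each frame: rebuild the vertex array on the CPU, stream it into one VBO,
// and draw every line with a single call.
void drawAllLines(GLuint vbo, const std::vector<Vert> &verts)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vert),
                 verts.data(), GL_STREAM_DRAW);             // orphan + upload
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(Vert), nullptr);    // offset 0 into the VBO
    glDrawArrays(GL_QUADS, 0, (GLsizei)verts.size());       // one call for all lines
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```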

On-the-fly Terrain Generation Based on An Existing Terrain

This question is very similar to that posed here.
My problem is that I have a map, something like this:
This map is made using 2D Perlin noise; I then run through the generated heightmap, assigning types and color values to each element of the terrain based on the height or slope of the corresponding element, so it's pretty standard. The map array is two-dimensional and exactly the screen's dimensions (pixel-per-pixel), so at 1200 by 800, generation takes about 2 seconds on my rig.
Now zooming in on the highlighted rectangle:
Obviously with increased size comes lost detail. And herein lies the problem. I want to create additional detail on the fly, and then write it to disk as the player moves around (the player would simply be a dot restricted to movement along the grid). I see two approaches for doing this, and the first one that came to mind I quickly implemented:
This is a zoomed-in view of a new biased local terrain created from a sampled element of the old terrain, which is highlighted by the yellow grid space (to the left of center) in the previous image. However, this system would require a great deal of modification because, for example, if you move one unit left and up from the yellow grid space, onto the beach tile, the terrain changes completely:
So for that to work properly you'd need to do an excessive amount of, I guess the word would be interpolation, to create a smooth transition as the player moved the 40 or so grid spaces in the local world required to reach the next tile over in the overworld. That seems complicated and very inelegant.
The second approach would be to break up the grid of the original map into smaller pieces, maybe dividing each square into 4? I haven't implemented this, and I'm not sure how I would do it in a way that actually increases detail, but I think that would probably end up being the best solution.
Any ideas on how I could approach this? Keep in mind it has to be local and on-the-fly. Just increasing the resolution of the map is something I want to avoid at all costs.
Rewrite your Perlin noise to be a function of position. Then you can increase the octaves (and thus the detail level) and resample the area at a higher resolution.
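To make that concrete, here is a rough sketch of multi-octave noise sampled purely by world position; the hash-based value noise below is just a cheap, self-contained stand-in for real Perlin noise:

```cpp
#include <cmath>

// Cheap hash-based 2D value noise in roughly [-1, 1] -- a stand-in for a
// proper Perlin/simplex implementation, good enough to show the idea.
static float hash2d(int x, int y)
{
    unsigned h = (unsigned)x * 374761393u + (unsigned)y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return ((h ^ (h >> 16)) & 0xffffff) / 8388607.5f - 1.0f;
}

static float noise2d(float x, float y)
{
    int xi = (int)std::floor(x), yi = (int)std::floor(y);
    float tx = x - xi, ty = y - yi;
    float sx = tx * tx * (3.0f - 2.0f * tx);   // smoothstep weights
    float sy = ty * ty * (3.0f - 2.0f * ty);
    float a = hash2d(xi, yi),     b = hash2d(xi + 1, yi);
    float c = hash2d(xi, yi + 1), d = hash2d(xi + 1, yi + 1);
    float top = a + sx * (b - a), bottom = c + sx * (d - c);
    return top + sy * (bottom - top);
}

// Height as a pure function of world position: the overview map samples it
// once per pixel with a few octaves; a zoomed-in region samples the same
// world coordinates at a finer step with more octaves, so the large features
// match and only new detail appears.
float fractalHeight(float x, float y, int octaves, float baseFreq, float persistence)
{
    float amplitude = 1.0f, frequency = baseFreq, sum = 0.0f, norm = 0.0f;
    for (int i = 0; i < octaves; ++i) {
        sum  += amplitude * noise2d(x * frequency, y * frequency);
        norm += amplitude;
        amplitude *= persistence;   // finer octaves contribute less...
        frequency *= 2.0f;          // ...at twice the spatial detail
    }
    return sum / norm;
}
```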