cameraViewTransform and CGAffineTransformMakeScale - iphone-sdk-3.0

I'm trying to implement a digital zoom in an application, and I use the following line to change the zoom factor (it can be called many times while the camera interface is displayed):
picker.cameraViewTransform = CGAffineTransformMakeScale(zoomFactor, zoomFactor);
It works perfectly the first time I display the camera interface, but not after that: the transform used by the camera is not the transform I set. Any ideas?

Not sure I understand exactly what you are doing, but I can tell you that transforms do not accumulate unless you feed the existing transform back in each time.
For example, say you have a transform that rotates an object 45 degrees and you want to use it to spin the object. The first time you apply it, it rotates the object 45 degrees, but it doesn't rotate it any further on subsequent calls. That's because you're just setting the exact same transform over and over; a 45-degree transform is always the same.
To make the object spin, you apply the 45-degree rotation, then take the resulting transform and rotate that by 45 degrees, then take the result of that and rotate it again, and so on.
You need to do something like:
picker.cameraViewTransform = CGAffineTransformScale(picker.cameraViewTransform, zoomFactor, zoomFactor);
That way, your transforms will accumulate and you can zoom up and down.
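To see the accumulation concretely, here is a minimal sketch of the spinning example above (the view name and step method are mine, not from the question):
// Hypothetical helper: each call rotates spinView a further 45 degrees.
// Feeding the current transform back into CGAffineTransformRotate is what
// makes the rotations accumulate instead of replacing one another.
- (void)spinStep {
    spinView.transform = CGAffineTransformRotate(spinView.transform,
                                                 45.0 * M_PI / 180.0);
}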

This isn't so much an answer as a clue. Each time you bring the camera back to the front of the app (presumably using presentModalViewController:), a new transform is created at cameraViewTransform. The tricky thing is, it seems to take about a second for this process to complete, and I can find no delegate method that tells us exactly when the new transform is safely in place. In my app, I end up waiting about one second and THEN modifying the cameraViewTransform to suit my needs. Hacky, but it's the only solution I've found so far...
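A sketch of that delay hack, assuming picker and zoomFactor already exist; the one-second delay is the empirical value mentioned above, not a documented constant:
// After presenting the picker, wait for UIKit to install its own
// cameraViewTransform, then overwrite it with ours.
[self presentModalViewController:picker animated:YES];
[self performSelector:@selector(applyCameraZoom) withObject:nil afterDelay:1.0];

- (void)applyCameraZoom {
    picker.cameraViewTransform = CGAffineTransformMakeScale(zoomFactor, zoomFactor);
}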

Related

Could someone tell me why everything vibrates when the camera in my game moves?

I'm not sure why, but whenever the camera in my game moves, everything except the character it's focusing on moves as it should but also seems to vibrate, leaving a very small trail behind each object. Can someone tell me why this is happening? Here's the code:
// Ease the camera toward its target, then clamp it to the room bounds.
x += (xTo - x) / camera_speed_width;
y += (yTo - y) / camera_speed_height;
x = clamp(x, CAMERA_WIDTH/2, room_width - CAMERA_WIDTH/2);
y = clamp(y, CAMERA_HEIGHT/2, room_height - CAMERA_HEIGHT/2);
// Track the followed instance, if any.
if (follow != noone)
{
    xTo = follow.x;
    yTo = follow.y;
}
// Rebuild the view and orthographic projection matrices each step.
var _view_matrix = matrix_build_lookat(x, y, -10, x, y, 0, 0, 1, 0);
var _projection_matrix = matrix_build_projection_ortho(CAMERA_WIDTH, CAMERA_HEIGHT, -10000, 10000);
camera_set_view_mat(camera, _view_matrix);
camera_set_proj_mat(camera, _projection_matrix);
I can think of two options:
Your game runs at a low frame rate (30 FPS or lower); a higher frame rate renders moving graphics more smoothly (60 FPS being the usual minimum).
Another possibility is that your camera is being set to a target multiple times; perhaps one part (or code block) follows the player earlier than another. You can also let a viewport follow an object in the room editor, so check whether that is set as well.
Try these and see if they help you out.
If your camera is low-resolution, you should consider rounding/flooring your camera coordinates - otherwise the instances are (relative to the camera) at fractional coordinates, at which point you are at the mercy of the GPU as to how they will be rendered. If the instances themselves also use fractional coordinates, you will get wobble as the combined fractions round to one number or the other.
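A minimal sketch of that rounding, reusing the variables from the question's code; only the line that builds the view matrix changes:
// Snap the camera to whole pixels so instances never sit at fractional
// coordinates relative to the view.
var _cam_x = floor(x);
var _cam_y = floor(y);
var _view_matrix = matrix_build_lookat(_cam_x, _cam_y, -10, _cam_x, _cam_y, 0, 0, 1, 0);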

Special Kind of ScrollView

So I have my game, made with SpriteKit and Obj-C. I want to know a couple of things.
1) What is the best way to make scroll-views in SpriteKit?
2) How do I get this special kind of scroll-view to work?
The kind of scroll-view I'd like to use is one that, at first glance, seems like it could be pretty complicated. As you scroll through the objects in it, the ones that get close to the center of the screen get larger, and the ones being scrolled away from the center get smaller and smaller until they reach a limit and stop shrinking. The same limit applies to growing as they approach the center of the screen.
Also, I should probably note that I have tried a few cheap remakes of scroll views, like simply adding the objects to an SKNode and moving the SKNode's position relative to the finger's movement... but that is not what I want. If there is no real way to add a scroll-view to my game, this is what I'm asking: will I simply have to use some sort of formula? Make the images bigger as they get closer to a certain spot, and run that formula each time -touchesMoved is called? If so, what sort of formula would that be? Some math equation subtracting the node's position from the center of the screen and sizing it accordingly? Something like that? If that's the case, will you please give me a sensible math formula for that, and give it to me in code (possibly a full-out function)?
If ALL else fails, and there is no good way to do this, what would some other way be?
It is possible to use UIScrollViews with your SpriteKit scenes, but there's a bit of a workaround involved. My recommendation is to take a look at this GitHub project, which is what I based my UIScrollView on in my own projects. From the looks of it, most of the stuff you'd want has now been converted to Swift, rather than the Objective-C it used when I first looked at the project, so I don't know how that'll fare with you.
The project linked above would make your SKScene larger than the screen (I assume that is why it needs to be scrolled), so determining what is and is not close to the center of the scene won't be difficult. One thing you can do is use the update loop in SpriteKit to constantly update the size of sprites (perhaps just those on-screen) based on their distance from a fixed, known center point. For instance, if you have a screen of width and height 10, the midpoint is x,y = 5,5. You could then say that size = 1.0 - (2 * distance_from_midpoint / screen_width): at the midpoint the distance is 0, so the size is 1.0, and the farther away you get, the smaller your scale will be, reaching 0.0 at the edge. This is a crude example that does not account for a fixed maximum or minimum size, so you will need to work with it.
Good luck with your project.
Edit:
Alright, I'll go a bit out of my way here and help you out with the equation, although mine still isn't perfect.
This doesn't really give you a minimum scale, but it does give you a maximum one (basically at the midpoint). The equation does have some flaws, though. You might use it to compute separate x and y scales for your objects based on their distance from a midpoint, but you don't really want two different components to your scale. What if your sprite is right next to the x midpoint and the x scale comes out at 0.95? That's almost full-sized. But if the sprite is far from the midpoint on the y axis and the y scale comes out at, say, 0.20, then you have a problem.
To solve that, I just take the magnitude (hypotenuse) of the vector between the midpoint and the current sprite's coordinate. That hypotenuse gives me a number representing the true distance, which eliminates the problem of clashing scale values.
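Here is a minimal Objective-C sketch of that distance-based scaling, run from the scene's -update: method; the tuning constants (maxScale, minScale, falloff) are assumptions you would adjust to your layout:
// Shrink each child according to its true (hypotenuse) distance from the
// screen center, clamping the result between a minimum and maximum scale.
- (void)update:(NSTimeInterval)currentTime {
    CGPoint mid = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
    CGFloat maxScale = 1.0, minScale = 0.4, falloff = 0.002; // assumed tuning values
    for (SKNode *node in self.children) {
        CGFloat dx = node.position.x - mid.x;
        CGFloat dy = node.position.y - mid.y;
        CGFloat distance = sqrt(dx * dx + dy * dy);      // true distance, as described
        CGFloat scale = maxScale - falloff * distance;   // farther away = smaller
        [node setScale:MAX(minScale, MIN(maxScale, scale))];
    }
}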
I've made an example of how to calculate this in Google's Go Playground, so you can run the code and see what scales you get for different coordinates. The equation used there is slightly modified: it's basically the same as above but without the maxscale - part at the front.
Hope this helps out!
See this code in play.golang.org.

XNA model translation is bizarre

When using Matrix.CreateTranslation(x,y,z) I get bizarre results. I have tested using fixed values, one variable at a time and have determined the following:
When altering the X coordinates, the model moves from the top left corner to the bottom right corner.
When altering the Y coordinates, the model moves up and down as it should.
I do not plan to alter the Z coordinates, but because of the nature of my program I can't figure out exactly what it does.
I have my model drawn. Rotation works fine. I am performing my transformations in the correct order (at least I think): scale * rotation * translation.
I think the problem lies in my camera settings, but I have no idea exactly what the problem is. I am trying to create a top-down-style RTS camera.
Here are my camera settings:
campos = new Vector3(5000.0F, 5000.0F, 5000.0F);
effect.View = Matrix.CreateLookAt(campos, Vector3.Down, Vector3.Up);
I can provide more information as needed.
The second argument of Matrix.CreateLookAt is not the direction the camera is facing, but the targeted point.
If you try to make the camera look down, use
Matrix.CreateLookAt(campos, campos + Vector3.Down, Vector3.Forward)
This will tell the camera to always look at the point one unit below the camera.
Your translation probably doesn't work as expected because the camera is not looking at the point you intend, and therefore it looks like the model is moving diagonally.
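Putting it together, a sketch of the corrected setup; the world-matrix variables (scale, angle, x, y, z) are placeholders for whatever your code uses, not values from the question:
// Camera hovers at campos and looks straight down at the point one unit below it.
Vector3 campos = new Vector3(5000.0F, 5000.0F, 5000.0F);
effect.View = Matrix.CreateLookAt(campos, campos + Vector3.Down, Vector3.Forward);

// World matrix in the order the question describes: scale * rotation * translation.
effect.World = Matrix.CreateScale(scale)
             * Matrix.CreateRotationY(angle)
             * Matrix.CreateTranslation(x, y, z);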

How to code a random movement in limited area

I have a limited area (the screen) populated with a few moving objects (3-20 of them, so it's not like 10,000 :). The objects should move at a constant speed in random directions, but with a few limitations:
objects shouldn't exit the area - so if one is close to the edge, it should move away from it
objects shouldn't bump into each other - so when one is close to another, it should move away (but not get too close to a different one).
On the image below I have marked the allowed moves in this situation - for example, object D shouldn't move straight up, as that would take it into the wall.
What I would like is a way to move them (one by one). Is there a simple way to achieve this without too many calculations?
The density of objects in the area would be rather low.
There are a number of ways you might programmatically enforce your desired behavior, given that you have such a small number of objects. However, I'm going to suggest something slightly different.
What if you ran the whole thing as a physics simulation? For instance, you could set up a Box2D world with no gravity, no friction, and perfectly elastic collisions. You could model your enclosed region and populate it with objects that are proportionally larger than their on-screen counterparts so that the on-screen versions never get too close to each other (because the underlying objects in the physics simulation will collide and change direction before that can happen), and assign each object a random initial position and velocity.
Then all you have to do is step the physics simulation, and map its current state into your UI. All the tricky stuff is handled for you, and the result will probably be more believable/realistic than what you would get by trying to come up with your own movement algorithm (or if you wanted it to appear more random and less believable, you could also just periodically apply a random impulse to a random object to keep things changing unpredictably).
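A sketch of that setup with Box2D's C++ API (the header path is the 2.4.x one; older releases use Box2D/Box2D.h, and all the numbers here are arbitrary):
#include <box2d/box2d.h>

// Zero-gravity world containing one frictionless, perfectly elastic ball.
// Repeat the body setup for each object (sized a bit larger than its
// on-screen sprite) and add static edge bodies for the screen borders.
b2World world(b2Vec2(0.0f, 0.0f));

b2BodyDef ballDef;
ballDef.type = b2_dynamicBody;
ballDef.position.Set(2.0f, 3.0f);              // random initial position
b2Body* ball = world.CreateBody(&ballDef);

b2CircleShape circle;
circle.m_radius = 0.5f;
b2FixtureDef fixture;
fixture.shape = &circle;
fixture.density = 1.0f;
fixture.friction = 0.0f;                       // no friction
fixture.restitution = 1.0f;                    // perfectly elastic
ball->CreateFixture(&fixture);
ball->SetLinearVelocity(b2Vec2(1.5f, -2.0f));  // constant speed, random direction

// Each frame: advance the simulation, then copy positions to the UI.
world.Step(1.0f / 60.0f, 8, 3);
b2Vec2 p = ball->GetPosition();                // map (p.x, p.y) to the sprite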
You can use the hitTest:withEvent: method of UIView:
UIView *touchedView = [self.superview hitTest:currentOrigin withEvent:nil];
Pass the ball's current origin as the first argument; the second argument can be nil.
The method returns the view that the ball has hit. If there is a hit view, just change the direction of the ball.
For the borders, check the ball's frame against the boundary; if the ball goes outside it, change the ball's direction as well.
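A minimal sketch of that approach, assuming ball (a UIView) and velocity (a CGPoint ivar) are defined elsewhere and this method runs on a timer; all names here are mine:
// Bounce the ball off whatever view it hits and off the screen edges.
- (void)moveBall {
    CGPoint next = CGPointMake(ball.center.x + velocity.x,
                               ball.center.y + velocity.y);
    UIView *hit = [self.view hitTest:next withEvent:nil];
    if (hit != nil && hit != self.view && hit != ball) {
        velocity = CGPointMake(-velocity.x, -velocity.y); // reverse on contact
    }
    CGFloat r = ball.bounds.size.width / 2.0;
    if (next.x < r || next.x > self.view.bounds.size.width - r)
        velocity.x = -velocity.x;   // bounce off the left/right borders
    if (next.y < r || next.y > self.view.bounds.size.height - r)
        velocity.y = -velocity.y;   // bounce off the top/bottom borders
    ball.center = CGPointMake(ball.center.x + velocity.x,
                              ball.center.y + velocity.y);
}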

Get size of an expanding circle in a CABasicAnimation at any point in time

I would like to know how I can get the diameter (or radius) of an expanding circle animation at any point in time during the animation. I will end up stopping the animation right after I get the size as well, but I figure I can't stop it and remove it from the layer until I get the size of the circle.
For an example of how the expanding-circle animation is implemented, it is a variation on the implementation shown in the addGrowingCircleAtPoint:(CGPoint)point method in the answer to the iPhone Quartz2D render expanding circle question.
I have tried checking various values on the layers, the animation, etc., but can't seem to find anything. I figure that, worst case, I can make a best guess by taking the current time into the animation and using it to work out where the circle "should" be based on its from and to sizes. That seems like overkill for what I assume is a value incrementing someplace I can just read easily.
Update:
I have tried several properties on the presentation layer, including the transform, but the values never seem to change; they are always the same regardless of what size the circle is at the time I check.
Okay, here is how you get the current state of an animation while it is animating.
While Rob was close, he left out two key pieces of information.
First, from layer.presentationLayer.sublayers you have to get the layer you are animating, which for me is the only sublayer available.
Second, from this sublayer you cannot access the transform directly; you have to use valueForKeyPath: to get transform.scale.x. I used x because it's a circle, so x and y are the same.
I then use this to calculate the size of the circle at that moment, based on the values used to create the arc.
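In code, the steps above look roughly like this; circleLayer and baseRadius are assumed to come from the linked addGrowingCircleAtPoint: implementation:
// Read the live scale from the presentation layer's animating sublayer,
// derive the circle's current radius, then stop the animation.
CALayer *live = [[circleLayer.presentationLayer sublayers] objectAtIndex:0];
CGFloat scale = [[live valueForKeyPath:@"transform.scale.x"] floatValue];
CGFloat currentRadius = baseRadius * scale;    // size at this instant
[circleLayer removeAllAnimations];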
I assume what you're trying to get to is the current CATransform3D, and that from that, you can get to your circle size.
What you want is layer.presentationLayer.transform. See the CALayer docs for details on the presentationLayer. Also see the Core Animation Rendering Architecture.