Changing game size in Phaser dynamically - resize

I'm building a game in which I can enter a building, which I'm handling via states. In other words, when my character overlaps with the door, the program starts a new state in which the interior of the building is built. Now, I want the interior to have smaller dimensions than the world, so I want to change the game size when I start this new state.
I tried this:
create: function()
{
    // game size was 1000x700, I want to scale it to 700x450
    game.width = 700;
    game.height = 450;
    // rest of creation code...
}
I also tried things like changing the camera bounds, world bounds, and world size, but the method above shows the most promise.
The problem, however, is that for some reason the resize doesn't show until I click away from my browser tab and back. Calling game.width in the console yields 700 at all times, but the canvas isn't actually rendered at that size until I tab out and back.
On top of that, the contents of the game (floor, furniture) are scaled down when the game resizes, defeating the purpose. I don't understand why, since there's no scaling anywhere in my code.
Any help would be greatly appreciated.

Edit:
I just saw that you were the one asking the question I linked to, so I guess the question is solved :D
I had a similar problem some time ago. As you described yourself, what you are doing is a resize, and resizing the game means re-scaling everything in it.
If you want to cut the game down to a smaller size, you can simply use
game.scale.setGameSize(700, 450);
(see this post if you need more information)
As additional information: I later had problems with cutting the game size equally on all sides, so if you run into the same problem, have a look at this post

Related

Could someone tell me why everything vibrates when the camera in my game moves?

I'm not sure why, but whenever the camera in my game moves, everything but the character it's focusing on does this weird thing where they move like they should, but they almost vibrate, and you can see a little trail behind each object, although it's very small. Can someone tell me why this is happening? Here's the code:
// smoothly move the camera toward its target position
x += (xTo - x) / camera_speed_width;
y += (yTo - y) / camera_speed_height;

// keep the camera inside the room
x = clamp(x, CAMERA_WIDTH / 2, room_width - CAMERA_WIDTH / 2);
y = clamp(y, CAMERA_HEIGHT / 2, room_height - CAMERA_HEIGHT / 2);

// update the target to the followed instance
if (follow != noone)
{
    xTo = follow.x;
    yTo = follow.y;
}

var _view_matrix = matrix_build_lookat(x, y, -10, x, y, 0, 0, 1, 0);
var _projection_matrix = matrix_build_projection_ortho(CAMERA_WIDTH, CAMERA_HEIGHT, -10000, 10000);
camera_set_view_mat(camera, _view_matrix);
camera_set_proj_mat(camera, _projection_matrix);
I can think of 2 options:
Your game runs at a low frame rate (30 FPS or lower); a higher FPS will render moving graphics more smoothly (60 FPS being the usual minimum).
Another possibility is that your camera is being set to a target multiple times; perhaps one part (or block of code) follows the player earlier than another. You can also let a viewport follow an object in the room editor, so perhaps that is set as well.
Try and see if these options will help you out.
If your camera is low-resolution, you should consider rounding/flooring your camera coordinates - otherwise the instances are (relative to the camera) at fractional coordinates, at which point you are at the mercy of the GPU as to how they will be rendered. If the instances themselves also use fractional coordinates, you are going to get wobble as the combined fractions round to one number or the other.
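To illustrate that idea outside of GML (this is a Python sketch of the concept, not code for your project): keep the smooth, fractional camera position for the math, but snap the value you actually render with to whole pixels.

import math

def snap_camera(cam_x, cam_y):
    # The camera keeps tracking fractional coordinates internally (the smooth
    # lerp toward its target), but the position used for rendering is floored,
    # so instances land on exact pixel boundaries relative to the camera.
    return math.floor(cam_x), math.floor(cam_y)

# example: the smooth position stays 312.37 / 148.92, rendering uses 312 / 148
render_x, render_y = snap_camera(312.37, 148.92)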

Kivy: Depth Order not so in Depth

Now I could be wrong about this but after testing it all day, I have discovered...
When adding a widget and setting the z-index, the value "0" seems to be the magic depth.
If a widget's Z is at 0, it will be drawn on top of everything that's not at 0, Z-wise.
It doesn't matter if a widget has a z-index of 99, -999, 10, -2 or whatever... It will not appear on top of a widget whose z-index is set to 0.
It gets more strange though...
Any index less than -2 or greater than 2 seems to create an "index out of range" error. Funny thing is...when I was working with a background and sprite widget, the background's Z was set to 999 and no errors. When I added another sprite widget, that's when the -2 to 2 z-index limitation appeared.
Yeah I know...sounds whacked!
My question is, am I right about "0" being the magic Z value?
If so, creating a simple 2.5D effect like making a sprite move behind a big rock will take some unwanted code.
Since you can only set Z when adding a widget, one must remove a widget and immediately add it back with the new Z value.
You'll have to do this with the moving sprite and the overlapping object in question. Hell, I already have that code practically written, but I want to find out from Kivy pros: is there a way to set the z-index without removing and re-adding a widget?
If not, I'll have to settle for the painful way.
My version of Kivy is 1.9.0
What do you mean by z-order? Drawing order is determined entirely by the order in which widgets are added to the parent, and the index argument to add_widget is just the list index at which the widget will be inserted. The correct way to change drawing order among widgets is to remove and re-add them (you can actually mess with the canvases manually, but that is the same thing at a lower level, and not a better idea).
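For reference, a minimal sketch of that remove-and-re-add approach (bring_to_front and send_to_back are my own helper names, not Kivy API):

def bring_to_front(parent, child):
    # Drawing order in Kivy follows the children list: a widget added with the
    # default index of 0 is drawn on top of its siblings, so removing and
    # re-adding a child moves it to the front.
    parent.remove_widget(child)
    parent.add_widget(child)

def send_to_back(parent, child):
    # Inserting at the end of the children list makes the widget draw first,
    # i.e. behind everything else.
    parent.remove_widget(child)
    parent.add_widget(child, index=len(parent.children))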
I found a working solution using basic logic, based on the fact that widgets have to be removed and added again in order to control depth/draw order.
I knew the Main Character widget had to be removed along with the object in question... so I created a Main Character Parent widget, which defines and controls the Main Character, apart from its Graphic widget.
My test involves the Main Character walking in front of a large rock, then behind it... creating a 2.5D effect.
I simply used the "y-" theory along with widget attach and detach code to create the desired effect.
The only thing that caught me off guard was that my Graphic widget for my Actor was loading textures. That was a big no-no because the FPS died.
Simple fix: I moved the texture loading to the Main Character Parent widget, so the loading is done once for all time.
PS, if anyone knows how to hide the scrollbars and wishes to share that knowledge, it'll be much appreciated. I haven't looked for an API solution for it yet, but I will soon.
Right now I'm just trying to make sure I can do the basic operations necessary for creating a commercial 2.5D game (handhelds).
I'm a graphic artist and web developer so coming up with lovely visuals won't be an issue. I'm more concerned with what'll be "under the hood" so to say. Hopefully enough, lol.

General considerations for NUI/touch interface

For the past few months I've been looking into developing a Kinect based multitouch interface for a variety of software music synthesizers.
The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth. These should have:
X position
Y position
Height
Width
MIDI output channel
MIDI data scaler (convert x-y coords to midi values)
Two strategies I've considered for algorithmic creation are an XML description and somehow pulling stuff right off the screen (i.e. given a running program, find the x/y coords of all controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, that I've seen frequently in Flash apps, is to put the screen size into a variable and use math to build all interface objects in terms of screen size. Note that it isn't strictly necessary to make the objects the same size as the onscreen controls, or to represent all onscreen objects (some are just indicators, not interactive controls).
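As an illustration only, here is roughly what such a control description could look like in Python (the class and field names are hypothetical, just mirroring the list above):

from dataclasses import dataclass

@dataclass
class SynthControl:
    x: float            # position of the control's bounding box
    y: float
    width: float
    height: float
    midi_channel: int   # MIDI output channel
    midi_cc: int        # controller number this control maps to

    def contains(self, hx, hy):
        # hit test: is a hand position inside this control's bounds?
        return self.x <= hx <= self.x + self.width and self.y <= hy <= self.y + self.height

    def to_midi(self, hx, hy):
        # data scaler: map the x position within the control to a 0-127 value
        # (a rotary control might use the y axis or both hands instead)
        t = (hx - self.x) / self.width
        return max(0, min(127, round(t * 127)))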
Other considerations:
Given (for now) two sets of x/y coords as input (left and right hands), what is my best option for using them? My first instinct is/was to create some kind of focus test, where if the x/y coords fall within an interface object's bounds that object becomes active, and then becomes inactive if they fall outside some other bounds for some period of time. The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I can do more. I have a few gesture solutions (hidden Markov models) I could screw around with. Not that they'd be easy to get to work, exactly, but it's something I could see myself doing given sufficient incentive.
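A rough sketch of one version of that focus test, with a bit of hysteresis so momentary jitter doesn't drop the selection (it reuses the hypothetical SynthControl above; the margin and timeout are made-up values):

import time

class FocusTracker:
    def __init__(self, margin=20.0, release_after=0.5):
        self.margin = margin                  # extra slack around the active control
        self.release_after = release_after    # seconds outside before focus drops
        self.active = None
        self._left_at = None

    def update(self, controls, hx, hy):
        if self.active is None:
            # gain focus as soon as the hand enters a control's bounds
            for c in controls:
                if c.contains(hx, hy):
                    self.active, self._left_at = c, None
                    break
        else:
            c = self.active
            inside = (c.x - self.margin <= hx <= c.x + c.width + self.margin
                      and c.y - self.margin <= hy <= c.y + c.height + self.margin)
            if inside:
                self._left_at = None
            elif self._left_at is None:
                self._left_at = time.monotonic()
            elif time.monotonic() - self._left_at > self.release_after:
                self.active = None            # lose focus only after being out long enough
        return self.active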
So, to summarize, the problem is
represent the interface (necessary because the default interface always expects mouse input)
select a control
manipulate it using two sets of x/y coords (rotary/continuous controller) or, in the case of switches, preferably use a gesture to switch it without giving/taking focus.
Any comments, especially from people who have worked/are working in multitouch io/NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.
Woah, lots of stuff here. I worked on lots of NUI stuff during my time at Microsoft, so let's see what we can do...
But first, I need to get this pet peeve out of the way: you say "Kinect based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect, though, since you aren't directly touching anything - accuracy is dictated not by finger size but by sensor accuracy and the user's ability to precisely control finicky and laggy virtual cursors.
If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. User strikes and holds a pose to invoke some application-wide command (usually bringing up a menu)
2) Hover buttons. User moves a virtual cursor over a button and holds still for a certain period of time to select the button
3) Swipe-based navigation and selection. User waves their hands in one direction to scroll a list and in another direction to select from the list
4) Voice commands. User just speaks a command.
There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game) but frankly they suck: 1) using one hand for the cursor and the other hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click".
It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.
If I were building a music synthesizer, I would focus on approach #3 - swiping. Something like Dance Central. On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
ps... I'd also like to give a shout out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/

How to code a random movement in limited area

I have a limited area (the screen) populated with a few moving objects (3-20 of them, so it's not like 10,000 :). Those objects should move at a constant speed and in random directions. But there are a few limitations:
objects shouldn't exit the area - so if one is close to the edge, it should move away from it
objects shouldn't bump into each other - so when one is close to another it should move away (but not get too close to a different one).
On the image below I have marked the allowed moves in this situation - for example object D shouldn't move straight up, as it would bring it to the "wall".
What I would like is a way to move them (one by one). Is there any simple way to achieve this without too many calculations?
The density of objects in the area would be rather low.
There are a number of ways you might programmatically enforce your desired behavior, given that you have such a small number of objects. However, I'm going to suggest something slightly different.
What if you ran the whole thing as a physics simulation? For instance, you could set up a Box2D world with no gravity, no friction, and perfectly elastic collisions. You could model your enclosed region and populate it with objects that are proportionally larger than their on-screen counterparts so that the on-screen versions never get too close to each other (because the underlying objects in the physics simulation will collide and change direction before that can happen), and assign each object a random initial position and velocity.
Then all you have to do is step the physics simulation, and map its current state into your UI. All the tricky stuff is handled for you, and the result will probably be more believable/realistic than what you would get by trying to come up with your own movement algorithm (or if you wanted it to appear more random and less believable, you could also just periodically apply a random impulse to a random object to keep things changing unpredictably).
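If you went the Box2D route, the setup could be as small as this rough pybox2d sketch (the sizes, counts and speeds are placeholders; check the API names against your particular Box2D binding):

import random
from Box2D import b2World, b2EdgeShape

AREA_W, AREA_H = 40.0, 30.0                      # world units, not pixels
world = b2World(gravity=(0, 0), doSleep=True)    # no gravity

# four static walls enclosing the region
walls = world.CreateStaticBody(shapes=[
    b2EdgeShape(vertices=[(0, 0), (AREA_W, 0)]),
    b2EdgeShape(vertices=[(AREA_W, 0), (AREA_W, AREA_H)]),
    b2EdgeShape(vertices=[(AREA_W, AREA_H), (0, AREA_H)]),
    b2EdgeShape(vertices=[(0, AREA_H), (0, 0)]),
])
for fixture in walls.fixtures:
    fixture.friction = 0.0
    fixture.restitution = 1.0                    # perfectly elastic walls

# a handful of oversized circles, so the on-screen objects never get too close
bodies = []
for _ in range(10):
    body = world.CreateDynamicBody(
        position=(random.uniform(5, AREA_W - 5), random.uniform(5, AREA_H - 5)))
    body.CreateCircleFixture(radius=2.0, density=1.0, friction=0.0, restitution=1.0)
    body.linearVelocity = (random.uniform(-5, 5), random.uniform(-5, 5))
    bodies.append(body)

# every frame: advance the simulation, then map each body.position into the UI
world.Step(1.0 / 60, 6, 2)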
You can use the hitTest: method of UIView:
UIView *touchedView = [self.superview hitTest:currentOrigin withEvent:nil];
Pass the current origin of the ball as the first argument; for the second argument you can pass nil.
That method returns the view that the ball has hit.
If a view is hit, just change the direction of the ball.
For the border, check the ball's frame against the boundary; if the ball goes outside it, just change the direction of the ball.

Objective-C draw a path and detect when it closes (forms a closed shape)

I'm fairly new to game programming (but not to programming) and I want to create a space ship which leaves a trail on the screen. Now my problem is to come up with a way to detect whether the trail left by the ship forms a closed shape - e.g. if the ship left a trail around an object, the object is caught inside its trail, so to speak.
The direction I'm thinking in is to draw the path of the trail on an image not visible on the screen, and every now and then try to fill it with a certain color and check whether the fill is contained by the trail path. However, it seems like a lot of overhead.
Any ideas how to do that? I'm using cocos2d, if that's of any help.
In game programming you often need to think more mathematically than visually.
First, does your ship continuously leave a trail on the screen? If yes, then it will be easier to know when the shape closes: you just have to remember the coordinate where your ship started to leave a trail, then wait for the trail to approach that coordinate again (for example, within a radius of 10 pixels, or else the user will need to be really accurate and hit exactly the same pixel to close the shape).
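A minimal sketch of that closure check in Python (the 10-pixel radius is the same arbitrary threshold as above):

import math

def trail_is_closed(trail, radius=10.0, min_points=20):
    # trail is the list of (x, y) coordinates the ship has visited so far.
    # Require a minimum number of points so the start and the newest point
    # aren't trivially close right after the trail begins.
    if len(trail) < min_points:
        return False
    (x0, y0), (xn, yn) = trail[0], trail[-1]
    return math.hypot(xn - x0, yn - y0) <= radius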
The visual representation of the trail is only there for the user; you'll never use it to compute anything. What you will do is keep in memory the path followed by the ship's trail: a polygon, which is nothing more than the list of coordinates it followed.
Then, after you know that your shape is closed, you have to determine whether an object is inside your polygon or not. It's possible that Objective-C or cocos2d (I don't know much about it) already contains a built-in function to test whether a point is inside a polygon. In Java there is the Polygon class, which makes this really easy. If you don't find anything, you can do it yourself; there are already great answers about this subject on SO, here is a nice one: How can I determine whether a 2D Point is within a Polygon?
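If you do end up writing the test yourself, the standard ray-casting version is short; a sketch:

def point_in_polygon(px, py, polygon):
    # polygon is the closed trail as a list of (x, y) vertices.
    # Cast a horizontal ray from the point and count the edges it crosses:
    # an odd number of crossings means the point is inside.
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            if px < (x2 - x1) * (py - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside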