I was wondering how, in games, when the character moves forward the screen moves with them and the level continues on past the extent of a storyboard (like the background is longer than the storyboard). Does anyone know how this is done? (Not using a game engine.)
The objects on the game board generally aren't laid out in a storyboard file. The view that is the game board could certainly be part of the storyboard, but the content that that view draws is typically generated by the game's code. For example, it's common to use a sprite framework like SpriteKit or Cocos2D to draw objects using sprites. The data that the game manages is the collection of objects that appear in the game, and a sprite might be used to represent each of those objects.
Consider a platform jumping game like Doodle Jump, where the player hops from one platform to the next. The platforms keep coming and coming and coming as long as the player doesn't fall. Those platforms aren't laid out visually in the storyboard editor. Instead, a list of platform positions is somehow generated -- they could be read from a file, calculated using some function, or created by some algorithm. A sprite is created for each platform as needed to fill the screen, and probably destroyed after it scrolls off the screen.
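A minimal SpriteKit sketch of that idea in Objective-C (the property names, the platformXForY: helper, and the spacing constant are made up for illustration, not from any particular game):

```objc
@import SpriteKit;

// Inside an SKScene subclass. nextPlatformY, playerNode, and platformXForY:
// are illustrative stand-ins for however the game stores/generates positions.
- (void)spawnPlatformsIfNeeded {
    // Create platform sprites until the generated positions reach one
    // screen-height above the player.
    while (self.nextPlatformY < self.playerNode.position.y + self.size.height) {
        SKSpriteNode *platform = [SKSpriteNode spriteNodeWithImageNamed:@"platform"];
        platform.position = CGPointMake([self platformXForY:self.nextPlatformY],
                                        self.nextPlatformY);
        platform.name = @"platform";
        [self addChild:platform];
        self.nextPlatformY += 120.0;   // assumed vertical spacing between platforms
    }

    // Destroy platforms that have scrolled well below the visible area.
    [self enumerateChildNodesWithName:@"platform" usingBlock:^(SKNode *node, BOOL *stop) {
        if (node.position.y < self.playerNode.position.y - self.size.height) {
            [node removeFromParent];
        }
    }];
}
```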
I am just a beginner in game development. Right now I am developing a game using Apple's SpriteKit, and I have found that the best way to position nodes in the scene is to specify positions as percentages of the scene's width and height, since that keeps all the nodes in roughly the same place regardless of changes in device display size. Using absolute pixel positions is not a great idea, because as the iPhone's display size changes, nodes either get cut off or the scene squeezes, leaving empty space around the scene boundary. I have watched how Apple recommends using the scene editor, but my issue is that the scene editor lets you position nodes by entering pixel values, not values relative to the scene's width or height. Am I misunderstanding the scene editor's capabilities? If I position all my nodes using the scene editor, since it saves a lot of time, how can I avoid problems with different iPhone sizes? I appreciate your help.
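For reference, the percentage-based positioning described above might look like this in code; this is only a sketch of the asker's approach, with made-up node names and fractions:

```objc
@import SpriteKit;

// Inside an SKScene subclass: positions are expressed as fractions of the
// scene size instead of absolute pixel values, so the layout follows the scene.
- (void)layoutNodes {
    SKSpriteNode *panel = [SKSpriteNode spriteNodeWithImageNamed:@"panel"];
    panel.position = CGPointMake(self.size.width  * 0.5,    // horizontally centred
                                 self.size.height * 0.9);   // 90% of the way up
    [self addChild:panel];
}
```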
This is an age-old problem, common to all media formats.
You must decide, personally, what your favourite and most desirable target device is, and then make choices best for both it, and your creative process.
After making that decision you'll have to make your own decisions on how compromised you become on other devices, or how much you compromise your creative and production processes to benefit other device sizes and aspect ratios.
It's a balancing act.
And I strongly suggest favouring your favourite device and putting off all consideration of adaptation to other devices until after you've made something great.
Others will disagree.
I am currently struggling with the following problem:
I am creating a SpriteKit game in Objective-C in which I have to use a parallax animation in all three scenes. I use clouds with randomly generated X and Y positions; they appear in the greeting scene, in the game scene, and in the highscore scene.
Problem: whenever I switch from one scene to another, I have to restart the parallax animation, which leads to a messy interruption.
What might be the best strategy to keep the "cloud scene" running all the time in the background, no matter how many times you switch between the game scenes?
Thanks in advance.
For this purpose, consider using only one scene, and for each game state (i.e. greeting, game, highscore) use an SKNode that contains the elements required for that state (e.g. the greeting node holds the greeting elements).
This way you can easily keep a "constant" SKNode (i.e. add it once to the scene when the game first loads) that contains your parallax clouds, and add/remove the greeting, game, and highscore nodes as necessary.
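A rough Objective-C sketch of that structure (the property names and the makeCloudLayer helper are assumptions, not from the question):

```objc
// One SKScene hosts everything. The cloud layer is added once and never
// removed, so its parallax actions keep running across "scene" switches.
- (void)didMoveToView:(SKView *)view {
    self.cloudLayer = [self makeCloudLayer];   // builds the randomly placed clouds
    self.cloudLayer.zPosition = -1;            // keep it behind everything else
    [self addChild:self.cloudLayer];

    [self showLayer:self.greetingLayer];       // start on the greeting "screen"
}

// Swap the greeting/game/highscore node without touching the cloud layer.
- (void)showLayer:(SKNode *)layer {
    [self.currentLayer removeFromParent];
    self.currentLayer = layer;
    [self addChild:layer];
}
```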
I am working on a tilemap-based game in cocos2d in which the player moves in four directions, and I have used four images for the player's movement (left, right, up, and down). My problem is that when my background map changes its position, my sprite does not change its position with it. Can anyone tell me how to move a sprite along with the movement of the background?
Use a CCNode to contain both the background and the sprites for your players. Instead of moving the background, move that node.
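Something along these lines (cocos2d-era Objective-C; the world, tileMap, and playerSprite properties are illustrative):

```objc
// In the gameplay CCLayer: one "world" node holds the tilemap and the player,
// so scrolling the world moves both together. tileMap and playerSprite are
// whatever nodes you already create elsewhere.
- (void)buildWorld {
    self.world = [CCNode node];
    [self addChild:self.world];

    [self.world addChild:self.tileMap];       // the background map
    [self.world addChild:self.playerSprite];  // the player lives in the same space
}

// Scroll by moving the world node rather than the background alone.
- (void)scrollBy:(CGPoint)delta {
    self.world.position = ccpAdd(self.world.position, delta);
}
```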
There are a couple of ways to handle tilemap-based games, and neither of them is very convenient. One way is to leave your character in the center of the screen at all times and move the background underneath it. If your character moves 'right', you simply slide the background to the left, and vice versa. This will give the illusion that the character moves around the map, when in reality it remains centered. Under this paradigm you must remember to convert all detection / collisions into the world's space, and not just the screen space. If you don't convert everything, then your 'range' of collision / detection is limited to the size of the screen.
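If you do go with the first approach, cocos2d's coordinate-conversion helpers take care of the screen-to-world translation; a minimal sketch, where mapNode stands in for whatever node is being slid around:

```objc
// Inside the gameplay layer's touch handler: convert the touch into the
// scrolled map node's coordinate space before doing collision / tile lookups,
// so screen positions become world positions.
- (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
    CGPoint worldPoint = [self.mapNode convertTouchToNodeSpace:touch];
    // ... test worldPoint against platform / tile rects expressed in map coordinates
    return YES;
}
```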
The second method is to pan the camera over the world. You still keep the character in the middle of the screen, but it actually moves around in the world, and the camera follows. This makes the most intuitive sense to me because it allows you to view the game world as you see the real world. It is also much easier to deal with collisions because the position of the character and the world 'just work' and don't have to be converted. The downside here is that Cocos2D doesn't make it easy to use CCCamera, and the documentation is a little thin in that respect.
In your particular case, it sounds like you have a CCLayer problem. If your character is inside the layer you are moving, it will indeed remain in the same place relative to the map (as you are describing). Instead, float the character in a different layer on top of the map.
You could use a scrolling parallax node and then add the sprite onto the same layer as the background. They will move together.
I am pretty new to making games, but I am pretty familiar with programming for iOS. I am creating a shape-matching game: there would be an array of different shapes, and the user would drag a shape to the correct corresponding shape. If they get it right it stays, and if they get it wrong it shoots back. Now my question is: would that be easier using cocos2d or another game engine, or would it be just as easy without one, just using touch events?
Since the game you are describing is not graphically intense, I would recommend using UIKit. A couple of reasons why I would use UIKit over cocos2d:
Interface Builder / storyboards are awesome. You can lay out your screens and game elements on screen. (I know tools exist to do this with cocos, like CocosBuilder, but IMO they just don't compare to working directly in Xcode.)
UIKit animations couldn't be easier, and you can do some pretty powerful things with minimal code (see the sketch after this list).
You have direct access to elements such as UITableView, UICollectionView, UIScrollView, etc. There are cocos nodes that mimic these, but they don't match up in terms of response and behavior.
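For example, the drag-and-snap-back behaviour from the question takes only a few lines with UIKit; a rough sketch, where targetView and originalCenter are assumed properties of the view controller:

```objc
@import UIKit;

// Pan handler attached to a draggable shape view.
- (void)handlePan:(UIPanGestureRecognizer *)pan {
    if (pan.state == UIGestureRecognizerStateBegan) {
        self.originalCenter = pan.view.center;             // remember the start position
    }

    // Follow the finger.
    CGPoint translation = [pan translationInView:self.view];
    pan.view.center = CGPointMake(pan.view.center.x + translation.x,
                                  pan.view.center.y + translation.y);
    [pan setTranslation:CGPointZero inView:self.view];

    if (pan.state == UIGestureRecognizerStateEnded) {
        if (CGRectIntersectsRect(pan.view.frame, self.targetView.frame)) {
            pan.view.center = self.targetView.center;      // correct match: it stays
        } else {
            [UIView animateWithDuration:0.3 animations:^{
                pan.view.center = self.originalCenter;     // wrong match: shoot back
            }];
        }
    }
}
```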
For more graphically intense games I would still use cocos2d hands down. Some scenarios when you would use it:
You have a large number of sprites with a large number of animations (OpenGL is fast)
You want to use OpenGL-based effects like particles, lighting, etc.
You need a physics engine
You want to work off a prebuilt game engine (there are tons, such as LevelSVG, Kobold2D, line starter kit, etc.)
Hope this helps you.
For the past few months I've been looking into developing a Kinect based multitouch interface for a variety of software music synthesizers.
The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth. These should have:
X position
Y position
Height
Width
MIDI output channel
MIDI data scaler (converts x/y coords to MIDI values)
Two strategies I've considered for algorithmic creation are an XML description and somehow pulling stuff right off the screen (i.e. given a running program, find the x/y coords of all its controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, which I've seen frequently in Flash apps, is to put the screen size into a variable and use math to build all interface objects in terms of the screen size. Note that it isn't strictly necessary to make the objects the same size as the onscreen controls, or to represent all onscreen objects (some are just indicators, not interactive controls).
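As a concrete starting point, an object carrying the properties listed above might be no more than this (the class name and the block-based scaler are illustrative assumptions, not an existing API):

```objc
@import UIKit;   // for CGFloat / CGPoint

// One "virtual control" mirroring a soft-synth widget. Positions and sizes
// are kept as fractions of the screen size (the Flash-style approach
// mentioned above), so the layout survives resolution changes.
@interface SynthControl : NSObject
@property (nonatomic) CGFloat x;              // fraction of screen width
@property (nonatomic) CGFloat y;              // fraction of screen height
@property (nonatomic) CGFloat width;          // fraction of screen width
@property (nonatomic) CGFloat height;         // fraction of screen height
@property (nonatomic) NSUInteger midiChannel; // MIDI output channel
// Maps a point inside the control to a 0-127 MIDI value.
@property (nonatomic, copy) NSUInteger (^midiScaler)(CGPoint pointInControl);
@end

@implementation SynthControl
@end
```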
Other considerations:
Given (for now) two sets of X/Y coords as input (left and right hands), what is my best option for using them? My first instinct is/was to create some kind of focus test, where if the x/y coords fall within the interface object's bounds that object becomes active, and then becomes inactive if they fall outside some other bounds for some period of time. The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I can do more. I have a few gesture solutions (hidden Markov chains) I could screw around with. Not that they'd be easy to get to work, exactly, but it's something I could see myself doing given sufficient incentive.
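That focus test could be sketched roughly like this, reading the "other bounds" as a slightly padded exit rect (nothing here is tied to any Kinect SDK; the hand point is assumed to come from whatever skeleton-tracking layer is in use, and the margin/timeout values are arbitrary):

```objc
// activeControlBounds, focused, and lastInsideTime are assumed properties.
- (void)updateFocusWithHandPoint:(CGPoint)handPoint atTime:(NSTimeInterval)now {
    CGRect bounds = self.activeControlBounds;
    CGRect exitBounds = CGRectInset(bounds, -20.0, -20.0);   // padded "exit" rect

    if (CGRectContainsPoint(bounds, handPoint)) {
        self.focused = YES;                   // hand inside the control: take focus
    }
    if (CGRectContainsPoint(exitBounds, handPoint)) {
        self.lastInsideTime = now;            // still close enough to keep focus
    } else if (self.focused && now - self.lastInsideTime > 0.5) {
        self.focused = NO;                    // out of range too long: drop focus
    }
}
```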
So, to summarize, the problem is:
represent the interface (necessary because the default interface always expects mouse input)
select a control
manipulate it using two sets of x/y coords (rotary/continuous controller) or, in the case of switches, preferably use a gesture to switch it without giving/taking focus.
Any comments, especially from people who have worked/are working in multitouch io/NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.
Woah, lots of stuff here. I worked on lots of NUI stuff during my time at Microsoft, so let's see what we can do...
But first, I need to get this pet peeve out of the way: you say "Kinect based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect, though, since you aren't directly touching anything - accuracy is dictated not by finger size but by sensor accuracy and users' ability to precisely control finicky & lagging virtual cursors.
If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. User strikes and holds a pose to invoke some application-wide command (usually bringing up a menu)
2) Hover buttons. User moves a virtual cursor over a button and holds still for a certain period of time to select the button
3) Swipe-based navigation and selection. User waves their hand in one direction to scroll a list and in another direction to select from the list
4) Voice commands. User just speaks a command.
There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game) but frankly they suck: 1) using one hand for the cursor and the other hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click"
It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.
If I were building a music synthesizer, I would focus on approach #3 - swiping. Something like Dance Central. On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
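A crude sketch of the swipe part, just to make the idea concrete (nothing here comes from the Kinect SDK; the hand position is assumed to be already projected into screen space, scrollListByOneItemInDirection: is a hypothetical method, and the thresholds are arbitrary):

```objc
// Very rough horizontal-swipe detector: compare the left hand's horizontal
// velocity against a threshold, with a cooldown so one wave isn't counted twice.
- (void)processLeftHandX:(CGFloat)x atTime:(NSTimeInterval)now {
    if (now <= self.lastTime) { return; }                          // ignore out-of-order samples
    CGFloat velocity = (x - self.lastX) / (now - self.lastTime);   // points per second
    if (fabs(velocity) > 800.0 && now - self.lastSwipeTime > 0.6) {
        [self scrollListByOneItemInDirection:(velocity > 0 ? 1 : -1)];
        self.lastSwipeTime = now;
    }
    self.lastX = x;
    self.lastTime = now;
}
```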
P.S. I'd also like to give a shout out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/