Accessing CCSprite From Different Scene - objective-c

So basically what I'm trying to do is this: when the player completes a level (for example, level 1), the game switches back to the level select scene and swaps level 1's sprite image for a different one (for example, one with a check mark over it). I can replace the scene, but I don't know how to change the sprite in the new scene, specifically when the scene change occurs after the level is completed. So I'm assuming I would use a singleton class, am I right? If so, how would I go about using it?

Singletons are OK; don't be afraid to use them. Many components of cocos2d are singletons.
I think what you need is some sort of structure that keeps track of the state of the game (how many levels are completed, what the next level should be, etc.). When your level select scene is loaded, it should look up that 'state of the game' object (be it a singleton, a plist, etc.) and display itself accordingly.
I would stay away from passing information directly from one scene to another; this makes reordering scenes a headache later on.
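For illustration, a minimal sketch of such a 'state of the game' singleton might look like this (the class and method names are hypothetical, and persisting the set to a plist or NSUserDefaults is omitted):

// GameState.h
#import <Foundation/Foundation.h>

@interface GameState : NSObject
+ (GameState *)sharedState;
- (BOOL)isLevelCompleted:(NSUInteger)level;
- (void)markLevelCompleted:(NSUInteger)level;
@end

// GameState.m
@implementation GameState {
    NSMutableSet *_completedLevels; // NSNumbers holding level indices
}

+ (GameState *)sharedState {
    static GameState *shared = nil;
    static dispatch_once_t once;
    dispatch_once(&once, ^{ shared = [[GameState alloc] init]; });
    return shared;
}

- (id)init {
    if ((self = [super init])) {
        _completedLevels = [[NSMutableSet alloc] init];
    }
    return self;
}

- (BOOL)isLevelCompleted:(NSUInteger)level {
    return [_completedLevels containsObject:@(level)];
}

- (void)markLevelCompleted:(NSUInteger)level {
    [_completedLevels addObject:@(level)];
}
@end

Your gameplay scene would call markLevelCompleted: before transitioning back, and the level select scene would query isLevelCompleted: while setting up its sprites.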

First, let me make sure I understand the problem correctly.
You have a scene (A) with a sprite in it.
You transition to another scene (B) for the game play.
The game ends and you transition back to scene A.
When scene A redisplays, you want to change the image displayed by the sprite.
If I've got this right, then regardless of whether singletons are good or bad, you don't need one for this.
If, like me, you've created your sprite using a display frame from the CCSpriteFrameCache, then you can simply change the frame you want the sprite to use when "A" is redisplayed.
Some sample code demonstrating this can be seen in another question:
How to switch the image of a CCSprite
(Certainly, if I've got this right, then feel free to just dupe this)
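For what it's worth, the frame swap itself is short. A sketch, assuming the 'completed' frame was loaded into the shared CCSpriteFrameCache from a sprite sheet (the frame name and sprite variable are made up):

// In scene A, when it redisplays (e.g. in onEnter):
CCSpriteFrame *doneFrame = [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"level1-done.png"];
[levelOneSprite setDisplayFrame:doneFrame];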

Related

Players and deck methods in a card game

I'm programming a card game (of Uno / Mau Mau type) and I have this design problem:
The deck contains two stacks of cards, one of them shows faces, the other shows backs. When a game is in progress and a player throws a card, it should go onto the "faces" stack. However, when the game is finished, the cards of the last player should go back onto the "backs" stack.
1) Should the deck have two methods for adding cards (addToFacesStack and addToBacksStack)
or
2) Should the deck have an addCards method and decide itself which stack the cards should go onto (the deck would have to know the state of the game - in progress/finished)?
Also, when the game is in progress and a player (who knows the rules and selects cards to play accordingly) throws card(s) onto the "faces" stack, should the deck "re-check" whether the player's move is valid?
Thanks in advance for your suggestions!
Caroline
I think you should be asking the question: should the model know about the game logic or just the state of the game?
If YES, then you need to include the game logic inside your model, hence you can have only the addCards method, and the Deck will decide where to add the card(s). But in this case the game model and the game logic are tightly coupled: if you were to use the same model for another variation of the game (with different logic), this option would not be appropriate.
If NO, then you can follow the Boundary-Control-Entity design pattern. Here, you need separate methods for adding cards to the first or second stack, and you encode the game logic into your controller objects, which know the rules of the game. Using this pattern, you can reuse the same model and just employ different controllers based on the game being played.
Regarding your question:
Also, when the game is in progress and a player (who knows the rules and selects cards to play accordingly) throws card(s) onto the "faces" stack, should the deck "re-check" whether the player's move is valid?
In this case, you can have a controller that will check whether the move is valid or not. No need to encode the logic inside the model.
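To sketch the second option (all class and method names here are hypothetical), the Deck stays a dumb container exposing the two methods from option 1, and a controller applies the rules and decides where thrown cards go:

@class Card;

// Dumb model: two stacks, no game rules.
@interface Deck : NSObject
- (void)addToFacesStack:(NSArray *)cards;
- (void)addToBacksStack:(NSArray *)cards;
- (Card *)topFaceCard;
@end

// Controller: knows the rules and the state of the game.
@interface GameController : NSObject
@property (nonatomic, strong) Deck *deck;
@property (nonatomic, assign, getter=isGameFinished) BOOL gameFinished;
- (BOOL)isValidMove:(NSArray *)cards onTopOf:(Card *)topCard; // the Uno/Mau Mau rules live here
- (BOOL)playCards:(NSArray *)cards;
@end

@implementation GameController

- (BOOL)playCards:(NSArray *)cards {
    if (self.gameFinished) {
        // Game over: the last player's cards go back onto the "backs" stack.
        [self.deck addToBacksStack:cards];
        return YES;
    }
    // Game in progress: the controller, not the deck, re-checks the move.
    if (![self isValidMove:cards onTopOf:[self.deck topFaceCard]]) {
        return NO; // rejected; the cards stay in the player's hand
    }
    [self.deck addToFacesStack:cards];
    return YES;
}

@end

Swapping in a different controller (or a different isValidMove: implementation) then gives you another variant of the game without touching the Deck.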

Hooking up Chipmunk bodies to UIKit components?

I'm trying to get to grips with using Chipmunk (not the Obj-C version) with UIKit components on iOS, and still struggling immensely.
I'm trying to establish how, in the ChipmunkColorMatch example in the documentation, the UIButton instances are actually hooked up to any of the physics calculations. I see that the UIButtons are created inside the Ball class and some of their properties are set (type, image, etc.), but I don't understand where the cpBody or cpShape (whichever it is) is actually attached to that UIButton. I assume it needs to be, or else none of the physics would be reflected in the UI.
I've looked at the SimpleObjectiveChipmunk tutorial on the website too, but because it uses libraries unavailable to me (the Obj-C libraries), I can't establish how it works there either. Again, I see a UIButton being created and positioned on-screen, but I don't see how the cpBody (or in that case, ChipmunkBody) is linked to the button in any way.
Could anyone shed some light on how this works? Effectively what I'm going to need are some UIButton instances which can be flicked around, but I've not even got as far as working out how to create forces yet, since I can't get the bodies hooked up to the buttons.
Much obliged, thanks in advance.
EDIT: Should also point out that I am not, and do not want to use cocos2d in this project at all. I've seen tutorials using that, but that's a third layer of confusion to add in. Thanks!
Assuming this source is the project you're asking about, it looks like the magic happens in Ball's sync method -- it creates a CGAffineTransform representing the translation and rotation determined by the physics engine, and applies that to the button.
In turn, that method is called by the view controller's draw: method, which is timed to occur on every frame using CADisplayLink, and updates the physics engine before telling each Ball to sync.
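In other words, nothing is "attached" in an API sense; the link is one-way and re-established every frame. A paraphrased sketch of the pattern, using the Chipmunk 6 C API (the ivar names are made up, and depending on your setup you may need to convert between UIKit's y-down coordinates and your physics world's):

// Ball.m -- push the body's state onto the button each frame.
- (void)sync {
    cpVect pos = cpBodyGetPos(_body);
    cpFloat angle = cpBodyGetAngle(_body);
    CGAffineTransform t = CGAffineTransformMakeTranslation(pos.x, pos.y);
    _button.transform = CGAffineTransformRotate(t, angle);
}

// ViewController.m -- step the space, then sync every ball, once per frame.
- (void)viewDidLoad {
    [super viewDidLoad];
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(draw:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)draw:(CADisplayLink *)link {
    cpSpaceStep(_space, 1.0 / 60.0); // fixed timestep, for simplicity
    for (Ball *ball in _balls) {
        [ball sync];
    }
}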

OOP design preference regarding parent/child interaction

I am working on a music notation app which has a music staff class (a CCNode) and a note class (a CCSprite).
Notes are added to the music staff like:
// MusicStaff.m
[self addChild:note];
Notes have a particle emitter, and this needs to be added to the parent. I am of the opinion that doing something like:
// Note.m
[self.musicStaff addChild:self.emitter];
is not cool, because I don't like the idea of notes controlling the staff; I like to think of the staff as the one in control of what children it has.
I honestly feel like this particle emitter should be a child of Note, since it technically is part of the note, not part of the music staff, so adding it to the music staff inherently feels wrong. However, from what I understand about cocos2d, although you can add a child to a CCSprite, sprites do not manage the drawing of their children, so this particle emitter would not be visible.
That said, since as far as I know the only way to go about this is to add the emitter to the staff, I would prefer doing:
// MusicStaff.m
[self addChild:note];
[self addChild:note.emitter];
However, a team member on my project feels this is "backwards" and "dumb", and that the note should add the emitter directly to its parent. I'm just seeking some feedback as to whether my thoughts on this are indeed "backwards" and "dumb", or whether I have a valid point.
Also I am curious if there is another way to solve this problem, like adding the emitter directly to the note and making it draw its children somehow?
In terms of object-oriented design, if the Emitter is created by its parent the Note, I don't think it should add itself to the Staff. If anyone has to talk to the Staff, let it be its direct child, the Note. Even better, make the Note respond to questions asked by the Staff, so in the end the Staff controls what it wants to show.
You can add the particles as children of the sprite and they will be drawn. Whatever resource gave you the idea that child nodes are not drawn is wrong.
What I think you may have misunderstood is the issue of sprite batching. In that case, when you do use CCSpriteBatchNode, you can only add CCSprite objects to the batch node and the batch node's children. So in that case trying to add a particle effect or any other node as child of a sprite-batched sprite will cause an assertion in cocos2d telling you that this is illegal.
As for the "issue of dumbness": Neither option is really dumb, but adding the emitter to the parent has a minor benefit in that the note takes control over what is inherently the note's responsibility: managing the lifetime of the note's particle effect.

Fundamental Drag And Drop In iOS

I've been considering an app that implements a drag and drop sort of idiom, from maybe a side pane or a drawer, etc. What I can't wrap my head around is how to keep references to the objects I drop. I mean, it would be easy if I could just drop the object and then leave it alone, but I want more manipulation after the fact.
My brain just cannot wrap around the concept of creating objects out of thin air to place on the 'canvas', or of having preset objects (which I imagine would be limited, cumbersome and awkward) already on the canvas that would then just be activated and manipulated, seeing as the references to them are created before the fact (my apologies for the loose term 'reference'; I mean something like selecting the object and having its unique properties recognized or displayed).
There must be something I'm missing. So I wonder how one might go about implementing drag and drop, with manipulation of the dropped object after the fact, and whether there is sample code or a link to a git or svn repo (something like how MIT's Scratch, or Xcode's Interface Builder, might work).
For clarity's sake, I know how to go about fiddling with drag and drop thanks to DragKit, but not about editing 'properties' on the object dropped onto the 'canvas', and I would like there to be a near-infinite number of objects that can be dropped on the canvas, yet a set number of items in the drawer/side view.
If I'm understanding your question correctly, you want to be able to drag objects onto a canvas and then manipulate their properties individually. For instance, you'd drag square views onto the screen and then increase their size or change their color.
In order to do something like that, I would have an NSMutableArray or an NSMutableSet hold all of the on-canvas objects. Then, when any interaction comes, there are two options. If the objects are UIViews (or a subclass), you can dynamically attach gesture recognizers to them as they are dropped; in the gesture recognizer's target action, the recognizer.view property gives you the object being manipulated.
Otherwise, you would have to check which object on the canvas you are currently manipulating, by iterating through the array and seeing which object equals the one you are touching.
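A minimal sketch of the gesture recognizer route (the property and method names are hypothetical):

// Called whenever a new object is dropped onto the canvas.
- (void)addDroppedView:(UIView *)view {
    [self.canvasObjects addObject:view]; // NSMutableArray holding every canvas item
    [self.canvasView addSubview:view];
    UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
    [view addGestureRecognizer:pan];
}

- (void)handlePan:(UIPanGestureRecognizer *)recognizer {
    UIView *dragged = recognizer.view; // no array lookup needed
    CGPoint translation = [recognizer translationInView:self.canvasView];
    dragged.center = CGPointMake(dragged.center.x + translation.x, dragged.center.y + translation.y);
    [recognizer setTranslation:CGPointZero inView:self.canvasView];
}

The same recognizer.view trick works for taps, so a tap recognizer's action could look up the tapped object's properties and display an inspector for them.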
Is there anything that you are trying to do that is not working? Have you written any code in an attempt to do this?

How to keep model & controller separate from a CALayer based UI?

I'm trying to re-implement an old Reversi board game I wrote with a bit more of a snazzy UI. I've looked at Jens Alfke's GeekGameBoard code for inspiration, and CALayers look like the way to go for implementing the UI.
However, there is no clean separation of model and view in the GeekGameBoard code; the model is the view, which makes it hard to, for example, make a copy of the game state in order to perform game-tree search for the AI player. At the same time, I can't come up with an alternative structure that allows a separation of model and view without a constant battle to keep two parallel grids (one for the model, one for the view) in sync. This, of course, has its own problems.
How do I best implement the relationship between an AI-search-friendly model structure and a display-friendly view? Any suggestions / experiences would be appreciated. I'm dreading / half expecting an answer along the lines of "there is no good answer: deal with it as best you can", but I'm prepared to be surprised!
Thanks for the answer Peter. I'm not entirely sure I understand it fully, however. I can see how this works if you just have an initial set of pieces that are moved around, and even removed, but what happens when a person puts a new piece down? Would it work like this:
User clicks in the view.
View click is translated to a board location and controller is notified.
Controller creates a new Board with the successor state (if appropriate, i.e. it was a legal move).
The view picks up the new board via its bindings, tears down the existing view/layer hierarchy and replaces it with the current state.
Does that sound right?
PS: Sorry for failing to specify whether it was for the iPhone or Mac. I'm most interested in something that works for the iPhone, but if I can get it to work nicely on the Mac first I'm sure I can adapt the solution to work on the iPhone myself. (Or post a new question!)
In theory, it should be the same as for an NSView-based UI: Add a model property (or properties), expose it (or them) as bindings, then bind the view (layer) to the model through a controller.
For example, you might have a Board class with Pieces on it (each Piece having a reference to the Player who owns it), with all of those being model classes. Your controller would own a Board, and your view/layer would be able to display a Board, possibly with a subview/sublayer for each Piece.
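For example (a hypothetical minimal sketch), the model layer can be nothing but plain data objects, which also keeps the copy you need for game-tree search cheap:

// Model classes: pure state, no view or layer code.
@interface Player : NSObject
@property (nonatomic, copy) NSString *name;
@end

@interface Piece : NSObject
@property (nonatomic, strong) Player *owner;
@property (nonatomic, assign) NSInteger row;
@property (nonatomic, assign) NSInteger column;
@end

@interface Board : NSObject <NSCopying> // copyable, for the AI's search
@property (nonatomic, copy) NSArray *pieces; // of Piece
@end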
You'd bind your board view/layer to the controller's board property, and in your view/layer's setter for that property, create a subview/sublayer for each piece, and bind it to any properties of the Piece that it will need. (Don't forget to unbind and remove all the subviews/sublayers when replacing the main view/layer's Board.)
When you want to move or modify a Piece, you'd do so using its own properties; these will translate to property accesses on the view/layer. Ostensibly, you'll have your layer's properties set up to animate changes (so that, for example, changing a Piece's position will cause the layer for it to move accordingly).
The same goes for the Board. You might let the user change one or both tile colors; you'll bind your color well(s) through your game controller to its Board object, and with the view/layer bound to the same property of the same Board, it'll pick up the change automatically.
Disclaimers: I've never used Core Animation for anything, and if you're asking about Cocoa Touch instead of Cocoa, the above solution won't work, since it depends on Cocoa Bindings.
I have an iPhone application where almost all of the interface is constructed using Core Animation CALayers, and I use a very similar pattern to what Peter describes. He's correct in that you want to treat your CALayers as if they were NSViews / UIViews and manage their logic through controllers and data via model objects.
In my case, I create a hierarchy of controller objects which also function as model objects (I may refactor to split out the model components). Each of the controller objects manages a CALayer, so there ends up being a parallel CALayer display hierarchy to the model-controller one. For my application, I need to perform calculations for equations constructed using this hierarchy, so I use the controllers to provide calculated values from the bottom of the tree up. The controllers also handle user editing events, such as the insertion of new suboperations or deletion of operation trees.
I've created a layer-hosting view class that allows the CALayer tree to respond to touch or mouse events (the source of which is now available within the Core Plot project). For your boardgame example, the CALayer pieces could take in the touch events, and have their controllers manage the back-end logic (determine a legal move, etc.). You should just be able to move pieces around and maintain the same controllers without tearing everything down on every move.
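The touch-routing part of such a layer-hosting view might look roughly like this (a simplified sketch; Core Plot's actual implementation is more involved, and the controller lookup and method names here are hypothetical):

// CanvasView.m -- route touches to the CALayer under the finger.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self];
    // CALayer's -hitTest: expects a point in the receiver's superlayer's
    // coordinate space, so convert before asking the backing layer.
    CGPoint superPoint = [self.layer convertPoint:point toLayer:self.layer.superlayer];
    CALayer *hit = [self.layer hitTest:superPoint];
    if (hit != nil && hit != self.layer) {
        [[self controllerForLayer:hit] touchedAtPoint:point];
    }
}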