I'm programming a card game (of Uno / Mau Mau type) and I have this design problem:
The deck contains two stacks of cards, one of them shows faces, the other shows backs. When a game is in progress and a player throws a card, it should go onto the "faces" stack. However, when the game is finished, the cards of the last player should go back onto the "backs" stack.
1) Should the deck have two methods for adding cards (addToFacesStack and addToBacksStack)
or
2) Should the deck have an addCards method and decide itself which stack the cards should go onto (the deck would have to know the state of the game - in progress/finished)?
Also, when the game is in progress and a player (who knows the rules and selects cards to play accordingly) throws card(s) onto the "faces" stack, should the deck "re-check" whether the player's move is valid?
Thanks in advance for your suggestions!
Caroline
I think you should be asking the question: should the model know about the game logic or just the state of the game?
If YES, then the game logic lives inside your model, so a single addCards method suffices and the Deck decides where to add the card(s). But in this case the game model and game logic are tightly coupled: if you were to reuse the same model for another variation of the game (with different rules), this option would not be appropriate.
If NO, then you can follow the Boundary-Control-Entity design pattern. Here you need separate methods for adding cards to the first or second stack, and you encode the game logic into your controller objects, which know the rules of the game. Using this pattern, you can reuse the same model and just plug in different controllers depending on the game being played.
Regarding your question:
Also, when the game is in progress and a player (who knows the rules and selects cards to play accordingly) throws card(s) onto the "faces" stack, should the deck "re-check" whether the player's move is valid?
In this case, you can have a controller that will check whether the move is valid or not. No need to encode the logic inside the model.
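As a minimal sketch of this second option (all names are made up for illustration), the Deck stays a dumb model with two explicit methods, while a hypothetical GameController owns the rules, re-checks each move, and decides which stack receives the cards:

```python
# Hypothetical sketch: Deck is a plain model; the controller owns the rules.
class Deck:
    def __init__(self):
        self.faces = []   # face-up stack (discards during play)
        self.backs = []   # face-down stack

    def add_to_faces(self, card):
        self.faces.append(card)

    def add_to_backs(self, card):
        self.backs.append(card)


class GameController:
    """Knows the game rules; the Deck does not."""

    def __init__(self, deck):
        self.deck = deck
        self.in_progress = True

    def is_valid_move(self, card):
        # Example Uno-style rule: match the color or value of the top card.
        if not self.deck.faces:
            return True
        top = self.deck.faces[-1]
        return card.color == top.color or card.value == top.value

    def play(self, card):
        # The controller re-checks validity and picks the target stack,
        # so the Deck never needs to know the game state.
        if self.in_progress:
            if not self.is_valid_move(card):
                raise ValueError("illegal move")
            self.deck.add_to_faces(card)
        else:
            self.deck.add_to_backs(card)
```

Swapping in a different variant of the game then means writing a new controller, while Deck is reused unchanged.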
Related
So I am making a multiplayer game in Unity and I want players to have a first-person experience while other players see a body walking around, rather than the floating blob that is the default first person controller. So I take the scripts from the first person controller and put them inside the third person controller, except the camera ends up in the lower torso rather than the head, where I want it to be. Does anyone know how I can move it into the head? Preferably so that I don't see the inside of the head XD
If you open the hierarchy of your new "fake first person controller", you will see plenty of child objects. Somewhere among them you will find the camera.
Just grab it and move it to the desired height!
*I don't know how much the character controllers have changed since Unity 3.x, but I'm pretty sure the philosophy is still much the same.
So basically what I am trying to do is when the player of my game completes a level (for example level 1), it switches scenes back to the level select scene, and swaps the sprite picture of level 1 to a different one (for example one that has a check mark over it). I can replace the scene but I don't know how to change the sprite in the new scene, specifically when the scene change occurs after the level is completed. So I am assuming I would use a singleton class, am I right? If so, how would I go about using it?
Singletons are ok, don't be afraid to use them. Many components of cocos2d are singletons.
I think what you need is some sort of structure that keeps track of the state of the game (how many levels are completed, what the next level should be, etc.). When your level-select scene is loaded, it should look up that "state of the game" object (be it a singleton, plist, etc.) and display itself accordingly.
I would stay away from passing information directly from one scene to another, this makes reordering them a headache later on.
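As a hedged sketch of that idea (the class and file names here are invented, not cocos2d API), the level-select scene queries a shared game-state object when it loads, instead of receiving anything from the previous scene:

```python
class GameState:
    """Hypothetical singleton tracking level completion across scenes."""
    _shared = None

    @classmethod
    def shared(cls):
        # Lazily create the single shared instance.
        if cls._shared is None:
            cls._shared = cls()
        return cls._shared

    def __init__(self):
        self.completed = set()

    def mark_completed(self, level):
        self.completed.add(level)

    def is_completed(self, level):
        return level in self.completed


def sprite_for_level(level):
    # When the level-select scene loads, it asks the singleton which
    # sprite image each level button should show.
    state = GameState.shared()
    if state.is_completed(level):
        return "level%d_done.png" % level
    return "level%d.png" % level
```

The gameplay scene only ever calls `mark_completed(...)`; it never needs a reference to the level-select scene, which keeps the two scenes decoupled.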
First, let me make sure I understand the problem correctly.
You have a scene (A) with a sprite in it.
You transition to another scene (B) for the game play.
The game ends and you transition back to scene A.
When scene A redisplays, you want to change the image displayed by the sprite.
If I've got this right, then regardless of whether singletons are good or bad, you don't need one for this.
If, like me, you've created your sprite using a display frame from the CCSpriteFrameCache, then you can simply change the frame you want the sprite to use when "A" is redisplayed.
Some sample code demonstrating this can be seen in another question:
How to switch the image of a CCSprite
(Certainly, if I've got this right, then feel free to just dupe this)
I am working on a music notation app which has a music staff class (CCNode) and a note class (CCSprite).
Notes are added to the music staff like:
// MusicStaff.m
[self addChild:note];
Notes have a particle emitter, and this needs to be added to a parent. I am of the opinion that doing something like:
// Note.m
[self.musicStaff addChild:self.emitter];
is not cool, because I don't like the idea of notes controlling the staff; I like to think of the staff as the one in control of what children it has.
I honestly feel like this particle emitter should be a child of Note, since it technically is part of the note, not part of the music staff-- so adding it to the music staff inherently feels wrong. However, from what I understand about cocos2d, although you can add a child to a CCSprite, the sprites do not manage the drawing of their children, so this particle emitter would not be visible.
That said, since as far as I know the only way to go about this is to add the emitter to the staff, I would prefer doing:
// MusicStaff.m
[self addChild:note];
[self addChild:note.emitter];
However, a team member on my project feels this is "backwards" and "dumb", and that the note should add the emitter directly to its parent. I am just seeking some feedback as to whether my thoughts on this are indeed "backwards" and "dumb", or if I have a valid point…
Also I am curious if there is another way to solve this problem, like adding the emitter directly to the note and making it draw its children somehow?
In terms of object-oriented design, if the Emitter is created by its parent the Note, I don't think it should add itself to the Staff. If anyone has to talk to the Staff, let it be its direct child, the Note. Even better, make the Note respond to questions asked by the Staff, so in the end the Staff controls what it wants to show.
You can add the particles as children of the sprite and they will be drawn. Whatever resource gave you the idea that child nodes are not drawn is wrong.
What I think you may have misunderstood is the issue of sprite batching. In that case, when you do use CCSpriteBatchNode, you can only add CCSprite objects to the batch node and the batch node's children. So in that case trying to add a particle effect or any other node as child of a sprite-batched sprite will cause an assertion in cocos2d telling you that this is illegal.
As for the "issue of dumbness": Neither option is really dumb, but adding the emitter to the parent has a minor benefit in that the note takes control over what is inherently the note's responsibility: managing the lifetime of the note's particle effect.
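One way to get the best of both positions, sketched here in plain Python with made-up names (not cocos2d API): let the note *answer a question* about what needs drawing at staff level, while the staff stays the one that actually attaches children:

```python
# Hypothetical sketch: the staff stays in control of its own children by
# asking each note for any extra nodes that must live at staff level.
class Note:
    def __init__(self):
        self.emitter = "particle-emitter"   # stand-in for a particle system

    def staff_level_nodes(self):
        # The note answers the staff's question; it never touches the staff.
        return [self.emitter]


class MusicStaff:
    def __init__(self):
        self.children = []

    def add_note(self, note):
        self.children.append(note)
        # The staff, not the note, decides to attach the emitter.
        self.children.extend(note.staff_level_nodes())
```

The note still "owns" the knowledge that it has an emitter, but the dependency only points upward in one direction: the staff calls the note, never the reverse.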
I have a class that instantiates a hierarchy of classes. The interface should hide all of the internal hierarchy by presenting a single class reference. (And the interface to the classes at intermediate levels should hide internal implementation details in the same way.) I am having a little difficulty deciding how to flow the interface parameters down to the lowest levels and to flow state changes up to the top level. What I have come up with gets messy.
I have seen some discussions of the blackboard pattern but, from what I have seen, it looks ad hoc and flat rather than hierarchical. (Although flat vs hierarchical may not be important.)
The class I have is a calendar view, subclasing UIView. The hierarchy includes a header class, a grid class (instantiated more than once), a tile class (instantiated many times), plus some helper classes. It's based on Keith Lazuka's kal calendar in Objective C. I've decided to reorganize it for my needs, and wanted to rethink this part of it before I introduce problems with flexibility.
My question is in the title.
I have decided that the KVO (Key Value Observer) design pattern makes sense for bubbling state information from the lower levels to the top. Only that state information that needs to flow up is observed at each of the corresponding layers.
In this application, a tapped tile event is sent to the observer at the next level up (the grid level), telling it that the user has selected a date, which is a property of the tile class.
At the grid level, the grid changes its state based on its current state and the new information its observer receives from the tile. (In my calendar, the user can select a range of dates by choosing a start date and an end date, and can continue tapping tiles to change the selected range.) This change of state at the grid level translates into a change in the start and/or end date, so an NSDictionary property is updated.
At the calendar level, an observer sees the startDate/endDate dictionary change. Regardless of which grid this came from (there are two grids, only one of which is active at a time; the tiles and the calendar do not need to be aware of this), the calendar's start and end dates are updated.
The calendar is a view that is planted into one of the other views of the application, initialized with a month to be shown and with a selected date range (start and stop dates). Information flows down from the top through the pointers to each of the immediate subviews at any layer; those subviews take care of keeping their own subviews configured. I have eliminated the need to explicitly add delegate methods or callbacks, and that has simplified the connections from top to bottom. Connections only go to the immediate level above or below in the hierarchy.
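The bubbling described above can be sketched language-neutrally (this is plain Python with invented names, standing in for KVO observation) as one small observer per layer, where each layer only knows the layer directly above it:

```python
# Hypothetical sketch of the tile -> grid -> calendar bubbling.
class Tile:
    def __init__(self, date, observer):
        self.date = date
        self.observer = observer   # the grid, one level up

    def tap(self):
        # Only the selected date flows upward.
        self.observer.tile_selected(self.date)


class Grid:
    def __init__(self, observer):
        self.observer = observer   # the calendar, one level up
        self.selection = {}

    def tile_selected(self, date):
        # First tap sets the start date, the next tap the end date;
        # a tap after a complete range starts a new selection.
        if "start" not in self.selection or "end" in self.selection:
            self.selection = {"start": date}
        else:
            self.selection["end"] = date
        self.observer.selection_changed(dict(self.selection))


class Calendar:
    def __init__(self):
        self.start = None
        self.end = None

    def selection_changed(self, selection):
        # The calendar does not care which grid the change came from.
        self.start = selection.get("start")
        self.end = selection.get("end")
```

Each connection here goes only one level up, mirroring the rule that connections never skip layers of the hierarchy.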
This may not seem like much after all, because it looks rather straightforward. But it wasn't clear until I spent awhile thinking about it. I hope it gives others some ideas for their own code. I would still like to know if there are other suggestions responding to my question.
I'm trying to re-implement an old Reversi board game I wrote with a bit more of a snazzy UI. I've looked at Jens Alfke's GeekGameBoard code for inspiration, and CALayers looks like the way to go for implementing the UI.
However, there is no clean separation of model and view in the GeekGameBoard code; the model is the view, which makes it hard, for example, to make a copy of the game state in order to perform game-tree search for the AI player. However, I don't seem to be able to come up with an alternative structure that allows a separation of model and view without a constant battle to keep two parallel grids (one for the model, one for the view) in sync. This, of course, has its own problems.
How do I best implement the relationship between an AI search-friendly model structure and a display-friendly view? Any suggestions / experiences would be appreciated. I'm dreading / half expecting an answer along the lines of "there is no good answer: deal with it as best you can", but I'm prepared to be surprised!
Thanks for the answer Peter. I'm not entirely sure I understand it fully, however. I can see how this works if you just have an initial set of pieces that are moved around, and even removed, but what happens when a person puts a new piece down? Would it work like this:
User clicks in the view.
View click is translated to a board location and controller is notified.
Controller creates a new Board with the successor state (if appropriate, i.e. it was a legal move).
The view picks up the new board via its bindings, tears down the existing view/layer hierarchy and replaces it with the current state.
Does that sound right?
PS: Sorry for failing to specify whether it was for the iPhone or Mac. I'm most interested in something that works for the iPhone, but if I can get it to work nicely on the Mac first I'm sure I can adapt the solution to work on the iPhone myself. (Or post a new question!)
In theory, it should be the same as for an NSView-based UI: Add a model property (or properties), expose it (or them) as bindings, then bind the view (layer) to the model through a controller.
For example, you might have a Board class with Pieces on it (each Piece having a reference to the Player who owns it), with all of those being model classes. Your controller would own a Board, and your view/layer would be able to display a Board, possibly with a subview/sublayer for each Piece.
You'd bind your board view/layer to the controller's board property, and in your view/layer's setter for that property, create a subview/sublayer for each piece, and bind it to any properties of the Piece that it will need. (Don't forget to unbind and remove all the subviews/sublayers when replacing the main view/layer's Board.)
When you want to move or modify a Piece, you'd do so using its own properties; these will translate to property accesses on the view/layer. Ostensibly, you'll have your layer's properties set up to animate changes (so that, for example, changing a Piece's position will cause the layer for it to move accordingly).
The same goes for the Board. You might let the user change one or both tile colors; you'll bind your color well(s) through your game controller to its Board object, and with the view/layer bound to the same property of the same Board, it'll pick up the change automatically.
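The core binding idea can be sketched without Cocoa at all (plain Python here, with invented names; Cocoa Bindings would do the wiring for you): the view registers interest in the controller's board property, and the setter notifies it so sublayers get rebuilt from the new model:

```python
# Hypothetical sketch of the binding: the view observes the controller's
# board property and rebuilds its per-piece layers when it changes.
class Board:
    def __init__(self, pieces):
        self.pieces = list(pieces)   # pure model: no drawing code


class BoardController:
    def __init__(self):
        self._board = None
        self._observers = []

    def bind(self, callback):
        self._observers.append(callback)

    @property
    def board(self):
        return self._board

    @board.setter
    def board(self, new_board):
        self._board = new_board
        for callback in self._observers:
            callback(new_board)   # the "binding" fires


class BoardView:
    def __init__(self, controller):
        self.layers = []
        controller.bind(self.board_changed)

    def board_changed(self, board):
        # Tear down the old sublayers and create one per piece.
        self.layers = ["layer:%s" % p for p in board.pieces]
```

Because the view only ever reacts to the model it is handed, replacing the whole Board with a successor state (as in the game-tree search case) is just one property assignment.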
Disclaimers: I've never used Core Animation for anything, and if you're asking about Cocoa Touch instead of Cocoa, the above solution won't work, since it depends on Cocoa Bindings.
I have an iPhone application where almost all of the interface is constructed using Core Animation CALayers, and I use a very similar pattern to what Peter describes. He's correct in that you want to treat your CALayers as if they were NSViews / UIViews and manage their logic through controllers and data via model objects.
In my case, I create a hierarchy of controller objects which also function as model objects (I may refactor to split out the model components). Each of the controller objects manages a CALayer, so there ends up being a parallel CALayer display hierarchy to the model-controller one. For my application, I need to perform calculations for equations constructed using this hierarchy, so I use the controllers to provide calculated values from the bottom of the tree up. The controllers also handle user editing events, such as the insertion of new suboperations or deletion of operation trees.
I've created a layer-hosting view class that allows the CALayer tree to respond to touch or mouse events (the source of which is now available within the Core Plot project). For your boardgame example, the CALayer pieces could take in the touch events, and have their controllers manage the back-end logic (determine a legal move, etc.). You should just be able to move pieces around and maintain the same controllers without tearing everything down on every move.