touchesEnded:withEvent: not detecting the top node in SpriteKit - objective-c

I have a custom class Object derived from SKSpriteNode with an SKLabelNode member variable.
#import <SpriteKit/SpriteKit.h>
@interface Object : SKSpriteNode {
SKLabelNode* n;
}
@end
In the implementation, I set up the SKLabelNode.
#import "Object.h"
@implementation Object
- (instancetype) initWithImageNamed:(NSString *)name {
self = [super initWithImageNamed:name];
n = [[SKLabelNode alloc] initWithFontNamed:@"Courier"];
n.text = @"Hello";
n.zPosition = -1;
//[self addChild:n];
return self;
}
Note: I have not yet added the SKLabelNode as a child to Object. I have the line commented out.
I have a separate class derived from SKScene. In this class, I add an instance of Object as a child. This is my touchesBegan:withEvent: method in this class:
- (void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch* touch = [touches anyObject];
CGPoint loc = [touch locationInNode:self];
NSArray* nodes = [self nodesAtPoint:loc]; //to find all nodes at the touch location
SKNode* n = [self nodeAtPoint:loc]; //to find the top most node at the touch location
NSLog(@"TOP: %@",[n class]);
for(SKNode* node in nodes) {
NSLog(@"%@",[node class]);
}
}
When I tap on the instance of Object in my SKScene class, it works as I expected: it detects that the node is an instance of Object. It logs:
TOP: Object
Object
Now if I go back to my Object class and uncomment [self addChild:n] so that the SKLabelNode is added as a child to Object, this is logged:
TOP: SKLabelNode
Object
SKLabelNode
SKSpriteNode
Why does adding an SKLabelNode as a child to a class derived from SKSpriteNode cause the touch to detect the object itself, along with an SKLabelNode and an SKSpriteNode?
Furthermore, why is the SKLabelNode on top? Its zPosition is -1, so I would expect the Object to be detected as the topmost node in touchesBegan:withEvent: of the other class.
Clearly, I am misunderstanding an important concept.

If you use nodeAtPoint to check for the node in -touchesEnded:, it will definitely return the topmost node at the point of the touch, which in this case is the SKLabelNode. There are a few ways you can work around this:
Making the subclass detect its own touches
The cleanest way would be to make the Object subclass detect its own touches, by setting its userInteractionEnabled property to YES. This can be done in the init method itself.
-(instancetype) init {
if (self = [super init]) {
self.userInteractionEnabled = YES;
}
return self;
}
Now, implement the touch delegate methods in the class itself. Since the SKLabelNode's userInteractionEnabled property defaults to NO, the Object's touch methods will still be triggered when the label is tapped.
You can use either delegation, NSNotificationCenter or a direct message to the parent in the touch delegate.
-(void)touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event {
//Send a message to the delegate, which is the SKScene, should be included in protocol
[self.delegate handleTouchOnObjectInstance:self];
//or, send a message directly to the SKScene, the method should be included in the interface.
MyScene *myScene = (MyScene*)self.scene;
[myScene handleTouchOnObjectInstance:self];
}
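For the delegation route, the protocol itself is not shown above; a minimal sketch might look like this (the protocol name and the delegate property are assumptions, only handleTouchOnObjectInstance: comes from the snippet above):
@class Object;

@protocol ObjectTouchDelegate <NSObject>
- (void)handleTouchOnObjectInstance:(Object *)object;
@end

// added to the Object interface so it can reach back to the scene:
@property (nonatomic, weak) id<ObjectTouchDelegate> delegate;
The scene would then adopt this protocol and set object.delegate = self when it adds the Object as a child.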
Checking for the node's parent
In the touch delegate of the SKScene, you can check whether the parent node of the SKLabelNode returned from nodeAtPoint is an instance of the Object class.
//Inside the touch delegate, after extracting touchPoint
SKNode *node = [self nodeAtPoint: touchPoint];
if ([node isKindOfClass: [SKLabelNode class]]) {
if ([node.parent isKindOfClass: [Object class]])
[self performOperationOnObjectInstance: (Object*)node.parent];
} else if ([node isKindOfClass: [Object class]])
[self performOperationOnObjectInstance: (Object*)node];
Use nodesAtPoint instead of nodeAtPoint
This method returns an array of all nodes detected at a certain point. So if the SKLabelNode is tapped, the nodesAtPoint method will return the Object node as well.
//Inside the touch delegate, after extracting touchPoint
NSArray *nodes = [self nodesAtPoint: touchPoint];
for (SKNode *node in nodes) {
if ( [node isKindOfClass: [Object class]]) {
[self performOperationOnObjectInstance: (Object*)node];
break;
}
}
EDIT:
There is a difference between the zPosition and the node tree.
The zPosition only tells the scene to render nodes in a certain order, whatever their child order may be. This property was introduced to make it easier to specify the order in which a node's children are drawn; achieving the same effect purely by adding nodes to the parent in a specific order quickly becomes difficult to manage.
When the nodesAtPoint method is called, the scene conducts a depth-first search through the node tree to see which nodes intersect the given touch point. This explains the order in which the array is populated.
The nodeAtPoint method returns the deepest node the scene can find at that point, taking zPosition into account.
The following section from Apple's documentation explains it all extensively: Creating The Node Tree
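As a quick illustration of the zPosition point (the node names here are made up), a child added to the parent last can still render below a sibling added earlier:
SKSpriteNode *red = [SKSpriteNode spriteNodeWithColor:[UIColor redColor] size:CGSizeMake(100, 100)];
SKSpriteNode *blue = [SKSpriteNode spriteNodeWithColor:[UIColor blueColor] size:CGSizeMake(100, 100)];
red.zPosition = 1;   // drawn on top...
blue.zPosition = 0;  // ...even though blue is added to the parent last
[self addChild:red];
[self addChild:blue];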

Related

Drag and Drop from the Finder to a NSTableView weirdness

I'm trying to understand how best to implement drag and drop of files from the Finder to an NSTableView which will subsequently list those files.
I've built a little test application as a proving ground.
Currently I have a single NSTableView with FileListController as its data source. It's basically an NSMutableArray of File objects.
I'm trying to work out the best / right way to implement the drag and drop code for the NSTableView.
My first approach was to subclass the NSTableView and implement the required methods:
TableViewDropper.h
#import <Cocoa/Cocoa.h>
@interface TableViewDropper : NSTableView
@end
TableViewDropper.m
#import "TableViewDropper.h"
@implementation TableViewDropper {
BOOL highlight;
}
- (id)initWithCoder:(NSCoder *)aDecoder {
self = [super initWithCoder:aDecoder];
if (self) {
// Initialization code here.
NSLog(@"init in initWithCoder in TableViewDropper.h");
[self registerForDraggedTypes:@[NSFilenamesPboardType]];
}
return self;
}
- (BOOL)performDragOperation:(id < NSDraggingInfo >)sender {
NSLog(@"performDragOperation in TableViewDropper.h");
return YES;
}
- (BOOL)prepareForDragOperation:(id)sender {
NSLog(@"prepareForDragOperation called in TableViewDropper.h");
NSPasteboard *pboard = [sender draggingPasteboard];
NSArray *filenames = [pboard propertyListForType:NSFilenamesPboardType];
NSLog(@"%@",filenames);
return YES;
}
- (NSDragOperation)draggingEntered:(id <NSDraggingInfo>)sender
{
highlight=YES;
[self setNeedsDisplay: YES];
NSLog(@"drag entered in TableViewDropper.h");
return NSDragOperationCopy;
}
- (void)draggingExited:(id)sender
{
highlight=NO;
[self setNeedsDisplay: YES];
NSLog(@"drag exit in TableViewDropper.h");
}
-(void)drawRect:(NSRect)rect
{
[super drawRect:rect];
if ( highlight ) {
//highlight by overlaying a green border
[[NSColor greenColor] set];
[NSBezierPath setDefaultLineWidth: 18];
[NSBezierPath strokeRect: rect];
}
}
@end
The draggingEntered and draggingExited methods both get called, but prepareForDragOperation and performDragOperation don't. I don't understand why not.
Next I thought I'd subclass the ClipView of the NSTableView instead. Using the same code as above and just changing the class type in the header file to NSClipView, I find that prepareForDragOperation and performDragOperation now work as expected, however the ClipView doesn't highlight.
If I subclass the NSScrollView then all the methods get called and the highlighting works, but not as required: it's very thin and, as expected, surrounds the entire NSTableView rather than just the area below the table header as I'd like.
So my question is: what is the right thing to subclass, and what methods do I need, so that when I perform a drag and drop from the Finder, the ClipView highlights properly and prepareForDragOperation and performDragOperation get called?
And also, when performDragOperation is successful, how can this method call a method within my FileListController telling it to create a new File object and add it to the NSMutableArray?
Answering my own question.
It seems that subclassing the NSTableView (not the NSScrollView or the NSClipView) is the right way to go.
Including this method in the subclass :
- (NSDragOperation)draggingUpdated:(id <NSDraggingInfo>)sender {
return [self draggingEntered:sender];
}
Solves the problem of prepareForDragOperation and performDragOperation not being called.
To allow you to call a method within a controller class, you make the delegate of your NSTableView the controller, in this case FileListController.
Then within performDragOperation in the NSTableView subclass you use something like:
NSPasteboard *pboard = [sender draggingPasteboard];
NSArray *filenames = [pboard propertyListForType:NSFilenamesPboardType];
id delegate = [self delegate];
if ([delegate respondsToSelector:@selector(doSomething:)]) {
[delegate performSelector:@selector(doSomething:)
withObject:filenames];
}
This will call the doSomething method in the controller object.
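For the second part of the question (creating File objects when the drop succeeds), the controller method might look roughly like this; the File initializer, the files array, and the tableView outlet are assumptions, not taken from the project:
// In FileListController
- (void)doSomething:(NSArray *)filenames {
    for (NSString *path in filenames) {
        File *file = [[File alloc] initWithPath:path]; // hypothetical File initializer
        [self.files addObject:file];                   // files: the NSMutableArray backing the table
    }
    [self.tableView reloadData];                       // assumes an outlet to the NSTableView
}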
Updated example project code here.

Sprite with userInteractionEnabled set to YES does not receive touches when covered by normal sprites

I put a sprite A (subclassed to receive touches, with userInteractionEnabled set to YES) in my scene, and then on top of it a normal sprite B that does not take touches (userInteractionEnabled at its default, NO), completely covering sprite A.
Tapping on sprite B, I assumed that sprite A would get the touch, but nothing happens. The relevant part of the docs is below.
I feel something is unclear here, because it seems that sprite B still receives the touch but throws it away, OR sprite A is removed as a possible touch receiver because it's not visible.
From the docs:
https://developer.apple.com/library/ios/documentation/GraphicsAnimation/Conceptual/SpriteKit_PG/Nodes/Nodes.html#//apple_ref/doc/uid/TP40013043-CH3-SW7
For a node to be considered during hit-testing, its
userInteractionEnabled property must be set to YES. The default value
is NO for any node except a scene node. A node that wants to receive
events needs to implement the appropriate responder methods from its
parent class (UIResponder on iOS and NSResponder on OS X). This is one
of the few places where you must implement platform-specific code in
Sprite Kit
Any way to fix this?
As long as something's userInteractionEnabled is NO, it shouldn't interfere with other touch receivers.
Update: Even setting sprite B's alpha to 0.2, making sprite A very visible, will not make sprite A touchable. Sprite B just totally "swallows" the touch, despite not being enabled for interaction.
Here is my solution until Apple updates SpriteKit with correct behaviour, or someone actually figures out how to use it the way we want.
https://gist.github.com/bobmoff/7110052
Add the file to your project and import it in your Prefix header. Now touches should work as intended.
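For example, assuming the gist's category file is saved as SKNode+Touch.h (the filename here is a guess), the prefix header import would look something like:
// In the project's .pch file
#ifdef __OBJC__
    #import "SKNode+Touch.h" // hypothetical filename for the file from the gist
#endif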
My solution doesn't require isKindOfClass sort of things...
In your SKScene interface:
@property (nonatomic, strong) SKNode* touchTargetNode;
// this will be the target node for touchesCancelled, touchesMoved, touchesEnded
In your SKScene implementation
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch* touch = [touches anyObject];
NSArray* nodes = [self nodesAtPoint:[touch locationInNode:self]];
self.touchTargetNode = nil;
for( SKNode* node in [nodes reverseObjectEnumerator] )
{
if( [node conformsToProtocol:@protocol(CSTouchableNode)] )
{
SKNode<CSTouchableNode>* touchable = (SKNode<CSTouchableNode>*)node;
if( touchable.wantsTouchEvents )
{
self.touchTargetNode = node;
[node touchesBegan:touches
withEvent:event];
break;
}
}
}
}
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
[self.touchTargetNode touchesCancelled:touches
withEvent:event];
self.touchTargetNode = nil;
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
[self.touchTargetNode touchesMoved:touches
withEvent:event];
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
[self.touchTargetNode touchesEnded:touches
withEvent:event];
self.touchTargetNode = nil;
}
You will need to define the CSTouchableNode protocol... call it what you wish :)
@protocol CSTouchableNode <NSObject>
- (BOOL) wantsTouchEvents;
- (void) setWantsTouchEvents:(BOOL)wantsTouchEvents;
@end
For your SKNodes that you want to be touchable, they need to conform to the CSTouchableNode protocol. You will need to add something like below to your classes as required.
@property (nonatomic, assign) BOOL wantsTouchEvents;
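Put together, a touchable node's interface might look like this (MyTouchableNode is just a placeholder name):
@interface MyTouchableNode : SKSpriteNode <CSTouchableNode>
// Satisfies the protocol's wantsTouchEvents getter/setter requirement
@property (nonatomic, assign) BOOL wantsTouchEvents;
@end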
Doing this, tapping on nodes works as I would expect it to work. And it shouldn't break when Apple fixes the "userInteractionEnabled" bug. And yes, it is a bug. A dumb bug. Silly Apple.
Updated
There is another bug in SpriteKit... the z order of nodes is weird. I have seen strange orderings of nodes... thus I tend to force a zPosition for nodes/scenes that need it.
I've updated my SKScene implementation to sort the nodes based on zPosition.
nodes = [nodes sortedArrayUsingDescriptors:@[[NSSortDescriptor sortDescriptorWithKey:@"zPosition" ascending:NO]]];
for( SKNode* node in nodes )
{
...
}
Here is an example for getting touches etc..
The Scene
-(id)initWithSize:(CGSize)size {
if (self = [super initWithSize:size]) {
/* Setup your scene here */
HeroSprite *newHero = [HeroSprite new];
newHero.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
[self addChild:newHero];
SKSpriteNode *overLapSprite = [SKSpriteNode spriteNodeWithColor:[UIColor orangeColor] size:CGSizeMake(70, 70)];
overLapSprite.position = CGPointMake(CGRectGetMidX(self.frame) + 30, CGRectGetMidY(self.frame));
overLapSprite.name = @"overlap";
[self addChild:overLapSprite];
}
return self;
}
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint location = [touch locationInNode:self];
SKNode *node = [self nodeAtPoint:location];
if ([node.name isEqualToString:@"heroNode"]) {
NSLog(@"Scene detect hit on hero");
}
if ([node.name isEqualToString:@"overlap"]) {
NSLog(@"overlap detect hit");
}
// go through all the nodes at the point
NSArray *allNodes = [self nodesAtPoint:location];
for (SKNode *aNode in allNodes) {
NSLog(@"Loop; node name = %@", aNode.name);
}
}
The Subclassed sprite / Hero
- (id) init {
if (self = [super init]) {
[self setUpHeroDetails];
self.userInteractionEnabled = YES;
}
return self;
}
-(void) setUpHeroDetails
{
self.name = @"heroNode";
SKSpriteNode *heroImage = [SKSpriteNode spriteNodeWithImageNamed:@"Spaceship"];
[self addChild:heroImage];
}
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [touches anyObject];
CGPoint locationA = [touch locationInNode:self];
CGPoint locationB = [touch locationInNode:self.parent];
NSLog(@"Hero got Hit at, %@ %@", NSStringFromCGPoint(locationA), NSStringFromCGPoint(locationB));
}
@end
I tried many things, but the touch doesn't go through the overlapping sprite. I guess you can use the scene class to detect the touch by looping through the nodes and then calling that node directly.
I had this problem when a particle emitter covered a button...
I think this could be a design choice, but certainly not very flexible. I prefer to handle objects that can overlap (like game objects) in Sprite Kit at Scene level with nodeAtPoint: combined with isKindOfClass: methods. For objects like buttons that usually do not overlap and are placed in a different layer (like overlays and buttons), I handle their user interaction inside their classes.
I would like my touch events to "bubble up" the node tree as happens in the Sparrow Framework, but I fear that is more a feature request than a bug report.
P.S.: By handling touches at scene level you can easily touch and move nodes even if they are completely covered by non-movable and non-touchable nodes!
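A rough sketch of that scene-level pattern (GameObject and its handleTouch method are placeholder names):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint location = [[touches anyObject] locationInNode:self];
    // Walk every node under the touch, even ones covered by non-interactive sprites
    for (SKNode *node in [self nodesAtPoint:location]) {
        if ([node isKindOfClass:[GameObject class]]) {
            [(GameObject *)node handleTouch]; // forward to the game object
            break;
        }
    }
}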
Solution with fewer lines of code. Remember to set userInteractionEnabled on every node you want to receive touches.
- (void)handleTap:(UITapGestureRecognizer *)gestureRecognizer {
CGPoint touchLocation = self.positionInScene;
SKSpriteNode *touchedNode = (SKSpriteNode *)[self nodeAtPoint:touchLocation];
NSArray *nodes = [self nodesAtPoint:touchLocation];
for (SKSpriteNode *node in [nodes reverseObjectEnumerator]) {
NSLog(@"Node in touch array: %@. Touch enabled: %hhd", node.name, node.isUserInteractionEnabled);
if (node.isUserInteractionEnabled == 1)
touchedNode = node;
}
NSLog(@"Newly selected node: %@", touchedNode);
}
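The recognizer setup isn't shown above; one way to wire it up (just a sketch) is to add it when the scene is presented:
- (void)didMoveToView:(SKView *)view {
    UITapGestureRecognizer *tap =
        [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
    [view addGestureRecognizer:tap];
}
Inside handleTap:, [self convertPointFromView:[gestureRecognizer locationInView:gestureRecognizer.view]] gives the scene-space point that nodeAtPoint: expects, which is presumably what positionInScene holds above.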

scheduleOnce not working on my CCLayer class, when set in onEnter

I have a CCLayer class that is a child of a base class I created that is, itself, a child of CCLayer.
After creating this type of class hierarchy, I cannot get the onEnter method of my child class to successfully schedule a method call. How can I fix this?
I am trying to get started on a gaming project but I've hit a bit of a brick wall. This seems to be a common problem, however, I can't find a scenario similar to mine and, thus, can't derive a solution.
I have a child CCLayer class that I've created, and it is essentially the same thing as the IntroLayer in the Cocos2D starter iPhone template. In fact, my class is the exact same class with one major exception: I created a base class derived from CCLayer, and that is used as the parent of my IntroLayer.
My problem: in the onEnter method of my child layer, I can see that my transition method is being set in the scheduleOnce call, yet the transition method is never called.
I'm not sure what I am doing incorrectly. The code posted below is the .m file for my IntroLayer.
IntroLayer.m
@implementation IntroLayer //Child of "MyBaseCCLayer"
+(CCScene *) scene
{
CCScene *scene = [CCScene node];
IntroLayer *layer = [IntroLayer node];
[scene addChild: layer];
return scene;
}
-(void) onEnter
{
//This is calling an overridden onEnter of my base CCLayer.
// See snippet below.
[super onEnter];
CGSize size = [[CCDirector sharedDirector] winSize];
CCSprite *background;
background = [CCSprite spriteWithFile:@"Default.png"];
[background setPosition:ccp(size.width/2, size.height/2)];
[background setZOrder: Z_BACKGROUND];
[self addChild: background];
//In one second transition to the new scene
// I can put a break point here, and this being set.
// It never gets called, though.
[self scheduleOnce:@selector(makeTransition:) delay:1.0];
}
-(void) makeTransition:(ccTime)delay
{
//Here is where I would transition to my menu.
// This is a fresh project, so it's going to a test screen.
// The problem is this method never fires.
[[CCDirector sharedDirector]
replaceScene:[CCTransitionFade
transitionWithDuration:0.5
scene:[TestLayer scene]
withColor:ccBLACK]];
}
-(void) dealloc
{
[super dealloc];
}
@end
MyBaseCCLayer.m
//^boilerplate snip^
-(void) onEnter
{
//Setup overlay
CGSize size = [[CCDirector sharedDirector] winSize];
ccTexParams tp = {GL_LINEAR, GL_LINEAR, GL_REPEAT, GL_REPEAT};
//overlay is a class variable, CCSprite object.
overlay = [CCSprite spriteWithFile:@"tile_overlay.png" rect:CGRectMake(0, 0, size.width, size.height)];
[overlay retain];
[overlay setPosition: ccp(size.width/2, size.height/2)];
[overlay.texture setTexParameters:&tp];
//Always on top!
[overlay setZOrder:Z_OVERLAY];
}
-(void) dealloc
{
[overlay release];
[super dealloc];
}
As you can see, there is nothing groundbreaking going on here. I created this base class because I want to apply the overlay sprite to every CCLayer in my application, which is why it made sense to create my own base CCLayer. However, now that I've created my own base class, my IntroLayer won't call the scheduled method that I set in onEnter. How can I get this to work?
Your custom class MyBaseCCLayer is overriding the onEnter method, but you omitted the call to [super onEnter]. It's easy to forget sometimes, but calling [super onEnter] is critical to the custom class. Here is the description of the method from the Cocos2D documentation:
Event that is called every time the CCNode enters the 'stage'. If the
CCNode enters the 'stage' with a transition, this event is called when
the transition starts. During onEnter you can't access a sibling
node. If you override onEnter, you shall call [super onEnter].
If you take a look inside CCNode.m, you will see what happens inside onEnter:
-(void) onEnter
{
[children_ makeObjectsPerformSelector:@selector(onEnter)];
[self resumeSchedulerAndActions];
isRunning_ = YES;
}
So you can see that if you override the onEnter method in your custom class, and you don't call [super onEnter], you miss out on this functionality. Essentially your custom node will remain in a 'paused' state.
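Applied to the MyBaseCCLayer snippet above, the fix is the missing super call; note the original snippet also never adds the overlay as a child, so that line is included here as an assumption:
-(void) onEnter
{
    [super onEnter]; // resumes scheduled selectors/actions and forwards onEnter to children
    CGSize size = [[CCDirector sharedDirector] winSize];
    ccTexParams tp = {GL_LINEAR, GL_LINEAR, GL_REPEAT, GL_REPEAT};
    overlay = [CCSprite spriteWithFile:@"tile_overlay.png" rect:CGRectMake(0, 0, size.width, size.height)];
    [overlay retain];
    [overlay setPosition: ccp(size.width/2, size.height/2)];
    [overlay.texture setTexParameters:&tp];
    [overlay setZOrder:Z_OVERLAY];
    [self addChild:overlay]; // assumption: the overlay presumably needs to be added to the layer
}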

Unique ID on NSViews

Is there any kind of ID that can be used and set in the .nib/.xib via Xcode that can be queried at runtime to identify a particular view instance from code?
In particular when having multiple copies of the same NSView subclass in our interface how can we tell which one we're currently looking at?
In Interface Builder, there is a way to set the "identifier" of an NSView. In this example, I'll use "54321" as the identifier string.
NSView conforms to the NSUserInterfaceItemIdentification protocol, whose identifier property is an NSString. You could walk through the view hierarchy and find the NSView with that identifier.
So, to build on this post about getting the list of NSViews, Get ALL views and subview of NSWindow, you could then find the NSView with the identifier you want:
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
NSView *viewToFind = [self viewWithIdentifier:@"54321"];
}
- (NSView *)viewWithIdentifier:(NSString *)identifier
{
NSArray *subviews = [self allSubviewsInView:self.window.contentView];
for (NSView *view in subviews) {
if ([view.identifier isEqualToString:identifier]) {
return view;
}
}
return nil;
}
- (NSMutableArray *)allSubviewsInView:(NSView *)parentView {
NSMutableArray *allSubviews = [[NSMutableArray alloc] initWithObjects: nil];
NSMutableArray *currentSubviews = [[NSMutableArray alloc] initWithObjects: parentView, nil];
NSMutableArray *newSubviews = [[NSMutableArray alloc] initWithObjects: parentView, nil];
while (newSubviews.count) {
[newSubviews removeAllObjects];
for (NSView *view in currentSubviews) {
for (NSView *subview in view.subviews) [newSubviews addObject:subview];
}
[currentSubviews removeAllObjects];
[currentSubviews addObjectsFromArray:newSubviews];
[allSubviews addObjectsFromArray:newSubviews];
}
for (NSView *view in allSubviews) {
NSLog(@"View: %@, tag: %ld, identifier: %@", view, view.tag, view.identifier);
}
return allSubviews;
}
Or, since you are using an NSView subclass, you could set the "tag" of each view at runtime. (Or, you could set the identifier at runtime.) The nice thing about tag is that there is a built-in method for finding a view with a specific tag.
// set the tag
NSInteger tagValue = 12345;
[self.myButton setTag:tagValue];
// find it
NSButton *myButton = [self.window.contentView viewWithTag:12345];
Generic NSView objects cannot have their tag property set in Interface Builder. The tag method on NSView is a read-only method and can only be implemented in subclasses of NSView. NSView does not implement a setTag: method.
I suspect the other answers are referring to instances of NSControl which defines a -setTag: method and has an Interface Builder field to allow you to set the tag.
What you can do with generic views is use user-defined runtime attributes. This allows you to pre-set the values of properties in your view object. So if your view defined a property like so:
@property (strong) NSNumber* viewID;
Then in the User Defined Runtime Attributes section of the Identity inspector in Interface Builder, you could add a property with the key path viewID, the type Number, and the value 123.
In your view's -awakeFromNib method, you can then access the value of the property. You will find that in the example above, the viewID property of your view will have been pre-set to 123.
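A small sketch of reading it back, assuming the viewID property declared above:
- (void)awakeFromNib {
    [super awakeFromNib];
    // For the view configured in Interface Builder above, this logs 123
    NSLog(@"viewID = %@", self.viewID);
}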
Here's how to simulate "tags" in OSX without subclassing.
In iOS:
{
// iOS:
// 1. You add a tag to a view and add it as a subView, as in:
UIView *masterView = ... // the superview
UIView *aView = ... // a subview
aView.tag = 13;
[masterView addSubview:aView];
// 2. Later, to retrieve the tagged view:
UIView *aView = [masterView viewWithTag:13];
// returns nil if there's no subview with that tag
}
The equivalent in OSX:
#import <objc/runtime.h> // for associated objects
{
// OSX:
// 1. Somewhere early, create an invariant memory address
static void const *tag13 = &tag13; // put at the top of the file
// 2. Attach an object to the view to which you'll be adding the subviews:
NSView *masterView = ... // the superview
NSView *aView = ... // a subview
[masterView addSubview:aView];
objc_setAssociatedObject(masterView, tag13, aView, OBJC_ASSOCIATION_ASSIGN);
// 3. Later, to retrieve the "tagged" view:
NSView *aView = objc_getAssociatedObject(masterView, tag13);
// returns nil if there's no subview with that associated object "tag"
}
Edit: The associated object "key" (declared as const void *key) needs to be invariant. I'm using an idea by Will Pragnell (https://stackoverflow.com/a/18548365/236415).
Search Stack Overflow for other schemes for making the key.
Here's a simple way to get NSView tags in OSX without subclassing.
Although NSView's tag property is read-only, some objects
that inherit from NSView have a read/write tag property.
For example, NSControl and NSImageView have r/w tag properties.
So, simply use NSControl instead of NSView, and disable (or ignore)
NSControl things.
- (void)tagDemo
{
NSView *myView1 = [NSView new];
myView1.tag = 1; // Error: "Assignment to readonly property"
// ---------
NSControl *myView2 = [NSControl new]; // inherits from NSView
myView2.tag = 2; // no error
myView2.enabled = NO; // consider
myView2.action = nil; // consider
// ---------
NSImageView *myView3 = [NSImageView new]; // inherits from NSControl
myView3.tag = 3; // no error
myView3.enabled = NO; // consider
myView3.action = nil; // consider
}
Later, if you use viewWithTag: to fetch the view, be sure to specify NSControl (or NSImageView) as the returned type.

Accessing CCLayer headache!

I'm having a bit of trouble accessing my layers and it's driving me insane. Basically, my layer-scene hierarchy is as follows:
Compiler.m - CCLayer - holds the +(CCScene) method and loads all the other CCLayers.
Construct.m - CCLayer - holds the box2d engine and its components
Background.m - CCLayer - holds the Background.
Hud.m - CCLayer - holds the HUD.
Within the Compiler implementation class, I add the scene and all the relevant nodes to the Compiler CCLayer:
@implementation Compiler
+(CCScene *) scene{
Compiler *compiler = [CompileLayer node];
CCScene *scene = [CCScene node];
[scene addChild: compiler];
//Add A Background Layer.
Background *layerBackground = [Background node];
layerBackground.position = CGPointMake(100,100);
[compiler addChild: layerBackground z: -1 tag:kBackground];
//Add The Construct.
Construct *construct = [Construct node];
[compiler addChild: construct z: 1];
//Add A Foreground Layer Z: 2
//After background is working.
//Add the HUD
HudLayer *hud = [Hud node];
[compiler addChild: hud z:3];
return scene;
}
This all works fine: my layers are added to the compiler, and the compiler's scene is accessed by the delegate as expected.
My problem is that I'm trying to access my Background CCLayer's CCSprite *background inside the Construct layer, so that I can move it according to the position of my Construct game hero.
I've tried many different methods, but I've currently decided to use a class method rather than an instance method to define CCSprite *background so that I can access it in my Construct layer.
I've also tried accessing it with @properties and by initialising an instance variable of the class.
Here is my Background CCLayer:
@implementation Background
-(id) init
{
self = [super init];
if (self != nil)
{
CCSprite *temp = [Background bk];
[self addChild:temp z:0 tag:kBackGround];
}
return self;
}
+(CCSprite *)bk{
//load the image files
CCSprite *background = [CCSprite spriteWithFile:@"background.jpg"];
//get the current screen dimensions
//CGSize size = [[CCDirector sharedDirector] winSize];
background.anchorPoint = CGPointMake(0,0);
background.position = ccp(0,0);
return background;
}
@end
This works, it loads the image to the background layer.
Then finally, I try to access the background image from the Construct Layer.
@interface Construct : CCLayer{
CCSprite *tempBack;
}
@end
@implementation Construct
-(id) init{
tempBack = [Background bk]; //The background equals an instance variable
}
-(void) tick:(ccTime)dt {
tempBack.position.x ++; // To do Something here.
tempBack.opacity = 0.5; // Some more stuff here.
}
@end
This doesn't work: I get a 'nil' pointer in some respects; tempBack doesn't access the background properly, or at all.
How can I access and modify the Background CCLayer's class method +(CCSprite *)bk?
It probably doesn't work because your tempBack ivar holds an autoreleased object and you don't retain it. Here's how your init method should look:
-(id) init{
tempBack = [[Background bk] retain]; //The background equals an instance variable
}
Also, don't forget to release it in dealloc. And I am not sure it will work either, since you'll get a different background sprite each time the bk class method is called (try printing the temp and tempBack addresses and you'll see).
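For example (just a quick check), calling the class method twice prints two different addresses, since +bk builds a new sprite on every call:
NSLog(@"first call:  %p", [Background bk]);
NSLog(@"second call: %p", [Background bk]); // a different address each time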
Update for notifications
In the background layer, create the sprite and add it to the layer. Then add this snippet of code to subscribe to notifications:
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(updateBackground:) name:@"PlayerChangedPosition" object:nil];
Then in construct layer's update method (or any other scheduled method) send this notification with new player coordinates:
[[NSNotificationCenter defaultCenter] postNotificationName:@"PlayerChangedPosition" object:player];
Now in background's layer, implement the updateBackground:(NSNotification *)aNotification method:
- (void)updateBackground:(NSNotification *)aNotification {
// object message returns player object passed when posting notification
CGPoint newCoords = [[aNotification object] position];
// adjust background position according to player position
}
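The last comment above is left open; a filled-in version (a sketch, reusing the kBackGround tag from the Background init and an arbitrary parallax factor) might be:
- (void)updateBackground:(NSNotification *)aNotification {
    CGPoint newCoords = [[aNotification object] position];
    CCSprite *bk = (CCSprite *)[self getChildByTag:kBackGround];
    // Shift the background opposite to the player's movement (0.5 is an arbitrary factor)
    bk.position = ccp(-newCoords.x * 0.5f, -newCoords.y * 0.5f);
}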