I am trying to change the texture of a sprite that I create from a SpriteBatchNode.
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA4444];
spritesBgNode = [CCSpriteBatchNode batchNodeWithFile:@"playingCards.pvr.ccz"];
[self addChild:spritesBgNode];
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"playingCards.plist"];
I searched and found examples that use
Texture2D *texture = [[Texture2D alloc] initWithImage:myUIImage];
[sprite setTexture:texture];
So my question is: how do I get the image from my batchNode file? Or do I use another method to get a reference to the image in my playingCards.pvr.ccz file?
UPDATE
First off, thanks for the response. So I have mySprite displaying the image of a King, using the code snippet you provided. But I want to change the sprite's texture to display the back of the card (so it can be played face up or down); I have both images inside the CCSpriteBatchNode.
But as you point out, "You can't get the image from a batchNode", so I can't use [[Texture2D alloc] initWithImage:myUIImage].
So how do I go about changing the sprite's image from face up to face down?
Thanks
If you want to display the images in your .pvr.ccz file to the screen then add the following code:
CCSprite *mySprite = [CCSprite spriteWithSpriteFrameName:@"name of sprite frame"];
[spritesBgNode addChild: mySprite];
Basically, to display parts of your batchNode, you need to add a sprite to it. The name of the sprite frame is in the .plist file you added to the FrameCache.
You can't get the image from a batchNode. UIImage is the iPhone API type of image, not cocos2d. In cocos2d, initWithImage:(UIImage*)image is provided for convenience.
If you use [[Texture2D alloc] initWithImage:myUIImage], the UIImage is used to create an NSData object, and [texture initWithData: data] is called internally. The image isn't stored for later use.
Update
The sprite works as a 'view to a batchNode' in this case. To view a different part of the batch node, change the frame of your sprite.
[mySprite setDisplayFrame:
    [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"back of card"]];
Related
can you create a new CCSprite from a part of a CCSprite within a CCSpriteBatchNode?
For a long time, I've used SpriteFrameCache and BatchNode without a 100% understanding of the two, in particular how they relate to the textureCache. I could use some clear advice to accomplish the following:
Currently, I load a texture atlas into a CCSpriteBatchNode and the frame list into the CCSpriteFrameCache, and generate a sprite in what I think is basic standard fashion:
CCSpriteBatchNode *batchNode = [CCSpriteBatchNode batchNodeWithFile:@"textureAtlasImage.png"];
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"textureAtlasFrames.plist"];
CCSprite *gameObject = [CCSprite spriteWithSpriteFrameName:@"gameObject.png"];
[self addChild:batchNode];
[batchNode addChild:gameObject];
For the sake of simplicity with the question, what I'd like to do is divide gameObject into 4 pieces programmatically (rather than divide the original image into four pieces and add each into textureAtlasImage.png individually).
From reading, I'm thinking something like:
CCSpriteFrame *gameObjectFrame = [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"gameObject.png"];
CCTexture2D *gameObjectIndividualTexture = [gameObjectFrame texture];
CCSpriteFrame *gameObjectPartFrame = [CCSpriteFrame frameWithTexture:gameObjectIndividualTexture rect: /* sub-rect of the frame */ ];
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFrame:gameObjectPartFrame name:@"gameObjectPart1"];
But my questions then are:
Is this already in the batchNode? If not, how do I actually create the sprite out of the gameObjectPart1 using batchNodes?
Is it wasteful to add another spriteframe to the cache that duplicates data elsewhere?
You should be able to adjust the texture rect after creating the sprite. Create four sprites using the same sprite frame, then set each sprite's texture rect to one of the 4 smaller regions you want them to use.
Use the sprite.textureRect property to get the CGRect with the original size, then adjust the size and origin accordingly. For example, the lower-left rectangle can be created by setting textureRect to the same rect but with size.width and size.height halved.
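As a sketch of that idea (names like `gameObject.png` and `batchNode` are taken from the question's code, and the exact texture-rect origin math may need adjusting for your atlas):

```objectivec
// Sketch: build 4 quadrant sprites from one sprite frame by restricting
// each sprite's texture rect. Frame and node names come from the question.
CGRect full = [[[CCSpriteFrameCache sharedSpriteFrameCache]
                   spriteFrameByName:@"gameObject.png"] rect];
CGFloat w = full.size.width  / 2.0f;
CGFloat h = full.size.height / 2.0f;

for (int row = 0; row < 2; row++) {
    for (int col = 0; col < 2; col++) {
        CCSprite *part = [CCSprite spriteWithSpriteFrameName:@"gameObject.png"];
        // Show only one quadrant of the original frame's region of the atlas.
        [part setTextureRect:CGRectMake(full.origin.x + col * w,
                                        full.origin.y + row * h,
                                        w, h)];
        [batchNode addChild:part];
    }
}
```

This avoids adding duplicate frames to the cache entirely; each sprite still shares the batch node's single texture.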
I have a UIImageView that I am trying to make do an animation of a set of UIImages that are created by flipping other UIImages. Here's my code:
turtle = [[UIImageView alloc] initWithFrame:CGRectMake(self.view.frame.size.width-200,
self.view.frame.size.height - sand.frame.size.height - turtle.frame.size.height - 10 - heightOfKeyboard,
100,100)];
[flippedTurtleArray addObject:[UIImage imageWithCGImage:[UIImage imageNamed:@"Turtle1.png"].CGImage scale:1 orientation:UIImageOrientationDownMirrored]];
[flippedTurtleArray addObject:[UIImage imageWithCGImage:[UIImage imageNamed:@"Turtle2.png"].CGImage scale:1 orientation:UIImageOrientationDownMirrored]];
[flippedTurtleArray addObject:[UIImage imageWithCGImage:[UIImage imageNamed:@"Turtle3.png"].CGImage scale:1 orientation:UIImageOrientationDownMirrored]];
[flippedTurtleArray addObject:[UIImage imageWithCGImage:[UIImage imageNamed:@"Turtle2.png"].CGImage scale:1 orientation:UIImageOrientationDownMirrored]];
[self.view addSubview: turtle];
Problem is, when I try to make it animate from the array of flipped images, it shows the originals instead (i.e., when I do this):
turtle.animationImages = flippedTurtleArray;
turtle.animationDuration = 0.8f;
turtle.animationRepeatCount = 0;
[turtle startAnimating];
the original non-flipped images are shown.
Now, if I do this however:
turtle.image = [flippedTurtleArray objectAtIndex:1];
the flipped image is shown. I thought maybe you can't do the animation with CGImage-backed images, but couldn't find that anyone else had had the same problem. Any ideas?
Thanks,
Sam
I would look at the view's CALayer, and specifically the transform property of the layer.
The transform property is animatable, and would allow you to apply an arbitrary transform to your original images without having to manage your intermediate flipped turtle array.
The transform applied to the layer’s contents. Animatable.
@property CATransform3D transform
Discussion
This property is set to the identity transform by default. Any transformations you apply to the layer occur relative to the layer’s anchor point.
Availability
Available in iOS 2.0 and later.
Related Sample Code
oalTouch
Declared In
CALayer.h
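A minimal sketch of that approach, assuming `turtle` is the UIImageView from the question with the plain (unflipped) images set as `animationImages`, and that `<QuartzCore/QuartzCore.h>` is imported:

```objectivec
// Sketch: mirror the whole image view via its layer instead of
// pre-flipping every UIImage in a separate array.
turtle.animationImages = @[[UIImage imageNamed:@"Turtle1.png"],
                           [UIImage imageNamed:@"Turtle2.png"],
                           [UIImage imageNamed:@"Turtle3.png"],
                           [UIImage imageNamed:@"Turtle2.png"]];
turtle.animationDuration = 0.8f;
turtle.animationRepeatCount = 0;

// Mirror vertically; the frame animation then plays flipped.
turtle.layer.transform = CATransform3DMakeScale(1.0f, -1.0f, 1.0f);
[turtle startAnimating];
```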
If you have other reasons for wanting the array, then you may want to render the flipped image into a separate image, and pass that into your array.
Similarly, if you can use iOS 7 as a target, take a look at SpriteKit; it may make the whole task much simpler.
I am loading an image in a UIImageView which I then add to a UIScrollView.
The image is a local image and is about 5000 pixels in height.
The problem is that when I add the UIImageView to the UIScrollView, the main thread is blocked.
It is obvious because I cannot scroll the UIScrollView until the image is displayed.
Here's the example.
UIScrollView *myscrollview = [[UIScrollView alloc] initWithFrame:CGRectMake(0, 0, 768, 1004)];
myscrollview.contentSize = CGSizeMake(7680, 1004);
myscrollview.pagingEnabled = TRUE;
[self.view addSubview:myscrollview];
NSString *str = [[NSBundle mainBundle] pathForResource:@"APPS.jpg" ofType:nil inDirectory:@""];
NSData *imageData = [NSData dataWithContentsOfFile:str];
UIImageView *singleImageView = [[UIImageView alloc] initWithImage:[UIImage imageWithData:imageData]];
//the line below is the blocking line
[myscrollview addSubview:singleImageView];
It is the last line in the snippet that blocks the scroller. When I leave it out everything works perfectly, except for the fact that the image is not showing, of course.
I seem to recall that multithreading does not work for UIView operations, so I guess that's out of the question as well.
Thanks for your kind help.
If you're providing these large images, you should maybe check out CATiledLayer; there's a video of a good presentation on how to use this from WWDC 2010.
If these aren't your images, and you can't downsample them or break them into tiles, you can draw the image on a background thread. You may not draw to the screen graphics context on any thread but the main thread, but that doesn't prevent you from drawing to a non-screen graphics context. On your background thread you can
create a drawing context with CGBitmapContextCreate
draw your image on it just as you would draw onto the screen in drawRect:
when you're done loading and drawing the image invoke your view's drawRect: method on the main thread using performSelectorOnMainThread:withObject:waitUntilDone:
In your view's drawRect: method, once you've fully drawn your image on the in-memory context, copy it to the screen using CGBitmapContextCreateImage and CGContextDrawImage.
This isn't trivial, you'll need to start your background thread at the right time, synchronize access to your images, etc. The CATiledLayer approach is almost certainly the better one if you can find a way to manipulate the images to make that work.
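The steps above could be sketched roughly like this (assumptions: ARC, a hypothetical `self.imagePath` property holding the file path, and a hypothetical `displayRenderedImage:` method on the main thread that hands the result to your view):

```objectivec
// Sketch of the off-screen rendering steps described above.
// Run this method on a background thread; names marked above are assumed.
- (void)drawHugeImage {
    UIImage *source = [UIImage imageWithContentsOfFile:self.imagePath];
    CGSize size = source.size;

    // 1. Create a non-screen bitmap context (safe off the main thread).
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, size.width, size.height,
                                             8, 0, space,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(space);

    // 2. Draw the image into it, much as you would in drawRect:.
    CGContextDrawImage(ctx, CGRectMake(0, 0, size.width, size.height),
                       source.CGImage);

    // 3. Copy the finished bitmap out and hand it to the main thread.
    CGImageRef rendered = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    UIImage *result = [UIImage imageWithCGImage:rendered];
    CGImageRelease(rendered);

    [self performSelectorOnMainThread:@selector(displayRenderedImage:)
                           withObject:result
                        waitUntilDone:NO];
}
```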
Why do you want to load such a huge image into memory at one time? Split it into many small images and load/free them dynamically.
Try allocating the UIImageView without a UIImage, and add it as a sub-view to the UIScrollView.
Load the UIImage in a separate thread, and have the method running on the other thread set the image property of the UIImageView once the image is loaded into memory. Also, you'll probably encounter some memory problems, as an image of this size loaded into a UIImage will probably be 30MB+.
UIScrollView *myscrollview = [[UIScrollView alloc] initWithFrame:CGRectMake(0, 0, 768, 1004)];
myscrollview.contentSize = CGSizeMake(7680, 1004);
myscrollview.pagingEnabled = TRUE;
[self.view addSubview:myscrollview];
NSString *str = [[NSBundle mainBundle] pathForResource:@"APPS.jpg" ofType:nil inDirectory:@""];
UIImageView *singleImageView = [[UIImageView alloc] init];
[myscrollview addSubview:singleImageView];
//Then fire off a method on another thread to load the UIImage and set the image
//property of the UIImageView.
Just keep an eye on memory and beware of using convenience constructors with UIImage (or any object that could end up being huge)
Also where are you currently running this code?
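The "fire off a method on another thread" step might look like this with GCD (a sketch only; `str` and `singleImageView` come from the snippet above):

```objectivec
// Sketch: decode the large JPEG off the main thread, then assign it
// to the image view back on the main queue.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSData *imageData = [NSData dataWithContentsOfFile:str];
    UIImage *image = [UIImage imageWithData:imageData];
    dispatch_async(dispatch_get_main_queue(), ^{
        // UIKit work stays on the main thread.
        singleImageView.image = image;
        singleImageView.frame = CGRectMake(0, 0, image.size.width,
                                           image.size.height);
    });
});
```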
I have an NSView that contains an NSScrollView containing a CALayer-backed NSView. I've tried all the usual methods of capturing an NSView into an NSImage (using -dataWithPDFInsideRect, NSBitmapImageRep's -initWithFocusedViewRect, etc.) However, all these methods treat the CALayer-backed NSView as if it doesn't exist. I've already seen this StackOverflow post, but it was a question about rendering just a CALayer tree to an image, not an NSView containing both regular NSView's and layer-backed views.
Any help is appreciated, thanks :)
The only way I found to do this is to use the CGWindow API's, something like:
CGImageRef cgimg = CGWindowListCreateImage(CGRectZero, kCGWindowListOptionIncludingWindow, (CGWindowID)[theWindow windowNumber], kCGWindowImageDefault);
then clip out the part of that CGImage that corresponds to your view with
CGImageCreateWithImageInRect.
Then make a NSImage from the cropped CGImage.
Be aware this won't work well if parts of that window are offscreen.
This works to draw a view directly to an NSImage, though I haven't tried it with a layer-backed view:
NSImage *i = [[NSImage alloc] initWithSize:[view frame].size];
[i lockFocus];
if ([view lockFocusIfCanDrawInContext:[NSGraphicsContext currentContext]]) {
    [view displayRectIgnoringOpacity:[view frame] inContext:[NSGraphicsContext currentContext]];
    [view unlockFocus];
}
[i unlockFocus];
NSData *d = [i TIFFRepresentation];
[d writeToFile:@"/path/to/my/test.tiff" atomically:YES];
[i release];
Have you looked at the suggestions in the Cocoa Drawing Guide? ("Creating a Bitmap")
To draw directly into a bitmap, create a new NSBitmapImageRep object with the parameters you want and use the graphicsContextWithBitmapImageRep: method of NSGraphicsContext to create a drawing context. Make the new context the current context and draw. This technique is available only in Mac OS X v10.4 and later.
Alternatively, you can create an NSImage object (or an offscreen window), draw into it, and then capture the image contents. This technique is supported in all versions of Mac OS X.
That sounds similar to the iOS solution I'm familiar with (using UIGraphicsBeginImageContext and UIGraphicsGetImageFromCurrentImageContext) so I'd expect it to work for your view.
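A sketch of the bitmap-route the guide describes, assuming `view` is the NSView you want to capture (and noting that a layer-backed view may still need the `displayRectIgnoringOpacity:inContext:` path to render its layer content):

```objectivec
// Sketch: draw a view into an explicit NSBitmapImageRep (10.4+ API).
NSRect bounds = [view bounds];
NSBitmapImageRep *rep =
    [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                            pixelsWide:(NSInteger)NSWidth(bounds)
                                            pixelsHigh:(NSInteger)NSHeight(bounds)
                                         bitsPerSample:8
                                       samplesPerPixel:4
                                              hasAlpha:YES
                                              isPlanar:NO
                                        colorSpaceName:NSCalibratedRGBColorSpace
                                           bytesPerRow:0
                                          bitsPerPixel:0];

NSGraphicsContext *ctx = [NSGraphicsContext graphicsContextWithBitmapImageRep:rep];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:ctx];
// Draw the view into the bitmap-backed context rather than the screen.
[view displayRectIgnoringOpacity:bounds inContext:ctx];
[NSGraphicsContext restoreGraphicsState];

[[rep TIFFRepresentation] writeToFile:@"/tmp/capture.tiff" atomically:YES];
```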
Please have a look at the answer on this post: How do I render a view which contains Core Animation layers to a bitmap?
That approach worked for me under similar circumstances to your own.
Stuck again. :(
I have the following code crammed into a procedure invoked when I click a button on my application's main window. I'm just trying to tweak a CIImage and then display the result. At this point I'm not even worried about exactly where / how to display it; I'm just trying to slam it up on the window to make sure my transform worked. This code seems to work down through the drawAtPoint: message, but I never see anything on the screen. What's wrong? Thanks.
Also, as far as displaying it in a particular location on the window ... is the best technique to put a frame of some sort on the window, then get the coordinates of that frame and "draw into" that rectangle? Or use a specific control from IB? Or what? Thanks again.
// earlier I initialize a NSImage from JPG file on disk.
// then create NSBitmapImageRep from the NSImage. This all works fine.
// then ...
CIImage *inputCIimage = [[CIImage alloc] initWithBitmapImageRep:inputBitmap];
if (inputCIimage == nil)
    NSLog(@"could not create CI Image");
else {
    NSLog(@"CI Image created. working on transform");
    CIFilter *transform = [CIFilter filterWithName:@"CIAffineTransform"];
    [transform setDefaults];
    [transform setValue:inputCIimage forKey:@"inputImage"];
    NSAffineTransform *affineTransform = [NSAffineTransform transform];
    [affineTransform rotateByDegrees:3];
    [transform setValue:affineTransform forKey:@"inputTransform"];
    CIImage *myResult = [transform valueForKey:@"outputImage"];
    if (myResult == nil)
        NSLog(@"Transformation failed");
    else {
        NSLog(@"Created transformation successfully ... now render it");
        [myResult drawAtPoint:NSMakePoint(0, 0)
                     fromRect:NSMakeRect(0, 0, 128, 128)
                    operation:NSCompositeSourceOver
                     fraction:1.0]; // 100% opaque
        [inputCIimage release];
    }
}
Edit #1:
snip - removed the prior code sample mentioned below (in the comments about drawRect), which did not work
Edit #2: adding some code that DOES work, for anyone else in the future who might be stuck on this same thing. Not sure if this is the BEST way to do it ... but it does work for my quick and dirty purposes. So this new code (below) replaces the entire [myResult drawAtPoint ...] message from above / in my initial question. This code takes the image created by the CIImage transform and displays it in the NSImageView control.
NSImage *outputImage;
NSCIImageRep *ir;
ir = [NSCIImageRep imageRepWithCIImage:myResult];
outputImage = [[[NSImage alloc] initWithSize: NSMakeSize(inputImage.size.width, inputImage.size.height)] autorelease];
[outputImage addRepresentation:ir];
[outputImageView setImage: outputImage]; //outputImageView is an NSImageView control on my application's main window
Drawing on screen in Cocoa normally takes place inside an -[NSView drawRect:] override. I take it you're not doing that, so you don't have a correctly set up graphics context.
So one solution to this problem is to create a NSCIImageRep from the CIImage, then add that representation to a new NSImage, then it is easy to display the NSImage in a variety of ways. I've added the code I used up above (see "edit #2"), where I display the "output image" within an NSImageView control. Man ... what a PITA this was!