iOS 5 GLKit GLKView and hit testing

In the new GLKit GLView reference, there is this warning that is emphasized:
Important: Your drawing method should only modify the contents of the framebuffer object. Never attempt to read the pixel information from the underlying framebuffer object, modify or dispose of the framebuffer object, or read its other properties by calling OpenGL ES functions. Instead, rely on the properties and methods provided by the GLKView class.
Previously, with EAGLView, the widely published best practice for hit testing involved calling glReadPixels on a framebuffer that had been rendered but not yet presented.
With GLKView, the only thing that seems to come close is the -snapshot call, which produces a UIImage object from the render; you then have to dig the pixels out of that image. This seems very inefficient.
Is there a "best practice" for hit testing with the new GLKit functions? It seems that binding and rebinding a separate framebuffer is possible, but then I'm not sure what the dramatic warning in the GLKView reference means.
Any ideas on the best approach for hit testing when using GLKit?

Check out this very informative SO post, which includes sample code. I believe it is exactly what you're looking for; it worked great for me.
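For context, the usual technique is a color-picking pass rendered into a framebuffer you own, which sidesteps the warning because you never read from GLKView's framebuffer. A rough sketch (the FBO ivar, the drawing method, and the touch coordinates are placeholders for your own code, not from the linked post):

```objc
// Render a picking pass into your OWN offscreen FBO, read one pixel,
// then hand the drawable back to GLKView with -bindDrawable.
glBindFramebuffer(GL_FRAMEBUFFER, _pickingFramebuffer);   // hypothetical FBO
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
[self drawPickingColors];   // hypothetical: each object in a unique flat color

GLubyte pixel[4];
glReadPixels(touchX, touchY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
// Map pixel[0..2] back to the object ID encoded in its picking color.

[glkView bindDrawable];     // restore the GLKView-managed framebuffer
```

Because glReadPixels only ever touches your picking framebuffer, the GLKView-managed drawable is left exactly as the class expects.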

Related

Making IKImageView aware of my custom NSImageRep

In my application, I’ve written a custom NSImageRep to handle a proprietary image format. The application’s primary view is an IKImageView, into which I intend to load these images for viewing and manipulation.
If I create a CGImageRef of these images and then pass the reference over to the image view, it works fine, but this is not ideal. If possible, I’d like to make use of IKImageView’s setImageWithURL: method, as this is mentioned as being the preferred method by the docs, plus it’s just cleaner. Unfortunately, the view seems entirely ignorant of my NSImageRep and simply fails to load the image.
Is there anything that can be done to make the image view understand custom representations?
I think you need to call NSImageRep's registerImageRepClass: sometime early in your app's startup.
I'm fairly sure you also need to implement imageUnfilteredTypes in your subclass.
From: https://developer.apple.com/library/mac/documentation/Cocoa/Reference/ApplicationKit/Classes/NSImageRep_Class/Reference/Reference.html#//apple_ref/occ/clm/NSImageRep/registerImageRepClass:
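Putting those two pieces together, a minimal sketch (the subclass name and UTI are placeholders for your own format):

```objc
// In your NSImageRep subclass: advertise the UTIs you can decode.
@implementation MyImageRep  // hypothetical subclass name
+ (NSArray *)imageUnfilteredTypes {
    return @[@"com.example.proprietary-image"];  // placeholder UTI
}
+ (BOOL)canInitWithData:(NSData *)data {
    // Inspect the proprietary format's magic bytes here.
    return YES;
}
// ... plus -initWithData:, -draw, etc. ...
@end

// Early in startup, e.g. in applicationWillFinishLaunching::
[NSImageRep registerImageRepClass:[MyImageRep class]];
```

Once the class is registered and reports its types, URL-based loaders that go through NSImage's rep machinery can discover it.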

UINavigationBar drawRect Alternative (aka, Need CoreGraphics calls in a category)

I recently discovered that on iOS 5 and later, UINavigationBar no longer gets its drawRect: called. I want to figure out how to draw with Core Graphics in a category.
The end goal I am trying to achieve is eliminating images from my app and have everything drawn at runtime. I am also trying to make this library automatic, so that users don't have to think about using my custom classes.
Is there a way to replace a class with one of your own at runtime? Something like: replaceClass([UINavigationBar class], [MyCustomBar class]);
Thanks in advance.
Is there a way to replace a class with one of your own at runtime?
In Objective-C this is known as class posing.
Class posing is based on NSObject's poseAsClass: method, which is now deprecated (and unavailable on 64-bit platforms, including the iPhone).
Alternative approaches have been investigated; you can read about one here, but they do not seem quite to fit the bill.
I found the solution. Instead of messing with drawRect:, I just made some methods that draw to a UIImage, then set that image as the background for the elements I am customizing. It makes my custom UI magic again.
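That approach can be sketched as follows, using the iOS 5 appearance proxy so library users never need a custom subclass (the size and fill here are placeholder drawing, not the poster's actual code):

```objc
// Render with Core Graphics into a UIImage...
UIGraphicsBeginImageContextWithOptions(CGSizeMake(320, 44), NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(ctx, [UIColor darkGrayColor].CGColor);
CGContextFillRect(ctx, CGRectMake(0, 0, 320, 44));  // placeholder drawing
UIImage *bg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// ...then install it on every UINavigationBar via the appearance proxy.
[[UINavigationBar appearance] setBackgroundImage:bg
                                   forBarMetrics:UIBarMetricsDefault];
```

Because the appearance proxy applies the image to all bars, this keeps the "automatic, no custom classes" goal intact.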

Hooking up Chipmunk bodies to UIKit components?

I'm trying to get to grips with using Chipmunk (not the Obj-C version) with UIKit components on iOS, and still struggling immensely.
I'm trying to establish how, in the ChipmunkColorMatch example in the documentation, the UIButton instances are actually hooked up to any of the physics calculations. I see that the UIButtons are created inside the Ball class, and some of their properties are set (type, image, etc.), but I'm not understanding where the cpBody or cpShape is actually attached to that UIButton. I assume it needs to be, else none of the physics will be reflected in the UI.
I've looked in the SimpleObjectiveChipmunk tutorial on the website too, but due to the fact that it uses libraries unavailable to me (the Obj-C libraries), I can't establish how it works there, either. Again, I see a UIButton being created and positioned on-screen, but I don't see how the cpBody (or in that case, ChipmunkBody) is linked to the button in any way.
Could anyone shed some light on how this works? Effectively what I'm going to need are some UIButton instances which can be flicked around, but I've not even got as far as working out how to create forces yet, since I can't get the bodies hooked up to the buttons.
Much obliged, thanks in advance.
EDIT: I should also point out that I am not using, and do not want to use, cocos2d in this project at all. I've seen tutorials that use it, but that's a third layer of confusion to add in. Thanks!
Assuming this source is the project you're asking about, it looks like the magic happens in Ball's sync method -- it creates a CGAffineTransform representing the translation and rotation determined by the physics engine, and applies that to the button.
In turn, that method is called by the view controller's draw: method, which is timed to occur on every frame using CADisplayLink, and updates the physics engine before telling each Ball to sync.
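In outline, the per-frame sync looks something like this (a sketch only; the ivar names are assumptions, and the cpBodyGetPos/cpBodyGetAngle calls are the classic Chipmunk C API):

```objc
// Called once per frame from the CADisplayLink callback, after the
// physics step, to copy the body's state onto the UIKit view.
- (void)sync {
    cpVect pos = cpBodyGetPos(_body);       // body owned by this Ball
    cpFloat angle = cpBodyGetAngle(_body);
    // Translate and rotate the UIButton to match the physics body.
    CGAffineTransform t = CGAffineTransformMakeTranslation(pos.x, pos.y);
    _button.transform = CGAffineTransformRotate(t, angle);
}
```

So the button is never "attached" to the cpBody inside Chipmunk at all; the link is one-way, re-applied every frame from body to view.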

Animation blocks in iOS using block objects?

From the Apple documentation on animating property changes in a view,
In order to animate changes to a property of the UIView class, you must wrap those changes inside an animation block. The term animation block is used in the generic sense to refer to any code that designates animatable changes. In iOS 4 and later, you create an animation block using block objects. In earlier versions of iOS, you mark the beginning and end of an animation block using special class methods of the UIView class. Both techniques support the same configuration options and offer the same amount of control over the animation execution. However, the block-based methods are preferred whenever possible.
Other than the confusing terminology between an animation block and an Objective-C block object, I am wondering: what are some good resources and examples for using block objects to do animations with the UIView class? I have looked through the Apple documentation and also googled for examples, and could not find many helpful resources. Also, what can we do to make sure it is backwards compatible with devices earlier than iOS 4? I read somewhere that using a block object in earlier versions will cause a crash.
Here are some pointers to the Apple Documentation
Core Animation Programming Guide
Core Animation Cook Book
Animation Types and Timing Programming Guide
A Short Practical Guide to Blocks (which contains a code sample to animate a UIView; see Listing 1-1)
Blocks Programming Topics
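As a concrete starting point, a minimal block-based fade with a runtime fallback to the begin/commit style might look like this (myView is a placeholder; note that block literals themselves require the iOS 4 runtime, so truly deploying to iOS 3.x needs more than this respondsToSelector: check):

```objc
if ([UIView respondsToSelector:
        @selector(animateWithDuration:animations:completion:)]) {
    // iOS 4 and later: block-based animation.
    [UIView animateWithDuration:0.3
                     animations:^{ myView.alpha = 0.0; }
                     completion:^(BOOL finished) {
                         [myView removeFromSuperview];
                     }];
} else {
    // Earlier iOS: classic begin/commit animation block.
    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationDuration:0.3];
    myView.alpha = 0.0;
    [UIView commitAnimations];
}
```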
HTH

Using Opacity-generated layers in Quartz 2d instead of drawing to the view's layer

Firstly, be gentle with me. I'm new to Objective-C and iOS programming.
I've just purchased a copy of Opacity and I'm playing around with layers. My only grade A success so far is to design and successfully generate my own design of a calculator - so I haven't got that far!
Specifically, I want to know how to use Opacity CALayer subclasses to 'replace' the backing layer on a UIView. Adding sublayers to the existing view layer is obvious enough, and I've managed to get that working, but how do I use an Opacity CALayer subclass to provide the initial content of the view's backing layer? I've tried copying the drawInContext code from an Opacity subclass into drawRect on my UIView subclass that hosts the proxy view for the controller's 'view' property, but (for some reason I can't fathom) that didn't work.
I assume that the view's original backing layer is item '0' in the layer array? My ultimate goal is to have just my layers behind the view without having to 'ignore' the original one.
If there's a chance that Dr Brad Larson might read this;...
I've watched your Quartz 2D and animation videos over and over and I've read all of the copies of your course notes that I can lay hands on, but all the examples I've found start with an image already in the hosting view - onto which more layers are added, which isn't really what I'm after. I've also read the Big Nerd Ranch Guide and got all of Conway and Hillegass's example code too but - I'll be darned - they also start with an image already in the view!
If any one can help me out - just point me at the relevant documentation, please don't bother writing huge tranches of code here - I'd be seriously grateful.
VV.
PS: I'm deliberately NOT using IB yet, as I want to grip the underlying mechanics of Cocoa Touch first, and I won't use dot notation on principle :-). (See my profile!)
The trick is to override the view's class method 'layerClass' to be the base layer sub-class that's needed before nailing other layers into the hierarchy. This builds the view's base (implicit) layer when the view is instantiated and then I can slap my other layers down afterwards.
This is a fun game. I'll get the mind-set soon!
This technique differs markedly from the same requirement using NSView, which has a 'setLayer:' instance method that can be used to change the implicit layer AFTER instantiation - an expensive procedure not offered on a lightweight object like a UIView.
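The override described above is only a few lines (a sketch; MyOpacityLayer stands in for whatever CALayer subclass Opacity generated, and the view class name is illustrative):

```objc
@interface CanvasView : UIView
@end

@implementation CanvasView
// UIKit consults +layerClass exactly once, when the view is created,
// to decide the class of the implicit backing layer.
+ (Class)layerClass {
    return [MyOpacityLayer class];  // hypothetical Opacity-generated subclass
}
@end
```

After that, the view's 'layer' property is already an instance of the Opacity subclass, so there is no item-'0' layer to ignore or replace.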