UINavigationBar drawRect Alternative (aka, Need CoreGraphics calls in a category) - objective-c

I recently discovered that on iOS 5 and later, UINavigationBar does not get its drawRect: called. I want to figure out how to draw with Core Graphics in a category.
The end goal I am trying to achieve is eliminating images from my app and have everything drawn at runtime. I am also trying to make this library automatic, so that users don't have to think about using my custom classes.
Is there a way to replace a class with one of your own at runtime? Something like: replaceClass([UINavigationBar class], [MyCustomBar class]);
Thanks in advance.

Is there a way to replace a class with one of your own at runtime?
In Objective-C this is known as class posing.
Class posing is based on NSObject's poseAsClass: method, which is now deprecated (and unavailable on 64-bit platforms, including the iPhone).
Alternative approaches have been investigated; you can read about one here, but they do not seem quite to fit the bill.
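For illustration, class posing used to be a one-liner (the API is gone from the modern runtime used on iOS, so treat this purely as historical context):

// Deprecated NSObject API; not available on the modern (iOS/64-bit) runtime.
// MyCustomBar would have to be a UINavigationBar subclass that adds no new ivars.
[MyCustomBar poseAsClass:[UINavigationBar class]];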

I found the solution. Instead of messing with drawRect:, I just made some methods that draw to a UIImage and then set that image as the background of the elements I am customizing. My custom UI magic works again.
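In case it helps anyone, here is a minimal sketch of that approach, assuming the iOS 5 appearance proxy; the size and gradient colours are just placeholders:

// Draw a simple gradient into a UIImage at runtime.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(320.0f, 44.0f), NO, 0.0f);
CGContextRef context = UIGraphicsGetCurrentContext();

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGFloat components[8] = { 0.2f, 0.4f, 0.8f, 1.0f,    // top colour (placeholder)
                          0.1f, 0.2f, 0.5f, 1.0f };  // bottom colour (placeholder)
CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, components, NULL, 2);
CGContextDrawLinearGradient(context, gradient, CGPointMake(0.0f, 0.0f), CGPointMake(0.0f, 44.0f), 0);
CGGradientRelease(gradient);
CGColorSpaceRelease(colorSpace);

UIImage *barImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Apply it to every UINavigationBar via the iOS 5 appearance proxy.
[[UINavigationBar appearance] setBackgroundImage:barImage forBarMetrics:UIBarMetricsDefault];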

Related

Draw a CGRect with Cocos2d

I'm using CGRects for hitboxes, and my collisions seem to be a bit off. I want to quickly see where my hitboxes actually are.
I tried a bunch of different approaches but most of them seem to be outdated, or just didn't work for me.
I tried this already and a bunch of similar approaches.
What is the simplest way to show the borders of a CGRect?
With cocos2d 2.0, in ccConfig.h there is a CC_SPRITE_DEBUG_DRAW symbol. If you set that to 1, the box will be drawn during the visit cycle.
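For reference, the flag just needs to be flipped in ccConfig.h:

// ccConfig.h (cocos2d 2.0)
#define CC_SPRITE_DEBUG_DRAW 1   // draws each sprite's bounding box during the visit cycle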
If CC_SPRITE_DEBUG_DRAW, as YvesLeBorg suggested, doesn't suit you, you can override the draw method in your layer or nodes and draw there using the helper functions from CCDrawingPrimitives.h. Don't forget to call [super draw].
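A rough sketch of that approach, assuming cocos2d 2.0 (the hitbox property is a placeholder for wherever you keep your collision rect):

#import "cocos2d.h"
#import "CCDrawingPrimitives.h"

// In your CCLayer/CCNode subclass:
- (void)draw
{
    [super draw];

    ccDrawColor4F(0.0f, 1.0f, 0.0f, 1.0f);     // green outline
    CGRect box = self.hitbox;                  // placeholder: your collision rect
    ccDrawRect(box.origin,
               ccp(box.origin.x + box.size.width,
                   box.origin.y + box.size.height));
}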

Registering all view controllers for NSNotifications

I have a custom graphic that is to be displayed to a user when an event occurs. The graphic needs to be displayed on whichever viewController is currently being displayed to the user.
The way I have programmed it so far is by adding to ALL view controllers:
1) the .h file for the custom graphic class
2) an observer for the NSNotification event that is raised
3) the method which actually draws the graphic.
This doesn't feel like a very efficient way of doing things, and I was wondering if anyone has a better approach?
To me it sounds like you've done it in a fairly sane way. The only other way I can think is to just add the graphic to the window which would then overlay on the current view controller and you'd only need to have one object listening for the notification. You could use the app delegate for instance. But then you would have to worry about rotation of the screen yourself when adding the graphic over the top.
What you are doing is correct. The only thing you can improve is to move the graphics-drawing part into the custom graphic class (if you are not already doing so).
Just make a UIViewController member variable in the graphics class, set it to the view controller currently being displayed after you receive the notification, and the class can then draw itself based on the view controller you set.
The reason it doesn't feel efficient is that you're duplicating a lot of code. That's more work at the outset, and it creates a maintenance headache. You should be taking advantage of the inheritance that's built into object oriented languages, including Objective-C.
If you want all your view controllers to share some behavior, then implement that behavior in a common superclass. Derive all your other view controllers from that superclass, and they'll all automatically get the desired behavior. Your superclass's initializer can take care of registering the view controller for the notification(s) that you care about, and -dealloc can unregister it. This way, you don't have to clutter up each view controller with the same repeated code, and if you want to change the code you only have to do it in one place.
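A minimal sketch of such a superclass (the notification name and handler are placeholders; swap in your own):

// BaseViewController.h
#import <UIKit/UIKit.h>

@interface BaseViewController : UIViewController
@end

// BaseViewController.m
@implementation BaseViewController

- (id)initWithNibName:(NSString *)nibName bundle:(NSBundle *)bundle
{
    self = [super initWithNibName:nibName bundle:bundle];
    if (self) {
        // Register once, in the common initializer.
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(eventOccurred:)
                                                     name:@"MyGraphicEventNotification"
                                                   object:nil];
    }
    return self;
}

- (void)eventOccurred:(NSNotification *)notification
{
    // Create the custom graphic and add it to self.view here.
}

- (void)dealloc
{
    [[NSNotificationCenter defaultCenter] removeObserver:self];
    [super dealloc]; // omit this line under ARC
}

@end

Every view controller that derives from BaseViewController then gets the behaviour for free.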

Debugging subtle iOS view layout issues

Lately I've been running into some subtle layout issues in my iOS app. For example, displaying a view controller from one part of the app causes the layout of some subviews to be altered (the z-axis ordering changes). Another subtle issue is the navigation bar flickering slightly.
What are some techniques for debugging these issues?
I'm especially interested in printing/logging properties of objects. For example I'd like to just dump/print/log all properties of the viewController referenced above to see exactly what changes. Then perhaps one can use symbolic breakpoints to pin-point the cause.
Check out DCIntrospect. It's a tool that can be very helpful for inspecting a view's info conveniently.
You can use KVO to observe frames changing, so you know what changes when, from and to what values. You can even use it to fix properties to some constant value. (See Prevent indentation of UITableViewCell (contentView) while editing)
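For example, something along these lines (UIView's frame is not documented as KVO-compliant, so treat this strictly as a debugging aid; suspectView is a placeholder for whichever view you are investigating):

// Somewhere with access to the suspect view, e.g. viewDidLoad:
[self.suspectView addObserver:self
                   forKeyPath:@"frame"
                      options:NSKeyValueObservingOptionOld | NSKeyValueObservingOptionNew
                      context:NULL];

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    NSLog(@"%@ %@ changed: %@ -> %@", object, keyPath,
          [change objectForKey:NSKeyValueChangeOldKey],
          [change objectForKey:NSKeyValueChangeNewKey]);
}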
You can use reflection to loop through all properties of an object. I don't know how such a broad approach would help you, but it is possible. (See Loop through all object properties at runtime)
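A sketch of how that looks with the Objective-C runtime (valueForKey: can throw for non-KVC-compliant properties, so keep this to debug builds):

#import <objc/runtime.h>

// Log every declared property of an object along with its current value.
static void DumpProperties(id object)
{
    unsigned int count = 0;
    objc_property_t *properties = class_copyPropertyList([object class], &count);
    for (unsigned int i = 0; i < count; i++) {
        NSString *name = [NSString stringWithUTF8String:property_getName(properties[i])];
        NSLog(@"%@ = %@", name, [object valueForKey:name]);
    }
    free(properties);
}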
Another technique is to subclass UIView and override the methods that reposition a view (or change other attributes); then you can set breakpoints or log whenever the frame changes.
To use the debugging subclass, just change the class of a view in Interface Builder to your custom type instead of UIView, as in the sketch below.
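Roughly like this (the class name is arbitrary):

#import <UIKit/UIKit.h>

@interface DebugView : UIView
@end

@implementation DebugView

- (void)setFrame:(CGRect)frame
{
    NSLog(@"%@ frame %@ -> %@", self,
          NSStringFromCGRect(self.frame), NSStringFromCGRect(frame));
    [super setFrame:frame];   // set a breakpoint here to see who repositions the view
}

@end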
Another option is Reveal (revealapp.com), an iOS app layout debugging tool. Integrate the Reveal SDK into your app and it works like Firebug for your view hierarchy.

Making classes work together in obj-C

I'm writing a program for iPhone that will first let the user take a photo, then will dynamically retrieve the colour of the place where the user taps on the image and draw a rectangle of that colour. I have two relevant classes for this: AppViewController and AppView. The former contains all the UI elements and IBActions; the latter the position of the last tap, the touch-handling methods and the drawRect: (and a static method to get colour data at given coordinates of an image).
What I wanted to do is to put the touch-handling (calling drawRect in touchesMoved/Ended) and the drawRect in the AppViewController. That doesn't work, since that class doesn't inherit from UIView, but from UIViewController. What's the correct way to do that?
Another way to phrase that: How to constantly change something (well, constantly as long as the user is swiping across the screen) in a class that doesn't support touch-detection methods?
(This probably doesn't explain it well. Please ask clarifying questions).
I think the delegate pattern might be helpful to you in this situation. You could call your delegate's shouldUpdateRectangle selector in touchesMoved/Ended.
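A rough sketch of what that could look like (the protocol name and the point parameter are assumptions on my part, not from your code):

#import <UIKit/UIKit.h>

@class AppView;

@protocol AppViewDelegate <NSObject>
- (void)appView:(AppView *)view shouldUpdateRectangleAtPoint:(CGPoint)point;
@end

@interface AppView : UIView
@property (nonatomic, assign) id<AppViewDelegate> delegate; // use weak under ARC
@end

@implementation AppView

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint point = [[touches anyObject] locationInView:self];
    [self.delegate appView:self shouldUpdateRectangleAtPoint:point];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint point = [[touches anyObject] locationInView:self];
    [self.delegate appView:self shouldUpdateRectangleAtPoint:point];
}

@end

AppViewController would adopt AppViewDelegate, set itself as the view's delegate, and do the colour lookup and rectangle drawing in the callback.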
There is not really a correct way to move the view's behaviors into a view controller, since that is not the way the classes are meant to be used. You should probably look at what's driving you to try to subvert the framework's design this way, because that is likely going to be easier to fix.
It's not uncommon, though, for a view to call out to a supporting class for help in this stuff. You could certainly have your view's drawRect: call methods in the view controller, though I would be careful about mixing their concerns too much, because it could get hard to figure out who's responsible for what.

What's the best way to add variations of the same character to a view in Cocoa

I am confused about how to go about adding objects (images, etc.) to an app. I will keep my example very basic so I can get a grasp on this. Let's say I want simple objects in an app, say smilies like the ones available in this forum software. If you wanted to add a bunch (like 4, not 400) to a view, is it better to just add them using a UIImage, or should you create a "smilieGuy" class with the various smiley face images (happy, sad, mad) and a method to change their mood (image to reflect mood)? From what I understand, with the class you could create a happy object, a sad object, etc. in your view based off the class, and then at any time you could say changeMood and change the image to whatever mood you wanted.
Is the class approach actually possible and is it a better approach?
The class approach is preferable.
It allows you to separate the interface ([userFace setHappy]) from the implementation (self.image = [UIImage imageNamed:@"Happy.png"]). You can then change the logic, or create further variations, without having to change any of the other display code.
I would also suggest creating an Emoticon class as a subclass of UIImageView. Inside the class, you could load your multiple images and use them to set the image property of your superclass.
You might also want to check out the animation properties of UIImageView to achieve Skype style smilies.
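For instance, a minimal sketch of such a subclass (the image names are placeholders for whatever assets you bundle):

#import <UIKit/UIKit.h>

typedef enum { EmoticonMoodHappy, EmoticonMoodSad, EmoticonMoodMad } EmoticonMood;

@interface Emoticon : UIImageView
- (void)setMood:(EmoticonMood)mood;
@end

@implementation Emoticon

- (void)setMood:(EmoticonMood)mood
{
    switch (mood) {
        case EmoticonMoodHappy: self.image = [UIImage imageNamed:@"Happy.png"]; break;
        case EmoticonMoodSad:   self.image = [UIImage imageNamed:@"Sad.png"];   break;
        case EmoticonMoodMad:   self.image = [UIImage imageNamed:@"Mad.png"];   break;
    }
}

@end

For animated smilies, you would populate the inherited animationImages array and call startAnimating instead of setting a single image.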