Transferring touch points between classes [Objective-C]

I'm working on an alternate version of a program I already wrote; it's mostly for the sake of understanding things a little better.
In Xcode (Objective-C), I have a view controller whose view contains a UIView subclass (GraphicsView) that draws a line from the center to the touch point. This subview is smaller than the view controller's main view.
The view controller has a label that outputs the coordinates of the touched point.
So far I was able to get everything working: if you touch inside the subview you get the line AND the coordinates updated, and if you touch outside the subview you only get the coordinates updated. I did this using delegates, which was a little complicated.
I've been reading some books and learned about the extern keyword and global variables (which are supposed to be bad practice), and I wanted to try the same app using global variables.
I declared my extern CGPoint in ViewController.h and imported it in the GraphicsView.m file, and in touchesBegan:withEvent: I put the assignment myGlobalPoint = touchedpoint; followed by an NSLog that displays the coordinates. So far it works (however, the label's coordinates do not update).
However, whenever I touch outside the subview, in the main view, the app crashes with an EXC_BAD_ACCESS message. From what I understand, the main view cannot access the global variable if it's declared in another class?
I've read many other Stack Overflow posts about this and I've tried the suggested methods, but I keep getting this error.
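For reference, the usual C pattern for a global shared this way is one extern declaration in a header plus exactly one definition in a single .m file. A minimal sketch, reusing the variable names from the question (everything else is assumed):

// ViewController.h -- declaration only; tells the compiler the variable
// exists, but allocates no storage
extern CGPoint myGlobalPoint;

// ViewController.m (exactly one .m file) -- the definition that actually
// allocates the storage
CGPoint myGlobalPoint;

// GraphicsView.m -- after #import "ViewController.h", any file can read
// or write the same variable
myGlobalPoint = touchedpoint;
NSLog(@"%@", NSStringFromCGPoint(myGlobalPoint));

Note that a plain C global like this is zero-initialized and can't by itself produce EXC_BAD_ACCESS when read, so the crash when touching the main view more likely comes from something else, such as messaging a deallocated object.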

Related

Don't Understand Apple's takeFloatValueFrom: Example

I'm no iOS guru but I know enough to build apps. I know and understand the patterns, UIKit, and Objective-C. I'm now learning Mac Development and this little bit of "Cocoa Bindings Programming Topics" has me stumped:
Take as an example a very simple application in which the values in a text field and a slider are kept synchronized. Consider first an implementation that does not use bindings. The text field and slider are connected directly to each other using target-action, where each is the other’s target and the action is takeFloatValueFrom: as shown in Figure 2. (If you do not understand this, you should read Getting Started With Cocoa.)
This example illustrates the dynamism of the Cocoa environment—the values of two user interface objects are kept synchronized without writing any code, even without compiling.
(Emphasis mine)
Huh? Wouldn't you need to create outlets? And an IBAction that goes something like
- (IBAction)takeFloatValueFrom:(id)sender {
    self.slider.floatValue = [sender floatValue];
    self.textField.floatValue = [sender floatValue];
}
Is this something Mac-specific? How do you actually hook up two controls with target-action in a XIB without writing any code and have their values locked?
When you're setting up an interface in Interface Builder, you can specify that a control sends a message to another object whenever it changes in some way. What this example is showing is that you can hook these two objects up such that whenever the slider changes, it sends the message takeFloatValueFrom: to the text field, and vice versa.
takeFloatValueFrom: is a method defined on NSControl, and both a text field and a slider are subclasses of NSControl.
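For illustration, here is roughly what those two Interface Builder connections amount to if you were to make them in code instead (outlet names slider and textField are assumed):

// Programmatic equivalent of the two target-action connections made in IB.
// NSSlider and NSTextField both inherit takeFloatValueFrom: from NSControl.
[slider setTarget:textField];
[slider setAction:@selector(takeFloatValueFrom:)];    // slider -> text field
[textField setTarget:slider];
[textField setAction:@selector(takeFloatValueFrom:)]; // text field -> slider

In the XIB you make the same two connections by control-dragging from each control to the other and selecting takeFloatValueFrom: as the received action, so no outlets, actions of your own, or compiling are needed.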

Hooking up Chipmunk bodies to UIKit components?

I'm trying to get to grips with using Chipmunk (not the Obj-C version) with UIKit components on iOS, and still struggling immensely.
I'm trying to establish how, in the ChipmunkColorMatch example in the documentation, the UIButton instances are actually hooked up to any of the physics calculations. I see that the UIButtons are created inside the Ball class, and some of their properties are set (type, image, etc.), but I'm not understanding where the cpBody or cpShape (whichever it is) is actually attached to that UIButton. I assume it needs to be, or else none of the physics would be reflected in the UI.
I've looked in the SimpleObjectiveChipmunk tutorial on the website too, but due to the fact that it uses libraries unavailable to me (the Obj-C libraries), I can't establish how it works there, either. Again, I see a UIButton being created and positioned on-screen, but I don't see how the cpBody (or in that case, ChipmunkBody) is linked to the button in any way.
Could anyone shed some light on how this works? Effectively what I'm going to need are some UIButton instances which can be flicked around, but I've not even got as far as working out how to create forces yet, since I can't get the bodies hooked up to the buttons.
Much obliged, thanks in advance.
EDIT: I should also point out that I am not using, and do not want to use, cocos2d in this project at all. I've seen tutorials using it, but that's a third layer of confusion to add in. Thanks!
Assuming this source is the project you're asking about, it looks like the magic happens in Ball's sync method -- it creates a CGAffineTransform representing the translation and rotation determined by the physics engine, and applies that to the button.
In turn, that method is called by the view controller's draw: method, which is timed to occur on every frame using CADisplayLink, and updates the physics engine before telling each Ball to sync.
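In rough outline, with the plain C Chipmunk API, that pattern looks something like the sketch below; _body, _button, _space, and _balls are placeholder names, and the getter names can differ between Chipmunk versions:

// Ball.m -- copy the physics state onto the UIKit control once per frame
- (void)sync {
    cpVect pos = cpBodyGetPos(_body);       // body position
    cpFloat angle = cpBodyGetAngle(_body);  // body rotation, in radians
    _button.center = CGPointMake(pos.x, pos.y);
    _button.transform = CGAffineTransformMakeRotation(angle);
}

// View controller -- drive the loop from the display refresh
- (void)viewDidLoad {
    [super viewDidLoad];
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(step:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)step:(CADisplayLink *)link {
    cpSpaceStep(_space, 1.0f / 60.0f);      // advance the physics
    for (Ball *ball in _balls) [ball sync]; // then mirror it into UIKit
}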

iOS 4.3: Difficulty attaching graphic to an Image View

I am a relative novice who is teaching himself Objective-C on Xcode to develop some simple iPhone game apps. I have done some reading on this but fear I'm missing something basic and obvious.
I made a simple "Hello, World" and, based on opinions in various forums, I decided to do a Tic Tac Toe. I found a nice video and built a version based on that, which ran fine. However, my own interpretation is already running into trouble.
I'm using Xcode 4.0.2 on Snow Leopard. I chose a View-Based Application template and pulled a large image view onto the layout to hold a PNG called board. I put nine small image views on the large one to hold individual cells for X and O (and created some PNGs for the images). I just attached board.png to the big image view through IB so that works fine.
Next I tried to associate cell 1 with x.png by assigning it to a variable called ximg. This is all set up in the view controller's viewDidLoad method like so -- "ximg = [UIImage imageNamed:@"x.png"];". I then used the code "cell1.image = ximg;" -- also in viewDidLoad. X appeared on the board when I built and ran.
My next step was cell 2. I wanted to use a variable in a custom method this time, so I could change it in the future. I declared a method "- (void)setcell2" (bad camelCase, I know). I put the following method into my view controller implementation file:
- (void)setcell2 {
    cell2.image = ximg;
}
I also added the following message to viewDidLoad -- "[self setcell2];"
As you'd guess, I was figuring that when the app loaded, viewDidLoad would call setcell2, which would attach another X in the second box, but this didn't happen.
If someone could give me some idea of what I'm overlooking, I'd be gratified. Example code is appreciated but I can figure that out with time. This is not homework. Thanks for reading!
Most likely you haven't set your cell up properly in Interface Builder. Make sure you have connected the property to the outlet.
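Concretely, the view controller's header needs an outlet declared along these lines (manual reference counting, matching the Xcode 4.0.2 era), with the small image view in the XIB connected to it:

// ViewController.h
@property (nonatomic, retain) IBOutlet UIImageView *cell2;

// ViewController.m
@synthesize cell2;

If the connection was never made, cell2 is nil, and since messages to nil are silently ignored in Objective-C, cell2.image = ximg; does nothing -- exactly the symptom described.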

iOS 3 - UITabBarItems disappear from UITabBar after a memory warning occurs

At the request of a great number of people using older iOS hardware, I'm currently refactoring and optimizing my app so it will work on iOS 3. That being said, I've got a glitch with my UITabBar that I can replicate on all of the iPhone 3G units I've tested it on.
The glitch appears to have been fixed in iOS 4, but I was wondering if before that time, anyone else had this glitch as well and had figured out a (relatively elegant) workaround for it.
The problem is this: when a memory warning occurs and all offscreen views are released, and I then bring a view controller with a tab bar back on screen, all of the UITabBarItems that are supposed to be in it are gone. As far as I can see, they're not being drawn at all; i.e., tapping the tab bar has no effect. After setting breakpoints and examining the UITabBar and its items in memory, they're all still there (i.e., not getting released); they're just not getting redrawn when the UITabBar is re-created in the controller's loadView method.
My app works similarly to the official Twitter app, in that I implemented my own version of UITabBarController so I could properly control its integration with a parent UINavigationController. I set it up as closely as possible to the original UITabBarController class though, with all of the child view controllers handling their own respective UITabBarItems and initializing them inside the class's init methods. Once the child view controllers are passed to my TabController object via an accessor method, the tabBarItems are accessed and added to the UITabBar view.
Has anyone seen this behaviour before and know of a way I can fix it? I'm hoping there's a really simple fix for this since it already works in iOS 4, so I don't want to hack it up too badly.
Thanks a lot!
After a bit of research, I think I found a solution to this. It's not the elegant solution I was after, but it definitely works.
I'm guessing after a memory warning is triggered, something is happening to the UITabBarItem objects that basically renders them corrupt. I tried a lot of things (flushing out the UITabBar, re-creating the controllers array etc), but nothing worked.
I finally discovered that if you completely destroy the UITabBarItems and allocate new ones in their place, then those ones will work. :)
So my final solution was to add an extra condition to the viewDidLoad method of my controller: if the detected system is iOS 3 and there is already an array of UITabBarItems, it goes through each one, copies out all of the needed properties, destroys it, allocates a new one, and then copies the old properties over to the new one.
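A rough sketch of that workaround; oldItems and tabBar are placeholder names, and a real version would copy whichever properties the app actually uses:

// iOS 3 only: replace the possibly-corrupt items with freshly allocated ones
NSMutableArray *freshItems = [NSMutableArray arrayWithCapacity:[oldItems count]];
for (UITabBarItem *oldItem in oldItems) {
    UITabBarItem *newItem = [[UITabBarItem alloc] initWithTitle:oldItem.title
                                                          image:oldItem.image
                                                            tag:oldItem.tag];
    newItem.badgeValue = oldItem.badgeValue;  // carry over any remaining state
    [freshItems addObject:newItem];
    [newItem release];                        // manual retain/release, pre-ARC
}
[tabBar setItems:freshItems animated:NO];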
I'm still going to keep an eye out for a better solution (I think there's a bit of overhead in this method), but thankfully at this stage, iOS 3 legacy support is becoming less and less of an issue. :)

Problems with an unusual NSOpenGLView setup

I'm trying to set up a subclassed NSOpenGLView in an unusual way and I am running into some problems. Basically, I am writing a program to perform a bioengineering simulation for my PhD and I need to be able to compile it under both Mac OS X and Unix (my machine is a Mac, but the sim will eventually run on a more powerful Unix machine). Since the code will get longer and longer over the next year and a half, I'd rather not have to keep track of two completely different versions of the program. So I'm hoping to be able to compile the Objective-C code under Unix by avoiding Objective-C 2.0 and keeping the interface optional (it will mostly be there to perform setup before the long simulations and to monitor things for the short ones during development).
The current version works well without the interface: the simulation is performed correctly, and the program is capable of rendering OpenGL frames and exporting them into image and video files without any problems. Since I am now adding the interface (right now just a simple window with an NSOpenGLView subclass and a "start" button) on top of that (so that I can run the code with an alternate version of main() without it), I have to "wire" OpenGL together in a weird way, since the drawing code is not in the -drawRect: method, or even anywhere in the subclassed view, but instead in the "basic" program.
What I've done so far is this:
The main program (using an object called "Lattice") performs all the simulations and rendering, correctly outputting images and video to files. This also contains the NSOpenGLContext and calls [renderContext flushBuffer];
A subclass of NSOpenGLView called PottsView contains an instance of a lattice, which is initialized together with the view like this:
- (id)initWithFrame:(NSRect)frame {
    self = [super initWithFrame:frame];
    if (!self)
        return nil;
    // code
    NSSize frameSize;
    frameSize.width = WIN_WIDTH;
    frameSize.height = WIN_HEIGHT;
    [self setFrameSize:frameSize];
    init_genrand64(time(0));
    // alloc and init combined, so latt holds whatever the initializer returns
    if (SEED_TYPE) {
        latt = [[Lattice alloc] initWithRandomSites];
    } else {
        latt = [[Lattice alloc] initWithEllipse];
    }
    [[latt context] makeCurrentContext];
    return self;
}
-drawRect: is empty.
PottsController is the object instantiated in Interface Builder which connects the start button to the view. The start button simply tells the lattice to run for a number of steps.
Now, pressing start results in the simulation running correctly (i.e. output to files and terminal), but the PottsView is not working correctly. It remains white, but if I Cmd+Tab, parts of it change to sections of a rendered frame. The same happens if I press Exposé (F3).
I've tried several combinations of flushing, setNeedsDisplay, etc, but frankly speaking I'm lost. I haven't done any programming before this April and with this being (as far as I can tell) a completely backwards way of using NSOpenGLView I'm out of ideas. I'm hoping someone can suggest how I can make the current setup work or how to completely rewire the program (while still keeping the interface optional).
It's not clear how you think that you have 'wired' the context and the view together. You can have as many OpenGL contexts as you like; drawing into one won't make its contents show up in a random NSOpenGLView. Apologies if I have missed something.
NSOpenGLView is a fairly simple subclass of NSView that creates the context and pixel format. As you already have those you can do away with NSOpenGLView and use a custom NSView subclass.
You should look at this documentation: http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_drawing/opengl_drawing.html
To draw to the screen you must flush the graphics context from -drawRect:
This will block the main thread while the GPU processes your instructions, which could be a problem if you have many instructions. It also cannot happen at more than 50fps.
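A minimal sketch of that conventional setup, reusing the question's [latt context] accessor (the render call is a stand-in for whatever code actually issues the GL commands):

// PottsView.m -- let the view own the draw cycle instead of the simulation
- (void)drawRect:(NSRect)dirtyRect {
    NSOpenGLContext *ctx = [latt context];
    if ([ctx view] != self)
        [ctx setView:self];   // attach the context's drawable to this view
    [ctx makeCurrentContext];
    [latt render];            // hypothetical method that issues the GL calls
    [ctx flushBuffer];        // flush so the finished frame appears on screen
}

The simulation would then just call setNeedsDisplay:YES on the view (or the view redraws on a timer) instead of flushing the context itself.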
If you are already rendering your frames to files, wouldn't you be better off observing the output directory and drawing the image each time a new one is added? No OpenGL required.