Don't Understand Apple's takeFloatValueFrom: Example - objective-c

I'm no iOS guru but I know enough to build apps. I know and understand the patterns, UIKit, and Objective-C. I'm now learning Mac Development and this little bit of "Cocoa Bindings Programming Topics" has me stumped:
Take as an example a very simple application in which the values in a text field and a slider are kept synchronized. Consider first an implementation that does not use bindings. The text field and slider are connected directly to each other using target-action, where each is the other’s target and the action is takeFloatValueFrom: as shown in Figure 2. (If you do not understand this, you should read Getting Started With Cocoa.)
This example illustrates the dynamism of the Cocoa environment—the values of two user interface objects are kept synchronized without writing any code, even without compiling.
(Emphasis mine)
Huh? Wouldn't you need to create outlets? And an IBAction that goes something like
- (IBAction)takeFloatValueFrom:(id)sender {
    self.slider.floatValue = [sender floatValue];
    self.textField.floatValue = [sender floatValue];
}
Is this something Mac-specific? How do you actually hook up two controls with target-action in a XIB without writing any code and have their values locked?

When you're setting up an interface in Interface Builder, you can specify that it sends a message to another object whenever it changes in some way. What this example is showing is that you can hook these two objects up such that whenever the slider changes, it sends the message takeFloatValueFrom: to the text field, and vice-versa.
takeFloatValueFrom: is a method defined on NSControl, and both a text field and a slider are subclasses of NSControl.
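For what it's worth, the programmatic equivalent of those two Interface Builder connections is just a pair of target/action assignments; a minimal sketch, assuming slider and textField outlets already exist:

// What Interface Builder sets up when you make the two connections:
// no extra IBAction is needed, because takeFloatValueFrom: is already
// implemented by NSControl. (The outlet names here are assumptions.)
[self.slider setTarget:self.textField];
[self.slider setAction:@selector(takeFloatValueFrom:)];

[self.textField setTarget:self.slider];
[self.textField setAction:@selector(takeFloatValueFrom:)];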

Related

Cocoa Bindings, should I just be using KVO instead?

[self.toolController bind:@"fillColor" toObject:self.fillColorWell withKeyPath:@"color" options:kvoDict];
versus
[self.fillColorWell addObserver:self.toolController forKeyPath:@"color" options:NSKeyValueObservingOptionNew context:nil];
and in my toolController class, in my implementation for -observeValueForKeyPath:...
if ([keyPath isEqual:@"color"]) {
    self.fillColor = [object selectedObject];
}
Why would I pick one method over another to get the view to update to my model property?
For bindings, the only code you have to write is the bind itself, and that's it. With KVO you would have to write code to handle the notification. If you're binding UI and using Interface Builder, you don't need any code at all, which can be useful and a time saver for the simpler things; you also don't have to write generic boilerplate code to keep things in sync, which you would need in order to respond to the KVO notification.
I have read otherwise, but it's my understanding (and I did a quick new project to verify this) that bindings work in both directions. So, say you bind a text field to an NSString property: the variable changes when the text field gets updated, and you can change the variable and the text field updates. KVO only notifies you about changes to the object you registered the observation on.
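To illustrate the two-way behavior, here is a minimal sketch (the nameField outlet, the model object, and its name property are hypothetical):

// Bind the text field's value to the model's "name" key path.
// Editing the field writes back to the model, and KVO-compliant changes
// to the model are pushed into the field: both directions from one call.
[self.nameField bind:NSValueBinding        // the @"value" binding
            toObject:self.model
         withKeyPath:@"name"
             options:nil];

// Later, updating the model also updates the text field on screen:
self.model.name = @"Updated";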
Some say bad things about bindings, and that it's good they aren't part of iOS, etc., but they work for the simple cases, so maybe you should just go with bindings until you find a case where they are inappropriate. Having said that, bear it in mind if you want at some point to take your code over to iOS...
Hope that's a good enough answer for you :)

Hooking up Chipmunk bodies to UIKit components?

I'm trying to get to grips with using Chipmunk (not the Obj-C version) with UIKit components on iOS, and still struggling immensely.
I'm trying to establish how, in the ChipmunkColorMatch example in the documentation, the UIButton instances are actually hooked up to any of the physics calculations. I see that the UIButtons are created inside the Ball class, and some of their properties are set (type, image, etc.), but I don't understand where the cpBody or cpShape (whichever it is) is actually attached to that UIButton. I assume it needs to be, or else none of the physics will be reflected in the UI.
I've looked in the SimpleObjectiveChipmunk tutorial on the website too, but due to the fact that it uses libraries unavailable to me (the Obj-C libraries), I can't establish how it works there, either. Again, I see a UIButton being created and positioned on-screen, but I don't see how the cpBody (or in that case, ChipmunkBody) is linked to the button in any way.
Could anyone shed some light on how this works? Effectively what I'm going to need are some UIButton instances which can be flicked around, but I've not even got as far as working out how to create forces yet, since I can't get the bodies hooked up to the buttons.
Much obliged, thanks in advance.
EDIT: Should also point out that I am not, and do not want to use cocos2d in this project at all. I've seen tutorials using that, but that's a third layer of confusion to add in. Thanks!
Assuming this source is the project you're asking about, it looks like the magic happens in Ball's sync method -- it creates a CGAffineTransform representing the translation and rotation determined by the physics engine, and applies that to the button.
In turn, that method is called by the view controller's draw: method, which is timed to occur on every frame using CADisplayLink, and updates the physics engine before telling each Ball to sync.
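In case it helps, here is a rough sketch of that pattern rather than the project's exact code; the _body, _button, _space, and _balls ivars, the method names, and the Chipmunk 6.x C function names are all assumptions:

// In the Ball class: push the physics state onto the button each frame.
- (void)sync
{
    cpVect pos    = cpBodyGetPos(_body);    // Chipmunk 6.x C API naming
    cpFloat angle = cpBodyGetAngle(_body);

    _button.center    = CGPointMake(pos.x, pos.y);
    _button.transform = CGAffineTransformMakeRotation(angle);
}

// In the view controller: drive the updates from a CADisplayLink.
- (void)viewDidLoad
{
    [super viewDidLoad];
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(step:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)step:(CADisplayLink *)link
{
    cpSpaceStep(_space, 1.0 / 60.0);        // advance the simulation
    for (Ball *ball in _balls) {
        [ball sync];                        // then mirror it in the UIKit views
    }
}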

"Global" UIImagePicker Functionality for Multiple Classes

I'm working on an app that (among other things) uses UIImagePicker to grab an image from the device once the user has selected the SourceType by tapping the appropriate button. Different sections of the app will need to use this functionality, as well as the variable holding the image information once selected. When I first started the project I had all of my code to do this in a single class named ViewController. I'm now working on moving the individual sections of the app into their own classes, but I'd like to be able to have them all use the UIImagePicker functionality from a central location.
Along with the necessary UIImagePickerController methods and protocols, I have a method that presents a view with buttons for each available SourceType. Each of these buttons then sends a message to a method that shows the appropriate picker (or the camera). Once an image is selected, it is assigned to a variable for use by the different sections.
I wanted to get suggestions on the best way to approach this before I went too deep down the wrong rabbit hole.
Thanks!
If a lot of your classes use this functionality, you can create a superclass (itself being a subclass of UIViewController).
This class will expose a method to launch the process you described, and another to gather the information collected.
If you don't want to use inheritance, or you already inherit from another class, you can also create a separate class responsible for this process.
This class, which does not necessarily have to be a UIViewController, has to be instantiated and then called in the same way as the superclass described above.
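As a sketch of that second option, a standalone class could look roughly like this (all names are hypothetical and error handling is omitted):

// A shared picker coordinator: any view controller can ask it to present
// the UIImagePickerController, and it keeps the chosen image around.
@interface ImagePickerCoordinator : NSObject
    <UIImagePickerControllerDelegate, UINavigationControllerDelegate>

@property (nonatomic, strong) UIImage *selectedImage;

- (void)presentPickerWithSourceType:(UIImagePickerControllerSourceType)sourceType
                     fromController:(UIViewController *)presenter;

@end

@implementation ImagePickerCoordinator

- (void)presentPickerWithSourceType:(UIImagePickerControllerSourceType)sourceType
                     fromController:(UIViewController *)presenter
{
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = sourceType;
    picker.delegate   = self;
    [presenter presentViewController:picker animated:YES completion:nil];
}

- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    // Keep the image for whichever section of the app needs it next.
    self.selectedImage = info[UIImagePickerControllerOriginalImage];
    [picker dismissViewControllerAnimated:YES completion:nil];
}

@end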

How can I press a button to play a custom sound?

I'm a complete programming novice and I was wondering how I could have several buttons that play different sounds. I've searched all around and can't figure out this simple task. Thanks!
It's a simple task once you know enough about Objective-C and the Cocoa (Cocoa-Touch on iPhone) frameworks.
Rather than just searching the net for examples you would be better off looking for references on how to program Objective-C and Cocoa.
The steps are:
In Xcode, create a controller class that has a method such as this: - (IBAction)playSound:(id)sender;
Create the UI using Interface Builder.
Place the buttons you want on this interface and configure their labels.
For each of the buttons, set a numeric tag (you can do this in Interface Builder).
For each of the buttons, set the target to be the controller and the action to be the method you created in step 1. (You do this by Ctrl-dragging the button onto the controller object in Interface Builder.)
In the controller class, fill out the method by checking the tag of the button (which you can get through the 'sender' object) and then playing a sound based on the tag, as in the sketch below.
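For that last step, one option is AVAudioPlayer; this is just a sketch, and the sound file names, tags, and the player property are assumptions (you also need to link AVFoundation):

#import <AVFoundation/AVFoundation.h>

- (IBAction)playSound:(id)sender
{
    // Pick a bundled sound file based on the button's tag.
    NSString *soundName;
    switch ([sender tag]) {
        case 1:  soundName = @"beep";  break;
        case 2:  soundName = @"chime"; break;
        default: soundName = @"click"; break;
    }

    NSURL *url = [[NSBundle mainBundle] URLForResource:soundName
                                         withExtension:@"caf"];

    // Keep a strong reference (e.g. a property) so the player is not
    // deallocated before the sound finishes playing.
    self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
    [self.player play];
}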
Depending on your level of experience with Cocoa, this might not make sense right now, but if you read through the references and tutorials for Objective-C and Cocoa each of these steps will become clearer and you'll be able to complete this simple task yourself.

Problems with an unusual NSOpenGLView setup

I'm trying to set up a subclassed NSOpenGLView in an unusual way and I am running into some problems. Basically, I am writing a program to perform a bioengineering simulation for my PhD and I need to be able to compile it under both Mac OS X and Unix (my machine is a Mac, but the sim will eventually run on a more powerful Unix machine). Since the code will get longer and longer over the next year and a half, I'd rather not have to keep track of two completely different versions of the program. So, I'm hoping to be able to compile the Objective-C code under Unix by avoiding Objective-C 2.0 and keeping the interface optional (it will mostly be there to perform setup before the long simulations and monitor things for the short ones during development).
The current version works well without the interface - the simulation is performed correctly and the program is capable of rendering OpenGL frames and exporting them into image and video files without any problems. Since I am now adding the interface (right now just a simple window with an NSOpenGLView subclass and a "start" button) on top of that (so that I can run the code with an alternate version of main() without it), I have to "wire" OpenGL together in a weird way, since the drawing code is not in the drawRect function, or even anywhere in the subclassed view, but instead in the "basic" program.
What I've done so far is this:
The main program (using an object called "Lattice") performs all the simulations and rendering, correctly outputting images and video to files. This also contains the NSOpenGLContext and calls [renderContext flushBuffer];
A subclass of NSOpenGLView called PottsView contains an instance of a lattice, which is initialized together with the view like this:
- (id)initWithFrame:(NSRect)frame {
    if (![super initWithFrame:frame])
        return nil;
    // code
    frameSize.width = WIN_WIDTH;
    frameSize.height = WIN_HEIGHT;
    [self setFrameSize:frameSize];
    init_genrand64(time(0));
    latt = [Lattice alloc];
    if (SEED_TYPE) {
        [latt initWithRandomSites];
    } else {
        [latt initWithEllipse];
    }
    [[latt context] makeCurrentContext];
    return self;
}
drawRect() is empty.
PottsController is the object instantiated in Interface Builder which connects the start button to the view. The start button simply tells the lattice to run for a number of steps.
Now, pressing start results in the simulation running correctly (i.e. output to files and the terminal), but the PottsView is not working correctly. It remains white, but if I cmd+tab, parts of it change to sections of a rendered frame. The same happens if I press Exposé (F3).
I've tried several combinations of flushing, setNeedsDisplay, etc, but frankly speaking I'm lost. I haven't done any programming before this April and with this being (as far as I can tell) a completely backwards way of using NSOpenGLView I'm out of ideas. I'm hoping someone can suggest how I can make the current setup work or how to completely rewire the program (while still keeping the interface optional).
It's not clear how you think you have 'wired' the context and the view together. You can have as many OpenGL contexts as you like - drawing into one won't make its contents show up in some random NSOpenGLView. Apologies if I have missed something.
NSOpenGLView is a fairly simple subclass of NSView that creates the context and pixel format. As you already have those you can do away with NSOpenGLView and use a custom NSView subclass.
You should look at this documentation: http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_drawing/opengl_drawing.html
To draw to the screen you must flush the graphics context from -drawRect:
This will block the main thread while the GPU processes your instructions, which could be a problem if you have many instructions. It also cannot happen more often than 50 fps.
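A rough sketch of what that could look like in a custom NSView subclass, assuming the Lattice still owns the NSOpenGLContext and exposes a drawing method (the renderFrame name is hypothetical):

- (void)drawRect:(NSRect)dirtyRect
{
    NSOpenGLContext *ctx = [latt context];   // the context the Lattice owns

    if ([ctx view] != self) {
        [ctx setView:self];                  // attach the context to this view once
    }
    [ctx makeCurrentContext];

    [latt renderFrame];                      // hypothetical: the existing GL drawing code
    [ctx flushBuffer];                       // flush so the frame appears on screen
}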
If you are already rendering your frames to files, wouldn't you be better off observing the output directory and drawing each image as a new one is added, with no OpenGL required?