I'm trying to set up a subclassed NSOpenGLView in an unusual way and I am running into some problems. Basically, I am writing a program to perform a bioengineering simulation for my PhD, and I need to be able to compile it under both Mac OS X and Unix (my machine is a Mac, but the sim will eventually run on a more powerful Unix machine). Since the code will get longer and longer over the next year and a half, I'd rather not have to keep track of two completely different versions of the program. So I'm hoping to be able to compile the Objective-C code under Unix by avoiding Objective-C 2.0 and keeping the interface optional (it will mostly be there to perform setup before the long simulations and to monitor things for the short ones during development).
The current version works well without the interface: the simulation is performed correctly, and the program is capable of rendering OpenGL frames and exporting them into image and video files without any problems. Since I am now adding the interface (right now just a simple window with an NSOpenGLView subclass and a "start" button) on top of that (so that I can run the code without it, using an alternate version of main()), I have to "wire" OpenGL together in a weird way, since the drawing code is not in the drawRect: method, or even anywhere in the subclassed view, but instead in the "basic" program.
What I've done so far is this:
The main program (using an object called "Lattice") performs all the simulations and rendering, correctly outputting images and video to files. This also contains the NSOpenGLContext and calls [renderContext flushBuffer];
A subclass of NSOpenGLView called PottsView contains an instance of a lattice, which is initialized together with the view like this:
- (id)initWithFrame:(NSRect)frame {
    self = [super initWithFrame:frame];   // keep the result of the designated initializer
    if (!self)
        return nil;
    // code
    frameSize.width = WIN_WIDTH;
    frameSize.height = WIN_HEIGHT;
    [self setFrameSize:frameSize];
    init_genrand64(time(0));
    // alloc and init must be chained so latt keeps the object that init returns
    if (SEED_TYPE) {
        latt = [[Lattice alloc] initWithRandomSites];
    } else {
        latt = [[Lattice alloc] initWithEllipse];
    }
    [[latt context] makeCurrentContext];
    return self;
}
-drawRect: is empty.
PottsController is the object instanced in the InterfaceBuilder which connects the start button to the view. The start button simply tells the lattice to run for a number of steps.
Now, pressing start results in the simulation running correctly (i.e. output to files and terminal), but the PottsView is not working correctly. It remains white, but if I cmd+tab, parts of it change to sections of a rendered frame. The same happens if I press Exposé (F3).
I've tried several combinations of flushing, setNeedsDisplay, etc., but frankly speaking I'm lost. I hadn't done any programming before this April, and with this being (as far as I can tell) a completely backwards way of using NSOpenGLView, I'm out of ideas. I'm hoping someone can suggest how I can make the current setup work, or how to completely rewire the program (while still keeping the interface optional).
It's not clear how you think that you have 'wired' the context and the view together. You can have as many OpenGL contexts as you like; just drawing into one won't make its contents show up in a random NSOpenGLView. Apologies if I have missed something.
NSOpenGLView is a fairly simple subclass of NSView that creates the context and pixel format. As you already have those you can do away with NSOpenGLView and use a custom NSView subclass.
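For example, a bare-bones NSView subclass could adopt the existing context along these lines (a sketch; latt and its context come from the question, and the key step is attaching the context once the view is in a window):
@implementation PottsView   // now a plain NSView subclass
- (void)viewDidMoveToWindow {
    [super viewDidMoveToWindow];
    // Wire the simulation's existing context to this view.
    [[latt context] setView:self];
}
@end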
You should look at these instructions: http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_drawing/opengl_drawing.html
To draw to the screen you must flush the graphics context from -drawRect:
This will block the main thread while the GPU processes your instructions, which could be a problem if you have many instructions. It also cannot happen more often than the display's refresh rate (around 50-60 fps).
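A minimal sketch of that approach, reusing the lattice's context from the question (renderFrame is a hypothetical stand-in for whatever call performs the GL drawing):
- (void)drawRect:(NSRect)dirtyRect {
    NSOpenGLContext *ctx = [latt context];
    [ctx makeCurrentContext];
    [latt renderFrame];   // hypothetical: the call that issues the GL commands
    [ctx flushBuffer];    // blocks until the GPU has processed the frame
}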
If you are already rendering your frames to files, wouldn't you be better off observing the output directory and drawing the image each time a new one is added? No OpenGL required.
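Sketched, that alternative could be as simple as polling the output directory with a timer and showing the newest file in an NSImageView (the directory path and ivar names here are assumptions):
// Show the most recent rendered frame; no OpenGL involved.
- (void)checkForNewFrame:(NSTimer *)timer {
    NSString *dir = @"/tmp/frames";   // hypothetical output directory
    NSArray *files = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:dir error:NULL];
    NSString *newest = [[files sortedArrayUsingSelector:@selector(compare:)] lastObject];
    if (newest && ![newest isEqualToString:lastShownFrame]) {
        [lastShownFrame release];
        lastShownFrame = [newest copy];
        NSString *path = [dir stringByAppendingPathComponent:newest];
        NSImage *img = [[[NSImage alloc] initWithContentsOfFile:path] autorelease];
        [imageView setImage:img];
    }
}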
I have an existing openGL context, using an OpenGL 2.1 core profile. I am able to draw objects/textures/etc no problem. However, now I want to be able to have my application to launch a separate NSWindow, with an NSOpenGLView, that displays part of a texture I drew in the original renderer's view. After some reading, I eventually bumped into the topic of context sharing, which I think may be the route I have to take if I want to pull this off.
My shared OpenGL context is of type CGLContextObj, but I don't know what to do with it, as my window resides in a different process. I've read the Apple documentation on rendering contexts, but I am unable to apply the concepts they laid out, since there are barely any examples for me to go through. Any advice will be really appreciated; thank you in advance.
EDIT:
Perhaps I did not give enough description; my apologies. I subclass my NSOpenGLView, and in its init I do the following:
// *** irrelevant initialization stuff above inside init *** //
// Get pixel format from first context to be used for NSOpenGLView when it's finally initialized later
_pixFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:(NSOpenGLPixelFormatAttribute *)_attribs];
// We will create a CGLPixelFormatObj from our C array of pixel format attributes
GLint nPix;
CGLPixelFormatObj myCgPixObj;
CGLChoosePixelFormat(_attribs, &myCgPixObj, &nPix);
// Now that we have the pixel format in CGLPixelFormatObj form, create a CGLContextObj
// shared with the main rendering context, to be passed in later when we init NSOpenGLView
CGLContextObj newContext;
CGLCreateContext(myCgPixObj, mainRenderingContext, &newContext);
// Wrap it in an NSOpenGLContext object here to feed into NSOpenGLView
// (the Cocoa messages below must go to the wrapper, not the raw CGLContextObj)
NSOpenGLContext *_contextForGLView = [[NSOpenGLContext alloc] initWithCGLContextObj:newContext];
[_contextForGLView setView:self];
[self setOpenGLContext:_contextForGLView];
// We don't need this anymore
CGLDestroyPixelFormat(myCgPixObj);
return self;
I am able to draw objects in this view just fine. But I get a blank white rectangle whenever I try to use the textures created in the main rendering context. I'm a little lost on how to proceed from here, I have never dealt with shared contexts before.
Seems like I got it working, partially at least, since I had to force the view to redraw by moving my window around to actually render the texture from the main context (another problem for another time!). Anyway, here's how I did it:
My main rendering context is supplied by a host application (yes, I'm working on a plugin) and is of type CGLContextObj. I wrap that context in an NSOpenGLContext object by calling initWithCGLContextObj:.
Next step was to create an NSOpenGLPixelFormat object, initializing it with the pixel format attributes used by the host application's renderer. This step is important as it ensures that the rendering context that will be used in my view will have the same OpenGL core profile, along with other attributes used by the host application.
Then, in my subclassed NSOpenGLView, I create a new NSOpenGLContext object, preferably in the prepareOpenGL method, allocating it with initWithFormat:shareContext: and passing the NSOpenGLPixelFormat and NSOpenGLContext objects created previously as parameters.
Upon assigning the newly created context to my view, I was able to render the textures from the main rendering context.
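Put together, the three steps look roughly like this (hostCGLContext and hostAttribs are stand-ins for whatever the host application actually supplies):
// 1. Wrap the host's CGLContextObj in an NSOpenGLContext.
NSOpenGLContext *hostContext = [[NSOpenGLContext alloc] initWithCGLContextObj:hostCGLContext];
// 2. Build an NSOpenGLPixelFormat from the host renderer's attributes,
//    so the new context gets the same core profile and other attributes.
NSOpenGLPixelFormat *pixFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:hostAttribs];
// 3. In the NSOpenGLView subclass (e.g. in -prepareOpenGL), create the
//    shared context and assign it to the view.
NSOpenGLContext *sharedContext = [[NSOpenGLContext alloc] initWithFormat:pixFormat shareContext:hostContext];
[self setOpenGLContext:sharedContext];
[sharedContext setView:self];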
I'm working on an alternate version of a program I already wrote, it's mostly for the sake of understanding a little more.
In Xcode (Objective-C), I have a ViewController that displays a UIView subclass (GraphicsView) that draws a line from the center to the touch point. This sub-view is smaller than the view controller's main view.
The view controller has a label that outputs the coordinates of the touched point.
So far I was able to get everything working, so that if you touch inside the sub-view you get the line AND the coordinates updated, and if you touch outside the sub-view you only get the coordinates updated. I did this using delegates, which was a little complicated.
I've been reading some books and I learned about the extern keyword and global variables (which are supposed to be bad practice), and I wanted to try the same app using global variables.
I declared my extern CGPoint in ViewController.h and imported it in the GraphicsView.m file, and in the touchesBegan method I put the assignment myGlobalPoint = touchedpoint; followed by an NSLog that displays the coordinates. So far it works. (However, it does not update the coordinates.)
However, whenever I touch outside the sub-view, in the main view, the app crashes with an EXC_BAD_ACCESS message. From what I understand, the main view cannot access the global variable if it's declared in another class?
I've read many other Stack Overflow posts about this and I've tried the methods they suggested, but I keep getting this error.
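For reference, this is the two-part pattern I'm trying to follow, sketched for clarity; the declaration goes in the header, and there must be exactly one definition in a .m file:
// ViewController.h -- declaration, visible to every file that imports this header
extern CGPoint myGlobalPoint;
// ViewController.m -- the single definition that actually allocates the storage;
// without it, references from other files will not link
CGPoint myGlobalPoint;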
I'm no iOS guru but I know enough to build apps. I know and understand the patterns, UIKit, and Objective-C. I'm now learning Mac Development and this little bit of "Cocoa Bindings Programming Topics" has me stumped:
Take as an example a very simple application in which the values in a text field and a slider are kept synchronized. Consider first an implementation that does not use bindings. The text field and slider are connected directly to each other using target-action, where each is the other’s target and the action is takeFloatValueFrom: as shown in Figure 2. (If you do not understand this, you should read Getting Started With Cocoa.)
This example illustrates the dynamism of the Cocoa environment—the values of two user interface objects are kept synchronized without writing any code, even without compiling.
(Emphasis mine)
Huh? Wouldn't you need to create outlets? And an IBAction that goes something like
- (IBAction)takeFloatValueFrom:(id)sender {
    self.slider.floatValue = [sender floatValue];
    self.textField.floatValue = [sender floatValue];
}
Is this something Mac-specific? How do you actually hook up two controls with target-action in a XIB without writing any code and have their values locked?
When you're setting up an interface in Interface Builder, you can specify that it sends a message to another object whenever it changes in some way. What this example is showing is that you can hook these two objects up such that whenever the slider changes, it sends the message takeFloatValueFrom: to the text field, and vice-versa.
takeFloatValueFrom: is a method defined on NSControl, and both a text field and a slider are subclasses of NSControl.
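In code, the same wiring would look roughly like this (a sketch; slider and textField stand for the two controls):
// Each control targets the other; no custom action method is needed,
// because takeFloatValueFrom: already exists on NSControl.
[slider setTarget:textField];
[slider setAction:@selector(takeFloatValueFrom:)];
[textField setTarget:slider];
[textField setAction:@selector(takeFloatValueFrom:)];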
I'm trying to get to grips with using Chipmunk (not the Obj-C version) with UIKit components on iOS, and still struggling immensely.
I'm trying to establish how, in the ChipmunkColorMatch example in the documentation, the UIButton instances are actually hooked up to any of the physics calculations. I see that the UIButtons are created inside the Ball class, and some of their properties are set (type, image, etc.), but I'm not understanding where the cpBody or cpShape (or whichever it is) is actually attached to that UIButton. I assume it needs to be, or else none of the physics will be reflected in the UI.
I've looked in the SimpleObjectiveChipmunk tutorial on the website too, but due to the fact that it uses libraries unavailable to me (the Obj-C libraries), I can't establish how it works there, either. Again, I see a UIButton being created and positioned on-screen, but I don't see how the cpBody (or in that case, ChipmunkBody) is linked to the button in any way.
Could anyone shed some light on how this works? Effectively what I'm going to need are some UIButton instances which can be flicked around, but I've not even got as far as working out how to create forces yet, since I can't get the bodies hooked up to the buttons.
Much obliged, thanks in advance.
EDIT: Should also point out that I am not, and do not want to use cocos2d in this project at all. I've seen tutorials using that, but that's a third layer of confusion to add in. Thanks!
Assuming this source is the project you're asking about, it looks like the magic happens in Ball's sync method -- it creates a CGAffineTransform representing the translation and rotation determined by the physics engine, and applies that to the button.
In turn, that method is called by the view controller's draw: method, which is timed to occur on every frame using CADisplayLink, and updates the physics engine before telling each Ball to sync.
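The core of that sync pattern, sketched with assumed ivar names (body and button stand in for the Ball's actual ivars), would be something like:
// Copy the physics body's position and rotation onto the UIButton.
- (void)sync {
    cpVect pos = cpBodyGetPos(body);     // cpBodyGetPosition() in newer Chipmunk versions
    cpFloat angle = cpBodyGetAngle(body);
    button.center = CGPointMake(pos.x, pos.y);
    button.transform = CGAffineTransformMakeRotation(angle);
}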
I am a relative novice who is teaching himself Objective-C on Xcode to develop some simple iPhone game apps. I have done some reading on this but fear I'm missing something basic and obvious.
I made a simple "Hello, World" and, based on opinions in various forums, I decided to do a Tic Tac Toe. I found a nice video and built a version based on that, which ran fine. However, my own interpretation is already running into trouble.
I'm using Xcode 4.0.2 on Snow Leopard. I chose a View-Based Application template and pulled a large image view onto the layout to hold a PNG called board. I put nine small image views on the large one to hold individual cells for X and O (and created some PNGs for the images). I just attached board.png to the big image view through IB so that works fine.
Next I tried to associate cell 1 with x.png by assigning it to a variable called ximg. This is all set up in the view controller's viewDidLoad method like so -- "ximg = [UIImage imageNamed:@"x.png"];". I then used the code "cell1.image = ximg;" -- also in viewDidLoad. X appeared on the board when I built and ran.
My next step was cell 2. I wanted to use a variable in a custom method this time, so I could change it in the future. I declared a method "- (void)setcell2" (bad camelCase, I know). I put the following method into my view controller implementation file:
- (void)setcell2 {
    cell2.image = ximg;
}
I also added the following message to viewDidLoad -- "[self setcell2];"
As you'd guess, I was figuring that when the app loaded, viewDidLoad would send that message to setcell2, which would attach another X in the second box, but this didn't happen.
If someone could give me some idea of what I'm overlooking, I'd be gratified. Example code is appreciated but I can figure that out with time. This is not homework. Thanks for reading!
Most likely you haven't set your cell up properly in Interface Builder. Make sure you have connected the property to the outlet.
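For reference, the outlet would be declared something like this (using the name from the question, with retain rather than ARC given the Xcode 4.0.2 vintage):
// ViewController.h -- declaring the outlet is not enough on its own; it must
// also be connected to the image view in Interface Builder.
@property (nonatomic, retain) IBOutlet UIImageView *cell2;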