I can take answers in either Swift or Objective-C.
I'm using SceneKit, and I have a 2D OverlayScene that's displayed on top of the 3D GameScene. I'd like GameScene and OverlayScene to be able to reference each other so that I can detect taps, run functions, etc. But whenever I try to create an instance of OverlayScene, I get various crashes. I have tried a few things, including but not limited to:
/* THIS IS MY 3D GAME SCENE */
// Some crash about not using "init(size:CGSize)".
let overlayScene = SK_OverlayScene()
// There is no such thing as "self.size" in SceneKit.
let overlayScene = SK_OverlayScene(size: self.size)
// Some crazy crash I can't figure out at all.
let overlayScene = SK_OverlayScene(size: myGameViewController.sceneView.bounds.size)
In other words, I have tried many different solutions. If you think I implemented one of them incorrectly, ask me for the exact code and I'll post it.
My question is this: am I even going about this the correct way? Should I be using instances of the overlay scene, or should I be trying to contact the scene directly? I'm not really sure what's wrong here, which is why I'm asking for help.
EDIT: I have already tried to move the code inside OverlayScene's init(size:CGSize) into a simple init(), but that is also giving me problems.
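EDIT 2: For context, here's the wiring I'm trying to achieve, written out in Objective-C since I can take either language (just a sketch; sceneView is the SCNView owned by my GameViewController, and gameScene is a hypothetical back-reference property I'd add to the overlay):

// In the view controller that owns the SCNView:
SK_OverlayScene *overlay = [[SK_OverlayScene alloc] initWithSize:self.sceneView.bounds.size];
self.sceneView.overlaySKScene = overlay;   // SceneKit renders this SKScene on top of the 3D scene
overlay.gameScene = self.sceneView.scene;  // hypothetical property so the overlay can reach back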
I'm working on an alternate version of a program I already wrote, mostly for the sake of understanding a little more.
In Xcode (Objective-C), I have a ViewController that presents a UIView subclass (GraphicsView) that draws a line from its center to the touch point. This sub-view is smaller than the view controller's main view.
The view controller has a label that outputs the coordinates of the touched point.
So far I've been able to get everything working: if you touch inside the sub-view you get the line AND the coordinates updated, and if you touch outside the sub-view you only get the coordinates updated. I did this using delegates, which was a little complicated.
I've been reading some books and learned about the extern keyword and global variables (which are supposed to be bad practice), and I wanted to try the same app using global variables instead.
I declared my extern CGPoint in ViewController.h and imported it in GraphicsView.m, and in touchesBegan I set myGlobalPoint = touchedpoint; followed by an NSLog that displays the coordinates. So far that works (however, it does not update the coordinates).
However, whenever I touch outside the sub-view, in the main view, the app crashes with an EXC_BAD_ACCESS message. From what I understand, the main view cannot access the global variable if it's declared in another class?
I've read many other Stack Overflow posts about this and I've tried the methods suggested, but I keep getting this error.
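For reference, this is the pattern I'm following (a sketch of my files; touchesBegan:withEvent: is the standard UIKit signature):

// ViewController.h
extern CGPoint myGlobalPoint;  // declaration, visible to every file that imports this header

// ViewController.m -- exactly one .m file must also contain the definition
CGPoint myGlobalPoint;

// GraphicsView.m
#import "ViewController.h"

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    myGlobalPoint = [touch locationInView:self];  // store the touch point globally
    NSLog(@"%@", NSStringFromCGPoint(myGlobalPoint));
}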
I'm trying to get to grips with using Chipmunk (not the Obj-C version) with UIKit components on iOS, and still struggling immensely.
I'm trying to establish how, in the ChipmunkColorMatch example in the documentation, the UIButton instances are actually hooked up to any of the physics calculations. I see that the UIButtons are created inside the Ball class, and some of their properties are set (type, image, etc.), but I don't understand where the cpBody or cpShape (whichever it is) is actually attached to that UIButton. I assume it must be, or none of the physics would be reflected in the UI.
I've looked in the SimpleObjectiveChipmunk tutorial on the website too, but because it uses libraries unavailable to me (the Obj-C ones), I can't establish how it works there either. Again, I see a UIButton being created and positioned on-screen, but I don't see how the cpBody (or in that case, ChipmunkBody) is linked to the button in any way.
Could anyone shed some light on how this works? Effectively what I'm going to need are some UIButton instances which can be flicked around, but I've not even got as far as working out how to create forces yet, since I can't get the bodies hooked up to the buttons.
Much obliged, thanks in advance.
EDIT: Should also point out that I am not, and do not want to use cocos2d in this project at all. I've seen tutorials using that, but that's a third layer of confusion to add in. Thanks!
Assuming this source is the project you're asking about, it looks like the magic happens in Ball's sync method -- it creates a CGAffineTransform representing the translation and rotation determined by the physics engine, and applies that to the button.
In turn, that method is called by the view controller's draw: method, which is timed to occur on every frame using CADisplayLink, and updates the physics engine before telling each Ball to sync.
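Roughly, the pattern looks like this (a sketch, assuming Chipmunk 6's C API; the names _body, _button, _space, and _balls are my guesses, not necessarily the project's, and this assumes the physics coordinates map directly onto the button's coordinate space):

// In Ball: move the button to wherever the physics engine says the body is.
- (void)sync {
    cpVect pos = cpBodyGetPos(_body);       // position computed by Chipmunk
    cpFloat angle = cpBodyGetAngle(_body);  // rotation computed by Chipmunk
    _button.transform = CGAffineTransformRotate(
        CGAffineTransformMakeTranslation(pos.x, pos.y), angle);
}

// In the view controller: step the simulation, then sync each button, once per frame.
- (void)draw:(CADisplayLink *)link {
    cpSpaceStep(_space, 1.0f/60.0f);
    for (Ball *ball in _balls) [ball sync];
}

// Setup, e.g. in viewDidLoad:
CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(draw:)];
[link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];

So the button itself never participates in the physics; the body does, and the button is simply repositioned to match it every frame.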
I'm learning Objective-C and made a little practice app to take input from an NSSlider and set a level indicator to it. However, I would like to know if there is any way to make the level indicator update while the slider is being dragged. Currently, it only updates when I let go of the slider. I saw a couple of references to a setContinuous method, but it didn't seem to do anything. If that method is completely unrelated, please constrain your laughter. Also, it would be awesome if you could add code snippets to show me where to put the method.
-setContinuous: should do the trick, if you're sending it to the right object. Or, if you've set up your interface in a .xib file, check the 'continuous' box (I don't remember the actual label) for the slider.
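For example (a sketch; the outlet names slider and levelIndicator, and the action sliderMoved:, are illustrative -- wire them up to match your own nib):

- (void)awakeFromNib {
    [slider setContinuous:YES];  // send the action throughout the drag, not just on mouse-up
}

- (IBAction)sliderMoved:(id)sender {
    [levelIndicator setDoubleValue:[sender doubleValue]];  // mirror the slider's current value
}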
I am a relative novice who is teaching himself Objective-C on Xcode to develop some simple iPhone game apps. I have done some reading on this but fear I'm missing something basic and obvious.
I made a simple "Hello, World" and, based on opinions in various forums, I decided to do a Tic Tac Toe. I found a nice video and built a version based on that, which ran fine. However, my own interpretation is already running into trouble.
I'm using Xcode 4.0.2 on Snow Leopard. I chose a View-Based Application template and pulled a large image view onto the layout to hold a PNG called board. I put nine small image views on the large one to hold individual cells for X and O (and created some PNGs for the images). I just attached board.png to the big image view through IB so that works fine.
Next I tried to associate cell 1 with x.png by assigning it to a variable called ximg. This is all set up in the view controller's viewDidLoad method like so -- "ximg = [UIImage imageNamed:@"x.png"];". I then used the code "cell1.image = ximg;" -- also in viewDidLoad. X appeared on the board when I built and ran.
My next step was cell 2. I wanted to use a variable in a custom method this time, so I could change it in the future. I declared a method "- (void)setcell2" (bad camelCase, I know). I put the following method into my view controller implementation file:
- (void)setcell2 {
    cell2.image = ximg;
}
I also added the following message to viewDidLoad -- "[self setcell2];"
As you'd guess, I was figuring that when the app loaded, viewDidLoad would send that message to setcell2, which would attach another X in the second box, but this didn't happen.
If someone could give me some idea of what I'm overlooking, I'd be gratified. Example code is appreciated but I can figure that out with time. This is not homework. Thanks for reading!
Most likely you haven't set your cell up properly in Interface Builder. Make sure you have connected the outlet to the image view.
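For example, the outlet has to be declared in the header and connected to the image view in IB (a sketch using your names, pre-ARC since you're on Xcode 4.0.2):

// ViewController.h
@property (nonatomic, retain) IBOutlet UIImageView *cell2;  // connect this to cell 2 in IB

If that connection is missing, cell2 is nil when viewDidLoad runs, and "cell2.image = ximg;" silently does nothing -- messaging nil is legal in Objective-C, which would explain the missing X without any crash.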
I'm trying to set up a subclassed NSOpenGLView in an unusual way, and I am running into some problems. Basically, I am writing a program to perform a bioengineering simulation for my PhD, and I need to be able to compile it under both Mac OS X and Unix (my machine is a Mac, but the sim will eventually run on a more powerful Unix machine). Since the code will get longer and longer over the next year and a half, I'd rather not have to keep track of two completely different versions of the program. So I'm hoping to be able to compile the Objective-C code under Unix by avoiding Objective-C 2.0 and keeping the interface optional (it will mostly be there to perform setup before the long simulations, and to monitor things for the short ones during development).
The current version works well without the interface: the simulation is performed correctly, and the program is capable of rendering OpenGL frames and exporting them into image and video files without any problems. Since I am now adding the interface (right now just a simple window with an NSOpenGLView subclass and a "start" button) on top of that (so that I can run the code with an alternate version of main() without it), I have to "wire" OpenGL together in a weird way, since the drawing code is not in the drawRect function, or even anywhere in the subclassed view, but instead in the "basic" program.
What I've done so far is this:
The main program (using an object called "Lattice") performs all the simulations and rendering, correctly outputting images and video to files. This object also contains the NSOpenGLContext and calls [renderContext flushBuffer];
A subclass of NSOpenGLView called PottsView contains an instance of a lattice, which is initialized together with the view like this:
- (id)initWithFrame:(NSRect)frame {
    if (![super initWithFrame:frame])
        return nil;
    // code
    frameSize.width = WIN_WIDTH;
    frameSize.height = WIN_HEIGHT;
    [self setFrameSize:frameSize];
    init_genrand64(time(0));
    latt = [Lattice alloc];
    if (SEED_TYPE) {
        [latt initWithRandomSites];
    } else {
        [latt initWithEllipse];
    }
    [[latt context] makeCurrentContext];
    return self;
}
-drawRect: is empty.
PottsController is the object instanced in the InterfaceBuilder which connects the start button to the view. The start button simply tells the lattice to run for a number of steps.
Now, pressing start results in the simulation running correctly (i.e. output to files and terminal), but the PottsView is not working correctly. It remains white, but if I cmd+tab, parts of it change to sections of a rendered frame. The same happens if I press Exposé (F3).
I've tried several combinations of flushing, setNeedsDisplay, etc., but frankly I'm lost. I hadn't done any programming before this April, and with this being (as far as I can tell) a completely backwards way of using NSOpenGLView, I'm out of ideas. I'm hoping someone can suggest how I can make the current setup work, or how to completely rewire the program (while still keeping the interface optional).
It's not clear how you think you have 'wired' the context and the view together. You can have as many OpenGL contexts as you like -- drawing into one won't make its contents show up in a random NSOpenGLView. Apologies if I have missed something.
NSOpenGLView is a fairly simple subclass of NSView that creates the context and pixel format. As you already have those, you can do away with NSOpenGLView and use a custom NSView subclass.
You should look at this documentation: http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_drawing/opengl_drawing.html
To draw to the screen you must flush the graphics context from -drawRect:
This will block the main thread while the GPU processes your instructions, which could be a problem if you have many of them. It also cannot happen at more than 50fps.
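A minimal sketch of what that could look like with your setup ([latt render] is a stand-in for whatever call makes Lattice issue its GL commands):

- (void)drawRect:(NSRect)dirtyRect {
    NSOpenGLContext *ctx = [latt context];
    [ctx setView:self];        // make the context target this view
    [ctx makeCurrentContext];
    [latt render];             // stand-in for your existing drawing code
    [ctx flushBuffer];
}

Then call [self setNeedsDisplay:YES] after each simulation step, so that AppKit actually schedules the redraw.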
If you are already rendering your frames to files, wouldn't you be better off observing the output directory and drawing each image as a new one is added? No OpenGL required.
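For instance (a rough sketch; imageView is a hypothetical NSImageView outlet, and pathOfNewestFrame stands in for however you locate the most recent file):

NSImage *frame = [[NSImage alloc] initWithContentsOfFile:pathOfNewestFrame];
[imageView setImage:frame];
[frame release];  // pre-ARC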