CGL vs AGL vs OpenGL vs NSOpenGL vs CoreAnimation (CALayer) - objective-c

I am trying to understand a few things about the 3D technologies present in OS X, specifically how the OpenGL framework is integrated in the form of layers, and which layer is OpenGL's actual implementation layer.
From reading the Apple docs, below is what I have understood so far:
1. An NSOpenGLContext object wraps a low-level, platform-specific Core OpenGL (CGL) context.
= This makes it clear that NSOpenGL makes use of CGL.
2. The AGL (Apple Graphics Library) API is part of the Apple implementation of OpenGL in Mac OS X.
= So, are AGL and CGL related in any way?
3. CGL (Core OpenGL) is the lowest-level programming interface for the Apple implementation of OpenGL.
= Does that mean the standard OpenGL APIs are just wrappers over CGL?
4. Core Animation seems to be a combination of Core Graphics, OpenGL, and QuickTime. But I am not sure what it uses underneath it, I mean the actual implementation layer; is it again CGL?
Things are not completely clear to me yet. I am still reading, though, and I have asked a somewhat related question in the past, but with incomplete knowledge.
I would really appreciate it if someone could share their understanding of the matter.

NSOpenGLContext, AGL and CGL are all APIs for setting up an OpenGL context you can draw into.
Use NSOpenGLContext unless you already know you have a reason not to.
Use AGL if you are writing a Carbon application or if you need compatibility with Mac OS 9 (As of 2012, that basically means: don't).
Both AGL and NSOpenGLContext are implemented on top of CGL. However, not all the necessary parts of CGL are actually public APIs. Last time I checked, the only public parts of the CGL API were the ones that allow you to create a fullscreen OpenGL context. If you want OpenGL in a window, or you want the option of showing dialog boxes or some NSViews on top of your OpenGL content, you probably can't use CGL.
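To illustrate the layering, here is a minimal sketch of context creation using only the public CGL C API (the attribute list is just an example). Note that nothing here attaches the context to a window, which is exactly the limitation described above; NSOpenGLContext performs essentially these steps for you and adds the NSView integration.

```cpp
// Minimal sketch: create an OpenGL context with the public CGL API.
// Compile with: clang++ cgl_sketch.cpp -framework OpenGL
#include <OpenGL/OpenGL.h>
#include <cstdio>

int main() {
    CGLPixelFormatAttribute attribs[] = {
        kCGLPFAAccelerated,                            // hardware renderer
        kCGLPFADoubleBuffer,
        kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
        (CGLPixelFormatAttribute)0                     // terminator
    };

    CGLPixelFormatObj pixelFormat = nullptr;
    GLint numFormats = 0;
    if (CGLChoosePixelFormat(attribs, &pixelFormat, &numFormats) != kCGLNoError
        || pixelFormat == nullptr) {
        std::fprintf(stderr, "no matching pixel format\n");
        return 1;
    }

    CGLContextObj context = nullptr;
    CGLCreateContext(pixelFormat, nullptr, &context);
    CGLDestroyPixelFormat(pixelFormat);   // the context keeps its own reference

    CGLSetCurrentContext(context);        // GL calls now target this context
    // ... issue OpenGL calls here ...

    CGLSetCurrentContext(nullptr);
    CGLDestroyContext(context);
    return 0;
}
```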
CoreAnimation is a framework for (mostly UI) animations; you can use CoreAnimation without using OpenGL directly. I have never used it myself, but I assume it also allows you to create an OpenGL context for an animation layer. Use it if you already have other reasons to use CoreAnimation, or if you want to combine OpenGL graphics and Mac GUI widgets in creative ways.

Related

How do you draw text in DirectX 12?

This is a follow-up question of How do you draw text in DirectX 11?
In Direct3D 12, things have become much more complex, and since it's new I couldn't find any suitable libraries online.
I'm building a basic Direct3D 12 FPS test application, and I'd like to display the FPS data on screen alongside my rendered image.
The general answer to questions like this is "if you have to ask, then you probably should be using DirectX 11." DirectX 12 is an expert graphics API that provides immense control and is not particularly concerned with ease of use for novices. See this thread for more thoughts in this vein.
With that out of the way, one option is to use device interop and Direct2D/DirectWrite. See Working with Direct3D 11, Direct3D 10 and Direct2D.
UPDATE: DirectX Tool Kit for DirectX 12 is now available. It includes a SpriteFont / SpriteBatch implementation that will draw text on Direct3D 12 render targets. See this tutorial.
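For a flavor of what that looks like, below is a rough sketch of the SpriteFont/SpriteBatch pattern from that tutorial. It assumes you already have a working Direct3D 12 device, command queue, and a shader-visible descriptor heap; m_font, myfont.spritefont, and the render target formats are placeholders for your own setup.

```cpp
// Sketch of text drawing with DirectX Tool Kit for DX12 (per the tutorial).
// Device, swap chain, and descriptor heap creation are assumed to exist elsewhere.
#include <memory>
#include <cwchar>
#include <DirectXMath.h>
#include "ResourceUploadBatch.h"
#include "SpriteBatch.h"
#include "SpriteFont.h"

using namespace DirectX;

std::unique_ptr<SpriteFont>  m_font;         // placeholder names
std::unique_ptr<SpriteBatch> m_spriteBatch;

void CreateFontResources(ID3D12Device* device, ID3D12CommandQueue* queue,
                         D3D12_CPU_DESCRIPTOR_HANDLE cpuHandle,
                         D3D12_GPU_DESCRIPTOR_HANDLE gpuHandle)
{
    ResourceUploadBatch upload(device);
    upload.Begin();

    // .spritefont files are generated offline with the toolkit's MakeSpriteFont tool.
    m_font = std::make_unique<SpriteFont>(device, upload,
                                          L"myfont.spritefont",
                                          cpuHandle, gpuHandle);

    RenderTargetState rtState(DXGI_FORMAT_B8G8R8A8_UNORM, DXGI_FORMAT_D32_FLOAT);
    SpriteBatchPipelineStateDescription pd(rtState);
    m_spriteBatch = std::make_unique<SpriteBatch>(device, upload, pd);

    upload.End(queue).wait();   // block until the font texture is uploaded
}

void DrawFps(ID3D12GraphicsCommandList* cmdList, ID3D12DescriptorHeap* heap,
             const D3D12_VIEWPORT& viewport, float fps)
{
    ID3D12DescriptorHeap* heaps[] = { heap };
    cmdList->SetDescriptorHeaps(1, heaps);

    m_spriteBatch->SetViewport(viewport);
    m_spriteBatch->Begin(cmdList);

    wchar_t text[32];
    swprintf_s(text, L"FPS: %.1f", fps);
    m_font->DrawString(m_spriteBatch.get(), text, XMFLOAT2(10.f, 10.f));

    m_spriteBatch->End();
}
```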
For pure DirectX 12, you need to load the font glyph data into a vertex buffer and render it with a vertex shader and pixel shader. You mentioned libraries online; well, this is expert stuff, but fortunately James Stanard at Microsoft released a how-to with their open source MiniEngine project. It handles multiple fonts, antialiasing, and drop shadows in DirectX 12.
Find the project files on GitHub at https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/MiniEngine and check out TextRenderer.h and TextRenderer.cpp.
If you want the maximum feature set with minimum work, you should probably go with DirectWrite on top of a D3D11 interop device, as Chuck said in his answer.
If you want to roll your own high-performance text rendering, you may want to take a look at the text renderer in the MiniEngine example repository on GitHub; it has some interesting ideas.
Unfortunately, the only ways have already been described: interface with DirectWrite or build your own glyph system.
What you are doing is importing a texture file with glyphs on it, cutting out small squares around each character from the glyph texture, and then gluing them together to form a string. This results in faster drawing (in some cases).
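As a hypothetical illustration of that idea (the 16x16 ASCII grid is an assumption, not a standard), the per-character texture-coordinate lookup is just arithmetic:

```cpp
// Hypothetical fixed-grid font atlas: 256 ASCII glyphs in a 16x16 texture grid.
struct GlyphQuad { float u0, v0, u1, v1; };    // texture coords of one glyph

GlyphQuad uvForChar(char c) {
    const int cols = 16, rows = 16;            // assumed atlas layout
    int index = static_cast<unsigned char>(c); // glyphs stored in ASCII order
    int col = index % cols;
    int row = index / cols;
    float du = 1.0f / cols, dv = 1.0f / rows;
    return { col * du, row * dv, (col + 1) * du, (row + 1) * dv };
}
// To "glue together" a string: emit one textured quad per character with
// these UVs, advancing the x position by the glyph width each time.
```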
I think the approach referenced by the others is slightly outdated and destined to fail. Direct3D 11 had the same lack of text drawing support as Direct3D 12 (there is perhaps some misinformation on that point). It was Direct3D 9 that had built-in text drawing support, which worked fine, and it later supported sprite batch drawing, where you could render all of the text in one sprite batch.
It seems backwards to state that you simply "need to know" how, or "are not an expert" if you cannot, to implement such a basic yet tedious system. Such a system is destined to fail for the same reason no one wants to use assembly to code something they could code in C and onward.
The D3D11 and D3D12 math library also suffers from the same failures. To define and convert vectors, you are better off including the old D3DX9 math or custom math structures, because the newer methods are so backwards. "Someone" made it and must like it, but I remember filing a complaint showing how easy it was to do vector operations before versus afterward; it nearly doubles or triples the number of lines needed to perform basic vector operations and conversions, not even counting the number of references and the learning time needed to understand how someone else's library works. It seems to be a big failure presented by mathematicians who were never good at programming.

Learning openGL on the mac: GLUT or native windowing system?

I'm starting to learn OpenGL on the Mac, and being a Cocoa developer already, I find the native windowing system very appealing.
The books I'm reading all mention the use of GLUT. Now, I'm wondering what the majority of people use for developing OpenGL programs, or whether it's just a matter of taste.
Generally (Free)GLUT is not used for developing actual applications. It's used for demoing effects or simple things, which is why so many online materials use it. It takes all the cross-platform stuff and shoves it into a corner, thus focusing the user's attention on OpenGL.
GLUT owns the message processing loop. For simple applications that's fine, but most real programs need to control message processing on their own, and there GLUT fails. Also, GLUT doesn't really mesh well with the rest of the UI; it has no facilities for creating GUI controls (except for context menus).
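For example, this complete (Free)GLUT program shows the issue: glutMainLoop() takes over and never returns, so your code only ever runs inside the callbacks.

```cpp
// Minimal (Free)GLUT program using the legacy fixed-function pipeline.
#ifdef __APPLE__
#include <GLUT/glut.h>   // Apple ships GLUT as a framework
#else
#include <GL/glut.h>
#endif

void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);           // immediate mode, fine for a demo
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(640, 480);
    glutCreateWindow("GLUT demo");
    glutDisplayFunc(display);
    glutMainLoop();                  // owns the message loop; never returns
    return 0;
}
```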
If you're learning OpenGL, then you should be focused on learning OpenGL, not GUI programming. So use what makes sense for the task in question.

Porting from iOS to OS X

I have to port an iOS application to OS X. I have a little experience with iOS (although I didn't write the applications) and I would like some suggestions.
1) The application has some nice animations -- should I use Quartz to do them on OS X?
2) How can I change the view to get the same effect as UINavigationController on OS X? I searched for this component in the Object Library inside Interface Builder but I didn't find it.
This is liable to be quite an involved process, as (for example) none of the UIKit classes (currently) exist on Mac OS X.
As such, it's likely that you'll only be able to meaningfully retain the model level classes and that a substantial amount of the remaining code may need to be re-written.
My suggestion would be to prepare a thorough checklist that contains all the tasks. It is not as simple as just looking for Application Kit equivalents of the UIKit classes.
It also depends on how the app is written. If it follows MVC properly, then complete files of business logic can be carried over without any problems. If not, you will more or less write your new Mac app from scratch.
Yes, Core Animation is always the way to go in cases like this, but you may run into a lot of work because of the possibly different dimensions.
UINavigationController is something that doesn't exist on "normal" desktop interfaces. The closest equivalents are tab menus/tab sheets and you know how different they are.
If I were you, I would focus on bringing the content to the Mac, forget about interface concepts from iOS, and instead design new interface concepts on the Mac that comply with Apple's guidelines.

sample mac Firefox Plugins?

I'm trying to re-write an old image-viewing plugin for the Mac. The old version uses QuickDraw (I said it was old) and resources (really, really old), and so it doesn't work in Firefox 3.6 (which is why I'm re-writing it).
I know some Objective-C, so I figure I'm going to re-write this in that, using new-fangled Mac routines and nibs, etc. However, I don't know how to start. I've got the BasicPlugin example that comes with the Mozilla source, so I know how to create a plugin with entry points, etc. However, I don't know how to create the nib or how to interface Obj-C with the entry points.
Does anyone know of a more advanced Mac sample than BasicPlugin.bundle? (Preferably simple enough that I can just look at it and understand it...)
Thanks.
Sadly, I don't really know of any good "intermediate" example. However, integrating Obj-C isn't that difficult. The following is a short overview of what needs to be done.
You can use Obj-C and C/C++ sources in the same project; it's just advisable to keep them separated to some extent. This can, for example, be done by keeping the source files with the entry points and other NPAPI interfacing as plain C or C++, and forwarding calls into the plugin from there.
Opaque pointers help to keep a clean separation; see e.g. here.
The main changes to your plugin involve switching to different drawing and event models. These have to be negotiated in NPP_New(); here is an example for the drawing model. When using Cocoa, and to support 64-bit environments, you need to use the Cocoa event model.
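A rough sketch of that negotiation in NPP_New(), using the standard NPAPI variables (this assumes the Gecko SDK headers and the usual NPN_* wrappers around the browser function table, as in the BasicPlugin example; error handling is trimmed):

```cpp
// Sketch: negotiate the Core Graphics drawing model and Cocoa event model.
#include "npapi.h"
#include "npfunctions.h"

NPError NPP_New(NPMIMEType type, NPP instance, uint16_t mode,
                int16_t argc, char* argn[], char* argv[], NPSavedData* saved) {
    // Drawing model: bail out if the browser doesn't support Core Graphics.
    NPBool supportsCoreGraphics = FALSE;
    if (NPN_GetValue(instance, NPNVsupportsCoreGraphicsBool,
                     &supportsCoreGraphics) != NPERR_NO_ERROR ||
        !supportsCoreGraphics)
        return NPERR_INCOMPATIBLE_VERSION_ERROR;
    NPN_SetValue(instance, NPPVpluginDrawingModel,
                 (void*)NPDrawingModelCoreGraphics);

    // Event model: Cocoa is required for 64-bit plugins.
    NPBool supportsCocoaEvents = FALSE;
    if (NPN_GetValue(instance, NPNVsupportsCocoaBool,
                     &supportsCocoaEvents) != NPERR_NO_ERROR ||
        !supportsCocoaEvents)
        return NPERR_INCOMPATIBLE_VERSION_ERROR;
    NPN_SetValue(instance, NPPVpluginEventModel,
                 (void*)NPEventModelCocoa);

    return NPERR_NO_ERROR;
}
```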
To draw UI elements, you should be able to create an NSGraphicsContext from the CGContextRef and then draw an NSView into that context. See also the details provided in this post and its follow-ups.

How do I get input from an XBox 360 controller?

I'm writing a program that needs to take input from an XBox 360 controller. The input will then be sent wirelessly to an RC Helicopter that I am building.
So far, I've learned that this can be done using either the XInput library from DirectX, or the Input framework in XNA.
I'm wondering if there are any other options available. The scope of my program is rather small, and having to install a large gaming library like DirectX or XNA seems excessive. Furthermore, I'd like the program to be cross-platform and not Microsoft-specific.
Is there a simple lightweight way I can grab the controller input with something like Python?
Edit to answer some comments:
The copter will have 6 propellers in total, arranged in 3 coaxial pairs. Basically, it will be very similar to this, only it will cost about $1,000 rather than $15,000. It will use an Arduino for onboard processing and ZigBee for wireless control.
The 360 controller was selected because it is well designed. It is very ergonomic and has all of the control inputs needed. For those familiar with helicopter controls: the left joystick will control the collective, the right joystick will control the pitch and roll, and the analog triggers will control the yaw. The analog triggers are a big feature of the 360 controller; PlayStation controllers and most others do not have them.
I have a webpage for the project, but it is still pretty sparse. I do plan on documenting the whole design though, so eventually it will be interesting.
http://tricopter.googlecode.com
On a side note, would it kill Google to have a blog feature for googlecode projects?
I would like the 360 controller input program to run on both Linux and Windows if possible. Eventually, though, I'd like to hook the controller directly to an embedded microcontroller board (such as an Arduino) so that I don't have to go through a computer, but it's not a high priority at the moment.
It is not all that difficult. As the earlier answer mentioned, you can use the SDL libraries to read the status of the Xbox controller, and then you can do whatever you'd like with it.
There is an SDL tutorial: http://sdl.beuc.net/sdl.wiki/Handling_Joysticks which is fairly useful.
Note that an Xbox controller has the following:
- two joysticks: the left joystick is axes 0 & 1, and the right joystick is axes 3 & 4
- two triggers, which act as axes: the left trigger is axis 2, and the right trigger is axis 5
- one hat (the D-pad)
- 11 SDL buttons, two of which are the joystick center presses
The upcoming SDL v1.3 will also support force feedback (a.k.a. haptics).
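As a concrete starting point, here is a minimal sketch that polls those axes with SDL 1.2 (the axis numbers follow the list above, but they can vary by platform and driver):

```cpp
// Minimal SDL 1.2 joystick polling sketch.
// Build with the flags from `sdl-config --cflags --libs`.
#include <SDL/SDL.h>
#include <cstdio>

int main() {
    if (SDL_Init(SDL_INIT_JOYSTICK) != 0) {
        std::fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }
    if (SDL_NumJoysticks() < 1) {
        std::fprintf(stderr, "no joystick attached\n");
        return 1;
    }
    SDL_Joystick* pad = SDL_JoystickOpen(0);       // first attached controller

    for (;;) {
        SDL_JoystickUpdate();                      // refresh the device state
        Sint16 lx = SDL_JoystickGetAxis(pad, 0);   // left stick X
        Sint16 ly = SDL_JoystickGetAxis(pad, 1);   // left stick Y
        Sint16 lt = SDL_JoystickGetAxis(pad, 2);   // left trigger
        std::printf("left stick (%6d, %6d)  left trigger %6d\r", lx, ly, lt);
        std::fflush(stdout);
        SDL_Delay(50);                             // ~20 Hz poll rate
    }
    // Unreachable in this sketch: SDL_JoystickClose(pad); SDL_Quit();
}
```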
I assume, since this thread is several years old, that you have already built something, so this post is primarily to inform future visitors.
PyGame can read joysticks, which is what the X360 controller shows up as on a PC.
Well, if you really don't want to add a dependency on DirectX, you can use the old Windows joystick API -- see Windows Multimedia -> Joystick Reference in the Platform SDK.
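That API is plain C and ships with Windows; here is a minimal sketch (link against winmm.lib; JOYSTICKID1 is the first attached device):

```cpp
// Sketch: read a joystick via the legacy Windows Multimedia API.
// Link against winmm.lib.
#include <windows.h>
#include <mmsystem.h>
#include <cstdio>

int main() {
    JOYINFOEX info = {};
    info.dwSize  = sizeof(info);
    info.dwFlags = JOY_RETURNALL;   // request every axis and button

    if (joyGetPosEx(JOYSTICKID1, &info) != JOYERR_NOERROR) {
        std::fprintf(stderr, "no joystick on the first device id\n");
        return 1;
    }
    std::printf("X=%lu Y=%lu Z=%lu buttons=0x%08lx\n",
                info.dwXpos, info.dwYpos, info.dwZpos, info.dwButtons);
    return 0;
}
```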
The standard free cross-platform game library is Simple DirectMedia Layer (SDL), originally written to port Windows games to Unix (Linux) systems. It's a very basic, lightweight API that tends to support the minimal subset of features on each system, and it has bindings for most major languages. It has very basic joystick and gamepad support (no force feedback, for example), but it might be sufficient for your needs.
Perhaps the Mono.Xna library has added GamePad support, which would provide the cross-platform functionality you're looking for:
http://code.google.com/p/monoxna/
As for the concern about the library being too heavyweight: sure, in this case that may be true ... however, it could open up opportunities to do some nice visualizations in the future.
Disclaimer: I'm not familiar with the status of the Mono.Xna project, so it may not have added this feature yet. But still, 'tis an option :-)