What is the Cocoa/OSX equivalent of wglMakeCurrent or glXMakeCurrent? - objective-c

I understand that Cocoa requires that windows be created/managed on the main thread. So, I'd like to have two or three windows with unique contexts, but I'd really prefer to draw to each of them from separate threads. Plus, a little bit of Google searching seems to indicate that rapidly context-switching on one thread is pretty expensive/slow.

You might want to look at the CGL interface for fast context switching, specifically CGLSetCurrentContext. In a Cocoa application, though, it is usually more consistent to use NSOpenGLContext's makeCurrentContext method.
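A minimal sketch of the Cocoa version, assuming each window's NSOpenGLContext was created on the main thread and is only ever used from its own render thread (renderLoop: and drawScene are made-up names):

    // Detach one render thread per window; drawing happens off the main thread.
    - (void)startRenderThreadForContext:(NSOpenGLContext *)context
    {
        [NSThread detachNewThreadSelector:@selector(renderLoop:)
                                 toTarget:self
                               withObject:context];
    }

    - (void)renderLoop:(NSOpenGLContext *)context
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        // The equivalent of wglMakeCurrent/glXMakeCurrent: bind this
        // thread to the context before issuing any GL calls.
        [context makeCurrentContext];
        // Or, with the lower-level CGL API:
        // CGLSetCurrentContext([context CGLContextObj]);
        while (![[NSThread currentThread] isCancelled]) {
            [self drawScene];       // hypothetical GL drawing code
            [context flushBuffer];  // swap this window's buffers
        }
        [pool drain];
    }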

Related

Synchronization and threading for an agent-based modeling project in Objective-C

First of all, I'm an Objective-C novice; most of my background is in Java. Also, since most Objective-C questions revolve around Cocoa, I should point out that this is on GNUstep.
For a school project, I'm creating a simple agent-based-modeling framework. These frameworks are typically used to model complex systems (like the spreading of diseases). My framework features two main objects: a world and a bug. The world consists of "layers", each of which is associated with a toroidal grid. The world can be populated by bugs, and each bug has an x and a y coordinate, and a layer that it belongs to.
My general idea is to populate the world with bugs, and then fire off threads for each of the bugs and let them do what they want. You can create any kind of bug by subclassing the main Bug class and implementing an act method defined in a protocol. This way you can have various types of custom bugs with custom behavior. Bugs should be able to interact with the world and each other (removing bugs from the world, adding bugs to the world, moving themselves around). As you can see, this is quickly headed to multi-threading hell.
Currently I have a bunch of @synchronized blocks and I'm having a hard time ensuring that the world always remains in a consistent state. This is made especially difficult since the bug needs to communicate with and act on the world and vice versa. I am trying to implement a simple bug called a RandomBug that randomly moves around the world. Even this is proving to be difficult because I'm seeing potential problems where state can become corrupted or invalid.
I started taking a look at NSOperation and NSOperationQueue because it appears that this might make things easier. I have two questions pertaining to this:
Is there an easy way to perform NSOperations repeatedly (i.e., at specific intervals)?
If I set the maximum number of concurrent operations on the queue to 1, do I still need @synchronized blocks? Wouldn't only one thread be interacting with the world at a given time?
Is there a better way to tackle this sort of problem (multiple threads interacting with one shared resource in a repeated manner)?
Should I forgo threading altogether and simply iterate through bugs on the world and activate them in a random manner?
Sounds like you might want something à la a game/simulation loop... So you have an "update world" phase of your run loop for each time step of your simulation (triggered by an NSTimer), where each bug gets a chance to interact with the world; repeat. Unless your bugs are CPU intensive, this might be the way to go.
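A minimal sketch of such a loop, assuming a World object with a bugs accessor and the act method from your protocol:

    // One simulation step per timer tick, all on the main run loop.
    - (void)startSimulation
    {
        [NSTimer scheduledTimerWithTimeInterval:0.1  // 10 steps per second
                                         target:self
                                       selector:@selector(step:)
                                       userInfo:nil
                                        repeats:YES];
    }

    - (void)step:(NSTimer *)timer
    {
        // Single-threaded update phase: only this method ever touches
        // the world, so no @synchronized blocks are needed.
        for (Bug *bug in [world bugs]) {
            [bug act];
        }
    }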
As for using NSOperation--sure, this will potentially let you use all your CPU cores; however, if there's lots of contention for accessing the world state, this may not be a win after all. In that case, you might try making each tile of your world a separate object you can @synchronized against, reducing contention and allowing better use of your CPU(s).
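For instance, a sketch of the per-tile idea (Tile and the accessor are hypothetical names):

    // Lock only the tile being modified, not the whole world, so bugs
    // in different parts of the grid don't contend with each other.
    Tile *tile = [world tileAtX:x y:y];  // hypothetical accessor
    @synchronized (tile) {
        [tile removeBug:bug];
    }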
Using a single NSOperationQueue and setting maxConcurrentOperationCount = 1 is basically the same as implementing a game loop. If you use an NSOperationQueue but don't set maxConcurrentOperationCount, I'd expect NSOperationQueue to run as many operations simultaneously as you have CPU cores.
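A sketch of that serial-queue setup (BugOperation is a hypothetical NSOperation subclass whose main method would call the bug's act):

    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    [queue setMaxConcurrentOperationCount:1];  // operations never overlap

    for (Bug *bug in [world bugs]) {
        // With a count of 1 the queue serializes access to the world,
        // so each bug acts in turn, just like a game loop.
        [queue addOperation:
            [[[BugOperation alloc] initWithBug:bug] autorelease]];
    }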

When Would Anyone Want To Use NSThreads over the GCD?

Are there any cases when anyone would want to use raw NSThreads instead of GCD for concurrency? I love GCD, but I want to know if I will eventually need to use NSThreads for Cocoa/Cocoa Touch.
I use pthreads for control, good performance, and portability. Sometimes you might opt for NSThread for the extra NSObject interfacing it offers.
There are a few lower-level interfaces where you need to coordinate threads with the APIs you use (e.g. realtime I/O or rendering). Sometimes you have flexibility regarding the thread you use, and sometimes it is convenient to use NSThread in this situation so you can easily use CF or NS run loops with these interfaces. So the run loop parameter you set up on your thread is likely of more interest to the API than the thread itself. In these cases, GCD may not necessarily be an alternative.
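For instance, a sketch of dedicating an NSThread to a run loop that such an API can schedule its sources on (the method names are made up):

    - (void)startIOThread
    {
        [NSThread detachNewThreadSelector:@selector(ioThreadMain)
                                 toTarget:self
                               withObject:nil];
    }

    - (void)ioThreadMain
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        // Keep at least one input source attached so the run loop
        // doesn't exit immediately; the real API's sources go here too.
        [[NSRunLoop currentRunLoop] addPort:[NSMachPort port]
                                    forMode:NSDefaultRunLoopMode];
        [[NSRunLoop currentRunLoop] run];
        [pool drain];
    }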
But… most devs won't need to drop to these levels often.
You should essentially almost never need to use the NSThread/pthread APIs directly on OS X or iOS. On other platforms, possibly yes (though GCD is becoming more widely ported to *BSD, Linux and even Windows - see the Wikipedia page for Grand Central Dispatch), but on Apple OS platforms you're almost always going to get a better result by allowing the system to do thread lifecycle management for you. The only case where you might conceivably want to do your own thread management is in highly real-time scenarios where you need to manage thread priorities and have direct control over thread latency by balancing the amount of work each thread is doing by hand.
There may be some special situations where you have to do something strange that cannot be done with GCD. But anything you can do with GCD, you should do with GCD (GCD and threads are not mutually exclusive; if you actually need to use a thread, you need not change any of the GCD stuff you already have).
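For the common background-work case, the GCD version is short (a sketch; doWork and updateUI are hypothetical functions):

    // Do the expensive work on a background queue, then hop back to
    // the main queue for anything that touches the UI.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSData *result = doWork();          // hypothetical expensive work
        dispatch_async(dispatch_get_main_queue(), ^{
            updateUI(result);               // hypothetical UI update
        });
    });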
I'm not sure, however, what that case would be. Maybe if you need to set up a secondary specialized run loop (not sure if it can be done with GCD, but surely it can with a thread). Or there may be some other special case I can't think of at the moment.

Webkit vs Processing for Interactive Applications

I know this sounds a little bizarre, but there is a very simple application I want to write, a sort of unique image viewer, which requires some interactivity with the host system at the user level. Simplicity when developing is a must as this is a very small side project. The project does require some amount of graphical work and quite a bit of mouse-based interactivity (as well as some keyboard shortcuts), but quite frankly, I don't want to dig my hands into OpenGL for something this small. I looked at the available options, and I think I've narrowed it down to two main choices: Webkit (through either QtWebkit or WebkitGtk), and the language Processing.
Since I haven't actually used Processing but do have some HTML5 canvas and Javascript experience, I am somewhat tempted to use a Webkit-based solution. There are, however, several concerns I have.
How is Webkit's support for canvas, specifically for more graphically intensive processes?
I've heard that bridging is handled better in QtWebkit than WebkitGtk. Is this still true?
How much can bridging actually do? Can a Webkit-based application do everything that an application interacting with files on the system needs to do?
Looking at Processing, there are similarly, a couple things I'm wondering.
Processing is known for its graphical capabilities, but how capable is it for writing a general everyday desktop application?
There are many sources that link Processing to Java, both in lineage as well as in distributing applications over the web (i.e., JApplets). Is the "Application Export" similarly closely integrated with Java?
As for directly comparing the two, the main concern I have is the overhead of each. I want the application to start up as snappily as possible, and I know that Java has a bit of start-up overhead because it first has to start up the JVM. How do Processing and QtWebkit/WebkitGtk compare for start-up?
Note that I am targeting the Linux platform only.
Thanks!
It's difficult to give a specific answer, because you're actually asking a few different kinds of questions - and some of them could be more precise.
Processing is a subset or child of Java - it's really "just" a Java framework with a free IDE that hides the messy setup work of building an applet, so that a user can dive in and write something quickly without getting bogged down in widgets and UI, etc. So Processing can exist by itself, and the end user needs to know nothing of Java (except syntax - Processing is Java, so the user must learn Java syntax).
But a programmer who already knows Java can exploit the fun, quick nature of Processing and then leverage their normal Java experience for whatever else is needed - everything of Java is in Processing, just maybe slightly hidden (but only at first). It's also possible to import the Processing .jars into an existing Java program and use them there. See http://processing.org/learning/eclipse/ for more information.
"how capable is it for writing a general everyday desktop application?" - Not particularly on it's own (it's not made to be), but some things are possible and easy (i.e. file saving & loading, non-standard gui, etc.), and in some ways it's similar to old school actionscript or lingo. There is a library called controlP5 that makes gui stuff a bit easier.
Webkit is another kettle of fish, especially if you aren't making a web-based thing (it sounds like you're thinking of using the Webkit libraries as part of a larger program). I'll admit I don't have the dev expertise with those specific libraries to give you the answer you really want, but I'm pretty certain that unless you have programming experience beyond HTML5/Javascript, you'll probably get going much faster with Processing.
Good luck with whichever path you choose!

If I write a framework that gets information from the Internet, should I make a delegate or use blocks?

Say I'm writing a publicly available framework for the Vimeo API. This framework needs to get information from the Internet. Because this can take some time, I need to use threading to prevent the UI from hanging. Foundation uses delegates for this, like NSURLConnectionDelegate. However, Game Kit uses blocks as callback functions.
What is the recommended way of doing this? I know blocks aren't supported in standard GCC versions, but they require much less code from whoever uses my framework.
Delegates, on the other hand, are real methods, and when protocols are used, I can be sure the methods are implemented.
Thanks.
I really like blocks but I would be tempted to use a delegate protocol in this case. Network connections can fail in a large number of ways and their delegates tend to keep a fair amount of stateful information about them. I find that that maps well to a delegate protocol with a number of optional methods.
If you're providing a very simplified API for accessing network data then a success/failure pair of blocks might be sufficient. Personally I find that I have to deal with a lot of different cases which use many delegate methods on a stateful delegate object. For example: should I retry failed connections immediately or later, does the relative priority of failed connections change, can I make use of a partial response, should I switch a connection to wifi when it becomes available, do I offer a user a chance to authenticate if prompted, do I display incremental progress in a connection? You could handle all of those with blocks but I find that I would rather have a delegate class managing the connection.
Without knowing more about what data you intend for your interface to fetch, I don't know that I can be more specific, but I would be tempted to allow users of the API to manage their own connection state if possible.
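For comparison, here is roughly what the two shapes of such an API might look like (VimeoClient and all method names are made up):

    // Block-based: a simple success/failure pair per request.
    @interface VimeoClient : NSObject
    - (void)fetchVideoWithID:(NSString *)videoID
                     success:(void (^)(NSDictionary *info))success
                     failure:(void (^)(NSError *error))failure;
    @end

    // Delegate-based: one stateful object handles many optional
    // callbacks over the life of the connection.
    @protocol VimeoClientDelegate <NSObject>
    @optional
    - (void)client:(VimeoClient *)client didFetchVideoInfo:(NSDictionary *)info;
    - (void)client:(VimeoClient *)client didFailWithError:(NSError *)error;
    - (void)client:(VimeoClient *)client didReceivePartialData:(NSData *)data;
    @end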
It all depends on who your target audience is. If you want people writing apps for OS X 10.5 or iOS 3.x, then you need to use delegates. Otherwise, go ahead and use blocks.
It's quite a subjective question since both are valid options, but Apple seems to be shifting further towards using blocks for "throw-away" methods.
The main question would be your target audience.
Blocks are limited to Snow Leopard (and iOS 4, if I remember correctly).
If you want your framework to be usable by previous operating systems, you can't use blocks.
If you're happy with those OS limitations, then go with blocks and NSOperationQueue; it's really good and simple to use.
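For example, a sketch using addOperationWithBlock: (10.6 and up; the fetch and handler functions are hypothetical):

    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    [queue addOperationWithBlock:^{
        NSData *data = fetchFromVimeo();   // hypothetical network call
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            handleResult(data);            // back on the main thread
        }];
    }];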
Better yet, you could offer both options.
I would recommend using blocks, and if you do it right, you can support 10.5 at the same time.
Check out the open-source PLBlocks runtime, it allows you to seamlessly use blocks on both 10.5 and 10.6.

What would a multithreaded UI API look like, and what advantages would it provide?

Or, equivalently, how would you design such an API. Expected/example usage would be illustrative as well.
My curiosity comes directly from the comments (and subsequent editing on my part) of this answer. Similar questions/discussions in the past provide a bit of inspiration for actually asking it.
Executive summary:
I don't feel a multithreaded UI API is possible in a meaningful way, nor particularly desirable. This view seems somewhat contentious, and being a (relatively) humble man, I'd like to see the error of my ways, if they actually are erroneous.
Multithreaded is defined pretty loosely in this context; treat it however makes sense to you.
Since this is pretty free-form, I'll be accepting whichever answer is, in my opinion, the most coherent and well supported, regardless of whether I agree with it.
OK, perhaps more clarification is necessary.
Pretty much every serious application has more than one thread. At the very least, they'll spin up an additional thread to do some background task in response to a UI event.
I do not consider this a multithreaded UI.
All the UI work is still being done on a single thread. I'd say, at a basic level, a multithreaded UI API would have to do away with (in some way) thread-based ownership of UI objects or dispatching of events to a single thread.
Remember, this is about the UI API itself, not the applications that make use of it.
I don't see how a multithreaded UI API would differ much from existing ones. The major differences would be:
(If using a non-GC'd language like C++) Object lifetimes are tracked by reference-counted pointer wrappers such as std::tr1::shared_ptr. This ensures you don't race with a thread trying to delete an object.
All methods are reentrant, thread-safe, and guaranteed not to block on event callbacks (therefore, event callbacks shall not be invoked while holding locks)
A total order on locks would need to be specified; for example, the implementation of a method on a control would only be allowed to invoke methods on child controls, except by scheduling an asynchronous callback to run later or on another thread.
With those changes, you can apply this to almost any GUI framework you like. There's not really a need for massive changes; however, the additional locking overhead will slow it down, and the restrictions on lock ordering will make designing custom controls somewhat more complex.
Since this is usually a lot more trouble than it's worth, most GUI frameworks strike a middle ground: UI objects can generally only be manipulated from the UI thread (some systems, such as Win32, allow there to be multiple UI threads with separate UI objects), and to communicate between threads there is a threadsafe method to schedule a callback to be invoked on the UI thread.
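In Cocoa, for example, that threadsafe scheduling method is a one-liner from any background thread (label here stands in for some UI control):

    // GCD style: marshal the update onto the main thread.
    dispatch_async(dispatch_get_main_queue(), ^{
        [label setStringValue:@"done"];  // UI objects touched only on the UI thread
    });

    // Pre-GCD equivalent:
    [label performSelectorOnMainThread:@selector(setStringValue:)
                            withObject:@"done"
                         waitUntilDone:NO];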
Most GUIs are multithreaded, at least in the sense that the GUI runs in a separate thread from the rest of the application, often with one more thread for an event handler. This has the obvious benefit that complicated backend work and synchronous IO won't bring the GUI to a screeching halt, and vice versa.
Adding more threads tends to be a proposition of diminishing returns, unless you're handling things like multi-touch or multi-user. However, most multi-touch input seems to be handled threaded at the driver level, so there's usually no need for it at the GUI level. For the most part you only need 1:1 thread to user ratio plus some constant number depending on what exactly you're doing.
For example, pre-caching threads are popular. Such a thread can burn any extra CPU cycles doing predictive caching, to make things run faster in general. Animation threads: if you have intensive animations but want to maintain responsiveness, you can put the animation in a lower-priority thread than the rest of the UI. Event handler threads are also popular, as mentioned above, but are usually provided transparently to users of the framework.
So there are definitely uses for threads, but there's no point in spawning large numbers of threads for a GUI. However, if you were writing your own GUI framework you would definitely have to implement it using a threaded model.
There is nothing wrong with, nor particularly special about, multithreaded UI apps. All you need is some sort of synchronization between threads and a way to update the UI across thread boundaries (BeginInvoke in C#, SendMessage in a plain Win32 app, etc.).
As for uses, pretty much everything you see is multithreaded, from Internet Browsers (they have background threads downloading files while a main thread is taking care of displaying the parts downloaded - again, making use of heavy synchronization) to Office apps (the save function in Microsoft Office comes to mind) to games (good luck finding a single threaded big name game). In fact the C# WinForms UI spawns a new thread for the UI out of the box!
What specifically do you think is not desirable or hard to implement about it?
I don't see any benefit, really. Let's say the average app has 3 primary goals:
Rendering
User input / event handlers
Number crunching / Network / Disk / Etc
Dividing these into one thread each (several for #3) would be pretty logical, and I would call #1 and #2 the UI.
You could say that #1 is already multithreaded and divided across tons of shader processors on the GPU. I don't know if adding more threads on the CPU would really help (at least if you are using standard shaders; IIRC some software ray tracers and other CGI renderers use several threads, but I would put such applications under #3).
The user input methods, #2, should only be really, really short, and should invoke stuff from #3 if more time is needed, so adding more threads here wouldn't be of any use.