This image comes from Practical usage of setjmp and longjmp in C.
From my understanding, a coroutine makes two processes look, to a human, as if they are running in parallel, while the machine is actually executing a single sequential process.
But with setjmp & longjmp I find the code very hard to read. If I needed to write the same thing, for example for processes A & B, I would give the processes several states to split them into different pieces (states), and run them sequentially like this:
/* Process A */
switch (state) {
case A1:
    do_A1();
    if (a1_done)
        state = B1;   /* hand control to B */
    break;
/* ... */
}

/* Process B */
switch (state) {
case B1:
    do_B1();
    if (b1_done)
        state = A2;   /* hand control back to A */
    break;
/* ... */
}
I need a reason to justify using setjmp & longjmp and coroutines in C/C++. What is the advantage?
setjmp()/longjmp() is rarely used by programmers nowadays. Instead, you should use the more powerful boost::coroutine, or my QtNetworkNg. Coroutines are widely used, mostly in network programming.
I am the author of QtNetworkNg, which provides a stackful coroutine implementation. In the documentation of QtNetworkNg, I wrote:
Traditional network programming uses threads. send()/recv() blocks, and the operating system switches the current thread to another ready thread until data arrives. This is very straightforward and easy for network programming. But threads use heavy resources; thousands of connections may consume a lot of memory. Worse, threads cause data races, data corruption, even crashes.
The coroutine-based paradigm is the present and future of network programming. A coroutine is a lightweight thread which has its own stack, managed not by the operating system but by QtNetworkNg. As in the thread-based paradigm, send()/recv() blocks, but control switches to another coroutine in the same thread until data arrives. Many coroutines can be created at low cost. Because there is only one thread, no locks or other synchronization are needed. The API is as straightforward as in the thread-based paradigm, but avoids the complexities of using threads.
Besides that, coroutines handle state machines and timelines more cleanly. Some online game servers use coroutines to handle the interaction between thousands of peers.
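For reference, here is a minimal sketch of the one portable, well-defined use of setjmp/longjmp: a non-local exit (exception-style error handling). A true coroutine yield would require jumping back into a function that has already been left, which is undefined behavior in standard C; that is exactly why stackful coroutine libraries such as boost::coroutine and QtNetworkNg allocate and manage their own stacks instead. The parse() function and its error condition below are made up for illustration.

#include <setjmp.h>
#include <stdio.h>

static jmp_buf on_error;

/* Deep in a call chain, abort back to the recovery point in main()
 * without unwinding through every intermediate return value. */
static void parse(const char *input) {
    if (input[0] == '\0')
        longjmp(on_error, 1);   /* non-local exit */
    printf("parsed: %s\n", input);
}

int main(void) {
    if (setjmp(on_error) == 0) {
        parse("hello");
        parse("");              /* triggers the jump */
        printf("never reached\n");
    } else {
        printf("recovered from parse error\n");
    }
    return 0;
}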
In Objective-C, there are (at least) two approaches to synchronizing concurrent accesses to a shared resource: the older lock-based approach, and the newer approach with Grand Central Dispatch (GCD), which for this purpose uses dispatch_sync to dispatch all accesses to a shared queue.
In the Concurrency Programming Guide, section Eliminating Lock-Based Code, it is stated that "the use of locks comes at a cost. Even in the non-contested case, there is always a performance penalty associated with taking a lock."
Is this a valid argument for the GCD approach?
I think it's not for the following reason:
A queue must keep a list of queued tasks to do. One or more threads can add tasks to this list via dispatch_sync, and one or more worker threads need to remove elements from this list in order to execute the tasks. This must be guarded by a lock. So a lock needs to be taken there as well.
Please tell me if there is any other way how queues can do this without a lock that I'm not aware of.
UPDATE: Further on in the guide, it is implied that there is something I'm not aware of: "queueing a task does not require trapping into the kernel to acquire a mutex."
How does that work?
On current releases of OS X and iOS, both pthread mutexes and GCD queues (as well as GCD semaphores) are implemented purely in userspace without needing to trap into the kernel, except when there is contention (i.e. a thread blocking in the kernel waiting for an "unlock").
The conceptual advantage of GCD queues over locks is more that they can be used asynchronously; the asynchronous execution of a "locked" critical section on a queue does not involve any waiting.
If you are just replacing locks with calls to dispatch_sync, you are not really taking full advantage of the features of GCD (though the implementation of dispatch_sync happens to be slightly more efficient, mainly because pthread mutexes have to satisfy additional constraints).
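As a rough sketch of that difference, consider a shared counter protected by a private serial queue (the names counter_q and "com.example.counter" are illustrative, not from the original answer). The synchronous variant behaves like a lock; the asynchronous variant is where a queue goes beyond what a mutex offers:

#include <dispatch/dispatch.h>

static dispatch_queue_t counter_q;
static long counter;

void counter_init(void) {
    counter_q = dispatch_queue_create("com.example.counter", DISPATCH_QUEUE_SERIAL);
}

/* Lock-like usage: the caller waits for the critical section. */
long counter_increment_sync(void) {
    __block long value;
    dispatch_sync(counter_q, ^{
        value = ++counter;
    });
    return value;
}

/* Asynchronous usage: the caller does not wait; the "locked"
 * section runs later on the queue. */
void counter_increment_async(void) {
    dispatch_async(counter_q, ^{
        counter++;
    });
}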
There exist lock-free queueing implementations. One reason they are often pooh-poohed is that they are platform-specific, since they rely on the processor's atomic operations (like increment, decrement, compare-and-swap, etc.), and the exact implementation of those varies from one CPU architecture to another. Since Apple is both the OS and hardware vendor, this criticism is far less of an issue on Apple platforms.
The implication from the documentation is that GCD queue management uses one of these lock-free queues to achieve thread safety without trapping into the kernel.
For more information about one possible macOS/iOS lock-free queue implementation, read about these functions:
void OSAtomicEnqueue( OSQueueHead *__list, void *__new, size_t __offset);
void* OSAtomicDequeue( OSQueueHead *__list, size_t __offset);
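A minimal usage sketch of these two primitives (my example, not from the answer; note that on current macOS they are deprecated in favor of <stdatomic.h>-based code, but they still illustrate the idea). The offset argument tells the primitives where the link pointer lives inside your node type:

#include <libkern/OSAtomic.h>
#include <stddef.h>
#include <stdio.h>

typedef struct node {
    struct node *next;   /* link field used by the queue primitives */
    int value;
} node_t;

int main(void) {
    OSQueueHead head = OS_ATOMIC_QUEUE_INIT;
    node_t a = { .next = NULL, .value = 1 };
    node_t b = { .next = NULL, .value = 2 };

    OSAtomicEnqueue(&head, &a, offsetof(node_t, next));
    OSAtomicEnqueue(&head, &b, offsetof(node_t, next));

    /* The queue is LIFO: b comes out before a. */
    node_t *n;
    while ((n = OSAtomicDequeue(&head, offsetof(node_t, next))) != NULL)
        printf("dequeued %d\n", n->value);
    return 0;
}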
It's worth mentioning here that GCD has been (mostly) open-sourced, so if you're truly curious about the implementation of its queues, go forth and use the source, Luke.
A colleague recently suggested that I use pthreads instead of GCD because they're "way faster." I don't disagree that they're faster, but what's the risk with pthreads?
My feeling is that they will ultimately not be anywhere near as idiot-proof as GCD (and my team of one is 50% idiots). Are pthreads hard to get right?
GCD and pthreads are both ways of doing work asynchronously, but they are significantly different. Most descriptions of GCD describe it in terms of threads and thread pooling, but as DrPizza puts it:
to concentrate on [threads and thread pools] is to miss the point. GCD’s value lies not in thread pooling, but in queuing.
Grand Central Dispatch for Win32: why I want it
GCD has some nice benefits over APIs like pthreads.
GCD does more to encourage and support "islands of serialization in a sea of parallelism." GCD makes it easy to avoid many of the locks, mutexes, and condition variables that are the normal way of communicating between threads. This is because you decompose your program into tasks, and GCD handles getting the task input and output to the appropriate thread behind the scenes. So programming with GCD allows you to pretty much write serially and not worry too much about the things people often worry about in threaded code. That makes the code simpler and less bug-prone.
GCD can do the scaling for you, so the program uses as much parallelism as the hardware and the dependencies between your tasks allow. Designing the program to be scalable is generally the hard part, but you still need something to actually take advantage of that work and run as much as possible in parallel. Work-stealing schedulers like GCD do that part.
GCD is composable. If you explicitly spawn threads for things you want to do asynchronously or in parallel you can run into a problem when libraries you use do the same thing. Say you decide you can run eight threads simultaneously because that's how many threads will be effective for your program given the machine it runs on. And then say a library you use on each thread does the same thing. Now you could have up to 64 threads running at once, which is more than you know is effective for your program.
Thread pooling solves this but everyone needs to use the same thread pool. GCD uses thread pooling internally and provides the same pool to everyone.
GCD provides a bunch of 'sources' and makes it easy to write an event driven program that depends on or takes input from the sources. For example you can very easily have a queue set up to launch a task every time data is available to read on a network socket, or when a timer fires, or whatever.
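For instance, here is a minimal sketch of a GCD timer source (a made-up example; a DISPATCH_SOURCE_TYPE_READ source on a socket descriptor works the same way, with the handler firing whenever data is readable):

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_source_t timer =
        dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, q);

    /* Fire once per second, with 100 ms of allowed leeway. */
    dispatch_source_set_timer(timer,
                              dispatch_time(DISPATCH_TIME_NOW, 0),
                              NSEC_PER_SEC,
                              NSEC_PER_SEC / 10);
    dispatch_source_set_event_handler(timer, ^{
        printf("tick\n");
    });
    dispatch_resume(timer);   /* sources start out suspended */

    dispatch_main();          /* park the main thread; service queues */
}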
I don't think they're hard to get right, but having worked with many different approaches over the years (pthreads, GCD, NSThread, NSOperationQueue, etc.), I have no evidence to support an assertion like "pthreads are way faster." Even if they were faster (and I would expect the difference to be marginal at best), I always say: use the highest-level abstraction that gets the job done. Also, avoid premature optimization.
Anecdotally speaking, GCD is pretty damn fast. As I see it, portability is the primary advantage of pthreads over GCD. If this is OS X/iOS-exclusive code, I would see no advantage whatsoever to using pthreads, absent empirical evidence to the contrary.
Ignore the other well-thought-out technical reasons, because they aren't relevant. You are not writing software for a benchmark, are you? At some point, a user is going to sit in front of your device and try to use it. And do you know what happens if you use pthreads instead of GCD? Your software doesn't scale well in the presence of other software multitasking at the same time, because it fights for the CPU as if it were the only software running. Which is crazy. Nobody runs single-task OSes any more. Even iOS, nominally single-task, runs plenty of work in the background.
Instead, if all the programs you run use GCD, the OS can scale the number of concurrent tasks running on their queues to better match the number of actual processors, reducing task-switching overhead.
If your program doesn't require pseudo-real-time low latency, and thus a dedicated thread to process data as soon as it is available (maybe that is your colleague's definition of "way faster"), chances are GCD will be better for the user, because it will make better use of the resources available on their device. Even if GCD's API were horrible or slow, it would be worthwhile to use it over other solutions which don't scale across different processes.
NSThread is probably implemented on top of the pthreads library; the point is that the lower the level of abstraction, the more repetitive boilerplate you have to write.
The pthreads library isn't so hard to learn; my professor at university taught it, and even the slowest learners were able to use it, perhaps copy-pasting code out of laziness, but getting the job done.
So I definitely suggest implementing a pthread wrapper class; it's easy to do.
That way you eliminate the repetitive parts. For example, you may find yourself writing this thousands of times:
pthread_mutex_init( mutex_ptr, NULL);
So (if that's your case, but it's just an example) you may always be passing NULL, and the same holds for other functions.
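In plain C terms, the "wrapper class" might be nothing more than a struct and a handful of inline functions that bake in the arguments you always pass the same way (a sketch under that assumption; the lock_t name is mine):

#include <pthread.h>

typedef struct {
    pthread_mutex_t mutex;
} lock_t;

/* Hide the NULL attribute argument you would otherwise repeat. */
static inline void lock_init(lock_t *l)    { pthread_mutex_init(&l->mutex, NULL); }
static inline void lock_acquire(lock_t *l) { pthread_mutex_lock(&l->mutex); }
static inline void lock_release(lock_t *l) { pthread_mutex_unlock(&l->mutex); }
static inline void lock_destroy(lock_t *l) { pthread_mutex_destroy(&l->mutex); }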
Even once you've implemented the class, it's not a given that it will be faster than GCD.
GCD performs some optimizations; for example, two blocks may be run on the same thread.
So I suggest using your own class only if it proves faster than GCD; test it with a time profiler.
I searched a variety of sources but don't really understand the difference between using NSThreads and GCD. I'm completely new to the OS X platform so I might be completely misinterpreting this.
From what I read online, GCD seems to do the exact same thing as basic threads (POSIX, NSThreads, etc.) while adding much more technical jargon ("blocks"). It seems to just overcomplicate the basic thread-creation system (create thread, run function).
What exactly is GCD and why would it ever be preferred over traditional threading? When should traditional threads be used rather than GCD? And finally is there a reason for GCD's strange syntax? ("blocks" instead of simply calling functions).
I am on Mac OS X 10.6.8 Snow Leopard and I am not programming for iOS - I am programming for Macs. I am using Xcode 3.6.8 in Cocoa, creating a GUI application.
Advantages of Dispatch
The advantages of dispatch are mostly outlined here:
Migrating Away from Threads
The idea is that you eliminate work on your part, since the paradigm fits MOST code more easily.
It reduces the memory penalty your application pays for storing thread stacks in the application’s memory space.
It eliminates the code needed to create and configure your threads.
It eliminates the code needed to manage and schedule work on threads.
It simplifies the code you have to write.
Empirically, using GCD-style locking instead of @synchronized is about 80% faster or more, though micro-benchmarks may be deceiving. Read more here, though I think the advice to go async with writes does not apply in many cases, and it's slower (but it's asynchronous).
Advantages of Threads
Why would you continue to use Threads? From the same document:
It is important to remember that queues are not a panacea for replacing threads. The asynchronous programming model offered by queues is appropriate in situations where latency is not an issue. Even though queues offer ways to configure the execution priority of tasks in the queue, higher execution priorities do not guarantee the execution of tasks at specific times. Therefore, threads are still a more appropriate choice in cases where you need minimal latency, such as in audio and video playback.
Another place where I haven't personally found an ideal solution using queues is daemon processes that need to be constantly rescheduled. Not that you cannot reschedule them, but looping within an NSThread method is simpler (I think). Edit: Now I'm convinced that even in this context, GCD-style locking would be faster, and you could also do a loop within a GCD-dispatched operation, as sketched below.
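For what it's worth, a GCD-dispatched task that constantly reschedules itself can be sketched with dispatch_after (my example, assuming a hypothetical poll_once job that should run every two seconds):

#include <dispatch/dispatch.h>
#include <stdio.h>

static void schedule_poll(dispatch_queue_t q);

static void poll_once(dispatch_queue_t q) {
    printf("polling...\n");
    schedule_poll(q);   /* re-arm for the next run */
}

static void schedule_poll(dispatch_queue_t q) {
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, 2 * NSEC_PER_SEC),
                   q, ^{ poll_once(q); });
}

int main(void) {
    dispatch_queue_t q =
        dispatch_queue_create("com.example.poller", DISPATCH_QUEUE_SERIAL);
    schedule_poll(q);
    dispatch_main();    /* never returns; services the queues */
}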
Blocks in Objective-C?
Blocks are really horrible in Objective-C due to the awful syntax (though Xcode can sometimes help with autocompletion, at least). If you look at blocks in Ruby (or pretty much any other language) you'll see how simple and elegant they are for dispatching operations. I'd say that you'll get used to the Objective-C syntax, but really, I think you'll just get used to copying from your examples a lot :)
You might find my examples from here to be helpful, or just distracting. Not sure.
While the answers so far are about threads vs. GCD within the domain of a single application and the differences that has for programming, the reason you should always prefer GCD is multitasking environments (since you are on Mac OS X and not iOS). Threads are OK if your application runs alone on the machine. Say you have a video editing program and want to apply some effect to the video. The render is going to take 10 minutes on a machine with eight cores. Fine.
Now, while the video app is churning in the background, you open an image editing program and play with some high-resolution image, decide to apply some special image filter, and your image application, being clever, detects that you have eight cores and starts eight threads to process the image. Nice, isn't it? Except that's terrible for performance. The image editing app doesn't know anything about the video app (and vice versa), and therefore both will request their respective optimum number of threads. And there will be pain and blood while the cores try to switch from one thread to another, because to avoid starvation the CPU will eventually let all threads run, even though in this situation it would be more optimal to run only four threads for the video app and four for the image app.
For a more detailed reference, take a look at http://deusty.blogspot.com/2010/11/introducing-gcd-based-cocoahttpserver.html where you can see a benchmark of an HTTP server using GCD versus threads, and see how it scales. Once you understand the problem threads have on multicore machines in multi-app environments, you will always want to use GCD, simply because threads are not always optimal, while GCD potentially can be, since the OS can scale thread usage per app depending on load.
Please remember: we won't have more GHz in our machines any time soon. From now on we will only have more cores, so it's your duty to use the best tool for this environment, and that is GCD.
Blocks allow for passing a block of code to execute. Once you get past the "strange syntax", they are quite powerful.
GCD also uses queues; used properly, they can help with lock-free concurrency if the code executing in the separate queues is isolated. It's a simpler way to offer background execution and concurrency while minimizing the chance of deadlocks (if used right).
The "strange syntax" is because they chose the caret (^) because it was one of the few symbols that wasn't overloaded as an operator in C++
See:
https://developer.apple.com/library/ios/#documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html
When it comes to adding concurrency to an application, dispatch queues provide several advantages over threads. The most direct advantage is the simplicity of the work-queue programming model. With threads, you have to write code both for the work you want to perform and for the creation and management of the threads themselves. Dispatch queues let you focus on the work you actually want to perform without having to worry about the thread creation and management. Instead, the system handles all of the thread creation and management for you. The advantage is that the system is able to manage threads much more efficiently than any single application ever could. The system can scale the number of threads dynamically based on the available resources and current system conditions. In addition, the system is usually able to start running your task more quickly than you could if you created the thread yourself.
Although you might think rewriting your code for dispatch queues would be difficult, it is often easier to write code for dispatch queues than it is to write code for threads. The key to writing your code is to design tasks that are self-contained and able to run asynchronously. (This is actually true for both threads and dispatch queues.)
...
Although you would be right to point out that two tasks running in a serial queue do not run concurrently, you have to remember that if two threads take a lock at the same time, any concurrency offered by the threads is lost or significantly reduced. More importantly, the threaded model requires the creation of two threads, which take up both kernel and user-space memory. Dispatch queues do not pay the same memory penalty for their threads, and the threads they do use are kept busy and not blocked.
GCD (Grand Central Dispatch): GCD provides and manages FIFO queues to which your application can submit tasks in the form of block objects. Work submitted to dispatch queues is executed on a pool of threads fully managed by the system. No guarantee is made as to the thread on which a task executes. Why GCD over threads? GCD takes care of decisions you would otherwise have to make yourself:
How much work your CPU cores are doing.
How many CPU cores you have.
How many threads should be spawned.
If it needs to, GCD can go down into the kernel and communicate about resources, allowing better scheduling.
It puts less load on the kernel and synchronizes better with the OS.
GCD reuses existing threads from its thread pool instead of creating and then destroying them.
It takes best advantage of the system's hardware resources, while allowing the operating system to balance the load of all the programs currently running, along with considerations like heat and battery life.
I have shared my experience with threads, the operating system, and GCD at http://iosdose.com
Is it safe? For instance, if I create a bunch of different GCD queues that each compress (tar cvzf) some files, am I doing something wrong? Will the hard drive be destroyed?
Or does the system properly take care of such things?
Dietrich's answer is correct save for one detail (that is completely non-obvious).
If you were to spin off, say, 100 asynchronous tar executions via GCD, you'd quickly find that you have 100 threads running in your application (which would also be dead slow due to gross abuse of the I/O subsystem).
In a fully asynchronous concurrent system with queues, there is no way to know if a particular unit of work is blocked because it is waiting for a system resource or waiting for some other enqueued unit of work. Therefore, anytime anything blocks, you pretty much have to spin up another thread and consume another unit of work or risk locking up the application.
In such a case, the "obvious" solution is to wait a bit when a unit of work blocks before spinning up another thread to de-queue and process another unit of work with the hope that the first unit of work "unblocks" and continues processing.
Doing so, though, would mean that any asynchronous concurrent system with interaction between units of work -- a common case -- would be so slow as to be useless.
Far more effective is to limit the number of units of work that are enqueued in the global asynchronous queues at any one time. A GCD semaphore makes this quite easy: you have a single serial queue into which all units of work are enqueued; every time you dequeue a unit of work, you increment the semaphore; every time a unit of work is completed, you decrement the semaphore; and as long as the semaphore is below some maximum value (say, 4), you enqueue a new unit of work.
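A minimal sketch of that throttling pattern using a GCD semaphore (my illustration of the idea described above, with the limit and the work both made up; a variant could feed a serial queue instead of the global one):

#include <dispatch/dispatch.h>
#include <stdio.h>

#define MAX_IN_FLIGHT 4   /* at most 4 units of work at once */

int main(void) {
    dispatch_semaphore_t slots = dispatch_semaphore_create(MAX_IN_FLIGHT);
    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    for (int i = 0; i < 100; i++) {
        /* Block until one of the slots frees up. */
        dispatch_semaphore_wait(slots, DISPATCH_TIME_FOREVER);
        dispatch_group_async(group, q, ^{
            printf("working on item %d\n", i);
            /* ... the actual (e.g. I/O-bound) work would go here ... */
            dispatch_semaphore_signal(slots);   /* free the slot */
        });
    }
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    return 0;
}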
If you take something that is normally I/O-limited, such as tar, and run a bunch of copies in GCD:
It will run more slowly, because you are throwing more CPU at an I/O-bound task, meaning the I/O will be more scattered and there will be more of it in flight at the same time.
No more than N tasks will run at a time (which is the point of GCD), so "a billion queue entries" and "ten queue entries" give you the same thing if you have fewer than 10 threads.
Your hard drive will be fine.
Even though this question was asked back in May, it's still worth noting that GCD has now provided I/O primitives with the release of 10.7 (OS X Lion). See the man pages for dispatch_read and dispatch_io_create for examples on how to do efficient I/O with the new APIs. They are smart enough to properly schedule I/O against a single disk (or multiple disks) with knowledge of how much concurrency is, or is not, possible in the actual I/O requests.
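For example, a one-shot asynchronous read with dispatch_read might look like this (a sketch; /etc/hosts is just a stand-in for any readable file, and the semaphore only keeps the demo process alive until the handler runs):

#include <dispatch/dispatch.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("/etc/hosts", O_RDONLY);
    if (fd < 0) return 1;

    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_semaphore_t done = dispatch_semaphore_create(0);

    /* GCD schedules the actual I/O and calls back when it completes. */
    dispatch_read(fd, SIZE_MAX, q, ^(dispatch_data_t data, int error) {
        if (error == 0)
            printf("read %zu bytes\n", dispatch_data_get_size(data));
        close(fd);
        dispatch_semaphore_signal(done);
    });

    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    return 0;
}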
What is the difference between a thread/process/task?
Process:
A process is an instance of a computer program that is being executed.
It contains the program code and its current activity.
Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently.
Process-based multitasking enables you to run the Java compiler at the same time that you are using a text editor.
When employing multiple processes with a single CPU, context switching between the various memory contexts is used.
Each process has a complete set of its own variables.
Thread:
A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, and a set of registers.
A thread of execution results from a fork of a computer program into two or more concurrently running tasks.
The implementation of threads and processes differs from one operating system to another, but in most cases, a thread is contained inside a process. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources.
An example of threads in the same process is automatic spell-checking and automatic saving of a file while writing.
Threads are basically processes that run in the same memory context.
Threads may share the same data during execution.
[Diagram: a single-threaded vs. a multi-threaded process]
Task:
A task is a set of program instructions that are loaded in memory.
Short answer:
A thread is a scheduling concept; it's what the CPU actually "runs" (you don't run a process). A process needs at least one thread for the CPU/OS to execute.
A process is a data-organization concept. Resources (e.g., memory for holding state, the allowed address space, etc.) are allocated to a process.
To explain in simpler terms:
Process: a process is a set of instructions (code) that operates on related data, and it has its own state: sleeping, running, stopped, etc. When a program gets loaded into memory, it becomes a process. A process has at least one thread of execution; with exactly one, it is called a single-threaded program.
Thread: a thread is a portion of a process; more than one thread can exist within a process. A thread has its own execution context (program counter and stack) while sharing the process's memory with its sibling threads. The process has to handle synchronization of its threads to achieve the desired behaviour.
Task: "task" is not a precisely defined concept. When program instructions are loaded into memory, people call the result either a process or a task; the two terms are used as synonyms nowadays.
A process invokes or initiates a program; it is an instance of a program, and multiple instances of the same application can be running. A thread is the smallest unit of execution, and it lies within the process; a process can have multiple threads running. The execution of a thread carries out a task; hence, in a multithreaded environment, multiple tasks make progress at once.
A program in execution is known as a process. A program can have any number of processes. Every process has its own address space.
Threads use the address space of their process. The difference between a thread and a process is that when the CPU switches from one process to another, the current information has to be saved in a process descriptor and the information of the new process loaded; switching from one thread to another is simpler.
A task is simply a set of instructions loaded into memory. Threads can themselves split into two or more simultaneously running tasks.
For more understanding, refer to the link: http://www.careerride.com/os-thread-process-and-task.aspx
Wikipedia sums it up quite nicely:
Threads compared with processes
Threads differ from traditional multitasking operating system processes in that:
processes are typically independent, while threads exist as subsets of a process
processes carry considerable state information, whereas multiple threads within a process share state as well as memory and other resources
processes have separate address spaces, whereas threads share their address space
processes interact only through system-provided inter-process communication mechanisms.
Context switching between threads in the same process is typically faster than context switching between processes.
Systems like Windows NT and OS/2 are said to have "cheap" threads and "expensive" processes; in other operating systems there is not so great a difference, except for the cost of an address-space switch, which implies a TLB flush.
Task and process are used synonymously.
A clear explanation from the wiki:
1:1 (Kernel-level threading)
Threads created by the user are in 1-1 correspondence with schedulable entities in the kernel.[3] This is the simplest possible threading implementation. Win32 used this approach from the start. On Linux, the usual C library implements this approach (via the NPTL or older LinuxThreads). The same approach is used by Solaris, NetBSD and FreeBSD.
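Concretely, POSIX threads on such systems expose the 1:1 model directly: each pthread_create() call asks the kernel for a new schedulable entity (a minimal sketch).

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    printf("worker %ld running\n", (long)arg);   /* kernel-scheduled */
    return NULL;
}

int main(void) {
    pthread_t threads[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}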
N:1 (User-level threading)
An N:1 model implies that all application-level threads map to a single kernel-level scheduled entity;[3] the kernel has no knowledge of the application threads. With this approach, context switching can be done very quickly and, in addition, it can be implemented even on simple kernels which do not support threading. One of the major drawbacks, however, is that it cannot benefit from the hardware acceleration on multi-threaded processors or multi-processor computers: there is never more than one thread being scheduled at the same time.[3] For example, if one of the threads needs to execute an I/O request, the whole process is blocked and the threading advantage cannot be utilized. GNU Portable Threads uses user-level threading, as does State Threads.
M:N (Hybrid threading)
M:N maps some M number of application threads onto some N number of kernel entities,[3] or "virtual processors." This is a compromise between kernel-level ("1:1") and user-level ("N:1") threading. In general, "M:N" threading systems are more complex to implement than either kernel or user threads, because changes to both kernel and user-space code are required. In the M:N implementation, the threading library is responsible for scheduling user threads on the available schedulable entities; this makes context switching of threads very fast, as it avoids system calls. However, this increases complexity and the likelihood of priority inversion, as well as suboptimal scheduling without extensive (and expensive) coordination between the userland scheduler and the kernel scheduler.