Let the GPU handle a recursive algorithm - Objective-C

I have a complex recursive algorithm whose PHP implementation takes about 15 minutes to complete when run from the CLI. I was thinking about porting it to Objective-C and wanted to know how I can make use of the GPU for the calculations. Is there a way to designate threads to be executed by the GPU?
Thanks

Yes, it's possible to use the GPU for calculations, although depending on the task it may not be advantageous. Without posted code, it's anyone's guess what the most efficient approach for your implementation might be. I would recommend reading the "Concurrency Programming Guide", as it's an excellent starting point for understanding the appropriate ways to handle concurrent threading in Objective-C.

Related

How to write code which performs calculations using the GPU?

We hear a lot about how certain types of calculation can be completed much more quickly by a GPU than by a CPU, but as a programmer I would have no idea how to force a calculation to be run in this way. Can anyone give a high-level explanation of how this is done?
I am aware that there are libraries which will do this 'by magic' but I would like to understand what they are doing behind the scenes. I'm pretty sure there isn't a runOnGpu flag that you can pass to low-level system calls, so what techniques are available?
You need to rewrite your program. Use C++ AMP if you use C++, or APARAPI if you use Java.
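For a flavour of what the C++ AMP route looks like, here is a minimal sketch (this assumes Visual C++ with C++ AMP support; the function name and the squaring "work" are made up for illustration):
    #include <amp.h>
    #include <vector>
    // Squares every element of a vector on the GPU.
    void square_on_gpu(std::vector<float>& data) {
        using namespace concurrency;
        // Wrap the host data so the runtime can copy it to the accelerator.
        array_view<float, 1> av(static_cast<int>(data.size()), data);
        // The lambda body is compiled for the accelerator; restrict(amp)
        // limits it to the subset of C++ the GPU can execute.
        parallel_for_each(av.extent, [=](index<1> i) restrict(amp) {
            av[i] = av[i] * av[i];
        });
        av.synchronize(); // copy the results back to the host vector
    }
The point is that the computation has to be restructured as data-parallel kernels over arrays; there is no flag that moves ordinary sequential code onto the GPU.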

Does VB.NET offer any performance improvement over VB6 for CPU-bound processes?

I'm working on a mathematical model written in VB6. The amount of CPU time this model is consuming is becoming a concern to some of our customers and the notion has been floated that porting it to VB.NET will improve its performance.
The model is performing a lot of single-precision arithmetic (a finite-difference scheme over a large grid) with small bursts of database access every five seconds or so (not enough to be important). Only basic arithmetic functions with occasional use of the ^ 4 operator are involved.
Does anyone think porting to VB.NET is likely to improve matters (or not)? Does anyone know any reliable articles or papers I can check over to help with this decision?
My opinion is that VB.NET won't improve performance much by itself. The real improvement comes from your ability to write an optimized algorithm.
Probably the best performance boost you can get is eliminating the DB access (even if it doesn't look important, I/O is usually the bottleneck, not the language itself). If possible, get the data up front and save it at the end instead of accessing the database every five seconds.
Also, as others pointed out, change the algorithm if possible, since porting the code to .NET will probably only get you small performance benefits.
But if you move to .NET 4.0, you may be able to use the Parallel Extensions (http://msdn.microsoft.com/en-us/library/dd460693.aspx) and really get a boost by using multiple cores, but that also means changing the algorithm.
Hope it helps. ;-)
I think the improvements in memory management also help performance in VB.NET.
To provide you with a correct answer we should check your code...
But VB.NET should certainly be more performant in theory:
the ability to compile for 64-bit machines (without too much effort)
VB6 could be interpreted (p-code), while VB.NET is compiled to IL and JIT-compiled to native code at run time
you can use threads (depending on your algorithm) and other "tricks", so you can use more CPUs to make calculations in parallel
Best thing to try: port the most CPU-consuming part of your application to VB.NET and compare.
The same algorithm will perform faster in VB6, because it is compiled to native code.
If the program does extensive memory allocation, though, it may perform faster in .NET when running in a 64-bit environment.

What's the difference between code written for a desktop machine and a supercomputer?

Hypothetically speaking, if my scientific work was leading toward the development of functions/modules/subroutines (on a desktop), what would I need to know to incorporate it into a large-scale simulation to be run on a supercomputer (which might simulate molecules, fluids, reactions, and so on)?
My impression is that it has to do with taking advantage of certain libraries (e.g., BLAS, LAPACK) where possible, revising algorithms (reducing iteration), profiling, parallelizing, and considering memory, disk, and processor use/access... I am aware of the adage, "want to optimize your code? don't do it", but if one were interested in learning about writing efficient code, what references might be available?
I think this question is language agnostic, but since many number-crunching packages for biomolecular simulation, climate modeling, etc. are written in some version of Fortran, this language would probably be my target of interest (and I have programmed rather extensively in Fortran 77).
Profiling is a must at any level of machinery. In common usage, I've found that scaling to larger and larger grids requires a better understanding of the grid software and the topology of the grid. In that sense, everything you learn about optimizing for one machine is still applicable, but understanding the grid software gets you additional mileage. Hadoop is one of the most popular and widespread grid systems, so learning about the scheduler options, interfaces (APIs and web interfaces), and other aspects of usage will help. Although you may not use Hadoop for a given supercomputer, it is one of the less painful methods for learning about distributed computing. For parallel computing, you may pursue MPI and other systems.
Additionally, learning to parallelize code on a single machine, across multiple cores or processors, is something you can begin learning on a desktop machine.
Recommendations:
Learn to optimize code on a single machine:
Learn profiling
Learn to use optimized libraries (after profiling: so that you see the speedup)
Be sure you know algorithms and data structures very well (*)
Learn to do embarrassingly parallel programming on multiple core machines (see the sketch after this list).
Later: consider multithreaded programming. It's harder and may not pay off for your problem.
Learn about basic grid software for distributed processing
Learn about tools for parallel processing on a grid
Learn to program for alternative hardware, e.g. GPUs, various specialized computing systems.
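To make the embarrassingly-parallel item above concrete, here is a minimal C++ sketch (the data and the summing "work" are stand-ins) that splits independent work across however many cores the machine reports:
    #include <thread>
    #include <vector>
    #include <numeric>
    #include <cstdio>
    int main() {
        std::vector<double> data(10'000'000, 1.0);      // stand-in for real work items
        unsigned n = std::thread::hardware_concurrency();
        if (n == 0) n = 4;                               // fall back if the count is unknown
        std::vector<double> partial(n, 0.0);
        std::vector<std::thread> workers;
        const std::size_t chunk = data.size() / n;
        for (unsigned t = 0; t < n; ++t) {
            std::size_t lo = t * chunk;
            std::size_t hi = (t + 1 == n) ? data.size() : lo + chunk;
            // Each thread handles its own slice and writes its own slot:
            // no shared mutable state, which is what makes it "embarrassing".
            workers.emplace_back([&, t, lo, hi] {
                partial[t] = std::accumulate(data.begin() + lo, data.begin() + hi, 0.0);
            });
        }
        for (auto& w : workers) w.join();
        std::printf("sum = %f\n", std::accumulate(partial.begin(), partial.end(), 0.0));
    }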
This is language agnostic. I have had to learn the same sequence in multiple languages and multiple HPC systems. At each step, take a simpler route to learn some of the infrastructure and tools; e.g. learn multicore before multithreaded, distributed before parallel, so that you can see what fits for the hardware and problem, and what doesn't.
Some of the steps may be reordered depending on local computing practices, established codebases, and mentors. If you have a large GPU or MPI library in place, then, by all means, learn that rather than foist Hadoop onto your collaborators.
(*) The reason to know algorithms very well is that as soon as your code is running on a grid, others will see it. When it is hogging up the system, they will want to know what you're doing. If you are running a process that is polynomial and should be constant, you may find yourself mocked. Others with more domain expertise may help you find good approximations for NP-hard problems, but you should know that the concept exists.
Parallelization would be the key.
Since the problems you cited (e.g. CFD, multiphysics, mass transfer) are generally expressed as large-scale linear algebra problems, you need matrix routines that parallelize well. MPI is a standard for those types of problems.
Physics can influence this as well. For example, it's possible to solve some elliptic problems efficiently using explicit dynamics and artificial mass and damping matrices.
3D multiphysics means coupled differential equations with varying time scales. You'll want a fine mesh to resolve details in both space and time, so the number of degrees of freedom will rise rapidly; time steps will be governed by the stability requirements of your problem.
If someone ever figures out how to run linear algebra as a map-reduce problem they'll have it knocked.
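To make the MPI point above concrete, here is a minimal sketch of the programming model (C++ using the standard MPI C API; the per-rank "work" is a stand-in):
    #include <mpi.h>
    #include <cstdio>
    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // which process am I?
        MPI_Comm_size(MPI_COMM_WORLD, &size);   // how many processes in total?
        double local = 1.0 / (rank + 1);        // each rank computes its own share
        double global = 0.0;
        // Combine the partial results from all processes onto rank 0.
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            std::printf("sum over %d ranks = %f\n", size, global);
        MPI_Finalize();
        return 0;
    }
Real solvers distribute the matrices themselves across ranks, but the shape - initialize, compute locally, communicate, finalize - is the same.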
Hypothetically speaking, if my scientific work was leading toward the development of functions/modules/subroutines (on a desktop), what would I need to know to incorporate it into a large-scale simulation to be run on a supercomputer (which might simulate molecules, fluids, reactions, and so on)?
First, you would need to understand the problem. Not all problems can be solved in parallel (and I'm using the term parallel in the widest sense it can take). So, see how the problem is solved now. Can it be solved more quickly with some other method? Can it be divided into independent parts? ... and so on ...
Fortran is a language specialized for scientific computing, and in recent years, along with the development of new language features in general, there has also been some very interesting development in terms of features aimed at this "market". The term "co-arrays" could be an interesting read.
But for now, I would suggest first reading a book like Using OpenMP - OpenMP is a simpler model, and the book (with Fortran examples inside) explains the fundamentals nicely. The Message Passing Interface (for friends, MPI :) is a larger model, and one of the most often used. Your next step from OpenMP should probably go in this direction. Books on MPI programming are not rare.
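The book's examples are in Fortran, but the OpenMP model carries over directly; a minimal C++ sketch of the idea (compile with your compiler's OpenMP flag, e.g. -fopenmp):
    #include <omp.h>
    #include <vector>
    #include <cstdio>
    int main() {
        const int n = 1000000;
        std::vector<double> a(n, 1.0), b(n, 2.0);
        double dot = 0.0;
        // One directive asks the runtime to split the loop across cores and
        // combine the per-thread partial sums at the end.
        #pragma omp parallel for reduction(+:dot)
        for (int i = 0; i < n; ++i)
            dot += a[i] * b[i];
        std::printf("dot = %f (up to %d threads)\n", dot, omp_get_max_threads());
        return 0;
    }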
You also mentioned libraries - yes, some of those you mentioned are widely used. Others are also available. A person who does not know exactly where the performance problem lies should, IMHO, never undertake the task of rewriting library routines.
Also, there are books on parallel algorithms that you might want to check out.
I think this question is language agnostic, but since many number-crunching packages for biomolecular simulation, climate modeling, etc. are written in some version of Fortran, this language would probably be my target of interest (and I have programmed rather extensively in Fortran 77).
In short it comes down to understanding the problem, learning where the problem in performance is, re-solving the whole problem again with a different approach, iterating a few times, and by that time you'll already know what you're doing and where you're stuck.
We're in a position similar to yours.
I'm mostly in agreement with @Iterator's answer, but I think there's more to say.
First of all, I believe in "profiling" by the random-pausing method, because I'm not really interested in measuring things (it's easy enough to do that) but in pinpointing code that is wasting time, so I can fix it. It's like the difference between a floodlight and a laser.
For one example, we use LAPACK and BLAS. Now, in taking my stack samples, I saw that a lot of the samples were in the routine that compares characters. This was called from a general routine that multiplies and scales matrices, and that was called from our code. The matrix-manipulating routine, in order to be flexible, has character arguments that tell it things like whether a matrix is lower-triangular or whatever. In fact, if the matrices are not very large, the routine can spend more than 50% of its time just classifying the problem. Of course, the next time it is called from the same place, it does the same thing all over again. In a case like that, a special routine should be written. When it is optimized by the compiler, it will be as fast as it reasonably can be, and will save all that classifying time.
For another example, we use a variety of ODE solvers. These are optimized to the nth degree, of course. They work by calling user-provided routines to calculate derivatives and possibly a Jacobian matrix. If those user-provided routines don't actually do much, samples will indeed show the program counter in the ODE solver itself. However, if the user-provided routines do much more, samples will mostly find the lower end of the stack in those routines, because they take longer, while the ODE code takes roughly the same time. So, optimization should be concentrated in the user-provided routines, not the ODE code.
Once you've done several of the kind of optimization that is pinpointed by stack sampling, which can speed things up by 1-2 orders of magnitude, then by all means exploit parallelism, MPI, etc. if the problem allows it.

Tips for efficient and optimized Cocoa applications

I am developing a Cocoa application (Mac) and wanted to know what your tips, best practices, ... are for an efficient Cocoa application, one that starts in less than 1 second and is very responsive.
I've installed Twitter for Mac and was amazed by its speed. Is it using special tricks?
Thanks in advance for your ideas :)
Three things that can help reduce startup time and improve overall performance are:
Defer loading resources until they're actually needed.
Profile your app to identify the parts that have the highest cost (whether you measure that in execution time, memory, or something else). Then work to reduce the cost of those operations or figure out a way to do them less or at a different time.
Take advantage of the hardware. Most machines these days have at least two processing cores and advanced graphics processors; use GCD, Quartz, Core Animation, and other technologies to take advantage of the available power.
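As a sketch of the last point, GCD makes it cheap to push expensive work off the main thread and hop back to it for UI updates; the plain C dispatch API is shown here, and the function names are illustrative:
    #include <dispatch/dispatch.h>
    // Illustrative stand-ins for the app's real work and UI update.
    static void update_ui(void *result) { (void)result; /* touch UI state, main thread only */ }
    static void do_heavy_work(void *ctx) {
        void *result = ctx;               // pretend we computed something costly
        // Hop back to the main queue before touching anything UI-related.
        dispatch_async_f(dispatch_get_main_queue(), result, update_ui);
    }
    void kick_off_work(void *ctx) {
        // Run the expensive part on a background queue so launch and UI stay responsive.
        dispatch_async_f(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                         ctx, do_heavy_work);
    }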
I don't think there are really any "tricks" per se. You just profile your code with Instruments, and eliminate the slow areas. It's the same as optimising any code; don't block the main thread with disk reads/writes, use lazy loading where appropriate, etc.
A lot of it may be simply tightly written, good quality code. These sort of apps don't tend to rely on clunky frameworks etc.
Do only what you need to do and only when you need to do it.

Is sem_post/sem_wait significantly faster than pthread_mutex_lock/pthread_mutex_unlock?

I have a block of code that needs to run fast. Right now I'm using pthread_mutex_lock/pthread_mutex_unlock to synchronize the threads, but I've seen that it has a certain impact on performance. I was wondering, if anyone has ever benchmarked this, is sem_post/sem_wait significantly faster than pthread_mutex_lock/pthread_mutex_unlock?
Thanks!
I'd say a semaphore is probably slower than a mutex, because a semaphore has a superset of the mutex's behavior. You can try something at user level such as a spin lock that runs without kernel support, but it all depends on the rate of locks/unlocks and the amount of contention.
No, it is not significantly faster. They are implemented using the same lower-level primitives (read: spin locks and system calls). The real answer, though, can only come from comparing both in your particular situation.
I would expect them to be roughly the same speed, but you could always benchmark it yourself if you really care. With that said, POSIX semaphores do have one, and as far as I'm concerned only one, advantage over the more sophisticated primitives like mutexes and condition variables: sem_post is required to be async-signal-safe. It is the only synchronization-related function which is async-signal-safe, and it makes it possible to perform minimal interaction between threads from a signal handler - something which would otherwise be impossible without much heavier tools like pipes or SysV IPC, which don't interact well with the performance-oriented pthread idioms.
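For example, a worker thread can sleep on a semaphore and be woken from a signal handler - something you could not safely do with pthread_cond_signal. A minimal sketch (Linux-flavoured, error handling omitted; macOS lacks working unnamed semaphores):
    #include <semaphore.h>
    #include <signal.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>
    static sem_t wake;                        /* counts pending signals */
    static void handler(int sig) {
        (void)sig;
        sem_post(&wake);                      /* async-signal-safe per POSIX */
    }
    static void *worker(void *arg) {
        (void)arg;
        for (;;) {
            while (sem_wait(&wake) != 0)      /* retry if interrupted (EINTR) */
                ;
            printf("signal received, reacting in a normal thread context\n");
        }
        return NULL;
    }
    int main(void) {
        sem_init(&wake, 0, 0);
        signal(SIGUSR1, handler);
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        pause();                              /* send SIGUSR1 to this process to try it */
        return 0;
    }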
Edit: For reference, the simplest implementation of pthread_mutex_trylock:
    /* conceptual sketch: atomic_swap stores the new value and returns the old one */
    if (mutex->type == PTHREAD_MUTEX_DEFAULT)
        return atomic_swap(mutex->lock, EBUSY); /* 0 = lock was free and is now ours */
    else /* recursive/error-checking mutexes: lots of stuff to do */
and the simplest implementation of sem_trywait:
    int val = sem->val; /* snapshot the current count */
    /* succeed only if the count was positive and nobody raced us in between */
    return (val > 0 && atomic_compare_and_swap(sem->val, val, val-1) == val) ? 0 : EAGAIN;
Assuming an optimal implementation, I would guess that a mutex lock might be slightly faster, but again, benchmark it.
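If you do benchmark it, the uncontended case only takes a few lines (contended behaviour needs a more careful harness; again a Linux-flavoured sketch, since macOS lacks working unnamed semaphores):
    #include <pthread.h>
    #include <semaphore.h>
    #include <time.h>
    #include <stdio.h>
    static double seconds(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }
    int main(void) {
        enum { ITERS = 10000000 };
        pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
        sem_t s;
        sem_init(&s, 0, 1);
        double t0 = seconds();
        for (long i = 0; i < ITERS; i++) { pthread_mutex_lock(&m); pthread_mutex_unlock(&m); }
        double t1 = seconds();
        for (long i = 0; i < ITERS; i++) { sem_wait(&s); sem_post(&s); }
        double t2 = seconds();
        printf("mutex:     %.3f s\nsemaphore: %.3f s\n", t1 - t0, t2 - t1);
        return 0;
    }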
If you're using Objective-C, your environment may be close enough to Cocoa to be able to use Grand Central Dispatch, which would probably be even faster and definitely even easier.
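For instance, instead of taking a mutex around shared state, you can funnel all access through a private serial queue; a minimal sketch (the queue label, counter, and function names are made up for illustration):
    #include <dispatch/dispatch.h>
    static dispatch_queue_t state_q;      // serial queue guarding shared_counter
    static long shared_counter;
    static void bump(void *unused) {
        (void)unused;
        shared_counter++;                 // only ever touched on state_q, so no lock is needed
    }
    void setup(void) {
        state_q = dispatch_queue_create("com.example.state", DISPATCH_QUEUE_SERIAL);
    }
    void increment_async(void) {
        // Equivalent to lock/modify/unlock, but the caller never blocks.
        dispatch_async_f(state_q, NULL, bump);
    }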