CPU Cards for Parallel Computation?

I remember reading some time ago that there were CPU cards you could add to a system for extra processing power to do massively parallel work. Does anyone have experience with this, and any resources for looking into the hardware and software aspects of such a project? Is this technology inferior to a traditional cluster? Is it more power-conscious?

There are two cool options. One is the use of GPUs, as Mitch mentions. The other is to get a PS3, which has a multicore Cell processor.
You can also set up multiple inexpensive commodity-motherboard PCs and run them as a Beowulf cluster under Linux.

GPGPU is probably the most practical option for an enthusiast. However, DSPs are another option, such as those made by Texas Instruments, Freescale, Analog Devices, and NXP Semiconductors. Granted, most of those are probably targeted more towards industrial users, but you might look into the Storm-1 line of DSPs, some of which are supposed to go for as low as $60 apiece.
Another option for data parallelism is physics processing units (PPUs) like the Nvidia (formerly Ageia) PhysX. The most obvious use of these coprocessors is for games, but they're also used for scientific modeling, cryptography, and other vector-processing applications.
ClearSpeed Attached Processors are another possibility. These are basically SIMD co-processors designed for HPC applications, so they might be out of your price range, but I'm just guessing here.
All of these suggestions are based around data parallelism since I think that's the area with the most untapped potential. A lot of currently CPU-intensive applications could be performed much faster at much lower clock rates (and using less power) by simply taking advantage of vector processing and more specialized SIMD instruction sets.
Really, most computer users don't need more than an Intel Atom processor for the majority of their casual computing needs: e-mail, browsing the web, and playing music/video. And for the other 10% of computing tasks that actually do require lots of processing power, a general-purpose scalar processor typically isn't the best tool for the job anyway.
Even most people who do have serious processing needs only need it for a narrow range of applications; a physicist doesn't need a PC capable of playing the latest FPS; a sound engineer doesn't need to do scientific modeling or perform statistical analysis; and a graphic designer doesn't need to do digital signal processing. Domain-specific vector processors with highly specialized instruction sets (like modern GPUs for gaming) would be able to handle these tasks much more efficiently than a high power general-purpose CPU.
Cluster computing is no doubt very useful for a lot of high end industrial applications like nuclear research, but I think vector processing has much more practical uses for the average person.
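To make the data-parallel idea above concrete, here is a minimal CUDA sketch (an illustration only - it assumes an NVIDIA card and the nvcc toolchain, and SAXPY is just a stock example, not tied to any product mentioned above). Each of thousands of lightweight threads handles one element, which is exactly the "many simple operations in parallel" model described.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element: y[i] = a * x[i] + y[i].
// The "vector processing" comes from launching thousands of threads,
// not from a single wide scalar core.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // unified memory keeps the sketch short
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);                // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```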

Have you looked at the various GPU computing options? Nvidia (and probably others) are offering personal supercomputers based around utilising the power of graphics cards.

OpenCL is an industry-wide standard for doing HPC computing across different vendors and processor types: single-core and multi-core CPUs, graphics cards, Cell, etc. See http://en.wikipedia.org/wiki/OpenCL.
The idea is that, using a single code base, you can use all of the spare processing capacity on the machine regardless of the type of processor.
Apple has implemented this standard in the next version of Mac OS X. There will also be offerings from nVIDIA, ATI, Intel, etc.
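As a small taste of what "use whatever processors you have" looks like in practice, here is a minimal host-side sketch (an assumption-laden illustration: it requires an installed OpenCL SDK and its CL/cl.h header, ignores error codes, and caps the arrays at an arbitrary 16 entries) that simply lists every OpenCL platform and device visible on the machine:

```cpp
// Host-only C++; link with -lOpenCL. Error handling omitted for brevity.
#include <CL/cl.h>
#include <cstdio>

int main()
{
    cl_platform_id plats[16];
    cl_uint nplat = 0;
    clGetPlatformIDs(16, plats, &nplat);          // enumerate platforms (vendors/drivers)
    if (nplat > 16) nplat = 16;

    for (cl_uint p = 0; p < nplat; ++p) {
        char pname[256] = {0};
        clGetPlatformInfo(plats[p], CL_PLATFORM_NAME, sizeof(pname), pname, nullptr);

        cl_device_id devs[16];
        cl_uint ndev = 0;
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 16, devs, &ndev);  // CPUs, GPUs, accelerators
        if (ndev > 16) ndev = 16;

        for (cl_uint d = 0; d < ndev; ++d) {
            char dname[256] = {0};
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(dname), dname, nullptr);
            printf("%s : %s\n", pname, dname);
        }
    }
    return 0;
}
```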

Mercury Computing offers a Cell Accelerator Board: a PCIe card with a Cell processor that runs Yellow Dog Linux (or Mercury's flavor of YDL). Fixstars offers a more powerful Cell PCIe board called the GigaAccel. I called up Mercury, and they said their board is about $5000 USD, without software. I'd guess the GigaAccel is up to twice as expensive.
I found one of the Mercury boards used, but it didn't come with a power cable, so I haven't been able to use it yet, sadly.

Related

Why do you need a Programmable Real-Time Unit (PRU) when you can have an RTOS?

The BeagleBone Black's processor includes two independent Programmable Real-Time Units (PRUs). Hobbyists and professionals are excited about the possible use of these units for real-time applications, which is understandable. However, if you can have an RTOS (whether for the BeagleBone or the Raspberry Pi), why would you need the PRUs?
EDIT-
For information, the BBB has an ARM Cortex-A8 running at 1 GHz, with 1.9 DMIPS/MHz. The PRUs are simple RISC cores running at 200 MHz.
Linux, even with the real-time scheduler, is unsuited to many critical hard real-time tasks with response requirements at the microsecond level; on the other hand, it provides or enables a great deal of functionality in terms of UI, connectivity and filesystem support. These things are either not available in an RTOS, or are provided at significant cost in a high-end RTOS, and with much more limited hardware support.
So if you have a system that has hard real-time constraints but needs more general-purpose computing features such as networking, a filesystem, connection to commercial off-the-shelf (COTS) peripherals and so on, then the PRUs provide a solution to that.
On the other hand, I can't help but think that this is a marketing exercise on the part of TI to sell more chips. A similar solution has always been possible (and indeed common) using one or more processors to perform time-critical tasks, possibly running an RTOS, while UI and connectivity are handled by a single processor with the necessary hardware and memory resources but without the real-time constraints. The PRU device does have two 32-bit cores, but XMOS xCORE devices have as many as 16 cores - with 16 communicating cores, you may not even need an RTOS.
To answer the question...
[...] if you can have a RTOS [...], why would you need the PRUs?
... directly; you probably wouldn't need them in that case, but you would lose Linux - and your application may need that. It is just one of many solutions to real-time applications using Linux. You pays your money and takes your choice.
Most likely the processor in the BeagleBone or Raspberry Pi is too "heavy" for real time - after all, you could run an RTOS on your PC, but it will not be very responsive or entirely deterministic, even when it's faster than your typical microcontroller (I guess that these PRUs are some sort of microcontroller with a new fancy name). On such high-level application processors as found on these boards, you rarely have direct access to hardware or interrupts, which are essential for real-time applications that actually do something time-critical.

How can I build a single-board computer, like the Raspberry Pi, that runs an OS?

My question is: how can I build a single-board computer, like the Raspberry Pi, that can run an OS?
Using an ARM microprocessor and a Debian ARM OS, able to use USB and so on - like the Raspberry Pi and other single-board computers.
I've searched but found nothing to help me! :(
The reason you can find nothing is probably because it is a specialist task undertaken by companies with appropriate resources in terms of expertise, equipment, tools and money.
High-end microprocessors capable of running an OS such as Linux use high-pin-density surface-mount packages such as BGA or TQFP; these (especially BGA) require specialist equipment to manufacture and cannot reliably or realistically be assembled by hand. The pin count and density necessitate the use of multi-layer boards, which again require specialist manufacture.
What you would have to do if you wanted your own board is design the board, source the components, and then have it manufactured by a contract electronics assembly house. Short runs and one-offs will cost you many times the price of just buying a COTS development or application board. It is only cost-effective if you are ultimately manufacturing a product that will sell in high volumes. It is only these volumes (and until recently Chinese manufacture) that make the RPi so inexpensive.
Even if you designed and had your own board built, that in itself requires specialist knowledge and skill. The bus speeds on such processors require very specific layout to maintain signal integrity and timing and to avoid EMC problems. The cost of suitable schematic capture and board layout software might also be prohibitive; no doubt there are some reasonably capable open-source tools, but you will have to find one that generates output your manufacturer can use to set up their machinery.
Some lower-end 8-bit microcontrollers with low pin counts are suitable for hand soldering or even DIP socketing, using a breadboard or prototyping board, but that is not what you are after.
[Further thoughts added 14 Sep 2012]
This is probably only worth doing if one or more of the following are true:
Your aim is to gain experience in board design, manufacture and bring-up as an academic or career development exercise and you have the necessary financial resources.
You envisage high production volumes where the economies of scale make it less expensive than a COTS board.
You have product requirements for specific features or form-factor not supported by COTS boards.
You have restricted product requirements where a custom board tailored to those, and having no redundant features, might in sufficient volumes be cost-effective.
Note that COTS boards come in two types: Application modules intended for integration in a larger system or product, and development boards that tend to have a wide range of peripherals, switches, indicators and connectivity options and often a prototyping area for your own use.
I know this is an old question, but I've been looking into the same thing, possibly for different reasons, and it now comes up at the top of a Google search, providing more reasons not to ask or even look into it than it provides answers.
For an overview of what it takes to build a linux running board from scratch this link is incredibly useful:
http://hforsten.com/making-embedded-linux-computer.html
It details:
The bare minimum you need in terms of hardware (ARM processor, NAND flash, etc.)
The complexities of getting a board designed
The process of programming the new chip on the board to include bootloaders and then pointing them to a linux kernel for the chip to boot.
Whether the OP wishes to pursue all or just some of these challenges, it is useful to know what the challenges are.
And these won't be all of them; adding displays, graphics and other hardware and interfaces is not covered, but this is a start.
Single-board computers (SBCs) are expected to take more load than a normal hobby board, so they have a slightly more complicated structure in terms of PCB and components. You should be ready to work with BGA packages; almost all processors used in SBCs are BGA (no DIP/QFP). Here is the best blog post I recently came across: a very nicely designed and fabricated board running Linux on an ARM processor. The author has done a great job of designing as well as documenting the process. I hope it helps you understand both the hardware and software side of SBCs.
A lot of answers are discouraging, but I would say you can do it, as I have done it already with the imx233. It's not easy, and it's not a weekend project. My project link is MyIMX233.
It took me about 4-5 months.
It didn't cost me much; a small fine-tip soldering iron is what I used.
The hard part is learning to design PCB.
Next task would be to find a PCB manufacturer with good enough precision, and prototyping price.
Next task would be to source components.
You may not get it right; I got the PCB right by my 3rd iteration. After that I was able to repeatedly produce 3 more boards, all of which worked fine.
PCB Design - I used the open-source KiCad. You need to take care doing impedance matching between the RAM and processor buses, and some other high-speed buses. I managed to do it on a 2-layer board with 5 mil/5 mil trace/space.
Component Sourcing - I got the imx233 LQFP once via Mouser, and once via element14.
RAM - 64 MB TSSOP.
Soldering - It's easy to mess up here, but the key is patience. One caution: don't use a frying pan and solder paste to do reflow soldering - I literally fried my first two processors like this. Even hot-air soldering by a mobile repair shop was not good enough.
Boot image - I didn't take many chances here; I just went with the Arch Linux image by Olimex.
If you want to skip the trouble of designing the circuit between the RAM and the processor, skip the imx233 and go for the Allwinner V3S. In 2017/2018 this would be the easiest approach.
Bottom line is I am a software engineer by profession, and if I can do it, then you can do it.
Why not use an FPGA board?
Something with a Zynq, like the Zybo board, or from Altera, like the DE0-Nano SoCKit.
There you already have the ARM core, memory, etc., plus the possibility to add whatever logic you are missing.

What's the best description for "embedded hardware system"?

When I hear that, I always think about a mobile device. But why is the hardware "embedded" there? Isn't the whole device the hardware? Why is a personal computer not an embedded hardware system?
In today's world embedded simply refers to a system with one or more of the following traits:
Single purpose (ie, not a general purpose computer, like your desktop)
Firmware rather than software - still software, but not as easily updated (flash, etc)
Hardware and software are designed together as a unit
Different, perhaps more rigorous testing as software updates are not desired
Real time computing
Memory integrated on the CPU
Microcontroller rather than microprocessor
Expected high reliability (you shouldn't have to reboot your dishwasher or microwave)
If it runs a program, but doesn't look like a computer, it's an embedded system.
That's my standard answer for friends and family. There are too many different types of embedded systems to get more specific.
I worked in the "embedded" area for a while, and we considered anything for which we had to write custom code against the hardware to be embedded.
If you have to work around the memory structure or write custom device drivers - anything that sits "directly on the metal" - it's generally "embedded".
If you're debugging it via a serial port - it's embedded.
It is called "embedded" because the computer is embedded as part of a larger device.
There is a very wide range of embedded systems.
At the low end are 8-pin PICs; for example, there is a 12F629 in these diode lights. These cost cents and have very little memory.
The NXT by LEGO contains two controllers, a relatively big AT91SAM7S256 with a 32-bit ARM core, 256KB of flash ROM and 64KB of RAM, and a smaller 8-bit ATmega48 with 4KB of flash.
Currently I'm working on embedded systems for trains; these typically have a PowerPC with a clock of some hundreds of MHz, on the order of a hundred MB of RAM, run VxWorks or Linux, and are connected by Ethernet.
I think there are still more powerful embedded systems for telecommunications, but I haven't worked on these.
As per Wikipedia:
An embedded system is a special-purpose computer system designed to perform one or a few dedicated functions, often with real-time computing constraints. It is usually embedded as part of a complete device including hardware and mechanical parts. In contrast, a general-purpose computer, such as a personal computer, can do many different tasks depending on programming.
Embedded systems are designed to do some specific task, rather than be a general-purpose computer for multiple tasks. Some also have real-time performance constraints that must be met, for reasons such as safety and usability; others may have low or no performance requirements, allowing the system hardware to be simplified to reduce costs.
Embedded systems are not always standalone devices. Many embedded systems consist of small, computerized parts within a larger device that serves a more general purpose. For example, the Gibson Robot Guitar features an embedded system for tuning the strings, but the overall purpose of the Robot Guitar is, of course, to play music.[2] Similarly, an embedded system in an automobile provides a specific function as a subsystem of the car itself.
The program instructions written for embedded systems are referred to as firmware, and are stored in read-only memory or Flash memory chips. They run with limited computer hardware resources: little memory, small or non-existent keyboard and/or screen.
From personal experience: if it's "headless" (i.e. it doesn't have an output device like a VDU and relies on something like LEDs), if there is a serial port used mainly for debugging and logging, and if you often use a logic analyser for debugging, it's embedded.
"Embedded" has become a very diverse term.
I've seen and worked on designs that:
Simply toggled discrete I/O (including LEDs) at fixed intervals
Drivers for hardware solutions (e.g. webcams, wireless com)
Acted as communications translators for board-level I/O (SPI<->I2C<->Rs232<->USB)
[ insert multitude of appliances here ]
Human-controlled electronics (calculator-esque, phone-esque)
System level devices to coordinate actions of other devices.
I also like Dour-High-Arch's comment above:
"Another important difference is that embedded apps may run for years without intervention..."
"Embedded system" is a very broad term and I don't think that it is easy to have a single definition. The word "embedded" actually refers to an industry and not to a "hardware system". The description of embedded systems has changed over the years and it is definitely going to change in the future too.
In early days one would say the embedded systems were only programmed in assembly, but now C is common place and perhaps in the future other languages are used as well. CPUs are getting bigger and bigger, external memories are used all the time and they are many devices considered to be embedded that are not dedicated to a single task, applications can be added to them and the software is easily updated. Watches, gadgets, house appliances, automotive devices, PLCs, motor controllers, weather stations, system monitoring devices are all considered embedded. It is difficult to singe define them all.

Multi core programming

I want to get into multi core programming (not language specific) and wondered what hardware could be recommended for exploring this field.
My aim is to upgrade my existing desktop.
If at all possible, I would suggest getting a dual-socket machine, preferably with quad-core chips. You can certainly get a single-socket machine, but dual-socket would let you start seeing some of the effects of NUMA memory that are going to be exacerbated as the core counts get higher and higher.
Why do you care? There are two huge problems facing multi-core developers right now:
The programming model. Parallel programming is hard, and there is (currently) no getting around this. A quad-core system will let you start playing around with real concurrency and all of the popular paradigms (threads, UPC, MPI, OpenMP, etc.).
Memory. Whenever you start having multiple threads, there is going to be contention for resources, and the memory wall is growing larger and larger. A recent article at Ars Technica outlines some (very preliminary) research at Sandia that shows just how bad this might become if current trends continue. Multicore machines are going to have to keep everything fed, and this will require that people be intimately familiar with their memory system. Dual-socket adds NUMA to the mix (at least on AMD machines), which should get you started down this difficult road.
If you're interested in more info on performance inconsistencies with multi-socket machines, you might also check out this technical report on the subject.
Also, others have suggested getting a system with a CUDA-capable GPU, which I think is also a great way to get into multithreaded programming. It's lower level than the stuff I mentioned above, but throw one of those on your machine if you can. The new Portland Group compilers have provisional support for optimizing loops with CUDA, so you could play around with your GPU even if you don't want to learn CUDA yourself.
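To make the "threads" paradigm mentioned above concrete, here is a minimal host-side C++ sketch (nothing vendor-specific; the array size is arbitrary and the thread count simply follows whatever the machine reports). It partitions a big summation across all available hardware threads; on a quad-core or dual-socket box the partial sums really do run in parallel, while on a single core the OS just time-slices them.

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main()
{
    const std::size_t n = 1 << 24;                 // 16M elements
    const unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());

    std::vector<double> data(n, 1.0);
    std::vector<double> partial(nthreads, 0.0);    // one partial sum per worker
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < nthreads; ++t) {
        workers.emplace_back([&, t] {
            std::size_t begin = t * n / nthreads;  // each worker gets a contiguous slice
            std::size_t end   = (t + 1) * n / nthreads;
            partial[t] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        });
    }
    for (auto &w : workers) w.join();

    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "sum = " << total << " using " << nthreads << " threads\n";
    return 0;
}
```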
Quad-core, because it'll permit you to tackle problems where the number of concurrent processes is greater than 2, which often makes problems non-trivial in ways that two-way parallelism does not.
I would also, for sheer geek squee, pick up a nice NVidia card and use the CUDA API. If you have the bucks, there's a stand-alone CUDA workstation that plugs into your main computer via a cable and an expansion slot.
It depends what you want to do.
If you want to learn the basics of multithreaded programming, then you can do that on your existing single-core PC. (If you have 2 threads, then the OS will switch between them on a single-core PC. Then when you move to a dual-core PC they should automatically run in parallel on separate cores, for a 2x speedup). This has the advantage of being free! The disadvantages are that you won't see a speedup (in fact a parallel implementation is probably slightly slower due to overheads), and that buggy code has a slightly higher chance of working.
However, although you can learn multithreaded programming on a single-core box, a dual-core (or even HyperThreading) CPU would be a great help.
If you want to really stress-test the code you're writing, then as "blue tuxedo" says, you should go for as many cores as you can easily afford, and if possible get hyperthreading too.
If you want to learn about algorithms for running on graphics cards - which is a very different area to x86 multicore - then get CUDA and buy a normal nVidia graphics card that supports it.
I'd recommend at least a quad-core processor.
You could try tinkering with CUDA. It's free, not that hard to use and will run on any recent NVIDIA card.
Alternatively, you could get a PlayStation 3 and the Linux SDK and work out how to program a Cell processor. Note that the next cheapest option for Cell BE development is an order of magnitude more expensive than a PS3.
Finally, any modern motherboard that will take a Core Quad or quad-core Opteron (get a good one from Asus or some other reputable manufacturer) will let you experiment with a multi-core PC system for a reasonable sum of money.
The difficult thing with multithreaded/multicore programming is that it opens a whole new can of worms. The bugs you'll be faced with are usually not the ones you're used to. Race conditions can remain dormant for ages until they bite, and your mainstream language compiler won't assist you in any way. You'll get random data and/or crashes that only happen once a day/week/month/year, usually under the most mysterious conditions...
One thing remains true, fortunately: the higher the concurrency exhibited by a computer, the more race conditions you'll unveil.
So if you're serious about multithreaded/multicore programming, then go for as many CPU cores as possible. Keep in mind that neither hyperthreading nor SMT allows for the level of concurrency that multiple cores provide.
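As a tiny illustration of the kind of bug being described (a sketch of my own, not taken from any of the answers): two threads hammer the same counter, once through a plain int and once through std::atomic. The plain version silently loses updates, and it does so more often the more real concurrency the machine has.

```cpp
#include <atomic>
#include <iostream>
#include <thread>

int main()
{
    int              racy = 0;      // plain int: increments are non-atomic read-modify-write
    std::atomic<int> safe{0};       // atomic int: increments can never be lost

    auto work = [&] {
        for (int i = 0; i < 1000000; ++i) {
            ++racy;                 // data race: two threads can overwrite each other's update
            ++safe;                 // atomic: always counts correctly
        }
    };

    std::thread t1(work), t2(work);
    t1.join();
    t2.join();

    std::cout << "racy counter: " << racy << "  (often < 2000000 on a multi-core box)\n";
    std::cout << "safe counter: " << safe << "  (always 2000000)\n";
    return 0;
}
```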
I would agree that, depending on what you ultimately want to do, you can probably get by with just your current single-core system. Multi-core programming is basically multithreaded programming, and you can certainly do that on a single-core chip.
When I was a student, one of our projects was to build a thread-safe implementation of the malloc library for C. Even on a single-core processor, that was more than enough to cure me of my desire to get into multithreaded programming. I would try something small like that before you start thinking about spending lots of money.
I agree with the others where I would upgrade to a quad-core processor. I am also a BIG FAN of ASUS Motherboards (the P5Q Pro is excellent for Core2Quad and Core2Duo processors)!
The draw for multi-core programming is that you have more resources to get things done faster. If you are serious about multi-core programming, then I would absolutely get a quad-core processor. I don't believe that you should get the new i7 architecture from Intel to take advantage of multi-core processing because anything written to take advantage of the Core2Duo or Core2Quad will just run better on the newer architecture.
If you are going to dabble in multi-core programming, then I would get a good Core2Duo processor. Remember, it's not just how many cores you have, but also how FAST the cores are to process the jobs. My Core2Duo running at 4GHz routinely completes jobs faster than my Core2Quad running at 2.4GHz even with a multi-core program.
Let me know if this helps!
JFV

CUDA or FPGA for special purpose 3D graphics computations? [closed]

I am developing a product with heavy 3D graphics computations, to a large extent closest-point and range searches. Some hardware optimization would be useful. While I know little about this, my boss (who has no software experience) advocates FPGA (because it can be tailored), while our junior developer advocates GPGPU with CUDA, because it's cheap, hot and open. While I feel I lack judgement on this question, I believe CUDA is the way to go, also because I am worried about flexibility; our product is still under strong development.
So, rephrasing the question, are there any reasons to go for FPGA at all? Or is there a third option?
I investigated the same question a while back. After chatting to people who have worked on FPGAs, this is what I got:
FPGAs are great for real-time systems, where even 1 ms of delay might be too long. This does not apply in your case;
FPGAs can be very fast, especially for well-defined digital signal processing usages (e.g. radar data), but the good ones are much more expensive and specialised than even professional GPGPUs;
FPGAs are quite cumbersome to programme. Since there is a hardware configuration component to compiling, it can take hours. They seem to be more suited to electronic engineers (who are generally the ones who work on FPGAs) than software developers.
If you can make CUDA work for you, it's probably the best option at the moment. It will certainly be more flexible than an FPGA.
Other options include Brook from ATI, but until something big happens, it is simply not as well adopted as CUDA. After that, there are still all the traditional HPC options (clusters of x86/PowerPC/Cell), but they are all quite expensive.
Hope that helps.
We did some comparison between FPGA and CUDA. One area where CUDA shines is when you can really formulate your problem in a SIMD fashion AND can access the memory coalesced. If the memory accesses are not coalesced (1), or if you have different control flow in different threads, the GPU can lose its performance drastically and the FPGA can outperform it. Another case is when your operations are relatively small but you have a huge number of them, and you can't (e.g. due to synchronisation) start them in a loop in one kernel; then the invocation times for the GPU kernels exceed the computation time.
Also, the power consumption of the FPGA can be better (it depends on your application scenario; the GPU is only cheaper, in terms of watts/FLOP, when it's computing all the time).
Of course the FPGA also has some drawbacks: I/O can be one (we had an application here where we needed 70 GB/s - no problem for a GPU, but to get this amount of data into an FPGA, a conventional design needs more pins than are available). Another drawback is time and money: an FPGA is much more expensive than the best GPU, and the development times are very high.
(1) Simultaneous memory accesses from different threads have to be to sequential addresses. This is sometimes really hard to achieve.
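To make footnote (1) concrete, here is a minimal CUDA sketch (an illustration assuming an NVIDIA card and nvcc; the sizes and the stride are arbitrary) with two copy kernels, one coalesced and one strided, that you can profile side by side to see the bandwidth gap described above.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Coalesced: consecutive threads touch consecutive addresses, so a warp's
// loads collapse into a few wide memory transactions.
__global__ void copy_coalesced(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

// Strided: consecutive threads touch addresses far apart, so each load can
// cost its own transaction - the un-coalesced case described above.
__global__ void copy_strided(const float *in, float *out, int n, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[(1LL * i * stride) % n];
}

int main()
{
    const int n = 1 << 22, stride = 32;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));

    copy_coalesced<<<(n + 255) / 256, 256>>>(in, out, n);
    copy_strided  <<<(n + 255) / 256, 256>>>(in, out, n, stride);
    cudaDeviceSynchronize();
    printf("done; profile both kernels (e.g. with nvprof) to compare bandwidth\n");

    cudaFree(in);
    cudaFree(out);
    return 0;
}
```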
I would go with CUDA.
I work in image processing and have been trying hardware add-ons for years. First we had the i860, then the Transputer, then DSPs, then FPGAs and direct compilation to hardware.
What inevitably happened was that by the time the hardware boards were really debugged and reliable, and the code had been ported to them, regular CPUs had advanced to beat them, or the hosting machine architecture changed and we couldn't use the old boards, or the makers of the board went bust.
By sticking to something like CUDA you aren't tied to one small specialist maker of FPGA boards. The performance of GPUs is improving faster than that of CPUs, and is funded by the gamers. It's a mainstream technology and so will probably merge with multi-core CPUs in the future, which protects your investment.
FPGAs
What you need:
Learn VHDL/Verilog (and trust me, you don't want to)
Buy hardware for testing and licences for the synthesis tools
If you already have infrastructure and you need to develop only your core
Develop the design (and it can take years)
If you don't:
DMA, hardware drivers, ultra-expensive synthesis tools
tons of knowledge about buses, memory mapping, hardware synthesis
build the hardware, buy the IP cores
Develop the design
Not to mention board development.
For example, an average FPGA PCIe card with a Xilinx Zynq US+ chip costs more than $3000.
An FPGA cloud is also costly, at $2/h and up.
Result:
This is something that requires the resources of a running company, at the least.
GPGPU (CUDA/OpenCL)
You already have hardware to test on.
Compared to the FPGA route:
Everything is well documented.
Everything is cheap.
Everything works.
Everything is well integrated with programming languages.
There is a GPU cloud as well.
Result:
You just need to download the SDK and you can start.
This is an old thread started in 2008, but it would be good to recount what has happened to FPGA programming since then:
1. C-to-gates in FPGA is the mainstream development flow for many companies, with HUGE time savings vs. Verilog/SystemVerilog HDL. In C-to-gates, system-level design is the hard part.
2. OpenCL on FPGA has been around for 4+ years, including floating point and "cloud" deployment by Microsoft (Azure) and Amazon F1 (Ryft API). With OpenCL, system design is relatively easy because of the very well defined memory model and API between host and compute devices.
Software folks just need to learn a bit about FPGA architecture to be able to do things that are NOT EVEN POSSIBLE with GPUs and CPUs, because both are fixed silicon and lack broadband (100 Gb+) interfaces to the outside world. Scaling down chip geometry is no longer possible, nor is extracting more heat from a single chip package without melting it, so this looks like the end of the road for single-package chips. My thesis here is that the future belongs to parallel programming of multi-chip systems, and FPGAs have a great chance to be ahead of the game. Check out http://isfpga.org/ if you have concerns about performance, etc.
FPGA-based solution is likely to be way more expensive than CUDA.
Obviously this is a complex question. The question might also include the cell processor.
And there is probably not a single answer which is correct for other related questions.
In my experience, any implementation done in an abstract fashion, i.e. a compiled high-level language vs. a machine-level implementation, will inevitably have a performance cost, especially in a complex algorithm implementation. This is true of both FPGAs and processors of any type. An FPGA designed specifically to implement a complex algorithm will perform better than an FPGA whose processing elements are generic, allowing it a degree of programmability from input control registers, data I/O, etc.
Another general example where an FPGA can give much higher performance is in cascaded processes, where one process's outputs become the inputs to another and they cannot be done concurrently. Cascading processes in an FPGA is simple, and can dramatically lower memory I/O requirements, whereas on a processor, memory must be used to effectively cascade two or more processes with data dependencies.
The same can be said of a GPU and CPU. Algorithms implemented in C, executing on a CPU, developed without regard to the inherent performance characteristics of the cache or main memory system, will not perform as well as ones that take them into account. Granted, not considering these performance characteristics simplifies implementation, but at a performance cost.
Having no direct experience with a GPU, but knowing its inherent memory system performance issues, it too will be subject to performance issues.
CUDA has a fairly substantial code base of examples and a SDK, including a BLAS back-end. Try to find some examples similar to what you are doing, perhaps also looking at the GPU Gems series of books, to gauge how well CUDA will fit your applications. I'd say from a logistic point of view, CUDA is easier to work with and much, much cheaper than any professional FPGA development toolkit.
At one point I did look into CUDA for claim reserve simulation modelling. There is quite a good series of lectures linked off the website for learning. On Windows, you need to make sure CUDA is running on a card with no displays attached, as the graphics subsystem has a watchdog timer that will nuke any process running for more than 5 seconds. This does not occur on Linux.
Any machine with two PCIe x16 slots should support this. I used an HP XW9300, which you can pick up off eBay quite cheaply. If you do, make sure it has two CPUs (not one dual-core CPU), as the PCIe slots live on separate HyperTransport buses and you need two CPUs in the machine to have both buses active.
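If you are unsure whether a given card is subject to that watchdog, the CUDA runtime exposes it as a device property. A quick check might look like the sketch below (assuming the CUDA toolkit is installed; kernelExecTimeoutEnabled is the relevant field of cudaDeviceProp):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Enumerate CUDA devices and report whether the display watchdog applies,
// i.e. whether long-running kernels on that card risk being killed.
int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("device %d: %s - kernel timeout %s\n",
               d, prop.name,
               prop.kernelExecTimeoutEnabled ? "ENABLED (card drives a display)"
                                             : "disabled (safe for long kernels)");
    }
    return 0;
}
```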
What are you deploying on? Who is your customer? Without even know the answers to these questions, I would not use an FPGA unless you are building a real-time system and have electrical/computer engineers on your team that have knowledge of hardware description languages such as VHDL and Verilog. There's a lot to it and it takes a different frame of mind than conventional programming.
I'm a CUDA developer with very little experience with FPGAs; however, I've been trying to find comparisons between the two.
What I've concluded so far:
The GPU has by far higher (accessible) peak performance
It has a more favorable FLOP/watt ratio.
It is cheaper
It is developing faster (quite soon you will literally have a "real" TFLOP available).
It is easier to program (based on an article I read on this, not personal opinion)
Note that I'm saying real/accessible to distinguish from the numbers you will see in a GPGPU commercial.
BUT the GPU is not as favorable when you need to do random accesses to data. This will hopefully change with the new Nvidia Fermi architecture, which has an optional L1/L2 cache.
my 2 cents
Others have given good answers, just wanted to add a different perspective. Here is my survey paper published in ACM Computing Surveys 2015 (its permalink is here), which compares GPU with FPGA and CPU on energy efficiency metric. Most papers report: FPGA is more energy efficient than GPU, which, in turn, is more energy efficient than CPU. Since power budgets are fixed (depending on cooling capability), energy efficiency of FPGA means one can do more computations within same power budget with FPGA, and thus get better performance with FPGA than with GPU. Of course, also account for FPGA limitations, as mentioned by others.
FPGA will not be favoured by those with a software bias, as they need to learn an HDL or at least understand SystemC.
For those with a hardware bias, FPGA will be the first option considered.
In reality, a firm grasp of both is required; then an objective decision can be made.
OpenCL is designed to run on both FPGA and GPU; even CUDA can be ported to FPGA.
FPGA and GPU accelerators can be used together.
So it's not a case of one being better than the other. There is also the debate about CUDA vs OpenCL.
Again, unless you have optimized and benchmarked both for your specific application, you cannot know with 100% certainty.
Many will simply go with CUDA because of its commercial nature and resources. Others will go with OpenCL because of its versatility.
FPGAs are more parallel than GPUs, by three orders of magnitude. While a good GPU features thousands of cores, an FPGA may have millions of programmable gates.
While CUDA cores must do highly similar computations to be productive, FPGA cells are truly independent of each other.
FPGAs can be very fast with some groups of tasks and are often used where a millisecond is already seen as a long duration.
A GPU core is way more powerful than an FPGA cell, and much easier to program. It is a full core that can divide and multiply with no problem, while an FPGA cell is only capable of rather simple Boolean logic.
As a GPU core is a core, it is efficient to program it in C++. Even if it is also possible to program an FPGA in C++, it is inefficient (just "productive"). Specialized languages like VHDL or Verilog must be used - they are difficult and challenging to master.
Most of the true and tried instincts of a software engineer are useless with FPGA. You want a for loop with these gates? Which galaxy are you from? You need to change into the mindset of electronics engineer to understand this world.
At the latest GTC '13, many HPC people agreed that CUDA is here to stay. FPGAs are cumbersome; CUDA is getting quite a bit more mature, supporting Python/C/C++/ARM. Either way, that was a dated question.
Programming a GPU in CUDA is definitely easier. If you don't have any experience with programming FPGAs in HDL it will almost surely be too much of a challenge for you, but you can still program them with OpenCL which is kinda similar to CUDA. However, it is harder to implement and probably a lot more expensive than programming GPUs.
Which one is Faster?
GPU runs faster, but FPGA can be more efficient.
GPU has the potential of running at a speed higher than FPGA can ever reach, but only for algorithms that are specially suited for that. If the algorithm is not optimal, the GPU will lose a lot of performance.
FPGA on the other hand runs much slower, but you can implement problem-specific hardware that will be very efficient and get stuff done in less time.
It's kinda like eating your soup with a fork very fast vs. eating it with a spoon more slowly.
Both devices base their performance on parallelization, but each in a slightly different way. If the algorithm can be granulated into a lot of pieces that execute the same operations (keyword: SIMD), the GPU will be faster. If the algorithm can be implemented as a long pipeline, the FPGA will be faster. Also, if you want to use floating point, FPGA will not be very happy with it :)
I have dedicated my whole master's thesis to this topic.
Algorithm Acceleration on FPGA with OpenCL