I'm writing some code for a piece of network middleware. Right now, our code is running too slowly.
We've already done one round of rewrites and optimizations, but we seem to be running into hard limitations on what can be done in software.
The slowness of the code stems from one subroutine - an esoteric sampling algorithm from computational statistics. Because the math involved is somewhat similar to the stuff done in DSP, I'm wondering if we can use an FPGA to accelerate the computation.
My question is basically in the title - how do I tell whether an FPGA (or even ASIC) will give a useful speedup in my use case?
EDIT: A 'useful' speedup is one that is significant enough to justify the cost and dev time required to build the FPGA.
The short answer is to have an experienced FPGA engineer look at the algorithm and tell you what it would take in development time and bill of material cost for a solution.
Without knowing the details of your algorithm it is hard to guess. How parallelizable is the problem? Can it be heavily pipelined? How many multiply/accumulate/DSP operations are required? Can you approximate some calculations with a big look-up table or other FPGA DSP techniques (e.g. CORDIC)? FPGAs can do many of these operations in parallel every clock cycle, hundreds or even thousands of them, depending on how much you are willing to spend on the FPGA. Without knowing the details and having an experienced FPGA/DSP engineer look at the problem, though, it is going to be hard to get a real feel.
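To make the look-up-table idea concrete, here is a minimal C++ sketch of the software equivalent: approximating sin(x) with a precomputed table plus linear interpolation. The class name, table size, and input range are arbitrary choices for illustration, not anything from your algorithm.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Approximate sin(x) for x in [0, 2*pi) with a precomputed table plus
// linear interpolation -- the software analogue of burning a function
// into FPGA block RAM instead of computing it each time.
class SinTable {
public:
    explicit SinTable(std::size_t size = 4096) : table_(size + 1) {
        const double kTwoPi = 6.283185307179586;
        for (std::size_t i = 0; i <= size; ++i)
            table_[i] = std::sin(kTwoPi * static_cast<double>(i) / size);
    }
    double operator()(double x) const {  // x must be in [0, 2*pi)
        const double kTwoPi = 6.283185307179586;
        double pos = x / kTwoPi * (table_.size() - 1);
        std::size_t i = static_cast<std::size_t>(pos);
        double frac = pos - static_cast<double>(i);
        return table_[i] + frac * (table_[i + 1] - table_[i]);  // lerp
    }
private:
    std::vector<double> table_;
};
```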
Some other options are:
look into DSP chips (TI's DSP line is one example).
Does your processor have SIMD instructions available that are not being used?
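On the SIMD point, a minimal sketch of what that looks like with x86 SSE intrinsics (the function name is mine; the array length is assumed to be a multiple of four and the pointers 16-byte aligned):

```cpp
#include <cstddef>
#include <immintrin.h>  // x86 SSE intrinsics

// Add two float arrays four lanes at a time instead of one.
void add_arrays(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(a + i);  // load 4 floats (aligned)
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(out + i, _mm_add_ps(va, vb));  // 4 adds in one op
    }
}
```

If the inner loop of your sampling algorithm vectorizes like this, you can sometimes get a 4-8x win without touching custom hardware at all.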
I'm starting on my first commercial-sized application, and I often find myself making a design but stopping myself from coding and implementing it, because it seems like a huge use of resources. This is especially true when the piece is peripheral (for example, an enable for the output taps of a shift register). It gets even worse when I think about how large the generic implementation can get (4k bits for the taps example). The cleanest implementation would have these, but in my head it adds a great amount of overhead.
Is there any kind of rule I can use to make a quick decision on whether a design option is worth coding and evaluating? In general I worry less about the number of flip-flops and more about the width of signals. This may just come from a CS background, where all application boundaries should be as small as feasibly possible to prevent overhead.
Point 1. We learn by playing, so play! Try a couple of things. See what the tools do. Get a feel for the problem. You won't get past this if you don't try something. Often the problems aren't where you think they're going to be.
Point 2. You need to get some context for these decisions. How big is adding an enable to a shift register compared to the capacity of the FPGA / your design?
Point 3. There are two major types of 'resource' to consider: cells and time.
Cells are relatively easy to reason about in broad terms. How many flops? How much logic in identifiable blocks (e.g. in an ALU: multipliers, adders, etc.)? Often this is defined by the design you're trying to do. You can't build an ALU without registers, a multiplier, an adder, etc.
Time is more subtle, and is invariably traded off against cells. You'll be trying to hit some performance target, and recognising the structures that will make that hard is where the experience from point 1 comes in.
Things to look out for include:
A single net driving a large number of things. Large fan-outs cause a heavy load on a single driver which slows it down. The tool will then have to use cells to buffer that signal. Classic time vs cells trade off.
Deep clumps of logic between register stages. Again, the tool will have to spend more cells to make logic meet timing if it's close to the edge. Simple logic is fast and small. Sometimes introducing a pipeline stage can decrease the size of a design, as it makes the logic on either side far simpler.
Don't worry so much about large buses if each bit has low fan-out and you've budgeted for the registers. Large buses are often inherent in fast designs because you need high bandwidth; it can be easier to go wide than to go to a higher clock speed. On the other hand, think about the control logic for a wide bus, because it's likely to have a large fan-out.
Different tools and target devices have different characteristics, so you have to play and learn the rules for your set-up. There's always a size vs speed (and these days 'vs power') compromise. You need to understand what moves you along that curve in each direction. That comes with experience.
Is there any kind of rule I can use to make a quick decision on whether a design option is worth coding and evaluating?
The only rule I can come up with is 'Have I got time or not?'
If I have, I'll explore. If not, I'd better just make something work.
Ahhh, the life of doing design to a deadline!
It's something that comes with experience. Here are some pointers:
adding numbers is fairly cheap
choosing between them (multiplexing) gets big quite quickly if you have a lot of inputs to the multiplexer (the width of each input is a secondary issue also).
multiplications are free if you have spare multipliers in your chip; they suddenly become expensive when you run out of hard DSP blocks.
memory is also cheap, until you run out. For example, your 4Kbit shift register easily fits within a single Xilinx block RAM, which is fine if you have one to spare. If not, it'll take a large number of LUTs, depending on the device: an older Spartan 3 can fit 17 bits into a LUT (including the in-CLB register), so it will require ~235 LUTs. And not all LUTs can be shift registers. If you are only worried about the enable for the register, don't be. Unless you are pushing the performance of the device, routing that sort of signal to a few hundred LUTs is unlikely to cause major timing issues.
I'm working on a game where the physics are very simple. I just need to detect when a ball (point) hits a wall (line segment). There is no gravity, no friction, and the collisions are perfectly elastic.
I've already written collision detection code, but I'm about to make some major changes to the project, so there's an opportunity to replace it all with the Chipmunk physics library. Is this a good idea?
On the one hand, Chipmunk will be more heavily tested and optimized than my own code, and I won't have to do the work of maintaining it.
On the other, maybe Chipmunk will be less performant in my case, since it was designed to support a lot of features I won't be using.
I'm hoping someone more familiar with Chipmunk will spare me the effort of profiling it or reading the code myself to make this determination.
The only real advantage Chipmunk would have here is if you are colliding that ball (or many balls) against many walls, as it uses a spatial index to check collisions only between objects that are near each other. This means you can scale up to hundreds or thousands of objects without slowing to a crawl, but it offers no real advantage if you only have a dozen objects in the scene.
It sounds like what you've implemented so far works just fine for your needs. "If it's not broke don't fix it" is a good rule of thumb here. On the other hand, it would be really easy to implement the same thing in Chipmunk. If you want the experience and the possibility of scalability in return for the hassle of a dependency, go for it I guess.
Scott (the Chipmunk Physics guy)
I've found Chipmunk to be incredibly easy to use; I would recommend it to anyone starting a 2D project. I can't answer the performance question without knowing your code. I know it uses a spatial hash for collision determination, so it may end up doing fewer collision tests than your code. (On the other hand, if there are a very small number of balls and walls, this may not be an issue.)
It is open source, so another possibility would be to use Chipmunk, but remove all the features you don't need - gravity, friction, moments of inertia, etc. Again, it's hard to say this is a good option without knowing exactly what you have already implemented.
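For a sense of scale, a minimal Chipmunk setup for the ball-and-wall case looks something like the sketch below. This is based on my reading of the Chipmunk2D 7.x C API; check the function names against the version you actually use, and the positions and sizes here are placeholders.

```cpp
#include <chipmunk/chipmunk.h>

int main() {
    cpSpace* space = cpSpaceNew();
    cpSpaceSetGravity(space, cpvzero);  // no gravity, per the question

    // The ball: a small dynamic circle body.
    cpFloat mass = 1.0, radius = 5.0;
    cpBody* ball = cpSpaceAddBody(space,
        cpBodyNew(mass, cpMomentForCircle(mass, 0, radius, cpvzero)));
    cpBodySetPosition(ball, cpv(50, 50));
    cpShape* ballShape = cpSpaceAddShape(space,
        cpCircleShapeNew(ball, radius, cpvzero));
    cpShapeSetElasticity(ballShape, 1.0);  // perfectly elastic

    // A wall: a static segment shape.
    cpShape* wall = cpSpaceAddShape(space,
        cpSegmentShapeNew(cpSpaceGetStaticBody(space),
                          cpv(0, 0), cpv(100, 0), 0));
    cpShapeSetElasticity(wall, 1.0);

    for (int i = 0; i < 600; ++i)
        cpSpaceStep(space, 1.0 / 60.0);  // fixed timestep

    cpSpaceFree(space);  // (a real program would also free bodies/shapes)
}
```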
It really comes down to what you want it to do. I have not used Chipmunk itself, but from what it sounds like, I'd say you aren't really in need of a full physics library.
Now, if you have plans to expand beyond a ball and a wall such that you would have use for the expanded physics, then learning it now on a simple problem may be a good idea. Overall, though, unless you either want to learn the physics library or plan on raising the complexity or number of types of physics calculations, I'd say just do it yourself.
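To gauge what "do it yourself" amounts to here: the elastic bounce itself is a few lines of vector math. A sketch with my own types (the segment-crossing test is omitted; this is just the reflection):

```cpp
#include <cmath>

struct Vec2 { double x, y; };

static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Reflect a velocity off a wall running from p1 to p2: v' = v - 2(v.n)n,
// where n is the wall's unit normal. Perfectly elastic, no friction.
Vec2 reflect(Vec2 v, Vec2 p1, Vec2 p2) {
    Vec2 wall{p2.x - p1.x, p2.y - p1.y};
    double len = std::sqrt(dot(wall, wall));  // assumes p1 != p2
    Vec2 n{-wall.y / len, wall.x / len};      // unit normal
    double d = 2.0 * dot(v, n);
    return {v.x - d * n.x, v.y - d * n.y};
}
```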
I'm a CS master student, and next semester I will have to start working on my thesis. I've had trouble coming up with a thesis idea, but I decided it will be related to Computer Graphics as I'm passionate about game development and wish to work as a professional game programmer one day.
Unfortunately I'm kinda new to the field of 3D computer graphics; I took an undergraduate course on the subject and hope to take an advanced course next semester, and I'm already reading a variety of books and articles to learn more. Still, my supervisor thinks it's better if I come up with a general thesis idea now and then spend time learning about it in preparation for my thesis proposal. My supervisor has supplied me with some good ideas, but I'd rather do something more interesting on my own, which hopefully has to do with games and gives me more opportunities to learn about the field. I don't care if it's already been done; for me, the thesis is more of an opportunity to learn about things in depth and to do substantial work on my own.
I don't know much about GPU programming, and I'm still learning about shaders and languages like CUDA. One idea I had is to program an entire game (or as much as possible) on the GPU, including all the game logic, AI, and tests. This is inspired by reading papers on GPGPU and questions like this one. I don't know how feasible that is with my knowledge, and my supervisor doesn't know a lot about recent GPUs. I'm sure with time I will be able to answer this question on my own, but it'd be handy if I could know the answer in advance so I could also consider other ideas.
So, if you've got this far, my question: Using only shaders or something like CUDA, can you make a full, simple 3D game that exploits the raw power and parallelism of GPUs? Or am I missing some limitation or difference between GPUs and CPUs that will always make a large portion of my code bound to CPU? I've read about physics engines running on the GPU, so why not everything else?
DISCLAIMER: I've done a PhD, but have never supervised a student of my own, so take all of what I'm about to say with a grain of salt!
I think trying to force as much of a game as possible onto a GPU is a great way to start off your project, but eventually the point of your work should be: "There's this thing that's an important part of many games, but in its present state doesn't fit well on a GPU: here is how I modified it so it would fit well".
For instance, fortran mentioned that AI algorithms are a problem because they tend to rely on recursion. True, but this is not necessarily a deal-breaker: the art of converting recursive algorithms into an iterative form is looked upon favorably by the academic community, and would form a nice centerpiece for your thesis.
However, as a master's student, you haven't got much time, so you would really need to identify the kernel of interest very quickly. I would not bother trying to get the whole game to actually fit onto the GPU as part of the outcome of your master's: I would treat it as an exercise just to see which part won't fit, and then focus on that part alone.
But be careful with your choice of supervisor. If your supervisor doesn't have any relevant experience, you should pick someone else who does.
I'm still waiting for a Gameboy Emulator that runs entirely on the GPU, which is just fed the game ROM itself and current user input and results in a texture displaying the game - maybe a second texture for sound output :)
The main problem is that you can't access persistent storage, user input, or audio output from a GPU. These parts have to be on the CPU, by definition (even though cards with HDMI have audio output, I think you can't control it from the GPU). Apart from that, you can already push large parts of the game code into the GPU, but I think it's not enough for a 3D game, since someone has to feed the 3D data into the GPU and tell it which shaders should apply to which part. You can't really randomly access data on the GPU or run arbitrary code; someone has to do the setup.
Some time ago, you would just set up a texture with the source data, a render target for the result data, and a pixel shader that would do the transformation. Then you rendered a quad with the shader to the render target, which performed the calculations, and then read the texture back (or used it for further rendering). Today, things have been made simpler by the fourth and fifth generations of shaders (Shader Model 4.0 and whatever is in DirectX 11), so you can have larger shaders and access memory more easily. But they still have to be set up from the outside, and I don't know how things stand today regarding keeping data between frames. In the worst case, the CPU has to read back from the GPU and push the data in again to retain game state, which is always a slow thing to do.
But if you can really get to a point where a single generic setup/rendering cycle is sufficient for your game to run, you could say that the game runs on the GPU. The code would be quite different from normal game code, though. Most of the performance of GPUs comes from the fact that they execute the same program in hundreds or even thousands of parallel shading units, and you can't just write a shader that draws an image at a certain position. A pixel shader always runs, by definition, on one pixel, and the other shaders can do things at arbitrary coordinates, but they don't deal with pixels. It won't be easy, I guess.
I'd suggest just trying out the points I mentioned. The most important, in my opinion, is retaining state between frames, because if you can't retain all your data, everything else is impossible.
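The state-retention part is essentially double buffering (ping-ponging two render targets): render the new state while reading the old, then swap. Stripped of any graphics API, the pattern is just this sketch:

```cpp
#include <utility>
#include <vector>

// Ping-pong update: each frame reads the previous state buffer and
// writes the next one, then the two swap roles. On a GPU the buffers
// would be textures/render targets and step() a pixel shader.
using State = std::vector<float>;

State step(const State& prev) {
    State next(prev.size());
    for (std::size_t i = 0; i < prev.size(); ++i)
        next[i] = prev[i] * 0.99f;  // placeholder "game logic"
    return next;
}

void run(State& a, State& b, int frames) {
    for (int f = 0; f < frames; ++f) {
        b = step(a);      // "render" the new state from the old
        std::swap(a, b);  // swap read/write targets for the next frame
    }
}
```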
First, I'm not a computer engineer, so take my assumptions with a grain of salt, maybe a nano-scale one.
Artificial intelligence? No problem. There are countless examples of neural networks running in parallel; a Google search turns up plenty. Example: http://www.heatonresearch.com/encog
Pathfinding? There are parallel pathfinding algorithms already published on the internet. Just one of them: https://graphics.tudelft.nl/Publications-new/2012/BB12a/BB12a.pdf
Drawing? Use the interoperability of DirectX or OpenGL with CUDA or OpenCL so drawing doesn't cross the PCIe bus. You can even raytrace the corners so there's no z-fighting anymore; even a purely raytraced screen is doable with a mainstream GPU using a low depth limit.
Physics? The easiest part: just iterate a simple Euler or Verlet integration, with frequent stability checks in case the order of error grows too big (see the sketch after this list).
Map/terrain generation? You just need a Mersenne Twister and a triangulator.
Save games? Sure, you can compress the data in parallel before writing it to a buffer. Then a scheduler writes that data piece by piece to the HDD through DMA, so there's no lag.
Recursion? Write your own stack algorithm using main VRAM rather than local memory, so other kernels can run in wavefronts and GPU occupancy stays better.
Too much integer work needed? You can cast to float, do 50-100 calculations using all cores, then cast the result back to integer.
Too much branching? Compute both cases if they are simple, so every core stays in line and finishes in sync. If not, you can write your own branch predictor so that next time it predicts better than the hardware (could it?) with your own genuine algorithm.
Too much memory needed? You can add a second GPU to the system and open a DMA channel, or use CrossFire/SLI for faster communication.
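On the physics item above: a position-Verlet step is so little per-particle work that it maps directly to one GPU thread per particle. A plain C++ sketch (the structure and names are mine):

```cpp
struct Particle {
    double x, y;    // current position
    double px, py;  // previous position
    double ax, ay;  // acceleration accumulated this step
};

// Position Verlet: x' = 2x - x_prev + a*dt^2. Each particle is
// independent, so on a GPU one thread per particle works directly.
void verlet_step(Particle* p, int n, double dt) {
    for (int i = 0; i < n; ++i) {
        double nx = 2.0 * p[i].x - p[i].px + p[i].ax * dt * dt;
        double ny = 2.0 * p[i].y - p[i].py + p[i].ay * dt * dt;
        p[i].px = p[i].x;  p[i].py = p[i].y;
        p[i].x = nx;       p[i].y = ny;
        p[i].ax = p[i].ay = 0.0;  // clear forces for the next step
    }
}
```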
The hardest part, in my opinion, is the object-oriented design, since it is very awkward and hardware-dependent to build pseudo-objects on a GPU. Objects should be represented in host (CPU) memory, but they must be separated over many arrays on the GPU to be efficient. Example objects in host memory: orc1xy_orc2xy_orc3xy. Example objects in GPU memory: orc1_x__orc2_x__ ... orc1_y__orc2_y__ ...
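In code, that layout contrast is the classic array-of-structs versus struct-of-arrays split, sketched here:

```cpp
#include <vector>

// Array-of-structs: natural on the CPU ("orc1xy_orc2xy_orc3xy").
struct OrcAoS { float x, y; };
std::vector<OrcAoS> orcs_aos;

// Struct-of-arrays: what the GPU wants ("orc1_x orc2_x ... orc1_y ...").
// Adjacent threads touching adjacent orcs then read adjacent memory,
// which coalesces into wide memory transactions.
struct OrcsSoA {
    std::vector<float> x;
    std::vector<float> y;
};
```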
The answer was accepted six years ago, but for those interested in the actual question: Shadertoy, a live-coding WebGL platform, recently added a "multipass" feature allowing preservation of state.
Here's a live demo of the Bricks game running on the GPU.
I don't care if it's already been done, for me the thesis is more of an opportunity to learn about things in depth and to do substantial work on my own.
Then your idea of what a thesis is is completely wrong. A thesis must be original research. --> edit: I was thinking about a PhD thesis, not a master's thesis ^_^
About your question: the GPU's instruction set and capabilities are very specific to vector floating-point operations. Game logic usually does little floating point, and a lot of logic (branches and decision trees).
If you take a look at the CUDA Wikipedia page you will see:
It uses a recursion-free, function-pointer-free subset of the C language
So forget about implementing there any AI algorithms that are essentially recursive (like A* for pathfinding). Maybe you could simulate the recursion with stacks, but if it's not explicitly allowed, it's probably for a reason. Not having function pointers also somewhat limits the ability to use dispatch tables for handling the different actions depending on the state of the game (you could again use chained if-else constructions, but something smells bad there).
Those limitations in the language reflect that the underlying HW is mostly thought to do streaming processing tasks. Of course there are workarounds (stacks, chained if-else), and you could theoretically implement almost any algorithm there, but they will probably make the performance suck a lot.
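The stack workaround is mechanical, for what it's worth. A recursive depth-first traversal, for example, becomes the following (a generic sketch in C++; on a GPU the stack would live in device memory):

```cpp
#include <stack>
#include <vector>

struct Node {
    std::vector<int> children;  // indices into the node array
};

// Recursive DFS rewritten with an explicit stack -- the standard way to
// fit a recursive algorithm into a recursion-free environment.
void dfs(const std::vector<Node>& nodes, int root,
         std::vector<bool>& visited) {
    std::stack<int> pending;
    pending.push(root);
    while (!pending.empty()) {
        int cur = pending.top();
        pending.pop();
        if (visited[cur]) continue;
        visited[cur] = true;
        for (int child : nodes[cur].children)
            pending.push(child);  // replaces the recursive call
    }
}
```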
The other point is handling the I/O; as already mentioned above, this is a task for the main CPU (because it is the one that runs the OS).
It is viable to do a master's thesis on a subject, and with tools, that you are unfamiliar with when you begin. However, it's a big chance to take!
Of course a master's thesis should be fun. But ultimately, it's imperative that you pass with distinction, and that might mean tackling a difficult subject that you have already mastered.
Equally important is your supervisor. It's imperative that you tackle some problem they show an interest in - one they are themselves familiar with - so that they can become interested in helping you get a great grade.
You've had lots of hobby time for scratching itches, you'll have lots more hobby time in the future too no doubt. But master thesis time is not the time for hobbies unfortunately.
Whilst GPUs today have immense computational power, they are, regardless of things like CUDA and OpenCL, limited to a restricted set of uses, whereas the CPU is more suited to computing general things, with extensions like SSE to speed up specific common tasks. If I'm not mistaken, some GPUs are unable to do a division of two floating-point numbers in hardware. Certainly things have improved greatly compared to 5 years ago.
It'd be impossible to develop a game to run entirely on a GPU - it would need the CPU at some stage to execute something. However, making a GPU perform more than just the graphics (and physics even) of a game would certainly be interesting, with the catch that PC game developers have the big issue of having to contend with a variety of machine specifications, and thus have to restrict themselves to incorporating backwards compatibility, complicating things. The architecture of a system will be a crucial issue - for example, the PlayStation 3 can do multiple gigabytes per second of throughput between the CPU and RAM and between the GPU and video RAM, yet the CPU accessing GPU memory peaks out just past 12 MiB/s.
The approach you may be looking for is called "GPGPU" for "General Purpose GPU". Good starting points may be:
http://en.wikipedia.org/wiki/GPGPU
http://gpgpu.org/
Rumors about spectacular successes in this approach have been around for a few years now, but I suspect that this will become everyday practice in a few years (unless CPU architectures change a lot, and make it obsolete).
The key here is parallelism: it pays off when you have a problem that can be spread across a large number of parallel processing units. Thus, neural networks or genetic algorithms may be a good range of problems to attack with the power of a GPU. Maybe also looking for vulnerabilities in cryptographic hashes (cracking DES on a GPU would make a nice thesis, I imagine :)). But problems requiring high-speed serial processing don't seem so well suited to the GPU, so emulating a GameBoy may be out of scope. (But emulating a cluster of low-power machines might be considered.)
I would think a project dealing with a game architecture that targets multi-core CPUs and GPUs would be interesting. I think this is still an area where a lot of work is being done. In order to take advantage of current and future computer hardware, new game architectures are going to be needed. I went to GDC 2008 and there were some talks related to this. Gamebryo had an interesting approach where they create threads for processing computations. You can designate the number of cores you want to use, so that you don't starve out other libraries that might be multi-core. I imagine the computations could be targeted at GPUs as well.
Other approaches included targeting different systems for different cores so that computations could be done in parallel. For instance, the first split one talk suggested was to put the renderer on its own core and the rest of the game on another. There are other, more complex techniques, but it all basically boils down to how you get the data around to the different cores.
I'm curious about a certain job title, that of "senior developer with a specialty in optimisation." It's not the actual title, but that's essentially what it would be. What would this mean in the gaming industry in terms of knowledge and skills? I would assume basic stuff like
B-trees
Path finding
Algorithmic analysis
Memory management
Threading (and related topics like thread safety, atomicity, etc)
But this is only me conjecturing. What would be the real-life (and academic) basic knowledge required for such a job?
I interviewed for such a position a few years ago at one of the Big North American game studios.
The job required a lot of deep pipeline assembly programming, arithmetic optimization algorithms (think Duff's Device, branchless ifs), compile-time computation, meta-template programming, and computation of many values at once in parallel in very large registers (SWAR, i.e. SIMD within a register). You'll need to be solid on operating system fundamentals, low-level system operations, linear algebra, and C++, especially templates. You'll also become very familiar with the peculiar architecture of the PlayStation 3, and probably be involved in developing libraries for that environment that the company's game teams will build on top of.
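To give a flavor of the "branchless ifs" part: on a deep in-order pipeline a mispredicted branch can cost dozens of cycles, so selects get rewritten as arithmetic. An illustrative sketch:

```cpp
#include <cstdint>

// Branchy version: the CPU must predict which way this goes.
int32_t max_branchy(int32_t a, int32_t b) {
    return (a > b) ? a : b;
}

// Branchless version: build an all-ones or all-zeros mask from the sign
// of the difference and blend, so the pipeline never has to guess.
int32_t max_branchless(int32_t a, int32_t b) {
    int32_t diff = a - b;      // caution: can overflow for extreme inputs;
    int32_t mask = diff >> 31; // arithmetic shift assumed -- this is a
    return a - (diff & mask);  // sketch of the technique, not hardened code
}
```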
Generally I concur with Ether's post; this will typically be more about low-level optimisation than algorithmic stuff. Knowing good algorithms comes in handy, but there are many cases in games where you prefer the O(N) solution over the O(logN) solution because the first is far friendlier on the cache and requires less memory management. So you need a more holistic knowledge.
Perhaps on a more general level, the job may want to know if you can do some or all of the following:
use a CPU profiler (eg. VTune, CodeAnalyst) in both sampling and call graph mode;
use graphical profilers (eg. Microsoft PIX, NVPerfHUD);
write your own profiling/timer code and generate useful output with it (a minimal sketch follows this list);
rewrite functions to remove dynamic memory allocations;
reorganise and reduce data to be more cache-friendly;
reorganise data to make it more SIMD-friendly;
edit graphics shaders to use fewer and cheaper instructions;
...and more, I'm sure.
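As promised above, a minimal version of roll-your-own timer code: an RAII scope timer around std::chrono (the class name is mine):

```cpp
#include <chrono>
#include <cstdio>

// RAII timer: construct at the top of a scope, prints elapsed time on exit.
class ScopedTimer {
public:
    explicit ScopedTimer(const char* label)
        : label_(label), start_(std::chrono::steady_clock::now()) {}
    ~ScopedTimer() {
        auto end = std::chrono::steady_clock::now();
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                      end - start_).count();
        std::printf("%s: %lld us\n", label_, static_cast<long long>(us));
    }
private:
    const char* label_;
    std::chrono::steady_clock::time_point start_;
};

// Usage: { ScopedTimer t("update_physics"); update_physics(); }
```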
This is a lot like my job actually. Real-life knowledge that would be practical for this:
Experience in using profilers of all kinds to locate bottlenecks.
Experience and skill in determining the reason those bottlenecks exist.
Good understanding of CPU caches, virtual memory, and common bottlenecks such as load-hit-store penalties, L2 misses, floating point code, etc.
Good understanding of multithreading and both lockless and locking solutions.
Good understanding of HLSL and graphics programming, including linear algebra.
Good understanding of SIMD techniques and the specific SIMD interfaces on relevant hardware (paired singles, VMX, SSE/MMX).
Good understanding of the assembly language used on relevant hardware. If writing assembly, a good understanding of instruction pairing, branch prediction, delay slots (if applicable), and any and all applicable stalls on the target platform.
Good understanding of the compilation and linking process, binary formats used on the target hardware, and tools to manipulate all of the above (including available compiler flags and optimizations).
Every once in a while people ask how to become good at low-level optimization. There are a few good sources of info, mostly proprietary, but I think it generally comes down to experience.
This is one of those "if you got it you know it" type of things. It's hard to list out specifics, and some studios will have different criteria than others.
To put it simply, the 'Senior Developer' part means you've been around the block; you have multiple years of experience in which you've excelled and have shipped games. You should have a working knowledge of a wide range of topics, with things such as memory management high up the list.
"Specialty in Optimization" essentially means that you know how to make a game run faster. You've already spent a significant amount of time successfully optimizing games which have shipped. You should have a wide knowledge of algorithms, 3d rendering (a lot of time is spent rendering), cpu intrinsics, memory management, and others. You should also typically have an in depth knowledge of the hardware you'd be working on (optimizing PS3 can be substantially different than optimizing for PC).
This is at best a starting point for understanding. The key is having significant real world experience in the topic; at a senior level it should preferably be from working on titles that have shipped.