I have an optimal resource allocation problem:
Let us say that I have a set of steps that execute one after the other (strictly in a pre-defined order). Each step consumes a fixed amount of memory and CPU capacity for a pre-specified duration. I also have an infinite set of machines on which to deploy and run these steps (each step is an independently deployable component). Each machine specifies its maximum CPU and memory capacity.
Given a throughput rate (the rate at which the first task is invoked), I want to be able to provide the ideal deployment strategy. How should I go about this?
This is what I can decipher from the problem statement, trying to rephrase it:
Given a graph G with a predefined order in which the steps must execute (say S1 > S2 > S3 > ... > Sk).
Each of these steps consumes a fixed CPU percentage (Ci) and takes a fixed time (Ti) to execute.
Instances of this graph are created at a fixed throughput of t per second (i.e., if t = 100, then 100 instances of this graph are created per second).
We need to allocate resources to these instances in such a way that all resources are fully and optimally utilized (i.e., the delay in allocating resources to any request must be minimized).
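One way to get a first-cut answer is Little's law: at throughput t, step Si has on average t*Ti executions in flight, so its steady-state demand is roughly t*Ti*Ci of CPU and t*Ti*Mi of memory (using Mi for the per-step memory from the original statement). Dividing by one machine's capacity gives a lower bound on machines per step. A minimal sketch in Python, with made-up step and machine figures, and ignoring queueing variance, bin packing across steps, and headroom:

    import math

    # Hedged sketch: steady-state sizing per step using Little's law.
    # Step figures and machine sizes below are made-up placeholders.

    def machines_needed(throughput, steps, machine_cpu, machine_mem):
        """steps: list of (cpu_cores, mem_gb, duration_s) per step.
        At steady state, step i has throughput * Ti executions in flight
        (Little's law); dividing the resulting CPU/memory demand by one
        machine's capacity gives a lower bound on machines for that step."""
        plan = []
        for cpu, mem, duration in steps:
            concurrent = throughput * duration           # avg in-flight executions
            machines = max(math.ceil(concurrent * cpu / machine_cpu),
                           math.ceil(concurrent * mem / machine_mem))
            plan.append(machines)
        return plan

    # Example: 100 graph instances/sec, three steps, machines with 8 cores / 32 GB.
    steps = [(0.5, 0.25, 0.2),   # S1: 0.5 core, 0.25 GB, 200 ms
             (1.0, 1.00, 0.5),   # S2
             (0.2, 0.10, 0.1)]   # S3
    print(machines_needed(100, steps, machine_cpu=8, machine_mem=32))   # -> [2, 7, 1]

Anything past this lower bound (bursty arrivals, failure headroom, packing several small steps onto shared machines) turns the problem into bin packing plus queueing, which is where the real work is.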
Related
Suppose that I need to design a web service. To keep it simple, assume that I use LAMP (Linux-Apache-MySQL-PHP).
I know that I will serve exactly N user requests per second. The requests are basically simple CRUD operations to the database, no file uploads or complex calculations.
Suppose that each request executes for M ms and takes K MB of memory on my server, which has G GB of RAM.
How many such servers do I need? Is it just N * K / G?
A reasonable value for M is 200 ms. What is a reasonable value for K?
Do we need to take CPU power into account in this question?
Any additional considerations?
What you're doing is a good back-of-the-envelope approximation, but by no means should you use this thought exercise as a definitive guide for scaling your service.
That is because no service exhibits the kind of constant behavior you describe (blame it on unpredictable peripheral I/O, garbage collection, external factors, user input, etc.).
The correct approach is to perform scale and load testing. After you've written your service, load test it and note its performance characteristics. If you do things right, you should reach a point where your configuration maxes out: the CPU, the network throughput, the memory, or disk I/O. If none of these are maxed out and you still hit a limit, then it's one of your upstream dependencies (your database, etc.).
Once you've reached your peak it will tell you how many requests per second you can handle at peak.
You will also notice that in most cases peak performance is not sustainable: your setup may be able to burst for short periods of time handling many more requests per second than under sustained load.
After you get the numbers for a single server, you can start to vary your setup in two ways:
test with different hardware configurations (add more RAM if you're memory bound, add a better CPU if you're CPU bound, etc)
test with multiple servers; start adding servers and see how your service scales horizontally
Ideally your service should scale linearly as you add servers but you will likely find that the performance curve is not linear.
Get your numbers, tweak your design. Rinse. Repeat.
There is no substitute, magic formula.
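For what a starting point looks like before any load testing: the back-of-envelope math in the question is really about concurrency, not N * K / G directly. With N requests/second each lasting M ms, about N*M/1000 requests are in flight at once (Little's law), and that is what the memory has to cover. A rough sketch, where every number is an assumption:

    import math

    # Rough starting-point math only; real capacity must come from load testing,
    # as the answer above stresses. Every number here is an assumption.
    N = 500                          # requests per second
    M = 200                          # ms per request (from the question)
    K = 20                           # MB per in-flight request (assumed)
    G = 8                            # GB of RAM per server

    concurrent = N * M / 1000.0      # avg requests in flight (Little's law), not N itself
    ram_needed_mb = concurrent * K
    usable_ram_mb = G * 1024 * 0.7   # leave ~30% for the OS, MySQL buffers, etc.
    servers_by_ram = math.ceil(ram_needed_mb / usable_ram_mb)

    print(f"~{concurrent:.0f} concurrent requests, ~{ram_needed_mb:.0f} MB of RAM, "
          f"{servers_by_ram} server(s) by memory alone")
    # CPU, network, and the database can each hit their limit before memory does,
    # so this bounds only one dimension of the problem.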
I have two questions:
Is it better to make a kernel overwork or underwork? Let's say I want to calculate a difference image with only 4 GPU cores. Should each pixel of my image be calculated independently by one thread, or should one thread calculate a whole line of my image? I don't know which solution is the most efficient. I already vectorized the first option (which was the one implemented), but I only gained a few ms; it is not very significant.
My second question is about the execution costs of a kernel. I know how to measure any OpenCL command queue task (copy, write, read, kernel...), but I think there is also time taken by the host to load the kernel onto the GPU cores. Is there any way to measure it?
Baptiste
(1)
Typically you'd process a single item in a kernel. If you process multiple items, you need to do them in the right order to ensure coalesced memory access or you'll be slower than doing a single item (the solution to this is to process a column per work item instead of a row).
Another reason why working on multiple items can be slower is that you might leave compute units idle. For example, if you process scanlines on a 1000x1000 image with 700 compute units, the work will be chunked into 700 work items and then only 300 work items (leaving 400 idle).
A case where you want to do lots of work in a single kernel is if you're using shared local memory. For example, if you load a look-up table (LUT) into SLM, you should use it for an entire scanline or image.
(2)
I'm sure this is a non-zero amount of time but it is negligible. Kernel code is pretty small. The driver handles moving it to the GPU, and also handles pushing parameter data onto the GPU. Both are very fast, and likely happen while other kernels are running, so are "free".
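To make both points concrete, here is a minimal sketch using pyopencl (my own example, not the poster's code): one work item per pixel for the difference image, plus OpenCL event profiling, where the gap from 'queued' to 'start' gives a rough upper bound on the host/driver launch overhead asked about in (2), and 'start' to 'end' is the kernel itself.

    import numpy as np
    import pyopencl as cl

    # Hedged sketch (not the poster's code): one work item per pixel, plus event
    # profiling to separate launch overhead from kernel execution time.
    ctx = cl.create_some_context()
    queue = cl.CommandQueue(
        ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)

    src = """
    __kernel void diff_image(__global const uchar *a,
                             __global const uchar *b,
                             __global uchar *out)
    {
        int gid = get_global_id(0);            // one work item per pixel
        out[gid] = abs_diff(a[gid], b[gid]);   // built-in |a - b| for integers
    }
    """
    prog = cl.Program(ctx, src).build()

    h_a = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)
    h_b = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)
    mf = cl.mem_flags
    d_a = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=h_a)
    d_b = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=h_b)
    d_out = cl.Buffer(ctx, mf.WRITE_ONLY, h_a.nbytes)

    evt = prog.diff_image(queue, (h_a.size,), None, d_a, d_b, d_out)
    evt.wait()

    # 'queued' -> 'start' is a rough upper bound on host/driver launch overhead;
    # 'start' -> 'end' is the kernel itself. Timestamps are in nanoseconds.
    print("launch overhead ~%.3f ms" % ((evt.profile.start - evt.profile.queued) * 1e-6))
    print("kernel time     ~%.3f ms" % ((evt.profile.end - evt.profile.start) * 1e-6))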
I have a quad-core computer, and I use the Parallel Computing Toolbox.
I set different values for the number of workers in the parallel computing settings, for example 2, 4, 8, and so on.
However, no matter what I set, the average CPU usage by MATLAB is exactly 25% of total CPU capacity, and none of the cores runs at 100% (all are around 10%-30%). I am using MATLAB to run an optimization problem, so I really want my quad-core computer to use all of its power for the computation. Please help.
Setting a number of workers (up to 4 on a quad-core) is not enough. You also need to use a command like parfor to signal to Matlab what part of the calculation should be distributed among the workers.
I am curious about what kind of optimization you're running. Normally, optimization problems are very difficult to parallelize, since the result of every iteration depends on the previous one. However, if you want to e.g. try and fit multiple models to the data, or if you have to fit multiple data sets, then you can easily run these in parallel as opposed to sequentially.
Note that having many cores may not be sufficient in terms of resources - if performing the optimization on one worker uses k GB of RAM, performing it on n workers requires at least n*k GB of RAM.
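To illustrate the "fit multiple models / multiple data sets" case, here is a rough sketch of the pattern. It is written in Python with multiprocessing purely to show the structure; in MATLAB the loop over data sets would be a parfor loop running on the pool of workers.

    from multiprocessing import Pool

    # Rough illustration of the pattern described above: independent fits run in
    # parallel. In MATLAB this would be a parfor loop over the data sets; Python's
    # multiprocessing is used here only to sketch the structure.

    def fit_one(dataset):
        # Stand-in for an expensive, independent optimization on one data set.
        return sum(x * x for x in dataset)

    if __name__ == "__main__":
        datasets = [range(1_000_000), range(2_000_000),
                    range(3_000_000), range(4_000_000)]
        with Pool(processes=4) as pool:      # one worker per core on a quad-core
            results = pool.map(fit_one, datasets)
        print(results)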
I'm writing a program using JOGL/OpenCL to utilize the GPU. I have code that kicks in for large data sizes and is supposed to detect the available memory on the GPU. If there is insufficient memory on the GPU to process the entire calculation at once, it breaks the work up into sub-processes of X frames each, chosen so that X frames use less than the maximum GPU global memory available for storage.
I had expected that using the maximum possible value of X would give me the largest speed-up by minimizing the number of kernels used. Instead, I found that using a smaller group (X/2 or X/4) gives me better speeds. I'm trying to figure out why breaking the GPU processing into smaller groups, rather than having the GPU process the maximum amount it can handle at one time, gives me a speed increase, and how I can determine what the best value of X is.
My current tests have been running on a GPU kernel which uses very little processing power (both kernels decimate the output by selecting part of the input and returning it). However, I am fairly certain the same effects occur when I activate all the kernels, which do a larger degree of processing on the values before returning them.
The short answer is, it's complicated. There are many factors at play. These include (but are not limited to):
Amount of local memory you are using.
Amount of private memory you are using.
A limit on the max number of work groups the streaming multiprocessor (SM) is able to handle at once.
Exceeding register limits, causing memory access slow-down.
And many more...
I recommend you check out the following link:
http://courses.engr.illinois.edu/ece498/al/textbook/Chapter5-CudaPerformance.pdf
In particular, check out section 5.3. Dynamic Partitioning of SM Resources. This text is meant to be general purpose, but uses CUDA for its examples. However, the concepts still apply just the same to OpenCL.
This text comes from the following book:
http://www.amazon.com/Programming-Massively-Parallel-Processors-Hands-/dp/0123814723/ref=sr_1_2?ie=UTF8&qid=1314279939&sr=8-2
For what it's worth, I found this book to be very informative. It will give you a deeper understanding of the hardware, which will allow you to answer questions like this.
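To make the dynamic-partitioning idea concrete, here is a toy occupancy calculation for a single compute unit / SM. All the per-SM limits are made-up illustrative numbers, not those of any particular GPU; the point is that whichever resource runs out first caps how many work groups can be resident at once.

    # Toy occupancy calculation for a single compute unit / SM. Every limit here
    # is a made-up illustrative number, not the value of any particular GPU.

    def work_groups_per_sm(threads_per_group, regs_per_thread, local_mem_per_group,
                           max_threads=1536, max_groups=8,
                           regs_per_sm=32768, local_mem_per_sm=48 * 1024):
        by_threads = max_threads // threads_per_group
        by_regs = regs_per_sm // (regs_per_thread * threads_per_group)
        by_local_mem = local_mem_per_sm // max(local_mem_per_group, 1)
        return min(max_groups, by_threads, by_regs, by_local_mem)

    # A few extra registers per thread can cut how many groups are resident at
    # once, which is one way a "bigger" launch ends up slower overall.
    print(work_groups_per_sm(256, 16, 4096))   # -> 6 groups resident
    print(work_groups_per_sm(256, 32, 4096))   # -> 4 (register pressure is the cap)

This kind of bookkeeping is one reason the largest possible X is not automatically the fastest.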
PCIe is full-duplex (bi-directional). I think that means you can write as you read, in which case, if you're doing very little processing, you may be seeing a gain because you're overlapping reads with writes.
Consider a total size of N. In one work unit you do:
write N
process N
read N
total time proportional to: process N, transfer 2N
If you split this in two with a parallel read/write you can get:
write N/2
process N/2
read N/2 and write N/2
process N/2
read N/2
total time proportional to: process N, transfer 3N/2 (saving N/2 transfer time)
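Extending that model (my extrapolation, not part of the answer above): with k chunks and each read overlapped with the next chunk's write, transfer time is proportional to N + N/k, which approaches N as k grows, while per-chunk launch overhead grows with k. A tiny sketch of that trade-off, with made-up rates, treating compute and transfer as additive for simplicity:

    # My extrapolation of the model above: k chunks, each read overlapped with the
    # next chunk's write, so transfer ~ N + N/k; per-chunk launch overhead grows
    # with k. Rates and overheads are made up for illustration.

    def total_time(n_bytes, k, process_rate, transfer_rate, launch_overhead):
        process = n_bytes / process_rate
        transfer = (n_bytes + n_bytes / k) / transfer_rate   # k=1 gives 2N, k=2 gives 3N/2
        return process + transfer + k * launch_overhead

    N = 512e6                        # 512 MB of data
    for k in (1, 2, 4, 8, 16, 32):
        t = total_time(N, k, process_rate=50e9, transfer_rate=12e9,
                       launch_overhead=50e-6)
        print(f"k={k:3d}  estimated time {t * 1e3:7.2f} ms")

Sweeping your X in a similar way, or simply timing a few candidate values, is probably the most practical way to find the sweet spot on real hardware.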
At work we perform demanding numerical computations.
We have a network of several Linux boxes with different processing capabilities. At any given time, there can be anywhere from zero to dozens of people connected to a given box.
I created a script to measure MFLOPS (Millions of Floating-Point Operations per Second) using the Linpack benchmark; it also reports the number of cores and the amount of memory.
I would like to use this information together with the load average (obtained using the uptime command) to suggest the best computer for performing a demanding computation. In other words, it's 3:00 pm; I have a meeting in two hours; I need to run a demanding process: which node will get me the answer fastest?
I envision a script which will output a suggestion along the lines of:
SUGGESTED HOSTS (IN ORDER OF PREFERENCE)
HOST1.MYNETWORK
HOST2.MYNETWORK
HOST3.MYNETWORK
Such a suggestion should favor fast computers (high MFLOPS) if the load average is low and, as the load average increases for a given node, it should favor available nodes instead (i.e., I'd rather run on a slower computer with no users than on an eight-core box with forty dudes logged in).
How should I prioritize? What algorithm (rationale) would you use? Again, what I have is:
Load Average (1min, 5min, 15min)
MFLOPS measure
Number of users logged in
RAM (installed and available)
Number of cores (important to normalize the load average)
Any thoughts? Thanks!
You don't have enough data to make a well-informed decision. It sounds as though the scheduling is very volatile: "At any given time, there can be anywhere from zero to dozens of people connected to a given box." So the current load does not necessarily reflect the future load of the machines.
To properly assess which hosts someone should use to minimize computation time would require knowing when the current jobs will terminate. If a powerful machine is about to be done with most of its jobs, it would be a good candidate even though it currently has a high load.
If you want to guess purely based on the current situation, you can do a weighted calculation to find out which hosts have the most MFLOPS available.
MFLOPS available = host's MFLOPS * (number of logical processors - load average) / number of logical processors
Sort the hosts by MFLOPS available and suggest them in descending order.
This formula assumes that the MFLOPS of a host is linearly related to its load average. This might not be exactly true, but it's probably fairly close.
I would favor the most recent load average since it's closest to the current/future situation, whereas jobs from 15 minutes ago might have completed by now.
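A minimal sketch of that ranking, assuming the proportional formula above; the host names, field layout, and numbers are hypothetical:

    # Hedged sketch of the ranking described above. Host names, fields, and
    # numbers are hypothetical; the formula scales peak MFLOPS by idle capacity.

    hosts = [
        # (name, peak_mflops, logical_cores, load_avg_1min)
        ("host1.mynetwork", 48000, 8, 6.5),
        ("host2.mynetwork", 20000, 4, 0.3),
        ("host3.mynetwork", 32000, 8, 2.0),
    ]

    def mflops_available(peak, cores, load):
        idle_fraction = max(0.0, (cores - load) / cores)   # clamp if overloaded
        return peak * idle_fraction

    ranked = sorted(hosts, key=lambda h: mflops_available(h[1], h[2], h[3]),
                    reverse=True)

    print("SUGGESTED HOSTS (IN ORDER OF PREFERENCE)")
    for name, peak, cores, load in ranked:
        print(f"  {name}  (~{mflops_available(peak, cores, load):,.0f} MFLOPS available)")

With these sample numbers, the lightly loaded hosts outrank the fast-but-busy host1, which is exactly the behavior asked for in the question.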
Have you considered a distributed approach to computation? Not all computations can be broken up such that more than one CPU can work on them, but perhaps your problem space can benefit from some parallelization. Have a look at Hadoop.
You don't need to know FLOPS. The Beowulf-style parallel computing center I go to has a script for this, for sure.