CSC 456 Fall 2013/4a bc
Revision as of 15:50, 31 October 2013
Load Balancing
In multi-processor systems, load balancing is used to break up and distribute the work load among individual processors in order to make effective use of processor time. When the work load is divided at compile time, the load is said to be statically balanced; dividing the work load at run time is dynamic load balancing. Static load balancing has lower overhead because the work is divided before the program runs. Dynamic load balancing assigns work as processors become idle, which incurs greater overhead; however, it can improve performance, because handing work to a processor the moment it becomes idle reduces the overall idle time of the processors.
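The contrast can be sketched in a few lines of Python; the chunking scheme and the shared queue here are illustrative assumptions, not a specific system's API:

```python
from queue import Queue

# Static: the work is partitioned once, before execution begins.
def static_partition(tasks, num_procs):
    """Split tasks into num_procs contiguous chunks ahead of time."""
    chunk = -(-len(tasks) // num_procs)  # ceiling division
    return [tasks[i * chunk:(i + 1) * chunk] for i in range(num_procs)]

# Dynamic: workers pull the next task from a shared queue when idle.
def dynamic_feed(tasks):
    """Load all tasks into a queue; each worker calls q.get() on becoming idle."""
    q = Queue()
    for t in tasks:
        q.put(t)
    return q

# Ten tasks, three processors: the static split is fixed up front,
# while the dynamic queue hands out tasks one at a time on demand.
print(static_partition(list(range(10)), 3))  # → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```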
==Static vs. Dynamic Techniques==
===Static Load Balancing===
====Round Robin====
Round robin is a load balancing technique that evenly distributes tasks across the available processors. The processors are lined up and handed tasks one after another; after the last processor, assignment loops back to the first. Visualize a dealer in a casino passing out cards to each player around the table, one at a time. The advantage is that this technique is very simple to implement, with very little overhead. The disadvantage is that no attention is paid to job size or processor performance. This can create problems if an unlucky processor is continually assigned large tasks, causing it to fall behind.
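The card-dealing analogy above can be written as a short sketch; the task list and processor count are hypothetical inputs:

```python
from itertools import cycle

def round_robin_assign(tasks, num_procs):
    """Deal tasks to processors in order, like a dealer passing out cards."""
    assignment = {p: [] for p in range(num_procs)}
    # cycle() loops 0, 1, ..., num_procs-1 forever; zip stops when tasks run out.
    for proc, task in zip(cycle(range(num_procs)), tasks):
        assignment[proc].append(task)
    return assignment

# Eight tasks dealt across three processors:
print(round_robin_assign(list("ABCDEFGH"), 3))
# → {0: ['A', 'D', 'G'], 1: ['B', 'E', 'H'], 2: ['C', 'F']}
```

Note that nothing here inspects task size, which is exactly the weakness described above: processor 0 falls behind if tasks A, D, and G all happen to be large.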
====Random====
Random load balancing relies on the hope that, over enough time, workloads are evenly spread by random chance. Random is fairly easy to implement with little overhead. Generating good "random" values is one challenge, because the function is called so many times that any bias will have a large effect. Random suffers from the same drawbacks as round robin, though: there is always the chance that a certain processor is picked unusually often, leading to wait times for other processors, or that multiple large tasks happen to land on a single processor, which likewise leads to uneven load balancing.
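A minimal sketch of the idea, using Python's standard `random` module as the source of "random" values (the seed parameter is an assumption added here to make runs reproducible):

```python
import random

def random_assign(tasks, num_procs, seed=None):
    """Assign each task to a uniformly random processor."""
    rng = random.Random(seed)
    assignment = {p: [] for p in range(num_procs)}
    for task in tasks:
        assignment[rng.randrange(num_procs)].append(task)
    return assignment

# Over 1000 tasks on 4 processors, the per-processor counts are usually
# close to 250 each, but nothing guarantees it on any particular run.
counts = [len(v) for v in random_assign(range(1000), 4, seed=42).values()]
print(counts)
```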
====Central Manager====
===Dynamic Load Balancing===
====Local Queue====
Under local queue management, each processor is responsible for maintaining a sufficient work load. When its load drops below a threshold, the processor's load manager fires off a request to the load manager of another, randomly chosen processor. The remote load manager receiving the request examines its own work load and, if it has sufficient surplus, sends work to the requesting load manager.
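The request/response exchange above might be sketched like this; the threshold value, class layout, and the rule that a peer keeps at least a threshold's worth of work for itself are all assumptions for illustration:

```python
import random

WORK_THRESHOLD = 4  # hypothetical low-water mark for a processor's queue

class LoadManager:
    """Per-processor load manager for the local-queue scheme (sketch)."""

    def __init__(self, name, work=None):
        self.name = name
        self.queue = list(work or [])

    def check_load(self, peers, rng=random):
        # When our load drops below the threshold, ask a random peer for work.
        if len(self.queue) < WORK_THRESHOLD and peers:
            rng.choice(peers).handle_request(self)

    def handle_request(self, requester):
        # Send work only if we have more than enough for ourselves.
        surplus = len(self.queue) - WORK_THRESHOLD
        if surplus > 0:
            requester.queue.extend(self.queue[-surplus:])
            del self.queue[-surplus:]
```

For example, a manager holding 10 tasks that receives a request keeps 4 (its threshold) and ships the other 6 to the requester.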
====Central Queue====
Under the central queue algorithm, a centralized work load manager is responsible for distributing work to the processors. The central manager is aware of all work to be distributed. When a processor's load falls below a threshold, a request for more work is sent to the central load manager, which then distributes more work. If there is not enough work in the central queue to meet the demand, the request is buffered until enough work becomes available to satisfy it.
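The buffering behavior is the interesting part, so a sketch can focus on it; the method names and the per-processor queue lists are assumptions:

```python
from collections import deque

class CentralManager:
    """Central work queue that buffers requests when no work is available (sketch)."""

    def __init__(self, work=None):
        self.work = deque(work or [])
        self.pending = deque()  # buffered requests awaiting work

    def request_work(self, processor_queue):
        # A processor below its threshold hands in its queue to be refilled.
        if self.work:
            processor_queue.append(self.work.popleft())
        else:
            self.pending.append(processor_queue)  # buffer until work arrives

    def add_work(self, task):
        # Newly arriving work first satisfies any buffered request.
        if self.pending:
            self.pending.popleft().append(task)
        else:
            self.work.append(task)
```

A request made against an empty central queue simply waits in `pending`; the next `add_work` call routes the new task straight to the waiting processor.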
==Real World applications of Load Balancing==
==Examples of Load Balancing in action==
===Server Load balancing pseudocode===
server_load_vec_desc = sort_descending(server_load_vec);
server_load_vec_asc = sort_ascending(server_load_vec);
while (server_load_vec_desc[0].deviation > DEVIATION_THRESHOLD) {
    populate_range_load_vector(server_load_vec_desc[0].server_name);
    range_load_vec = sort_descending(range_load_vec);
    i = 0;
    while (server_load_vec_desc[0].deviation > DEVIATION_THRESHOLD &&
           i < range_load_vec.size()) {
        if (moving range_load_vec[i] from server_load_vec_desc[0] to server_load_vec_asc[0] reduces deviation) {
            add range_load_vec[i] to balance plan;
            partial_deviation = range_load_vec[i].loadestimate * loadavg_per_loadestimate;
            server_load_vec_desc[0].loadavg   -= partial_deviation;
            server_load_vec_desc[0].deviation -= partial_deviation;
            server_load_vec_asc[0].loadavg    += partial_deviation;
            server_load_vec_asc[0].deviation  += partial_deviation;
            server_load_vec_asc = sort_ascending(server_load_vec_asc);
        }
        i++;
    }
    if (i == range_load_vec.size()) {
        remove server_load_vec_desc[0] and the corresponding entry in server_load_vec_asc;
    }
    server_load_vec_desc = sort_descending(server_load_vec_desc);
}
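Translated into runnable Python, the plan-building loop above might look like the following. The data layout (servers as dicts with `name`, `loadavg`, `deviation`, and a `ranges` list), the concrete threshold value, and the "reduces deviation" test are assumptions, since the pseudocode leaves them abstract:

```python
DEVIATION_THRESHOLD = 0.1  # hypothetical threshold

def build_balance_plan(servers, loadavg_per_loadestimate):
    """Build a list of (range, source, destination) moves that shift load
    from the most-loaded server to the least-loaded one."""
    plan = []
    desc = sorted(servers, key=lambda s: s["deviation"], reverse=True)
    asc = sorted(servers, key=lambda s: s["deviation"])
    while desc and desc[0]["deviation"] > DEVIATION_THRESHOLD:
        src = desc[0]
        # Consider the source's heaviest ranges first.
        ranges = sorted(src["ranges"], key=lambda r: r["loadestimate"], reverse=True)
        i = 0
        while src["deviation"] > DEVIATION_THRESHOLD and i < len(ranges):
            dst = asc[0]
            partial = ranges[i]["loadestimate"] * loadavg_per_loadestimate
            # Move only if it shrinks the gap between source and destination
            # (one plausible reading of "reduces deviation").
            if dst is not src and dst["deviation"] + partial < src["deviation"]:
                plan.append((ranges[i], src["name"], dst["name"]))
                src["loadavg"] -= partial
                src["deviation"] -= partial
                dst["loadavg"] += partial
                dst["deviation"] += partial
                asc.sort(key=lambda s: s["deviation"])
            i += 1
        if i == len(ranges):
            # Every range was considered: stop treating this server as a source.
            asc.remove(src)
            desc.pop(0)
        desc.sort(key=lambda s: s["deviation"], reverse=True)
    return plan
```

With two servers at deviations +2.0 and -2.0 and two unit-load ranges on the overloaded one, the sketch produces a two-move plan that brings both deviations to zero.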