
Amdahl's Law

Speedup of Parallel Execution

info

Definition: Speedup is defined as:

$$\text{speedup} = \frac{\text{time of serial execution}}{\text{time of parallel execution}}$$

Note that the time of serial execution should be the time of the best serial algorithm, not of a serial run of the parallel program itself: a badly written program often shows an impressive speedup simply because its baseline is slow.
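A minimal sketch of this caveat, using made-up timings: comparing against a slow serial version of the parallelized code inflates the speedup, while comparing against the best serial algorithm gives the honest number.

```python
# Hypothetical timings, for illustration only.
def speedup(t_serial: float, t_parallel: float) -> float:
    return t_serial / t_parallel

t_best_serial = 10.0    # best known serial algorithm (assumed)
t_naive_serial = 40.0   # slow serial version of the parallel program (assumed)
t_parallel = 5.0        # measured parallel execution time (assumed)

print(speedup(t_naive_serial, t_parallel))  # 8.0 -- looks impressive
print(speedup(t_best_serial, t_parallel))   # 2.0 -- the honest speedup
```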

Suppose that a program is composed of a serial phase and a parallel phase. The whole program runs for $1$ time unit: the serial phase runs for time $s$, and the parallel phase for time $1-s$.

Amdahl's Law

note

Amdahl's law: Regardless of how many processors $N$ are used, the execution time of the program will be at least $s$, and the speedup will be no more than $\frac{1}{s}$.
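A small sketch of this bound, assuming the model above (serial fraction $s$, ideal scaling of the parallel phase, and a made-up value $s = 0.1$): the speedup on $N$ processors is $\frac{1}{s + \frac{1-s}{N}}$, which approaches but never exceeds $\frac{1}{s}$.

```python
# Amdahl speedup for a program of total serial time 1,
# with serial fraction s and parallel fraction 1 - s.
def amdahl_speedup(s: float, n: int) -> float:
    return 1.0 / (s + (1.0 - s) / n)

s = 0.1  # assumed serial fraction
for n in (1, 2, 8, 64, 1024):
    print(n, round(amdahl_speedup(s, n), 3))
# As n grows, the speedup approaches, but never exceeds, 1/s = 10.
```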

Again, suppose a machine with $N$ processors, where the serial phase takes time $1$ and the parallel phase takes time $p$ when run on a single processor. Therefore:

  • For the parallel phase, it takes $\frac{p}{N}$ on this machine.
  • For the serial phase, it only runs on a single processor and takes time $1$. Let $\alpha$ be the ratio of serial time to total execution time:

$$\alpha = \frac{1}{1+\frac{p}{N}} = \frac{N}{N+p}$$

From this, $\lim_{ N \to \infty } \alpha = 1$. In other words, with an unlimited number of processors the time spent in the parallel phase, $\frac{p}{N}$, theoretically shrinks to $0$, so the total execution time approaches the serial time alone, which is exactly the limit Amdahl's law describes.
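A quick numerical sketch of this model (the value $p = 9$ is an arbitrary assumption): as $N$ grows, $\alpha$ tends to $1$ and the total time $1 + \frac{p}{N}$ tends to the serial time $1$.

```python
# Ratio alpha = serial time / total time for the model above:
# serial phase takes 1, parallel phase takes p on one processor, p / n on n.
def alpha(p: float, n: int) -> float:
    return 1.0 / (1.0 + p / n)  # equivalently n / (n + p)

p = 9.0  # assumed single-processor time of the parallel phase
for n in (1, 10, 100, 10_000):
    print(n, round(alpha(p, n), 4), round(1.0 + p / n, 4))
# alpha -> 1 and the total time -> 1 (the serial time) as n grows,
# because the parallel phase's contribution p / n -> 0.
```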

warning

Adding cores (processors) causes overhead: the processors need to communicate to exchange information with each other, or they need to synchronize. Therefore, the actual calculation of speedup should include an overhead term $o(P)$ caused by using $P$ processors. The overhead term does not benefit from parallel execution, so it is often treated as a constant. However, it sometimes becomes significant when it grows dramatically with the number of processors. Refer to the green line in the figure.
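A sketch of how the overhead term changes the picture, under assumed numbers (parallel work $p = 9$, an overhead that is roughly constant versus one that grows linearly with the processor count):

```python
# Speedup with an overhead term o(P): the serial phase takes 1,
# the parallel phase takes p / P, and communication/synchronization adds o(P).
def speedup_with_overhead(p: float, n_procs: int, overhead) -> float:
    t_serial = 1.0 + p                                   # one processor
    t_parallel = 1.0 + p / n_procs + overhead(n_procs)   # n_procs processors
    return t_serial / t_parallel

def flat_overhead(n: int) -> float:
    return 0.5          # roughly constant overhead (assumption)

def growing_overhead(n: int) -> float:
    return 0.05 * n     # overhead growing with the processor count (assumption)

p = 9.0
for n in (2, 8, 32, 128):
    print(n,
          round(speedup_with_overhead(p, n, flat_overhead), 2),
          round(speedup_with_overhead(p, n, growing_overhead), 2))
# With a growing overhead the speedup peaks and then falls off,
# because o(P) eventually dominates the shrinking p / P term.
```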

info

Definition: Efficiency is defined as:

$$e = \frac{\text{speedup}}{\text{number of processors}}$$

However, for a fixed problem size, $\lim_{ N \to \infty } e = 0$: adding more processors drops the efficiency, so processors are being wasted. In practice, though, as $N$ increases, $p$ tends to increase too (larger machines are used on larger problems), so the fraction of time $1-\alpha$ does not necessarily shrink with increasing $N$, and the efficiency $e$ remains reasonable.
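A sketch contrasting the two situations, using the earlier model and assumed numbers ($p = 9$ fixed, versus $p$ growing linearly with $N$):

```python
# Efficiency e = speedup / N for the model with serial time 1 and
# parallel work p: speedup = (1 + p) / (1 + p / n).
def efficiency(p: float, n: int) -> float:
    speedup = (1.0 + p) / (1.0 + p / n)
    return speedup / n

for n in (1, 8, 64, 512):
    fixed = efficiency(9.0, n)        # problem size fixed (p = 9)
    scaled = efficiency(9.0 * n, n)   # parallel work grows with n (assumption)
    print(n, round(fixed, 3), round(scaled, 3))
# With fixed p, efficiency drops toward 0 as n grows;
# if p grows with n, efficiency stays close to 1.
```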

Gustafson's Law (scaled speedup)

Another law that measures speedup.

note

Amdahl’s Law is pessimistic, focusing on the inherent limitations due to the non-parallelizable portion of the workload. Gustafson’s Law is more optimistic, suggesting that increasing the problem size with more processors allows for greater speedup. Amdahl’s Law is more applicable when the problem size is fixed. Gustafson’s Law applies better when the problem size can scale with the number of processors.

How much bigger can the problem be scaled so that it takes the same time for a given number of processors to solve it?
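One standard way to answer this is Gustafson's scaled-speedup formula, stated here for reference: if the serial fraction measured on the $N$-processor run is $s$ (so the parallel fraction is $1-s$), the same work run serially would take $s + (1-s)N$, giving a scaled speedup of

$$S(N) = s + (1-s)N = N - (N-1)s,$$

which grows linearly with $N$ instead of saturating at $\frac{1}{s}$.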