Thread-level Parallelism
- Look for concurrency at a granularity coarser than instructions
- Put a chunk of consecutive instructions together and call it a thread (an oversimplification, but a useful starting point!)
- Each thread can be seen as a “dynamic” subgraph of the sequential control-flow graph: take a loop and unroll its graph
- The edges spanning the subgraphs represent data dependence across threads
(the control dependence edges are usually converted to data dependence edges through suitable transformations)
- The goal of parallelization is to minimize such edges
- Threads should mostly compute independently on different cores; but need to talk once in a while to get things done!
- Parallelizing sequential programs is fun, but often tedious for non-experts
- So look for parallelism at even coarser grain
- Run multiple independent programs simultaneously
- Known as multi-programming
- The biggest reason why everyday Windows users would buy small-scale multiprocessors and multi-core machines today
- Can play games while running heavyweight simulations and downloading movies
- Have you seen the state of the poor machine when running anti-virus?