Convergence
- Shared address space and message passing are two distinct programming models, but the underlying architectures have converged and look very similar
- Both have a communication assist or network interface to initiate messages or transactions
- In shared memory this assist is integrated with the memory controller
- In message passing this assist was traditionally integrated with the I/O subsystem, but the trend is changing
- There are message passing machines where the assist sits on the memory bus or machines where DMA over network is supported (direct transfer from source memory to destination memory)
- Finally, it is possible to emulate send/recv on shared memory through shared buffers and flags
- It is also possible to emulate a shared virtual memory on message passing machines through modified page fault handlers
Data parallel architectures
- Array of processing elements (PEs)
- Each PE operates on a data element within a large matrix
- The operation is normally specified by a control processor
- Essentially, single-instruction-multiple-data (SIMD) architectures
- So the parallelism is exposed at the data level
- Processor arrays were displaced by vector processors in the mid-70s
- Vector processors provide a more general framework to operate on large matrices in a controlled fashion
- No need to design a specialized processor array in a certain topology
- Advances in VLSI circuits in the mid-80s led to the design of large arrays of single-bit PEs
- Also, arbitrary communication (rather than just nearest neighbor) was made possible
- Gradually, this architecture evolved into SPMD (single-program-multiple-data)
- All processors execute the same copy of a program in a more controlled fashion
- But parallelism is expressed by partitioning the data
- Essentially, the same as the way shared memory or message passing machines are used for running parallel applications