Message passing
- Very popular for large-scale computing
- The system architecture looks exactly the same as DSM, but there is no shared memory
- The user interface is via send/receive calls to the message layer
- The message layer is integrated into the I/O system instead of the memory system
- Send specifies a local data buffer that needs to be transmitted; send also specifies a tag
- A matching receive at the destination node with the same tag copies the data from the kernel-space buffer into user memory
- Effectively, provides a memory-to-memory copy
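As a concrete illustration of this interface, here is a minimal sketch in MPI. MPI is used only as a modern stand-in (the machines discussed in these notes had their own vendor message layers), and the tag value, buffer size, and ranks are arbitrary:

```c
/* Minimal sketch of the tagged send/receive interface, using MPI as a
 * modern stand-in for the vendor message layers discussed in the notes. */
#include <mpi.h>

#define TAG 42   /* arbitrary tag; the receive must match it */
#define N   1024 /* arbitrary buffer size */

int main(int argc, char **argv)
{
    double buf[N];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Send names a local buffer, a length, a destination, and a tag */
        MPI_Send(buf, N, MPI_DOUBLE, 1, TAG, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* The matching receive (same tag) copies the data, staged in a
         * kernel/system buffer, into this user-space buffer: effectively
         * a memory-to-memory copy between the two nodes */
        MPI_Recv(buf, N, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}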
- Actual implementation of message layer
- Initially it was very topology dependent
- A node could talk only to its neighbors through FIFO buffers
- These buffers were small, so a send would occasionally block waiting for the receive to start draining the buffer (synchronous message passing)
- Soon the FIFO buffers were replaced by DMA (direct memory access) transfers, so that a send could initiate a transfer from memory to the I/O buffers and return immediately (the DMA happens in the background); the same applies at the receiving end
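The two behaviors just described map roughly onto the send modes of modern MPI; a hedged sketch, again with MPI standing in for the old vendor layers:

```c
/* Sketch of the two send behaviors, with MPI send modes standing in for
 * the old FIFO-based and DMA-based message layers. */
#include <mpi.h>

void send_both_ways(double *buf, int n, int dest, MPI_Comm comm)
{
    /* Synchronous send: like the small-FIFO case above, this blocks
     * until the matching receive has started draining the data */
    MPI_Ssend(buf, n, MPI_DOUBLE, dest, 0, comm);

    /* Non-blocking send: like the DMA case, this only initiates the
     * transfer and returns immediately; the copy proceeds in the
     * background while the processor keeps computing */
    MPI_Request req;
    MPI_Isend(buf, n, MPI_DOUBLE, dest, 1, comm, &req);

    /* ... overlap useful computation here ... */

    /* buf must not be modified until the transfer completes */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}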
- The parallel algorithms were designed specifically for certain topologies: a big problem for portability across machines
- To improve usability of machines, the message layer started providing support for arbitrary source and destination (not just nearest neighbors)
- Essentially this involved storing a message at intermediate “hops” and forwarding it to the next node on the route
- Later this store-and-forward routing moved into hardware, where a switch could handle all the routing activity
- Further improved to pipelined wormhole routing, so that the time taken to traverse the intermediate hops became small compared to the time needed to push the message from the processor into the network (limited by node-to-network bandwidth); the latency model sketched below quantifies the difference
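In the usual first-order latency model, store-and-forward pays the full message transmission time L/B at every one of h hops, while wormhole routing pays it only once plus a small per-hop routing delay r. A sketch with made-up parameter values:

```c
/* First-order routing-latency model; all parameter values below are
 * made up for illustration, not measurements of any real machine. */
#include <stdio.h>

int main(void)
{
    double L = 4096.0;  /* message size, bytes (assumed) */
    double B = 100e6;   /* link bandwidth, bytes/s (assumed) */
    double r = 1e-6;    /* per-hop routing delay, seconds (assumed) */
    int    h = 8;       /* number of hops (assumed) */

    /* Store-and-forward: every hop stores the whole message */
    double t_sf = h * (L / B + r);

    /* Wormhole: the message is pipelined through the hops, so only the
     * header pays the per-hop delay and L/B is paid once */
    double t_wh = h * r + L / B;

    printf("store-and-forward: %.2f us\n", t_sf * 1e6);
    printf("wormhole:          %.2f us\n", t_wh * 1e6);
    return 0;
}
```

With these assumed numbers, store-and-forward takes about 336 µs against about 49 µs for wormhole routing: the wormhole latency is dominated by the L/B term, i.e., by the node-to-network bandwidth, as noted above.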
- Examples include IBM SP2, Intel Paragon
- Each node of the Paragon had two i860 processors, one of which was dedicated to servicing the network (send/receive, etc.)