Throughput metrics
- Sometimes metrics like jobs/hour may be more important than just the turn-around time of a job
- This is the case for transaction processing (the biggest commercial application for servers)
- The goal is to serve as many transactions as possible in a given time, provided the time per transaction stays reasonable
- Transactions are largely independent; so throw in as many hardware threads as possible
- Known as throughput computing (a small sketch follows below)
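
A minimal sketch of throughput computing, assuming a C program with POSIX threads (the thread count and the transaction body are illustrative placeholders, not from the notes): a fixed pool of workers each processes independent transactions, and we report transactions/second rather than the turnaround time of any single transaction.

/* Sketch: independent transactions spread across hardware threads;
 * the metric printed is throughput (transactions/second). */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define N_THREADS 4            /* assumed number of hardware threads */
#define TXNS_PER_THREAD 100000

static void do_transaction(void)
{
    /* Placeholder for one independent unit of work (e.g., a DB transaction). */
    volatile double x = 0.0;
    for (int i = 0; i < 1000; i++)
        x += i * 0.5;
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < TXNS_PER_THREAD; i++)
        do_transaction();
    return NULL;
}

int main(void)
{
    pthread_t tid[N_THREADS];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < N_THREADS; i++)
        pthread_join(tid[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double total = (double)N_THREADS * TXNS_PER_THREAD;
    printf("%.0f transactions in %.2f s -> %.0f txn/s\n", total, secs, total / secs);
    return 0;
}

Compile with -lpthread; adding more hardware threads should raise the reported transactions/second even though each individual transaction takes the same time.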
Application trends
- Requirements at or below 1 GFLOPS
- 2D airfoil, oil reservoir modeling, 3D plasma modeling, 48-hour weather
- Requirements below 100 GFLOPS
- Chemical dynamics, structural biology, 72-hour weather
- Tomorrow’s applications (beyond 100 GFLOPS)
- Human genome, protein folding, superconductor modeling, quantum chromodynamics, molecular geometry, real-time vision and speech recognition, graphics, CAD, space exploration, global-warming etc.
- Insatiable demand for CPU cycles (need large-scale supercomputers); a quick rate estimate is sketched below
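
To make the GFLOPS figures concrete, a back-of-the-envelope sketch (the operation count is an assumed, illustrative number, not from the notes): the sustained rate an application needs is its total floating-point operation count divided by its deadline.

/* Sketch: required sustained FLOP rate = total operations / deadline.
 * The 1e14 operation count is an assumption chosen for illustration. */
#include <stdio.h>

int main(void)
{
    double total_flops = 1e14;            /* assumed operations for one forecast run */
    double deadline_s  = 48.0 * 3600.0;   /* 48-hour weather window */
    printf("Required sustained rate: %.2f GFLOPS\n",
           total_flops / deadline_s / 1e9);
    return 0;
}

With these assumed numbers the run needs roughly 0.58 GFLOPS sustained, which is why such a workload sits in the "at or below 1 GFLOPS" category above.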
Commercial sector
- Slightly different story
- Transactions per minute (tpm) is the figure of merit (a rough estimate is sketched after this list)
- Scale of computers is much smaller
- 4P machines to maybe 32P servers
- But use of parallelism is tremendous
- Need to serve as many transaction threads as possible (maximize the number of database users)
- Need to handle large data footprint and offer massive parallelism (also economics kicks in: should be low-cost)
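
A rough sketch of the tpm arithmetic (illustrative numbers only, not from the notes): by Little's law, throughput is approximately the number of concurrent users divided by the time per transaction, assuming each user issues a new transaction as soon as the previous one completes.

/* Sketch: tpm from concurrent users and per-transaction latency.
 * Both inputs are assumed, illustrative values. */
#include <stdio.h>

int main(void)
{
    double users     = 2000.0;   /* assumed concurrent database users */
    double latency_s = 0.05;     /* assumed time per transaction (50 ms) */
    double tps = users / latency_s;
    printf("~%.0f transactions/s = ~%.0f tpm\n", tps, tps * 60.0);
    return 0;
}

This is why maximizing the number of concurrent database users while keeping per-transaction time reasonable translates directly into tpm.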
Desktop market
- Demand is to improve throughput for sequential, multiprogrammed workloads
- I want to run as many simulations as I can and have them all finish before I come back the next morning (see the sketch after this list)
- Possibly the biggest application for small-scale multiprocessors (e.g. 2 or 4-way SMPs)
- Even on a uniprocessor machine I would be happy if I could play AOE without affecting the performance of my simulation running in the background (simultaneous multi-threading and chip multi-processing; more later)
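
A sketch of the multiprogrammed overnight batch (the simulation body is an assumed placeholder, not from the notes): independent simulation runs launched as separate processes, which a 2- or 4-way SMP (or an SMT/CMP core) can overlap, so more jobs finish per night even though no single job runs faster.

/* Sketch: a multiprogrammed batch of independent simulations as processes;
 * the OS scheduler overlaps them on a small-scale multiprocessor. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define N_JOBS 4

static void run_simulation(int id)
{
    /* Placeholder for one independent simulation run. */
    volatile double x = 0.0;
    for (long i = 0; i < 100000000L; i++)
        x += i * 1e-9;
    printf("simulation %d done\n", id);
}

int main(void)
{
    for (int i = 0; i < N_JOBS; i++) {
        pid_t pid = fork();
        if (pid == 0) {           /* child runs one job */
            run_simulation(i);
            _exit(0);
        }
    }
    for (int i = 0; i < N_JOBS; i++)
        wait(NULL);               /* parent collects all jobs */
    printf("all %d jobs finished\n", N_JOBS);
    return 0;
}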