Module 4: "Recap: Virtual Memory and Caches"
  Lecture 8: "Cache Hierarchy and Memory-level Parallelism"
 

MLP and memory wall

  • Today's microprocessors try to hide cache misses by initiating prefetches early:
    • Hardware prefetchers try to predict the next several load addresses and initiate a cache-line prefetch for each one that is not already in the cache
    • Processors today also support software prefetch instructions, so you can specify in your program when to prefetch what; this gives much finer control than a hardware prefetcher
  • Researchers are also working on load value prediction
  • Even with all these techniques, memory latency remains the biggest bottleneck
  • Microprocessors today are trying to overcome one single wall: the memory wall