mem: MSHR livelock bug fix
Review Request #2784, created May 11, 2015
| Information | |
|---|---|
| Submitter | Tony Gutierrez |
| Repository | gem5 |
| Branch | default |
| Reviewers | Default |
Changeset 10841:b015addd7b9d
---------------------------
mem: Ruby response timing

This patch ensures that Ruby responses to the CPU core are not unnecessarily
delayed. The original code delays Ruby responses by a tick, causing the core
to receive them a cycle later rather than in the same cycle. As a result, the
throughput of back-to-back stores that hit in the L1 is reduced by half,
because the O3 CPU must wait for the acknowledgement of a prior store before
issuing the next store. This patch eliminates the performance bug.

This patch was created by Bihn Pham during his internship at AMD.
While Andreas Hansson should have a better understanding of whether this is correct or not, the explanation does not read like a livelock to me. It just seems that the second store is being delayed. Livelock means that some superficial work is going on even though actual progress is not being made.
I do not understand the livelock scenario based on the description. Is the patch trying to allow multiple loads/stores per cycle, or is it fixing an issue?
This is dangerous.
The whole idea is that we do not send things in 0 time (infinite throughput). Admittedly the +1 is a poor man's version of a delta-delay, but I fear this interacts with a lot of things. What is the impact on (classic) cache performance, the other CPUs, etc.?
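The delta-delay alternative the comment alludes to can be sketched as follows (a hypothetical event queue, not gem5's actual `EventQueue`): events scheduled at the same tick are ordered by a monotonically increasing sequence number, so causality between a request and its response is preserved without spending a whole tick of simulated time, and without allowing true zero-time round trips to reorder pending events.

```python
import heapq
import itertools

class DeltaQueue:
    """Event queue with delta ordering: ties at the same tick are
    broken by insertion order, mimicking a delta-delay instead of
    the '+1 tick' workaround discussed in the review."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # delta counter within a tick

    def schedule(self, tick, event):
        # Same-tick events keep FIFO order via the sequence number.
        heapq.heappush(self._heap, (tick, next(self._seq), event))

    def pop(self):
        tick, _, event = heapq.heappop(self._heap)
        return tick, event

q = DeltaQueue()
q.schedule(100, "resp")  # response scheduled "now" at tick 100
q.schedule(100, "ack")   # a later same-tick event stays behind it
print(q.pop())  # (100, 'resp')
print(q.pop())  # (100, 'ack')
```

The design point is that simulated time never advances for same-tick dependencies, yet throughput is not infinite in the sense that matters: event ordering within a tick is still well defined.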
