Review Board 2.0.15


Models the request and response bandwidth between O3 and L1 (cache read and write ports)

Review Request #1422 - Created Sept. 14, 2012

Submitter: Amin Farmahini
Repository: gem5
Reviewers: Default
I made some changes to the O3 CPU to model the bandwidth between O3 and the L1 data cache. By bandwidth I mean the number of requests and responses sent or received each cycle, not the amount of data transferred. I limit both the number of requests sent by O3 and the number of responses received by O3.

For REQUESTS:
There are three counters: one for read requests (loads), one for write requests (stores), and one for shared requests (read/write).
LOADS: O3 limits the number of read requests sent each cycle to the number of cache read ports defined in the parameters.
STORES: Similarly, O3 limits the number of write requests sent each cycle to the number of cache write ports defined.
SHARED: Shared ports can be used for either read or write requests.
Note that no matter how many ports are defined, there is still only a single actual cache port used for all read and write requests; just like the current gem5 code, only one dcachePort is defined in the code. The port counts from the parameters are used only to limit the number of requests issued per cycle.
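The per-cycle limiting described above can be sketched roughly as follows. This is an illustrative standalone model, not the actual patch: the struct and method names (PortTracker, tryIssueLoad, tryIssueStore) are placeholders I made up, and in the real code the counters would live in the LSQ/dcachePort logic.

```cpp
#include <cassert>

// Hypothetical per-cycle cache-port tracker. A load first tries a dedicated
// read port, then a shared port; a store first tries a dedicated write port,
// then a shared port. When no port is free, the access stalls this cycle.
struct PortTracker {
    int readPorts;    // dedicated read (load) ports available per cycle
    int writePorts;   // dedicated write (store) ports available per cycle
    int sharedPorts;  // ports usable by either loads or stores
    int usedReads = 0, usedWrites = 0, usedShared = 0;

    PortTracker(int r, int w, int s)
        : readPorts(r), writePorts(w), sharedPorts(s) {}

    // Called at the start of every CPU cycle.
    void resetCounters() { usedReads = usedWrites = usedShared = 0; }

    // Returns true if the load may be sent to the cache this cycle.
    bool tryIssueLoad() {
        if (usedReads < readPorts) { ++usedReads; return true; }
        if (usedShared < sharedPorts) { ++usedShared; return true; }
        return false;  // out of read and shared ports: stall the load
    }

    // Returns true if the store may be sent to the cache this cycle.
    bool tryIssueStore() {
        if (usedWrites < writePorts) { ++usedWrites; return true; }
        if (usedShared < sharedPorts) { ++usedShared; return true; }
        return false;  // out of write and shared ports: stall the store
    }
};
```

With one port of each kind, two loads can issue in a cycle (one dedicated, one shared), after which a store can still use the dedicated write port.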

For RESPONSES:
If there are not enough cache ports, response packets are buffered in the port (on the cache side) and are sent to the processor the next cycle.
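A rough sketch of that response-side buffering, assuming a simple FIFO in the port; the names (ResponseBuffer, recvResponse, advanceCycle) are hypothetical and the real patch would operate on gem5 packets rather than plain integers:

```cpp
#include <cassert>
#include <queue>

// Illustrative model of buffering responses on the cache side when the CPU
// has already consumed all of its response ports this cycle, and draining
// the buffered responses first on the following cycle.
struct ResponseBuffer {
    int portsPerCycle;          // responses the CPU can accept per cycle
    int acceptedThisCycle = 0;
    std::queue<int> deferred;   // packets waiting for a free port

    explicit ResponseBuffer(int ports) : portsPerCycle(ports) {}

    // Cache hands over a response; returns true if delivered this cycle,
    // false if it had to be buffered in the port.
    bool recvResponse(int pkt) {
        if (acceptedThisCycle < portsPerCycle) {
            ++acceptedThisCycle;   // delivered to the processor immediately
            return true;
        }
        deferred.push(pkt);        // buffered, retried next cycle
        return false;
    }

    // At the start of the next cycle, buffered responses go first.
    void advanceCycle() {
        acceptedThisCycle = 0;
        while (!deferred.empty() && acceptedThisCycle < portsPerCycle) {
            deferred.pop();        // deliver a previously buffered response
            ++acceptedThisCycle;
        }
    }
};
```

Note that buffered responses consume the next cycle's ports, so a sustained burst of responses is smoothed out to at most portsPerCycle deliveries per cycle.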

I don't believe what I implemented is the best way to model cache ports here, so your feedback would be appreciated.
Testing: a few small benchmarks, run only in SE mode with the classic memory system.