kvm: Support timing accesses for KVM cpu
Review Request #3619 - Created Aug. 16, 2016 and submitted
| Information | |
|---|---|
| Submitter | Michael LeBeane |
| Repository | gem5 |
| Branch | default |
| Reviewers | Default |
Changeset 11561:4595cc3848fc

---

kvm: Support timing accesses for KVM cpu

This patch enables timing accesses for the KVM CPU. A new state, RunningMMIOPending, is added to indicate that there are outstanding timing requests generated by KVM in the system. KVM's tick() is disabled, and the simulation does not re-enter KVM until all outstanding timing requests have completed. The main motivation for this is to allow the KVM CPU to perform MMIO in Ruby, since Ruby does not support atomic accesses.
Thanks for implementing this! Overall, I think this is the right approach. However, I think you could refactor it a bit to make it cleaner. You could move most of the new packet handling logic into the KVMCpuPort; this would largely hide the fact that it supports both timing and atomic accesses from the rest of the CPU. I'd suggest something along these lines:
* `Tick submitIO(pkt)`: Send the packet (and clean up storage in the atomic case). Return the delay (0 for timing).
* `Status nextIOState()`: Return RunningMMIOPending if there are pending timing accesses, RunningServiceCompletion otherwise.
* `recvReqRetry()`: Resubmit the deferred packet.
* `recvTimingResp(pkt)`: Clean up pkt and try to submit the next deferred packet. When the last packet has received a response, call cpu->finishTiming(), which updates the state of the CPU.

Using these helper functions, the MMIO handlers would be almost identical to before. Instead of calling port->submitAtomic(pkt) you'd call port->submitIO(pkt). Before exiting the IO handler, you set the state to port->nextIOState().
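A rough sketch of how such a port could look, modeled as a self-contained class rather than a real gem5 port (the `Packet` struct, the `trySend` callback, and the `mmioDone` flag are stand-ins for gem5's `sendTimingReq()` and a callback into the CPU, so all names here are illustrative, not the actual gem5 API):

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <string>

// Stand-in for gem5's Packet; illustrative only.
struct Packet { std::string label; };

// Hypothetical CPU states matching the ones discussed above.
enum class Status { RunningServiceCompletion, RunningMMIOPending };

// Minimal model of the suggested KVMCpuPort helpers. The real port would
// derive from a gem5 port class and call sendTimingReq(); here a boolean
// callback stands in for the peer so the sketch is self-contained.
class KVMCpuPort {
  public:
    explicit KVMCpuPort(std::function<bool(Packet*)> trySend)
        : trySend(std::move(trySend)) {}

    // Submit a packet, queueing it FIFO behind any deferred packets.
    // Returns the simulated delay (always 0 in timing mode).
    long submitIO(Packet* pkt) {
        pending.push_back(pkt);
        if (!waitingForRetry && pending.size() == 1)
            trySubmit();
        return 0;
    }

    // State the CPU should enter when leaving the MMIO handler.
    Status nextIOState() const {
        return pending.empty() ? Status::RunningServiceCompletion
                               : Status::RunningMMIOPending;
    }

    // The peer signalled that it can accept a request again.
    void recvReqRetry() {
        waitingForRetry = false;
        trySubmit();
    }

    // A response arrived: retire the oldest packet, send the next deferred
    // one, and flag completion when the queue drains (the real code would
    // call back into the CPU to update its state instead).
    void recvTimingResp(Packet* pkt) {
        assert(!pending.empty() && pending.front() == pkt);
        pending.pop_front();
        delete pkt;
        if (pending.empty())
            mmioDone = true;
        else
            trySubmit();
    }

    bool mmioDone = false;

  private:
    void trySubmit() {
        if (!trySend(pending.front()))
            waitingForRetry = true;  // resend only from recvReqRetry()
    }

    std::deque<Packet*> pending;   // head is in flight; rest are deferred
    bool waitingForRetry = false;
    std::function<bool(Packet*)> trySend;
};
```

With this split, an MMIO handler would just call submitIO() for each access and return nextIOState() on exit, regardless of whether the CPU is wired to an atomic or a timing memory system.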
-
src/cpu/kvm/base.cc (Diff revision 1) -
I think you need to submit the next deferred packet here.
-
src/cpu/kvm/base.cc (Diff revision 1) -
Refactor this into a helper function that takes a packet and returns a delay. You're using the exact same code when handling an IO instruction on x86.
-
src/cpu/kvm/x86_cpu.cc (Diff revision 1) -
Since you potentially submit multiple timing packets, you can't reuse the req without ending up in a double-free situation.
Overall it looks great. I have a few concerns regarding packet ordering and flow-control handling.
-
src/cpu/kvm/base.hh (Diff revision 2) -
a stack?
-
src/cpu/kvm/base.hh (Diff revision 2) -
unsigned int?
-
src/cpu/kvm/base.cc (Diff revision 2) -
If we already have queued packets, shouldn't this one be added FIFO?
Also, if we are waiting for a retry, we should not attempt to send a new packet.
-
src/cpu/kvm/base.cc (Diff revision 2) -
This is odd: we did not receive a retry, we received a response.
We should not call sendTiming again until we get a retry; it only wastes cycles, as we will make no progress.
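To illustrate the rule (all names below are made up for this sketch, not gem5's API): once a send has been rejected, the sender must sit idle until recvReqRetry() arrives; a response for an earlier packet does not grant permission to resend, so re-sending at that point can never succeed and only burns cycles:

```cpp
#include <cassert>

// Minimal model of the timing-port flow-control rule: after a rejected
// send, wait for recvReqRetry(); a recvTimingResp() for an older packet
// does not clear the blocked state.
struct Sender {
    bool blocked = false;     // a send was rejected; waiting for a retry
    int sendAttempts = 0;     // counts calls that actually reach the peer

    bool trySend(bool peerReady) {
        if (blocked)
            return false;      // no retry yet: don't waste a send attempt
        ++sendAttempts;
        if (!peerReady) {
            blocked = true;    // peer said no; wait for recvReqRetry()
            return false;
        }
        return true;
    }

    void recvReqRetry(bool peerReady) {
        blocked = false;       // only here may the deferred packet go out
        trySend(peerReady);
    }

    void recvTimingResp() {
        // A response retires an older packet; it does NOT unblock sends.
    }
};
```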
-
src/cpu/kvm/base.cc (Diff revision 2) -
Conceptually this is not very nice, as it assumes infinite throughput.
I see how we save an event this way, but I am tempted to suggest using an event instead.
-
src/cpu/kvm/base.cc (Diff revision 2) -
I would prefer if we could keep the bypass caches check. Classic memory will definitely break if it isn't executing in cache bypass mode. I don't know if it is worth fixing, but I also don't like the fact that we can get really weird errors if this is removed.
I'm not sure how this would interact with your Ruby patches though. You might need to add a timing cache bypass mode.
-
src/cpu/kvm/base.cc (Diff revision 2) -
This is a bit of a nit, but would it be possible to move the state transitions into this switch statement instead of distributing them? I think that would keep the state machine more maintainable in the future.
You'll probably need to call the handler before updating the state in that case.
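One way to structure that (all names here are illustrative stand-ins, not gem5's actual exit-handling code): run the handler first, then compute the next state in the switch itself, so every transition is visible in one place:

```cpp
#include <cassert>

// Illustrative states and exit reasons; not gem5's actual enums.
enum class Status { RunningService, RunningServiceCompletion,
                    RunningMMIOPending };
enum class ExitReason { IO, MMIO, Hlt };

// Stand-in for the KVM CPU port, which knows whether timing MMIO
// packets are still in flight.
struct FakePort {
    bool pendingMMIO = false;
    Status nextIOState() const {
        return pendingMMIO ? Status::RunningMMIOPending
                           : Status::RunningServiceCompletion;
    }
};

// Handlers run first; the switch then owns every state transition,
// instead of each handler setting the CPU state as a side effect.
Status handleExit(ExitReason reason, FakePort& port) {
    switch (reason) {
      case ExitReason::IO:
      case ExitReason::MMIO:
        // The IO/MMIO handler would run here and may queue timing packets.
        return port.nextIOState();
      case ExitReason::Hlt:
        return Status::RunningServiceCompletion;
    }
    return Status::RunningServiceCompletion;  // unreachable fallback
}
```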
Well, having played around with this solution for KVM MMIO + Ruby a bit, it does work (with the addition of a timing_noncacheable state, as Andreas S. noted), but not very well. If you put the system in timing mode you get events like DRAM refresh that make it so you can't stay in KVM very long, which kinda defeats the purpose. Any non-hacky ideas on how to get around this?
Ship It!
Andreas H., were your concerns resolved on this patch? Thanks!
Looks great. Thanks for addressing all the comments.
