> To emulate DL workloads without actual GPUs, we replace GPU-related steps (steps #3–5, and Step 2 if GPU-based) with simple sleep(T) calls, where T represents the projected time for each step.
This is a model (of the GPU arch/system/runtime/etc.) being used to feed downstream analysis. Pretty silly: if you're going to model these things (which are extremely difficult to model!), you should at least have real GPUs around to calibrate and recalibrate the model.
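For concreteness, the emulation amounts to something like the sketch below: a toy training loop where the GPU-bound steps are swapped for sleeps of their projected durations. The step names and times here are made up for illustration, not taken from the paper.

```python
# Toy sketch of sleep-based GPU emulation; all names and projected
# durations are hypothetical, not from the paper.
import time

# Hypothetical projected durations (seconds) for the GPU-bound steps
# that get replaced with sleeps.
PROJECTED_TIMES = {
    "forward": 0.012,
    "backward": 0.025,
    "optimizer_step": 0.004,
}

def emulated_gpu_step(name: str) -> None:
    """Stand-in for a GPU-bound step: just sleep for its projected time."""
    time.sleep(PROJECTED_TIMES[name])

def emulated_training_iteration() -> None:
    # CPU-side work (data loading, preprocessing) would still run for real;
    # only the GPU-bound steps are emulated.
    emulated_gpu_step("forward")
    emulated_gpu_step("backward")
    emulated_gpu_step("optimizer_step")

if __name__ == "__main__":
    start = time.perf_counter()
    for _ in range(10):
        emulated_training_iteration()
    print(f"emulated 10 iterations in {time.perf_counter() - start:.3f}s")
```

The fidelity of anything built on top of this obviously hinges entirely on how good those projected times are, which is exactly the calibration problem raised above.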