I've been wondering how to properly time the driving of input signals to a device under test in a SystemVerilog testbench. For example, here is an extract of a testbench for a simple sequential DUT with clk and rst inputs:
always #1 clk = ~clk;

initial begin
  clk = 0;
  rst = 1;
  @(posedge clk);
  @(posedge clk);
  rst = 0;
  #50 $finish;
end
I have to use @(posedge clk) twice to ensure that rst is seen as asserted by the DUT; with a single @(posedge clk), the device fails to reset and its output remains X for the duration of the simulation. Another workaround is to use @(negedge clk) to time the deassertion of rst.
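For reference, the @(negedge clk) variant I mentioned looks like this (same clock generator as above):

initial begin
  clk = 0;
  rst = 1;
  @(negedge clk);  // deassert halfway between sampling edges
  rst = 0;
  #50 $finish;
end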
These workarounds work, but they feel hacky rather than the proper way of doing things. I've been reading up on the scheduling of events in the different regions of a simulation time slot in clause 4 of IEEE Std 1800-2017, and I have tried various combinations of blocking and nonblocking assignments, to no avail.
So my question is: how (if it is possible at all) can I schedule the driving of signals just after or just before a clock edge?
It is perfectly normal to assert reset for more than a single clock cycle, especially when not every piece of logic is tied to the reset signal.
You should be using nonblocking assignments to drive your testbench signals at the clock edge. This avoids races with your DUT: the nonblocking update is scheduled in the NBA region of the time slot, after the DUT's clocked processes have already sampled the old value in the Active region, whereas a blocking assignment made right after @(posedge clk) may execute before or after the DUT's always block, which is exactly why your single-posedge version sometimes fails to reset. The same rule applies to clocked signals within your design.
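Here is a minimal sketch of the stimulus rewritten with nonblocking assignments (the DUT instance and its port names are assumptions, not from your code). With nonblocking drives, even a single @(posedge clk) before deassertion would be race-free; this version holds reset across two edges, per the point above:

module tb;
  logic clk, rst;

  // my_dut dut (.clk(clk), .rst(rst) /* ... */);  // hypothetical instance

  always #1 clk = ~clk;

  initial begin
    clk = 0;
    rst <= 1;                  // nonblocking drive of a DUT input
    repeat (2) @(posedge clk); // hold reset across two sampling edges
    rst <= 0;                  // update lands in the NBA region, after the
                               // DUT's flops have sampled rst = 1 at this edge
    #50 $finish;
  end
endmodule

Because the deassertion of rst is deferred to the NBA region, the DUT's flops reliably see rst = 1 at the edge where it is released, regardless of the order in which the simulator happens to evaluate the testbench and DUT processes.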