Effect of request randomization before and after start_item() call


I am trying to implement a UVM driver for a simple pipelined model using semaphores, fork-join, and get()/put() methods in the run_phase of the driver.
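For context, a pipelined driver of the kind described above is often structured roughly as follows. This is a hedged sketch, not the poster's actual code: the names `packet`, `pipelined_driver`, `drive_addr_phase`, and `drive_data_phase` are assumptions, and the two semaphores model two overlapping pipeline stages.

```systemverilog
// Sketch of a pipelined driver run_phase (assumed names throughout).
// seq_item_port.get() takes the item and immediately unblocks the
// sequence's finish_item(), which is what allows the next item to be
// prepared while this one is still being driven.
class pipelined_driver extends uvm_driver #(packet);
  `uvm_component_utils(pipelined_driver)

  semaphore addr_lock = new(1);   // serializes the address phase
  semaphore data_lock = new(1);   // serializes the data phase

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get(req);     // accept the next item right away
      fork
        begin
          automatic packet item = req;
          addr_lock.get();
          drive_addr_phase(item); // stage 1 (hypothetical task)
          addr_lock.put();
          data_lock.get();
          drive_data_phase(item); // stage 2 (hypothetical task)
          data_lock.put();
        end
      join_none                   // don't wait: overlap with the next item
    end
  endtask
endclass
```

The key design point is using `get()` rather than `get_next_item()`/`item_done()`: with `get()`, the handshake with the sequence completes as soon as the item is taken, so successive transactions can overlap in the fork.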

The driver works correctly only if I code the sequence in a particular way. As far as I know, the body task is normally coded as below:

Code1:
pkt = packet::type_id::create("pkt");  // Factory-create the sequence item
for (int i = 0; i < num_trans; i++)    // Repeat as required
  begin
    assert(pkt.randomize());           // Randomize the sequence item
    start_item(pkt);                   // Send the request to the driver
    finish_item(pkt);                  // Wait for the driver to finish the current item
  end

With the above style, no pipelining is achieved, and moreover the data beat corresponding to the first transaction packet is lost. When the randomization is invoked after start_item(), the testbench works as expected.

Code2:
pkt = packet::type_id::create("pkt");
for (int i = 0; i < num_trans; i++)
  begin
    start_item(pkt);
    assert(pkt.randomize());
    finish_item(pkt);
  end

I'd like to know the difference between coding styles 1 and 2.

1 Answer

This might be happening because inside the start_item() task we wait on the following call:

sequencer.wait_for_grant(this, set_priority);

So we wait for the sequencer to grant the sequence, and only then is the sequence_item taken by the driver. But if you do the following:

assert(pkt.randomize());  // Randomize the sequence item
start_item(pkt);          //Send the request to Driver. 

the randomization can effectively be lost: start_item() may block waiting for the sequencer to become free, and the values randomized before that wait can be overwritten or go stale before the driver actually consumes the item. Randomizing after start_item() (so-called late randomization) guarantees the item is randomized at the moment the driver is ready for it.
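A related fix worth noting (an assumption on my part, not from the original post): because both code styles reuse a single pkt handle across iterations, re-randomizing before start_item() can modify an object the driver is still consuming in a pipelined testbench. Creating a fresh item each iteration avoids that aliasing entirely:

```systemverilog
// Sketch: allocate a new sequence item per iteration so the object
// handed to the driver is never re-randomized while still in flight.
for (int i = 0; i < num_trans; i++) begin
  pkt = packet::type_id::create($sformatf("pkt_%0d", i));
  start_item(pkt);           // block until the sequencer grants access
  assert(pkt.randomize());   // late randomization: values are fresh
  finish_item(pkt);          // hand off; driver may pipeline internally
end
```

This combines late randomization with per-item allocation, which is a common pattern for pipelined protocols.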

For further reading, this article might help: https://verificationacademy.com/forums/uvm/startitem/finishitem-versus-uvmdo-macros