I have a flat file (.csv) source with two fields and a workflow with 5 sessions. Each session should pick 200 records, so 1000 records in total are updated to the web service per run. In the second run it should pick records 1001 to 2000 (these are sample counts). How do I limit the rows in each session, and how do I make each run pick records different from the previous run?

wf_start time-> session1(flatfile)-->s2,s3,s4,s5,s6(concurrent sessions)

Each concurrent session runs the same mapping, with a filter condition to limit the rows it picks up.

Is there a way to configure the workflow so that the next run fetches another set of 1000 records from the same file?
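For the per-session row limit, one common pattern is to number the source rows with a Sequence Generator and filter on a range. A sketch of the Filter transformation condition, assuming a mapping parameter `$$SESSION_NO` (an illustrative name, set to 1 through 5 in each session's parameter file) and `NEXTVAL` from a reusable Sequence Generator:

```
-- Filter condition (sketch): session $$SESSION_NO keeps its own
-- block of 200 rows, e.g. session 3 keeps rows 401-600
NEXTVAL >  ($$SESSION_NO - 1) * 200
AND NEXTVAL <= $$SESSION_NO * 200
```

This only partitions one run's rows across the 5 sessions; picking a *different* 1000 rows on the next run still needs the control-table approach described in the answer below.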

There is 1 solution below.


You need a control table to store the keys of the 1000 rows that were already processed.

The main mapping will have this new table as an additional target, and the same table will be used as a lookup for filtering. The new table needs only the key column of the main source plus an "already processed" flag. Your mapping will look like this:

SQ --> EXP (lookup on new table) --> FIL (pass if lookup returns NULL) --> original target
                                                                       --> new table

The lookup is on the key column, with the condition already_processed <> 'Yes'. Rows whose key is not yet in the control table return NULL from the lookup, pass the filter, get sent to the original target, and are written to the control table so the next run skips them.
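The control-table logic can be sketched outside Informatica to see why successive runs pick different rows. A minimal Python simulation (the names `next_batch` and `processed_keys` are illustrative, not part of any tool; the set stands in for the control table):

```python
def next_batch(source_rows, processed_keys, batch_size=1000):
    """Return the next unprocessed batch and record its keys.

    source_rows: iterable of (key, value) pairs from the flat file.
    processed_keys: set acting as the control table (the "lookup").
    """
    batch = []
    for key, value in source_rows:
        if key in processed_keys:      # lookup hit -> filtered out
            continue
        batch.append((key, value))
        processed_keys.add(key)        # write key to the control table
        if len(batch) == batch_size:
            break
    return batch

# Simulate two workflow runs over the same 2500-row file.
rows = [(str(i), f"val{i}") for i in range(1, 2501)]
processed = set()
run1 = next_batch(rows, processed)   # keys 1..1000
run2 = next_batch(rows, processed)   # keys 1001..2000
print(run1[0][0], run1[-1][0])       # 1 1000
print(run2[0][0], run2[-1][0])       # 1001 2000
```

The same file is read on every run; only the growing control table changes which rows survive the filter, which is exactly what the lookup-and-filter mapping above does.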