Amazon AppFlow: Bidirectional sync between two Salesforce orgs


I need to create a bidirectional sync between two Salesforce orgs using Amazon AppFlow, and all related records need to stay in sync between the orgs as well. I'm thinking of creating an External ID on the records of every object that has a flow, so that relationships are preserved across the orgs. What is the best way to do a bidirectional sync?

AWS already has a fairly similar article on this: https://aws.amazon.com/blogs/apn/using-amazon-appflow-to-achieve-bi-directional-sync-between-salesforce-and-amazon-rds-for-postgresql/

My architecture below is fully serverless, though, because I'm cheap and don't want to pay for EC2.

(architecture diagram)

I recommend a single source of truth for whatever you're doing. I'd personally use a centralized DynamoDB table holding all the fields/values you intend to sync per object. Event-driven Lambdas can then push the data to CSVs in S3, and AppFlow picks up those CSV updates and pushes them to Salesforce for you.

You should have a single DynamoDB table for all of this (or a separate table per object, but I'm not seeing the advantage of multiple tables). You also only need one S3 bucket, just with multiple folders.
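
If you're scripting the setup, a rough boto3 sketch could look like this. The table and bucket names are made up; the stream is enabled because the Lambdas further down hang off it:

import boto3

dynamodb = boto3.client("dynamodb")
s3 = boto3.client("s3")

# Hypothetical table name; the stream feeds the sync Lambda later on
dynamodb.create_table(
    TableName="sf-sync-records",
    AttributeDefinitions=[{"AttributeName": "ID", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "ID", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_IMAGE"},
)

# One bucket for everything; regions outside us-east-1 also need a
# CreateBucketConfiguration with a LocationConstraint
s3.create_bucket(Bucket="sf-sync-bucket")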

Your DB structure would look something like this:

{
    "ID": "randomly generated GUID",
    "SF_ID": "Salesforce ID in the source org",
    "DEST_SF_ID": "SF ID once the record is created in the other org",
    "SOURCE_ORG": "SOURCE_ORG_ID",
    "record_details": {
        "...": "all the SF fields"
    }
}
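
For illustration, inserting one of those items with boto3 might look like the following; the table name, IDs, and field values are all invented:

import uuid
import boto3

table = boto3.resource("dynamodb").Table("sf-sync-records")  # hypothetical name

table.put_item(Item={
    "ID": str(uuid.uuid4()),           # randomly generated GUID
    "SF_ID": "001XXXXXXXXXXXXXXX",     # ID in the source org
    "DEST_SF_ID": None,                # filled in once the other org creates it
    "SOURCE_ORG": "SF_ORG_1",
    "record_details": {"Name": "Acme", "Phone": "555-0100"},  # made-up fields
})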

S3 Folder Structure:

root/
    SF_ORG_1/
        Inbound/
        Outbound/
    SF_ORG_2/
        Inbound/
        Outbound/

You'd need a Lambda to consume the DynamoDB trigger (stream) events and work out which S3 bucket folder to push to.
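
A minimal sketch of that Lambda, assuming the table/bucket names above and assuming that CSVs destined for an org land in that org's Outbound folder:

import csv
import io
import boto3
from boto3.dynamodb.types import TypeDeserializer

s3 = boto3.client("s3")
deserializer = TypeDeserializer()
BUCKET = "sf-sync-bucket"  # hypothetical

def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] not in ("INSERT", "MODIFY"):
            continue
        # Convert the stream image into plain Python values
        image = {k: deserializer.deserialize(v)
                 for k, v in record["dynamodb"]["NewImage"].items()}
        # Route the CSV to the *other* org's Outbound folder
        dest_org = "SF_ORG_2" if image["SOURCE_ORG"] == "SF_ORG_1" else "SF_ORG_1"
        fields = image["record_details"]
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=list(fields))
        writer.writeheader()
        writer.writerow(fields)
        s3.put_object(Bucket=BUCKET,
                      Key=f"{dest_org}/Outbound/{image['ID']}.csv",
                      Body=buf.getvalue())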

You'd need another Lambda to consume the S3 bucket events. You can have simple branching in that one Lambda to know that S3_Bucket_Folder_1 is from Org_1 and S3_Bucket_Folder_2 is from Org_2. This Lambda would sync up DynamoDB, which in turn knows to push a CSV to the other bucket folder.
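
A sketch of that second Lambda, again with hypothetical bucket/table names and made-up CSV column names (External_ID__c, Id):

import csv
import io
import urllib.parse
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("sf-sync-records")  # hypothetical

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Branch on the folder to figure out which org the CSV came from
        if key.startswith("SF_ORG_1/Inbound/"):
            source_org = "SF_ORG_1"
        elif key.startswith("SF_ORG_2/Inbound/"):
            source_org = "SF_ORG_2"
        else:
            continue  # ignore anything outside the Inbound folders
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        for row in csv.DictReader(io.StringIO(body)):
            # This put_item fires the DynamoDB stream, which pushes a CSV
            # to the other org's Outbound folder via the Lambda above
            table.put_item(Item={
                "ID": row.get("External_ID__c") or row["Id"],  # hypothetical columns
                "SF_ID": row["Id"],
                "SOURCE_ORG": source_org,
                "record_details": row,
            })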

To make sure you don't get cyclical calls between the Lambdas, keep separate directories for inbound and outbound pushes. The flows let you set the bucket prefix.
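
On the S3 side, one way to enforce that split is to attach the trigger with prefix filters, so only the Inbound folders invoke the Lambda and the CSVs dropped into Outbound never retrigger the pipeline. The function ARN and bucket name below are placeholders:

import boto3

s3 = boto3.client("s3")
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:sf-sync-inbound"  # hypothetical

s3.put_bucket_notification_configuration(
    Bucket="sf-sync-bucket",  # hypothetical
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": FUNCTION_ARN,
                "Events": ["s3:ObjectCreated:*"],
                # Only objects under <org>/Inbound/ fire the Lambda
                "Filter": {"Key": {"FilterRules": [
                    {"Name": "prefix", "Value": f"{org}/Inbound/"},
                ]}},
            }
            for org in ("SF_ORG_1", "SF_ORG_2")
        ]
    },
)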

Then you just listen for create, update, and delete events. I personally haven't dealt with deletion events in AppFlow, but worst case you can create a Connected App and call delete through the Salesforce REST API.
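
If you do end up calling the REST API for deletes, a bare-bones sketch looks like this. The instance URL, token, and record ID are placeholders, and getting the token from the Connected App's OAuth flow is left out:

import requests

INSTANCE_URL = "https://yourorg.my.salesforce.com"   # hypothetical org
ACCESS_TOKEN = "<OAuth token from the Connected App>"

def delete_record(sobject, record_id):
    # DELETE /services/data/vXX.X/sobjects/<SObject>/<Id> returns 204 on success
    url = f"{INSTANCE_URL}/services/data/v58.0/sobjects/{sobject}/{record_id}"
    resp = requests.delete(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()

delete_record("Account", "001XXXXXXXXXXXXXXX")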