GPG Decrypt using AWS Transfer Family and Preserve Folder Structure


I am trying to decrypt a file uploaded via SFTP to an S3 bucket while preserving the folder structure of the S3 key.

I have a GPG-encrypted file being uploaded via SFTP to an S3 bucket. The customer uploads files with a certain folder structure (which I am relying on for metadata), so they might upload a file like customer/folder1/file1.xlsx.gpg or customer/folder2/file2.xlsx.gpg.

I want to decrypt these files so that their S3 keys become customer/folder1/file1.xlsx and customer/folder2/file2.xlsx, but ${Transfer:User Name} is the only variable I can see for parameterizing the file location of the decrypt step, so I end up with customer/file1.xlsx and customer/file2.xlsx and lose the folder structure.

Is there a way to do this?


For anyone else running into this limitation of AWS Transfer Family, the solution I came up with is to do the decryption in a Lambda function instead:

1. Store the GPG private keys as secrets, looked up based on the folder structure of the .gpg file's key.
2. Process the S3 event trigger sent when a .gpg file is placed in the bucket.
3. Read the .gpg file from S3 as a stream and decrypt it using a Python GPG client and the stored key.
4. Store the decrypted file back in the S3 bucket, preserving the folder structure.

A second S3 trigger is sent upon creation of the decrypted file, and my Lambda can then process that trigger and handle the decrypted file normally.
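A minimal sketch of that Lambda, assuming python-gnupg for decryption and boto3 in the Lambda environment; the bucket layout matches the question, but the secret-naming scheme and helper names here are my own, not part of the original setup:

```python
def decrypted_output_key(gpg_key: str) -> str:
    """Map 'customer/folder1/file1.xlsx.gpg' -> 'customer/folder1/file1.xlsx',
    keeping the full folder structure."""
    if not gpg_key.endswith(".gpg"):
        raise ValueError(f"not a .gpg object: {gpg_key}")
    return gpg_key[: -len(".gpg")]


def secret_name_for(gpg_key: str) -> str:
    """Look up the decryption key by the top-level folder (the customer).
    The 'gpg/<customer>' naming scheme is hypothetical."""
    customer = gpg_key.split("/", 1)[0]
    return f"gpg/{customer}"


def handler(event, context):
    # boto3 and gnupg are imported lazily so the pure helpers above can be
    # exercised without an AWS environment; python-gnupg must be bundled
    # with the Lambda deployment package.
    import boto3
    import gnupg

    s3 = boto3.client("s3")
    secrets = boto3.client("secretsmanager")
    gpg = gnupg.GPG(gnupghome="/tmp/gnupg")  # /tmp is the writable dir in Lambda

    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if not key.endswith(".gpg"):
            continue  # the second trigger (the decrypted file) is handled elsewhere

        # Fetch and import the customer's private key from the secret store.
        private_key = secrets.get_secret_value(
            SecretId=secret_name_for(key))["SecretString"]
        gpg.import_keys(private_key)

        # Stream the encrypted object and decrypt it.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"]
        result = gpg.decrypt_file(body)
        if not result.ok:
            raise RuntimeError(f"decryption failed for {key}: {result.status}")

        # Write the plaintext back under the same prefix, minus the .gpg suffix.
        s3.put_object(Bucket=bucket, Key=decrypted_output_key(key),
                      Body=result.data)
```

Filtering on the `.gpg` suffix inside the handler also guards against the second trigger re-invoking the decrypt path on the plaintext object.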

I have discovered that with the Python API for S3 (boto3) you can store metadata with an object, but I don't believe metadata can be set when a file is placed via SFTP. So I think I'm stuck relying on folder structure for metadata.
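For reference, this is how object metadata can be attached with boto3 when you do control the upload; the metadata fields derived from the key path here are hypothetical, just illustrating the folder-structure fallback:

```python
def metadata_from_key(key: str) -> dict:
    """Derive metadata from a key like 'customer/folder1/file1.xlsx'
    (the fallback when metadata can't be set at upload time)."""
    parts = key.split("/")
    return {"customer": parts[0], "folder": "/".join(parts[1:-1])}


def put_with_metadata(s3_client, bucket: str, key: str, body: bytes) -> None:
    # User-defined metadata is stored on the object as x-amz-meta-* headers
    # and comes back in the Metadata dict of get_object / head_object.
    s3_client.put_object(Bucket=bucket, Key=key, Body=body,
                         Metadata=metadata_from_key(key))
```

Since the SFTP server writes the object on the customer's behalf, there is no hook to pass this `Metadata` argument at upload time, which is why the path-based lookup remains necessary.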