I have found the documentation for handling multipart uploads using the AWS v3 JavaScript SDK, including the documentation on using the CreateMultipartUploadCommand. They also have an Upload() helper that abstracts away some of the multipart uploading.
However, I couldn't find any easy way to use the SDK to pause a multipart upload and resume it at a later time. Nor could I find a way to transparently handle expiring temporary credentials that were obtained using the AssumeRoleCommand. The maximum session duration for those credentials varies between 1 and 12 hours, and according to the AWS documentation, "Role chaining limits your Amazon Web Services CLI or Amazon Web Services API role session to a maximum of one hour." Since I am using role chaining, I am limited to 1 hour and need to transparently refresh the credentials if the upload takes longer than that.
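For context, here is a minimal sketch of how those temporary credentials are obtained with the v3 SDK; the region and role ARN are placeholders:

```ts
import { STSClient, AssumeRoleCommand } from "@aws-sdk/client-sts";

const sts = new STSClient({ region: "us-east-1" }); // placeholder region

const { Credentials } = await sts.send(
  new AssumeRoleCommand({
    RoleArn: "arn:aws:iam::123456789012:role/ExampleUploadRole", // hypothetical role
    RoleSessionName: "multipart-upload-session",
    DurationSeconds: 3600, // role chaining caps the session at one hour
  })
);

// Credentials.AccessKeyId / SecretAccessKey / SessionToken expire at Credentials.Expiration,
// which is what forces a refresh partway through a long-running upload.
```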
I ended up just copying the AWS Upload() file (plus its dependent files, minus the index files) and modifying the Upload() command to suit my needs. I will post my entire ModifiedUpload.ts file here, but you will need to make adjustments, as I am using a custom-built S3Service class for the `appS3Service` property. It's only passed in because it has the S3 client and is used to regenerate a new S3 client; it should be easy enough to change or remove. I would encourage you to download the existing AWS files and then use either a git compare or your IDE's compare functionality to compare the new ModifiedUpload.ts file with the Upload.ts file, so you can easily see what changes I made and make modifications yourself.
Libraries I use that weren't in the original SDK:

- `async-lock` (used below to lock around the credential check)
High level overview of changes I made:
**Pausing and Resuming**
- Pass a `resumeUploadId` to `Upload()` if you have a multipart upload you wish to resume. See the `ListMultipartUploadsCommand` for retrieving a list of uncompleted uploads and their IDs (there is a sketch of this after the list).
- Updated the `Upload.done()` method to check for `resumeUploadId` and to call a new `checkForPartialUploads()` method, which in turn calls `listMultipartUploadRetryWrapper()`.
- `listMultipartUploadRetryWrapper()` will pull all your uploaded parts and push them to `this.uploadedParts`, and will then call `__notifyProgress()` to let the upstream callers know of the progress.
- That's basically it: `uploadedParts` was a pre-existing property, so once we push our existing parts to it, the rest is already handled.
- I did update `__doConcurrentUpload()` to check whether the signal was aborted (the user paused) and all concurrent uploads have finished, in which case it sends an `"allUploadsCompletedAfterAbort"` custom event so your upstream callers can disable the 'resume' button until all pending uploads are complete.
- In your upstream caller you will want to subscribe to the `customEvents` property, which currently can push only two types, `"allUploadsCompletedAfterAbort"` and `"noUploadPartsFound"`. You can use the latter to display an error telling the user that the upload has to start over.
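As referenced in the first bullet, here is a minimal sketch of listing uncompleted multipart uploads so you can recover an upload ID to resume; the bucket name and region are placeholders:

```ts
import { S3Client, ListMultipartUploadsCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

const { Uploads } = await s3.send(
  new ListMultipartUploadsCommand({ Bucket: "my-example-bucket" }) // placeholder bucket
);

// Each entry has a Key, an UploadId, and an Initiated timestamp; show them to the user
// or just pick the most recent one for the Key you want to resume.
const resumable = (Uploads ?? []).map((u) => ({
  key: u.Key,
  uploadId: u.UploadId,
  initiated: u.Initiated,
}));
console.log(resumable);
```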
So when a user wants to pause, you can call the `abort()` method on your `Upload()` object. If the user wants to resume in that same session, where you still have the first `Upload()` object (FirstUpload), you can create a new `Upload()` object and pass `FirstUpload.getUploadId()` as the `resumeUploadId` in the secondaryOptions param.
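To make that flow concrete, here is a rough sketch of the calling code. I'm calling the class `ModifiedUpload`; its two-argument constructor, `getUploadId()`, and the secondary-options fields come from my modified file, and the bucket/key/body values are placeholders, so adjust to your own setup:

```ts
import { S3Client } from "@aws-sdk/client-s3";
import { ModifiedUpload } from "./ModifiedUpload"; // the modified copy of lib-storage's Upload

const s3Client = new S3Client({ region: "us-east-1" });
const body = new Blob(["...file contents..."]); // whatever Body you would normally upload

// Our app-specific second argument -- the fields are described in the next section.
const secondaryOptions = { /* s3Url, objType, objId, appS3Service */ };

// First upload: the first argument is the same options object the stock Upload() takes.
const firstUpload = new ModifiedUpload(
  { client: s3Client, params: { Bucket: "my-example-bucket", Key: "big-file.zip", Body: body } },
  secondaryOptions
);
void firstUpload.done();

// Pause (normally triggered by a user click): abort() signals the AbortController, so
// in-flight part uploads stop but the multipart upload itself stays on S3. Wait for the
// "allUploadsCompletedAfterAbort" custom event before enabling the resume button.
await firstUpload.abort();

// Resume in the same session: reuse the first upload's ID.
const resumedUpload = new ModifiedUpload(
  { client: s3Client, params: { Bucket: "my-example-bucket", Key: "big-file.zip", Body: body } },
  { ...secondaryOptions, resumeUploadId: firstUpload.getUploadId() }
);
await resumedUpload.done();
```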
If the user refreshes the page in the middle of uploading, or closes the browser and comes back later, then you will need to use the `ListMultipartUploadsCommand` to retrieve the upload ID you want (let the user pick, default to the latest, etc.) and pass it to the `Upload()` constructor as the `resumeUploadId`.

**Transparent refresh of S3Client/S3 Credentials**
I pass in the `s3Url`, `objType` (specific to our app and relevant for our S3 path), `objId`, and `appS3Service` properties in the new second parameter, `UploadSecondaryOptions`. These are just used so our S3Service can generate new credentials. You will want to modify this (or remove it if you don't need this transparent credential-refresh ability).
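For reference, the secondary options end up looking roughly like the sketch below. The `AppS3Service` method names (`getS3Client()`, `credentialsExpireSoon()`, `regenerateS3Client()`) are illustrative stand-ins for whatever your own wrapper exposes, not part of the SDK:

```ts
import { S3Client } from "@aws-sdk/client-s3";

// Our app-specific wrapper around client creation. All the upload code really needs
// from it is the current client plus a way to rebuild one with fresh credentials.
export interface AppS3Service {
  getS3Client(): S3Client;
  credentialsExpireSoon(): boolean;
  regenerateS3Client(): Promise<S3Client>;
}

export interface UploadSecondaryOptions {
  s3Url: string;           // used by our S3Service when generating new credentials
  objType: string;         // app-specific, part of our S3 path
  objId: string;           // app-specific, part of our S3 path
  appS3Service: AppS3Service;
  resumeUploadId?: string; // set when resuming a previous multipart upload
}
```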
Inside the `concurrentUploadRetryWrapper` method (created from `__doConcurrentUpload`) I use the `async-lock` library to acquire a lock on `s3credentialskey`, just so that only one of our processes is checking credentials at a time.
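Conceptually, the locking around the credential check looks something like this sketch (again, the `AppS3Service`-style method names are stand-ins for your own wrapper):

```ts
import AsyncLock from "async-lock";
import type { S3Client } from "@aws-sdk/client-s3";

const credentialLock = new AsyncLock();
const S3_CREDENTIALS_KEY = "s3credentialskey";

// Same illustrative shape as the AppS3Service sketched above.
interface CredentialRefresher {
  credentialsExpireSoon(): boolean;
  regenerateS3Client(): Promise<S3Client>;
}

// Called before each part upload: only one concurrent "thread" checks/refreshes the
// credentials at a time; the others wait on the lock and then reuse the fresh client.
async function ensureFreshCredentials(appS3Service: CredentialRefresher): Promise<void> {
  await credentialLock.acquire(S3_CREDENTIALS_KEY, async () => {
    if (appS3Service.credentialsExpireSoon()) {
      await appS3Service.regenerateS3Client();
    }
  });
}
```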
**Other changes**
You will notice that in some places I just added some retry logic for redundancy. Since users may be uploading large files, I wanted the process to be as resilient as possible.
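The retry logic itself is nothing fancy; conceptually it is just a small wrapper along these lines (the attempt count and delay are arbitrary):

```ts
// Generic retry helper: runs fn, retrying up to maxAttempts times with a flat delay.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  delayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}
```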