I have an app which is a Digital Asset Management system. It displays thumbnails, which are served with AWS S3 pre-signed URLs: https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURLJavaSDK.html. This code works until I change how many items get processed through the request. The application has selections for 25, 50, 100, or 200 items. If I select 100 or 200, the process fails with:

    Error: com.amazonaws.AmazonServiceException: Too Many Requests (Service: null; Status Code: 429; Error Code: null; Request ID: null)
Right now the process is as follows: perform a search, then run each object key through a method that returns a pre-signed URL for that object.
We run this application through Elastic Container Service (ECS), which allows us to pull in credentials via ContainerCredentialsProvider.
Relevant code for review:
    String s3SignedUrl(String objectKeyUrl) {
        // Environment variables for S3 client.
        String clientRegion = System.getenv("REGION");
        String bucketName = System.getenv("S3_BUCKET");

        try {
            // S3 credentials get pulled in from AWS via ContainerCredentialsProvider.
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new ContainerCredentialsProvider())
                    .build();

            // Set the pre-signed URL to expire after one hour.
            java.util.Date expiration = new java.util.Date();
            long expTimeMillis = expiration.getTime();
            expTimeMillis += 1000 * 60 * 60;
            expiration.setTime(expTimeMillis);

            // Generate the pre-signed URL.
            GeneratePresignedUrlRequest generatePresignedUrlRequest =
                    new GeneratePresignedUrlRequest(bucketName, objectKeyUrl)
                            .withMethod(HttpMethod.GET)
                            .withExpiration(expiration);
            return s3Client.generatePresignedUrl(generatePresignedUrlRequest).toString();
        } catch (AmazonServiceException e) {
            throw new AssetException(FAILED_TO_GET_METADATA, "The call was transmitted successfully, but Amazon " +
                    "S3 couldn't process it, so it returned an error response. Error: " + e);
        } catch (SdkClientException e) {
            throw new AssetException(FAILED_TO_GET_METADATA, "Amazon S3 couldn't be contacted for a response, or " +
                    "the client couldn't parse the response from Amazon S3. Error: " + e);
        }
    }
And this is the part where we process the items:
    // Overwrite the url, it's nested deeply in maps of maps.
    for (Object anAssetList : assetList) {
        String assetId = ((Map) anAssetList).get("asset_id").toString();
        if (renditionAssetRecordMap.containsKey(assetId)) {
            String s3ObjectKey = renditionAssetRecordMap.get(assetId).getThumbObjectLocation();
            ((Map) ((Map) ((Map) anAssetList)
                    .getOrDefault("rendition_content", new HashMap<>()))
                    .getOrDefault("thumbnail_content", new HashMap<>()))
                    .put("url", s3SignedUrl(s3ObjectKey));
        }
    }
Any guidance would be appreciated. I'd love a solution that is simple and ideally configurable on the AWS side; otherwise, I'm looking at adding a process to generate the URLs in batches.
The problem is unrelated to generating pre-signed URLs. These are created with no interaction with the service, so there is no way they can be rate-limited. A pre-signed URL uses an HMAC-SHA algorithm to prove to the service that an entity in possession of the credentials has authorized a specific request. The one-way (non-reversible) nature of HMAC-SHA allows these URLs to be generated entirely on the machine where the code is running, with no service interaction.
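As a rough illustration of that point (a generic HmacSHA256 sketch, not the actual AWS Signature Version 4 algorithm S3 uses; HmacIllustration and sign are hypothetical names), the signature is just a local computation over the secret key and a description of the request:

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    class HmacIllustration {
        // Simplified: real S3 pre-signing canonicalizes the request and uses a
        // derived signing key (SigV4), but the principle is the same; the
        // signature is computed locally, with no call to S3.
        static String sign(String secretKey, String canonicalRequest) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            return Base64.getEncoder().encodeToString(
                    mac.doFinal(canonicalRequest.getBytes(StandardCharsets.UTF_8)));
        }
    }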
However, it seems very likely that repeatedly fetching the credentials is the actual cause of the exception -- and you appear to be doing that unnecessarily over and over.
This is an expensive operation (the client construction, quoted from your s3SignedUrl()):
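    AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
            .withRegion(clientRegion)
            .withCredentials(new ContainerCredentialsProvider())
            .build();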
Each time you call this, the credentials have to be fetched again from the container credentials endpoint. That's actually the limit you're hitting; note that the error reports Service: null, so the 429 isn't coming from S3 itself.
Build your s3Client only once, and refactor s3SignedUrl() to expect that object to be passed in, so you can reuse it. You should see a notable performance improvement, in addition to resolving the 429 error.
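A minimal sketch of that refactor, assuming the method lives in a class you control (ThumbnailUrlSigner and the example object key are hypothetical; the signing logic is unchanged from your version):

    import com.amazonaws.HttpMethod;
    import com.amazonaws.auth.ContainerCredentialsProvider;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
    import java.util.Date;

    class ThumbnailUrlSigner {
        private final AmazonS3 s3Client;
        private final String bucketName;

        // The shared client is passed in once and reused for every URL.
        ThumbnailUrlSigner(AmazonS3 s3Client, String bucketName) {
            this.s3Client = s3Client;
            this.bucketName = bucketName;
        }

        String s3SignedUrl(String objectKeyUrl) {
            // Expire after one hour. Signing is local, so this makes no service call.
            Date expiration = new Date(System.currentTimeMillis() + 1000 * 60 * 60);
            return s3Client.generatePresignedUrl(
                    new GeneratePresignedUrlRequest(bucketName, objectKeyUrl)
                            .withMethod(HttpMethod.GET)
                            .withExpiration(expiration))
                    .toString();
        }

        public static void main(String[] args) {
            // Built once, e.g. at application startup: credentials are fetched
            // here (and refreshed by the provider as needed), not per thumbnail.
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(System.getenv("REGION"))
                    .withCredentials(new ContainerCredentialsProvider())
                    .build();
            ThumbnailUrlSigner signer = new ThumbnailUrlSigner(s3Client, System.getenv("S3_BUCKET"));
            System.out.println(signer.s3SignedUrl("example/thumb.jpg"));
        }
    }

Your processing loop then calls signer.s3SignedUrl(s3ObjectKey) exactly as before; only the client construction moves out of the per-item path.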