Automatic deserialisation of S3 Events in Lambda


I have written a Lambda handler that takes a POJO as a parameter and tries to print it. When I put a JSON file in the configured S3 bucket, the event is triggered, but all the fields in the POJO are null. When I use S3Event as the parameter instead, I get the expected values (bucket name, key, etc.).

So my question is: has anyone seen automatic deserialisation working with S3 and Java in AWS Lambda?

Here is my simplified handler:

public void handleRequest(ConsolidationJob job, Context context) {
    logger.info("Triggered job: " + job.toString());
}

Here is my POJO:

public class ConsolidationJob {

  private List<String> files;

  public List<String> getFiles() {
    return files;
  }

  public void setFiles(List<String> files) {
    this.files = files;
  }

  @Override
  public String toString() {
    return "ConsolidationJob{" +
            "files=" + files +
            '}';
  }
}

And when I drop the following JSON into the bucket:

{
  "files": [
    "ni-aq-si-ui-i-4c2c11e1.cloud-newsint.co.uk.1454377894220.avro",
    "ni-aq-si-ui-i-4c2c11e1.cloud-newsint.co.uk.1454378378408.avro",
    "ni-aq-uat-ui-i-55c6c7de.cloud-newsint.co.uk.1454376507474.avro"
  ]
}

in the logs I see:

2017-01-04 12:10:38 <cd42ce01-d276-11e6-bb24-2776804ea5f4> INFO uk.co.avro.FilesConsolidator:40 - Triggered job: ConsolidationJob{files=null}

But when I change my handler to:

public void handleRequest(S3Event job, Context context) {
    logger.info("Triggered s3event: " + job.toJson());
}

it is invoked with the right event, from which I can get the bucket name and key and go pull the file. But that means I have to do the deserialisation manually.

1 Answer


It looks like this is not possible, mainly because the content in S3 can be of any type, not just JSON.

Eventually I used the second approach: taking an S3Event as the parameter, getting the bucket and key from it, and then using Jackson to parse the object's InputStream into my POJO.
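For anyone looking for a concrete example, here is a minimal sketch of that approach, using the ConsolidationJob POJO from the question. The S3 fetch itself is only indicated in comments (in a real handler the InputStream would come from the S3 client, e.g. something like AmazonS3.getObject(bucket, key).getObjectContent()), so the runnable part below is just the Jackson step:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class ConsolidationJobParser {

    // POJO from the question
    public static class ConsolidationJob {
        private List<String> files;

        public List<String> getFiles() { return files; }
        public void setFiles(List<String> files) { this.files = files; }

        @Override
        public String toString() { return "ConsolidationJob{files=" + files + "}"; }
    }

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // In the Lambda handler you would take the S3Event, read the bucket name
    // and key from its first record, open the object with the S3 client, and
    // hand the resulting stream to this method. Here it accepts any stream.
    public static ConsolidationJob parse(InputStream in) throws Exception {
        return MAPPER.readValue(in, ConsolidationJob.class);
    }

    public static void main(String[] args) throws Exception {
        String json = "{\"files\": [\"a.avro\", \"b.avro\"]}";
        ConsolidationJob job =
            parse(new ByteArrayInputStream(json.getBytes(StandardCharsets.UTF_8)));
        System.out.println(job); // ConsolidationJob{files=[a.avro, b.avro]}
    }
}
```

The class and method names here are just illustrative; the point is that once you have the object's InputStream, a single ObjectMapper.readValue call gives you the typed POJO.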