Terraform + AWS S3 + Python: AccessDenied when calling the PutObject operation

I'm using Terraform to create a private S3 bucket together with an IAM user and keys that are meant to access only this bucket. Here is my Terraform code:

module "s3_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  version = "4.0.0"
  name    = local.bucket_name

  acl                = "private"
  enabled            = true
  user_enabled       = true
  force_destroy      = true
  versioning_enabled = true
  sse_algorithm      = "AES256"

  block_public_acls             = true
  allow_encrypted_uploads_only  = true
  allow_ssl_requests_only       = true
  block_public_policy           = true
  ignore_public_acls            = true
  cors_configuration            = [
    {
       allowed_origins = ["*"]
       allowed_methods = ["GET", "PUT", "POST", "HEAD", "DELETE"]
       allowed_headers = ["Authorization"]
       expose_headers  = []
       max_age_seconds = "3000"
    }
  ]

  allowed_bucket_actions        = ["s3:*"]
  lifecycle_configuration_rules =  []
}

resource "aws_secretsmanager_secret" "s3_private_bucket_secret" {
  depends_on              = [module.s3_private_bucket]
  name                    = join("", [local.bucket_name, "-", "secret"])
  recovery_window_in_days = 0
}

resource "aws_secretsmanager_secret_version" "s3_private_bucket_secret_credentials" {
  depends_on    = [module.s3_private_bucket]
  secret_id     = aws_secretsmanager_secret.s3_private_bucket_secret.id
  secret_string = jsonencode({
    KEY    = module.s3_private_bucket.access_key_id
    SECRET = module.s3_private_bucket.secret_access_key
    REGION = module.s3_private_bucket.bucket_region
    BUCKET = module.s3_private_bucket.bucket_id
  })
}
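
For context, the Python side is expected to read these credentials back from Secrets Manager rather than hard-code them. Below is a minimal sketch of that lookup; the secret name x-rc-bucket-secret and the region are my assumptions based on the values above, while the KEY/SECRET/REGION/BUCKET fields match what the Terraform writes:

import json
import boto3

# Assumed secret name: "<bucket-name>-secret", matching the join() in the Terraform above.
SECRET_NAME = "x-rc-bucket-secret"

# The Secrets Manager lookup itself runs with whatever default credentials this code already has.
sm = boto3.client("secretsmanager", region_name="eu-west-3")
secret = json.loads(sm.get_secret_value(SecretId=SECRET_NAME)["SecretString"])

# These JSON keys are exactly the ones written by aws_secretsmanager_secret_version above.
s3 = boto3.client(
    "s3",
    region_name=secret["REGION"],
    aws_access_key_id=secret["KEY"],
    aws_secret_access_key=secret["SECRET"],
)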

After running the code above, I can see that a new user named x-rc-bucket has been created in IAM, with the same access key and secret that are stored in Secrets Manager, and with the following policy attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::x-rc-bucket/*",
                "arn:aws:s3:::x-rc-bucket"
            ]
        }
    ]
}
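
As a quick sanity check (not part of the original setup), it can help to confirm that the access key pair really resolves to this generated user before suspecting the policy; a rough sketch:

import boto3

# Confirm the keys belong to the module-generated user: the returned ARN
# should end in the expected user name (x-rc-bucket in my case).
sts = boto3.client(
    "sts",
    aws_access_key_id="**",      # same placeholder keys as in the script below
    aws_secret_access_key="**",
)
print(sts.get_caller_identity()["Arn"])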

Then I have a simple Python script which tries to upload a file to the S3 bucket using the keys from the secret above:

import os
import boto3

image = "x.jpg"
s3_filestore_path = "images/x.jpg"
filename, file_extension = os.path.splitext(image)
content_type_dict = {
    ".png": "image/png",
    ".html": "text/html",
    ".css": "text/css",
    ".js": "application/javascript",
    ".jpg": "image/png",
    ".gif": "image/gif",
    ".jpeg": "image/jpeg",
}
content_type = content_type_dict[file_extension]
s3 = boto3.client(
    "s3",
    config=boto3.session.Config(signature_version="s3v4"),
    region_name="eu-west-3",
    aws_access_key_id="**",
    aws_secret_access_key="**",
)
# Read the file contents; passing the bare filename string would upload the
# literal text "x.jpg" instead of the image bytes.
with open(image, "rb") as f:
    s3.put_object(
        Body=f, Bucket="x-rc-bucket", Key=s3_filestore_path, ContentType=content_type
    )

It throws the error botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied.
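
For what it's worth, my understanding of allow_encrypted_uploads_only = true in the cloudposse module is that it adds a bucket policy denying PutObject requests that do not carry the x-amz-server-side-encryption header, which also surfaces as AccessDenied. A sketch of the same upload with that header added, assuming it should match sse_algorithm = "AES256" from the Terraform above:

with open(image, "rb") as f:
    s3.put_object(
        Body=f,
        Bucket="x-rc-bucket",
        Key=s3_filestore_path,
        ContentType=content_type,
        # Assumed to satisfy the encrypted-uploads-only bucket policy.
        ServerSideEncryption="AES256",
    )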

What I'm looking for is that every bucket should have its own keys and be accessible only with those specific keys.
