Cross-Account AWS CodePipeline cannot access CloudFormation deploy artifacts


I have a cross-account pipeline running in a CI account that deploys resources via CloudFormation into another account, DEV. After deploying, I save the artifact outputs as a JSON file and want to access it in another pipeline action via CodeBuild. CodeBuild fails in the DOWNLOAD_SOURCE phase with the following message:

CLIENT_ERROR: AccessDenied: Access Denied status code: 403, request id: 123456789, host id: xxxxx/yyyy/zzzz/xxxx= for primary source and source version arn:aws:s3:::my-bucket/my-pipeline/DeployArti/XcUNqOP

The problem is likely that CloudFormation, when executed in a different account, encrypts the artifacts with a different key than the pipeline itself uses.

Is it possible to give CloudFormation an explicit KMS key to encrypt the artifacts with, or is there any other way to access those artifacts back in the pipeline?

Everything works when executed from within a single account.

Here is my code snippet (deployed in the CI account):

  MyCodeBuild:
    Type: AWS::CodeBuild::Project
    Properties:
      Artifacts:
        Type: CODEPIPELINE
      Environment: ...
      Name: !Sub "my-codebuild"
      ServiceRole: !Ref CodeBuildRole
      EncryptionKey: !GetAtt KMSKey.Arn
      Source:
        Type: CODEPIPELINE
        BuildSpec: ...

  CrossAccountCodePipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      Name: "my-pipeline"
      RoleArn: !GetAtt CodePipelineRole.Arn
      Stages:
      - Name: Source
        ...
      - Name: StagingDev
        Actions:
        - Name: create-stack-in-DEV-account
          InputArtifacts:
          - Name: SourceArtifact
          OutputArtifacts:
          - Name: DeployArtifact
          ActionTypeId:
            Category: Deploy
            Owner: AWS
            Version: "1"
            Provider: CloudFormation
          Configuration:
            StackName: "my-dev-stack"
            ChangeSetName: !Sub "my-changeset"
            ActionMode: CREATE_UPDATE
            Capabilities: CAPABILITY_NAMED_IAM
            # this is the artifact I want to access from the next action 
            # within this CI account pipeline
            OutputFileName: "my-DEV-output.json"   
            TemplatePath: !Sub "SourceArtifact::stack/my-stack.yml"
            RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cloudformation-role"
          RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cross-account-role"
          RunOrder: 1
        - Name: process-DEV-outputs
          InputArtifacts:
          - Name: DeployArtifact
          ActionTypeId:
            Category: Build
            Owner: AWS
            Version: "1"
            Provider: CodeBuild
          Configuration:
            ProjectName: !Ref MyCodeBuild
          RunOrder: 2
      ArtifactStore:
        Type: S3
        Location: !Ref S3ArtifactBucket
        EncryptionKey:
          Id: !GetAtt KMSKey.Arn
          Type: KMS

There are 6 answers below.

Answer (0 votes)

I've been using CodePipeline for cross-account deployments for a couple of years now. I even have a GitHub project around simplifying the process using AWS Organizations. There are a couple of key elements to it.

  1. Make sure your S3 artifact bucket uses a customer-managed KMS key (CMK), not the default encryption key.
  2. Make sure you grant the accounts you deploy to access to that key. For example, when a CloudFormation template runs in a different account than the one where the template lives, the role used in that account needs permission to access the key (and the S3 bucket); see the sketch below.
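
A minimal sketch of the bucket side, reusing the S3ArtifactBucket, DevAccountId, and DEV role names from the question (adapt them to your own template): the artifact bucket policy in the CI account could grant the DEV roles access like this.

  S3ArtifactBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref S3ArtifactBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          # Let the roles assumed in the DEV account read and write
          # pipeline artifacts in the CI account's artifact bucket.
          - Sid: AllowDevAccountArtifactAccess
            Effect: Allow
            Principal:
              AWS:
                - !Sub "arn:aws:iam::${DevAccountId}:role/dev-cross-account-role"
                - !Sub "arn:aws:iam::${DevAccountId}:role/dev-cloudformation-role"
            Action:
              - s3:GetObject
              - s3:GetObjectVersion
              - s3:GetBucketVersioning
              - s3:PutObject
            Resource:
              - !Sub "arn:aws:s3:::${S3ArtifactBucket}"
              - !Sub "arn:aws:s3:::${S3ArtifactBucket}/*"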

It's certainly more complex than that, but at no point do I run a Lambda to change the object owner of the artifacts. The AWS guide "Create a pipeline in CodePipeline that uses resources from another AWS account" has detailed information on what you need to do to make it work.

Answer (1 vote)

CloudFormation generates the output artifact, zips it, and then uploads the file to S3. It does not add an ACL granting access to the bucket owner, so you get a 403 when you try to use the CloudFormation output artifact further down the pipeline.

The workaround is to add one more action to your pipeline immediately after the CloudFormation action, for example a Lambda function that can assume the target account role and update the object ACL to bucket-owner-full-control, as sketched below.
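
A sketch of what that extra action could look like in the question's pipeline (the function name fix-artifact-acl is a placeholder; example Lambda code is shown in a later answer). The existing CodeBuild action would then move to RunOrder 3.

        - Name: fix-artifact-acl
          InputArtifacts:
          - Name: DeployArtifact
          ActionTypeId:
            Category: Invoke
            Owner: AWS
            Version: "1"
            Provider: Lambda
          Configuration:
            # Placeholder Lambda that assumes a role in the DEV account and
            # sets bucket-owner-full-control on the artifact objects.
            FunctionName: fix-artifact-acl
            UserParameters: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cross-account-role"
          RunOrder: 2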

Answer (7 votes)

CloudFormation should use the KMS encryption key provided in the artifact store definition of your pipeline: https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ArtifactStore.html#CodePipeline-Type-ArtifactStore-encryptionKey

Therefore, as long as you give it a custom key there and allow the other account to use that key too, it should work, as in the sketch below.
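
For example (a sketch only, reusing the KMSKey resource and DevAccountId parameter names from the question), the key policy could include a statement like this in addition to the usual administrative statements for the CI account:

  KMSKey:
    Type: AWS::KMS::Key
    Properties:
      KeyPolicy:
        Version: "2012-10-17"
        Statement:
          # ... keep the usual statement granting the CI account admin access ...
          # Allow principals in the DEV account to use the CMK when they
          # download and upload pipeline artifacts.
          - Sid: AllowDevAccountUseOfTheKey
            Effect: Allow
            Principal:
              AWS: !Sub "arn:aws:iam::${DevAccountId}:root"
            Action:
              - kms:Decrypt
              - kms:DescribeKey
              - kms:Encrypt
              - kms:GenerateDataKey*
              - kms:ReEncrypt*
            Resource: "*"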

This is mostly covered in this doc: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html

Answer (0 votes)

This question is very old, but today I faced the exact same issue and spent hours and hours trying to fix it.

As @mockora mentioned, CloudFormation generates the output artifact, zips it, and uploads the file to S3 without adding an ACL that grants access to the bucket owner, so you get a 403 when you try to use the CloudFormation output artifact further down the pipeline.

To solve it, all you need to do is enforce object ownership on the S3 bucket where CloudFormation saves the artifacts.

A CloudFormation example of enforcing bucket ownership follows.
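
This is a minimal sketch; it assumes the artifact bucket (S3ArtifactBucket in the question) is defined in the same template.

  S3ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      OwnershipControls:
        Rules:
          # BucketOwnerEnforced disables ACLs, so the bucket owner (the CI
          # account) automatically owns every object, no matter who uploads it.
          - ObjectOwnership: BucketOwnerEnforced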

Answer (0 votes)

We are using a CloudFormation template (YAML) and we needed to add the following to resolve it:

[screenshot of the template change from the original answer, not reproduced]

Answer (0 votes)

mockora's answer is correct. Here is an example Lambda function in Python that fixes the issue, which you can configure as an Invoke action immediately after your cross-account CloudFormation deployment.

In this example, you configure the Lambda Invoke action's UserParameters setting as the ARN of the role you want the Lambda function to assume in the remote account to fix the S3 object ACL. Your Lambda function will need sts:AssumeRole permission for that role, and the remote account role will need s3:PutObjectAcl permission on the pipeline bucket artifact(s); a sketch of those policies follows the code.

import os
import logging, datetime, json
import boto3
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

# X-Ray
patch_all()

# Configure logging
logging.basicConfig()
log = logging.getLogger()
log.setLevel(os.environ.get('LOG_LEVEL','INFO'))
def format_json(data):
  return json.dumps(data, default=lambda d: d.isoformat() if isinstance(d, datetime.datetime) else str(d))

# Boto3 Client
client = boto3.client
codepipeline = client('codepipeline')
sts = client('sts')

# S3 Object ACLs Handler
def s3_acl_handler(event, context):
  log.info(f'Received event: {format_json(event)}')
  # Get Job
  jobId = event['CodePipeline.job']['id']
  jobData = event['CodePipeline.job']['data']
  # Ensure we return a success or failure result
  try:
    # Assume IAM role from user parameters
    credentials = sts.assume_role(
      RoleArn=jobData['actionConfiguration']['configuration']['UserParameters'],
      RoleSessionName='codepipeline',
      DurationSeconds=900
    )['Credentials']
    # Create S3 client from assumed role credentials
    s3 = client('s3',
      aws_access_key_id=credentials['AccessKeyId'],
      aws_secret_access_key=credentials['SecretAccessKey'],
      aws_session_token=credentials['SessionToken']
    )
    # Set S3 object ACL for each input artifact
    for inputArtifact in jobData['inputArtifacts']:
      s3.put_object_acl(
        ACL='bucket-owner-full-control',
        Bucket=inputArtifact['location']['s3Location']['bucketName'],
        Key=inputArtifact['location']['s3Location']['objectKey']
      )
    codepipeline.put_job_success_result(jobId=jobId)
  except Exception as e:
    logging.exception('An exception occurred')
    codepipeline.put_job_failure_result(
      jobId=jobId,
      failureDetails={'type': 'JobFailed','message': getattr(e, 'message', repr(e))}
    )