An error occurred (413) when calling the BatchExecuteStatement operation using AWS Aurora Serverless DATA API

I am writing a Python script that uses the boto3 library to query an Aurora Serverless (PostgreSQL) database. I am using the Data API to batch-insert a very large CSV file containing over 6 million records into the DB, split across multiple batches. Each record contains 37 columns. When I run the script on my PC (I have set up AWS CLI credentials for a user authorised to talk to the Aurora DB in the cloud), I am able to successfully insert several batches of 1800 SQL insert statements before I get this error:

An error occurred (413) when calling the BatchExecuteStatement operation

Python script:

import boto3
import csv
rds_client = boto3.client('rds-data')

database_name = "postgres"
db_cluster_arn = "arn:aws:rds:us-east-1:xxxxxxxx:cluster:database-1"
db_credentials_secrets_store_arn = "arn:aws:secretsmanager:us-east-1:xxxxxxxxxx:secret:rds-db-credentials/cluster-VPF7JUKVRLQMEHF4QV2HSIKELM/postgres-QVIzWC"

def batch_execute_statement(sql, sql_parameter_sets, transaction_id=None):
    parameters = {
        'secretArn': db_credentials_secrets_store_arn,
        'database': database_name,
        'resourceArn': db_cluster_arn,
        'sql': sql,
        'parameterSets': sql_parameter_sets
    }
    if transaction_id is not None:
        parameters['transactionId'] = transaction_id
    response = rds_client.batch_execute_statement(**parameters)
    return response


def get_entry(row):
    # Build one parameter set (a list of named parameters) for a single CSV row
    entry = [
        {'name': 'RIG_ID', 'value': {'stringValue': row['RIG_ID']}},
    ]

    # DEPTH_CAPACITY: empty strings in the CSV are sent as SQL NULL
    if row['DEPTH_CAPACITY'] == '':
        entry.append({'name': 'DEPTH_CAPACITY',
                      'value': {'isNull': True}})
    else:
        entry.append({'name': 'DEPTH_CAPACITY', 'typeHint': 'DECIMAL',
                      'value': {'stringValue': row['DEPTH_CAPACITY']}})
    # MANY MORE ENTRIES HERE
    return entry

def execute_transaction(sql, parameter_set):
    transaction = rds_client.begin_transaction(
        secretArn=db_credentials_secrets_store_arn,
        resourceArn=db_cluster_arn,
        database=database_name)
    try:
        response = batch_execute_statement(sql, parameter_set, transaction['transactionId'])
    except Exception as e:
        transaction_response = rds_client.rollback_transaction(
            secretArn=db_credentials_secrets_store_arn,
            resourceArn=db_cluster_arn,
            transactionId=transaction['transactionId'])
    else:
        transaction_response = rds_client.commit_transaction(
            secretArn=db_credentials_secrets_store_arn,
            resourceArn=db_cluster_arn,
            transactionId=transaction['transactionId'])
        print(f'Number of records updated: {len(response["updateResults"])}')
    print(f'Transaction Status: {transaction_response["transactionStatus"]}')

batch_size = 1800
current_batch_size = 0
transaction_count = 0

sql = 'INSERT INTO T_RIG_ACTIVITY_STATUS_DATE VALUES (\
:RIG_ID, :DEPTH_CAPACITY, # MANY MORE ENTRIES HERE);'
    
parameter_set = []

with open('LARGE_FILE.csv', 'r') as file:
    reader = csv.DictReader(file, delimiter=',')
    
    for row in reader:

        entry = get_entry(row)
        parameter_set.append(entry)
        current_batch_size = current_batch_size + 1

        if current_batch_size == batch_size:
            execute_transaction(sql, parameter_set)

            transaction_count = transaction_count + 1
            print(f'Transaction count: {transaction_count}')

            current_batch_size = 0
            parameter_set.clear()

    # Flush the final partial batch so the last rows are not silently dropped
    if parameter_set:
        execute_transaction(sql, parameter_set)

From what I understand, the 413 error code means "Request Entity Too Large", which would suggest that I need to lower the batch size I am sending to the DB (via the Data API), but I cannot understand why I am able to send several batches of 1800 SQL statements before I start getting the above error. Any suggestions?
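One way to check whether the request size (rather than the row count) is what varies between batches is to estimate the serialized size of each batch before sending it. This is only a minimal sketch: estimate_batch_bytes is a hypothetical helper, and JSON-encoding the payload is a rough proxy for the request the Data API actually builds, not its exact wire format.

import json

def estimate_batch_bytes(sql, parameter_sets):
    # Hypothetical helper: JSON-encode the same fields the batch call sends
    # as a rough proxy for request size; the real request adds headers and
    # framing, so treat the number as an approximation only.
    payload = {'sql': sql, 'parameterSets': parameter_sets}
    return len(json.dumps(payload).encode('utf-8'))

# Log the estimate before each execute_transaction call to see which
# batches approach the size that triggers the 413:
# print(f'Estimated batch payload: {estimate_batch_bytes(sql, parameter_set)} bytes')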

Also, what would be the best resolution in my case, considering the amount of data I need to push/insert into the database?

1 Answer

Lowering the batch size resolved the issue for me. In my case, some of the records in my CSV contained more data than others, so certain batches exceeded the request size limit and caused the error.
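As a sketch of that idea, the batching loop from the question can flush on an estimated byte budget as well as a row count, so batches built from larger rows are sent earlier. The 1 MB budget below is an assumption to tune for your data, not a documented Data API limit, the JSON estimate is only a rough proxy for the actual request size, and sql, get_entry and execute_transaction are the names from the question.

import csv
import json

MAX_ROWS_PER_BATCH = 1800
MAX_BATCH_BYTES = 1_000_000  # assumed budget, tune for your data

parameter_set = []
estimated_bytes = 0

with open('LARGE_FILE.csv', 'r') as file:
    reader = csv.DictReader(file, delimiter=',')

    for row in reader:
        entry = get_entry(row)
        entry_bytes = len(json.dumps(entry).encode('utf-8'))  # rough per-row estimate

        # Flush before adding the row if it would push the batch over either limit
        if parameter_set and (
                len(parameter_set) >= MAX_ROWS_PER_BATCH
                or estimated_bytes + entry_bytes > MAX_BATCH_BYTES):
            execute_transaction(sql, parameter_set)
            parameter_set = []
            estimated_bytes = 0

        parameter_set.append(entry)
        estimated_bytes += entry_bytes

    # Flush the final partial batch
    if parameter_set:
        execute_transaction(sql, parameter_set)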