boto3 gives InvalidBucketName error for valid bucket names on S3 with a custom URL

I am trying to write a Python script for basic get/put/delete/list operations on S3. I am using Cloudian S3 object storage, not AWS. To set up the boto3 resource, I set the endpoint and keys like this:

import boto3

URL = 'http://ip:80'

s3_resource = boto3.resource(
    's3',
    endpoint_url=URL,
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    region_name='region1')

I have created some test buckets manually, with the following names, all of which satisfy the S3 bucket-naming constraints (a quick validation sketch follows the list):

  • test-bucket-0
  • test-bucket-1
  • sample-bucket
  • testbucket
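
For a quick check, here is a minimal sketch of those constraints (my simplification; the full rules also allow dots in some cases, which none of these names use):

import re

# Simplified S3 bucket-name check: 3-63 chars, lowercase letters,
# digits, and hyphens; must start and end with a letter or digit;
# must not look like an IP address. (Dots are omitted for brevity.)
BUCKET_RE = re.compile(r'^(?!\d+\.\d+\.\d+\.\d+$)[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$')

def is_valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_RE.match(name))

for name in ('test-bucket-0', 'test-bucket-1', 'sample-bucket', 'testbucket'):
    print(name, is_valid_bucket_name(name))  # all four print True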

However, when I try to work with the buckets from Python code (creating a new one, or even just listing the existing ones), I get the following error repeatedly:

>>> client.list_buckets()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/site-packages/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python3.8/site-packages/botocore/client.py", line 676, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidBucketName) when calling the ListBuckets operation: The specified bucket is not valid.
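
For completeness, the error code can also be read programmatically from botocore's ClientError; a minimal sketch, assuming a client built as above:

from botocore.exceptions import ClientError

try:
    client.list_buckets()
except ClientError as e:
    # botocore puts the service's error code and message in e.response
    print(e.response['Error']['Code'])     # 'InvalidBucketName'
    print(e.response['Error']['Message'])  # 'The specified bucket is not valid.'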

Being very new to boto3, I am really not sure what boto3 is expecting. I have tried various ways of connecting to the S3 service, such as using a client instead of a resource, but the problem is consistent.

A few other S3 connections I tried are these:

from botocore.client import Config
import boto3

s3 = boto3.resource(
    's3',
    endpoint_url='http://10.43.235.193:80',
    aws_access_key_id='aaa',
    aws_secret_access_key='sss',
    config=Config(signature_version='s3v4'),
    region_name='region1')

# connect_s3 is from the legacy boto (v2) package, not boto3
import boto
conn = boto.connect_s3(
    aws_access_key_id='aaa',
    aws_secret_access_key='sss',
    host='10.43.235.193',
    port=80,
    is_secure=False,
)

from boto3.session import Session
session = Session(
    aws_access_key_id='aaa',
    aws_secret_access_key='sss',
    region_name='region1'
)

s3 = session.resource('s3')
client = session.client('s3', endpoint_url='http://10.43.235.193:80')  # s3-region1.example.com

s3_endpoint = 'http://10.43.235.193:80'  # same IP endpoint as above
s3_client = boto3.client(
    's3',
    endpoint_url=s3_endpoint,
    aws_access_key_id='aaa',
    aws_secret_access_key='sss',
    region_name='region1')
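
One diagnostic worth adding here (my addition, not one of the original attempts): botocore can log every request it sends, which shows the exact URL and headers the SDK computes for each call:

import logging
import boto3

# Dump botocore's request/response logs to stderr; useful for seeing
# how the bucket name ends up in the URL or Host header.
boto3.set_stream_logger('botocore', logging.DEBUG)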

The Python script runs inside a container, in the same pod as the S3 container, so the IP is reachable from one container to the other. How should I solve this problem?

Best Answer

My finding is very weird. An error like InvalidBucketName is super misleading, and I found many threads about it on the boto3 GitHub. But as it turns out, most of those users are on AWS and not an on-prem private-cloud S3, so that did not help much.

For me, using a bare IP (e.g. 10.50.32.5) as the S3 endpoint while creating the s3_client does not work. That is, an endpoint set like this:

s3_client = boto3.client(
    's3',
    endpoint_url='http://10.50.32.5:80',
    aws_access_key_id='AAA',
    aws_secret_access_key='SSS',
    region_name='region1')

fails.
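
One knob that may be related (an assumption on my part, not something I verified on Cloudian): boto3's S3 addressing style. Forcing path-style addressing keeps the bucket name in the URL path instead of the hostname, which a bare-IP endpoint cannot support:

import boto3
from botocore.client import Config

# Path-style requests: http://endpoint/bucket/key instead of
# virtual-hosted-style http://bucket.endpoint/key.
s3_client = boto3.client(
    's3',
    endpoint_url='http://10.50.32.5:80',
    aws_access_key_id='AAA',
    aws_secret_access_key='SSS',
    config=Config(s3={'addressing_style': 'path'}),
    region_name='region1')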

How did I fix this?

I added a hostname mapping to /etc/hosts, i.e. an entry that maps the IP to an S3-endpoint hostname, like this:

10.50.32.5   s3-region1.example.com

And then created an S3 client using boto3 like this:

s3_endpoint = 'http://s3-region1.example.com:80'  # the hostname mapped above

s3_client = boto3.client(
    's3',
    endpoint_url=s3_endpoint,
    aws_access_key_id='AAA',
    aws_secret_access_key='SSS',
    region_name='region1')

And it worked.
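
With the hostname endpoint in place, the basic get/put/delete/list operations from the question should go through; a short sketch (the bucket name test-bucket-2 is mine, just an example):

# Round-trip test: list, create, put, get, delete.
print(s3_client.list_buckets()['Buckets'])

s3_client.create_bucket(Bucket='test-bucket-2')
s3_client.put_object(Bucket='test-bucket-2', Key='hello.txt', Body=b'hello')
print(s3_client.get_object(Bucket='test-bucket-2', Key='hello.txt')['Body'].read())
s3_client.delete_object(Bucket='test-bucket-2', Key='hello.txt')
s3_client.delete_bucket(Bucket='test-bucket-2')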