How to connect a Python s3fs client to a running MinIO Docker container?


For test purposes, I'm trying to test a module that introduces an abstraction layer over s3fs with custom business logic.

It seems like I have trouble connecting the s3fs client to the MinIO container. Here's how I create the container and attach the s3fs client (below I describe how I validated that the container is running properly):

import s3fs
import docker

client = docker.from_env()

container = client.containers.run('minio/minio',
                                  "server /data --console-address ':9090'",
                                  environment={
                                      "MINIO_ACCESS_KEY": "minio",
                                      "MINIO_SECRET_KEY": "minio123",
                                  },
                                  ports={
                                      "9000/tcp": 9000,
                                      "9090/tcp": 9090,
                                  },
                                  volumes={'/tmp/minio': {'bind': '/data', 'mode': 'rw'}},
                                  detach=True)

container.reload() # why reload:  https://github.com/docker/docker-py/issues/2681

fs = s3fs.S3FileSystem(
    anon=False,
    key='minio',
    secret='minio123',
    use_ssl=False,
    client_kwargs={
        'endpoint_url': "http://localhost:9000" # tried 127.0.0.1:9000 with no success
    }
)
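One thing worth checking before calling `fs.ls`: the container may not be accepting connections yet in the instant after `client.containers.run` returns. A minimal readiness probe might look like the sketch below; it assumes MinIO's standard `/minio/health/live` liveness endpoint, and the URL and timeout values are illustrative, not from the original post.

```python
import time
import urllib.error
import urllib.request

def wait_for_minio(url="http://localhost:9000/minio/health/live", timeout=30):
    """Block until MinIO answers on its liveness endpoint, or raise TimeoutError."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return
        except (urllib.error.URLError, OSError):
            pass  # not up yet; retry shortly
        time.sleep(0.5)
    raise TimeoutError(f"MinIO did not become ready at {url}")
```

Calling `wait_for_minio()` between `container.reload()` and the first `fs.ls` rules out a race with container startup as the cause.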

===========

>>> fs.ls('/')
[]
>>> fs.ls('/data')
# raises a "bucket does not exist" exception

Check that the container is running:

➜  ~ docker ps -a
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS          PORTS                                                                                  NAMES
127e22c19a65   minio/minio   "/usr/bin/docker-ent…"   56 seconds ago   Up 55 seconds   0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp   hardcore_ride

Check that the relevant volume is attached:

➜  ~ docker exec -it 127e22c19a65 bash
[root@127e22c19a65 /]# ls -l /data/
total 4
-rw-rw-r-- 1 1000 1000 4 Jan 11 16:02 foo.txt
[root@127e22c19a65 /]# exit

Since shelling into the container proved the volume binding is working properly, I expected to see the same contents when accessing the container's storage via the s3fs client.

===========

Answer:

What is the bucket name that was created as part of this setup?

From the docs, it looks like you have to use `<bucket_name>/<object_path>` syntax to access the resources:

fs.ls('my-bucket')
['my-file.txt']

Also, the docs below show a couple of other ways to access objects, e.g. using fs.open — can you give that a try?

https://buildmedia.readthedocs.org/media/pdf/s3fs/latest/s3fs.pdf
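To make that concrete, here is a sketch of the round trip the bucket/object syntax implies. The bucket and file names are hypothetical, and `fs` is assumed to be the connected S3FileSystem from the question; with s3fs, `mkdir` on a top-level path creates a bucket, and objects are addressed as `<bucket_name>/<object_path>`.

```python
# Sketch: bucket/object round trip with an s3fs-style filesystem.
# `fs` is assumed to be the S3FileSystem from the question; "my-bucket"
# and "my-file.txt" are made-up names for illustration.
def s3_round_trip(fs, bucket, name="my-file.txt", payload=b"hello"):
    """Create a bucket, write one object into it, and read it back."""
    fs.mkdir(bucket)              # a top-level path becomes a new bucket
    key = f"{bucket}/{name}"      # <bucket_name>/<object_path>
    with fs.open(key, "wb") as f:
        f.write(payload)
    with fs.open(key, "rb") as f:
        data = f.read()
    return fs.ls(bucket), data

# e.g. s3_round_trip(fs, "my-bucket")
```

This would also explain the behaviour in the question: if MinIO's filesystem backend treats top-level directories under /data as buckets, then a bare file like /data/foo.txt sits at the volume root rather than inside any bucket, so `fs.ls('/')` has nothing to list.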