I think that using S3-compatible storage for application assets or "attachments" is one of the most common use cases, but I see a couple of issues that I am unclear on how to address.
If you are going to serve a frontend that can be cached, or that is a pure HTML/JS project, it's clear that you could use the storage (+ CDN) to host it. But when storing files uploaded by users, I see that the options are:
Only the application has access to the bucket and does a "pass-through" of the requested resource.
The application has write access, but reads are public: the application receives a resource and stores it in the bucket, and the client fetches a reference from the application to identify the resource on the bucket.
The application manages access, either with an OpenID-style model or by providing short-lived tokens for a given object; the client does everything with explicit permission from the application, after getting the correct reference to transact against.
I think the simplest way is option number 2; I've seen FOSS projects do that whenever they support S3-compatible storage. But I am unsure what good criteria to decide on would be.
The main issue I see is that approach 1 allows the application to be transparent to the client, while introducing a possible bottleneck when requesting multiple files.
As for number 3, it sounds too sophisticated to implement in a POC.
Any thoughts or comments about what is the best practice regarding this scenario?
Use approach #3.
You can use an Amazon S3 pre-signed URL, which provides time-limited access to private objects in Amazon S3.
Basically:

Your app is responsible for authenticating users and determining which objects they are entitled to access. It generates the pre-signed URL and provides it to the user. The pre-signed URL can then be used as a link, or even in `<img src="...">` references.
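To make the mechanism concrete, here is a minimal sketch of what a pre-signed GET URL looks like under the hood, using only the Python standard library (AWS Signature Version 4, query-string signing). In a real app you would just call your SDK's helper (e.g. boto3's `generate_presigned_url`); the bucket name, key, and credentials below are placeholders.

```python
import datetime
import hashlib
import hmac
import urllib.parse

def presign_get(bucket, key, access_key, secret_key,
                region="us-east-1", expires=3600, now=None):
    """Build a SigV4 pre-signed GET URL for a private S3 object."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"

    # Query parameters that travel with the URL instead of headers.
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),   # lifetime in seconds
        "X-Amz-SignedHeaders": "host",
    }
    canonical_query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    canonical_request = "\n".join([
        "GET",
        "/" + urllib.parse.quote(key),   # object key may contain slashes
        canonical_query,
        f"host:{host}\n",                # canonical headers
        "host",                          # signed header names
        "UNSIGNED-PAYLOAD",              # body is not part of the signature
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    def _hmac(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()

    # Derive the signing key by chaining HMACs over the credential scope.
    k = _hmac(("AWS4" + secret_key).encode(), datestamp)
    for part in (region, "s3", "aws4_request"):
        k = _hmac(k, part)
    signature = hmac.new(k, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return (f"https://{host}/{urllib.parse.quote(key)}"
            f"?{canonical_query}&X-Amz-Signature={signature}")
```

Note that signing happens entirely inside your application, with no round-trip to S3, so issuing one URL per object is cheap; S3 validates the signature and the expiry when the client actually fetches the object.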