Why are objects uploaded to a bucket showing a different storage class than the default class?


Objects uploaded to a bucket have a different storage class than the bucket's default storage class.

I have a Synology NAS. I am using the Hyper Backup application to create an offsite backup of my NAS data. One of the options is to use an S3 or S3-compatible bucket to store the data. I am using Google Cloud's Archive storage class: I created a bucket with a default class of Archive and set up the backup task. The backup task has started and will take some time to finish.

Now when I look at any object inside the bucket, its storage class is shown as Standard. Below is an example screenshot. I am not clear on why the objects are being uploaded as Standard. A Standard object costs more than 16 times as much to store as an Archive object, so the whole backup task is losing its value pretty fast.

Bucket class as shown in Google Cloud console

Any idea why this is happening? And how can I configure it so that objects are uploaded with the Archive storage class?

1 Answer

Answered by hkanjih:

When you upload an object to a bucket, it usually takes on the bucket's default storage class. However, some services or applications can specify a different storage class during upload. In these cases, the uploaded object will take on the storage class specified during the upload, not the default storage class of the bucket.
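That precedence rule can be sketched in a few lines of plain Python (this is an illustration of the behavior, not the actual Cloud Storage API):

```python
def effective_storage_class(bucket_default, requested=None):
    """Return the storage class an uploaded object ends up with.

    If the uploader specifies a class in the upload request, that class
    wins; only otherwise does the bucket's default class apply.
    """
    return requested if requested is not None else bucket_default


# Bucket default is Archive, but the uploading application (e.g. Hyper
# Backup) explicitly requests Standard -- the request wins:
print(effective_storage_class("ARCHIVE", "STANDARD"))  # -> STANDARD

# No class specified in the upload request -- the bucket default applies:
print(effective_storage_class("ARCHIVE"))  # -> ARCHIVE
```

This is exactly what you are seeing: Hyper Backup appears to be requesting Standard explicitly, so the bucket's Archive default never kicks in.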

To ensure that your objects are uploaded to the Archive storage class, you can do one of two things:

  1. Configure the upload application. In your case, the Hyper Backup application on the Synology NAS is uploading the data. You may need to configure it to specify the Archive storage class during the upload if it supports this functionality.

  2. Use Object Lifecycle management rules. You can create an Object Lifecycle management rule on the bucket that changes the storage class of objects to Archive after they've been uploaded. For example, you can set a rule to change the storage class of any new objects to Archive one day after creation.

Note that moving data into the Archive storage class incurs a cost for the class change, and Archive objects are subject to a minimum storage duration of 365 days. Option 2 (lifecycle rules) is a good choice if Hyper Backup does not offer the ability to specify a storage class during upload.

Here's how you can set up a lifecycle rule:

{
  "lifecycle": {
    "rule": [
      {
        "action": {
          "type": "SetStorageClass",
          "storageClass": "ARCHIVE"
        },
        "condition": {
          "age": 1
        }
      }
    ]
  }
}
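If you save the configuration above as `lifecycle.json`, a quick local sanity check can catch typos in the field names before you apply it to the bucket (the expected structure below is based on the Cloud Storage lifecycle JSON format):

```python
import json

# The same lifecycle configuration as shown above.
lifecycle = {
    "lifecycle": {
        "rule": [
            {
                "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
                "condition": {"age": 1},
            }
        ]
    }
}

# Write it out so it can be applied to the bucket.
with open("lifecycle.json", "w") as f:
    json.dump(lifecycle, f, indent=2)

# Verify the parts the lifecycle feature expects.
rule = lifecycle["lifecycle"]["rule"][0]
assert rule["action"]["type"] == "SetStorageClass"
assert rule["action"]["storageClass"] == "ARCHIVE"
assert rule["condition"]["age"] >= 1
print("lifecycle.json written and structurally valid")
```

You can then apply it with `gsutil lifecycle set lifecycle.json gs://YOUR_BUCKET` (substituting your bucket name), and confirm it took effect with `gsutil lifecycle get gs://YOUR_BUCKET`.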