RadosGW returns NoSuchBucket error for public URLs when integrated with OpenStack Keystone


I'm trying to integrate Ceph RadosGW with OpenStack Keystone. Everything is working as expected, but when I try to reach public buckets via the public link generated in Horizon, I get a permanent 'NoSuchBucket' error. However, the bucket and all its content do exist: I can access it as an authenticated user in Horizon, I can access it as an authenticated user via S3 browser/aws cli, and I can see it with radosgw-admin bucket list --bucket . We are running OpenStack Rocky, and this issue appears with Ceph Octopus 15.2.4 (there were no issues with RGW on Nautilus and Luminous). Here is my configuration file:

<...>
[client.rgw.ceph-hdd-9.rgw0]
host = ceph-hdd-9
keyring = /var/lib/ceph/radosgw/ceph-rgw.ceph-hdd-9.rgw0/keyring
log file = /var/log/ceph/ceph-rgw-ceph-hdd-9.rgw0.log
rgw frontends = beast endpoint=<IP Address>:8080
rgw thread pool size = 512

rgw zone = default

rgw keystone api version = 3
rgw keystone url = https://<keystone url>:13000
rgw keystone accepted roles = admin, _member_, Member, member, creator, swiftoperator
rgw keystone accepted admin roles = admin, _member_
#rgw keystone token cache size = 0
#rgw keystone revocation interval = 0
rgw keystone implicit tenants = true
rgw keystone admin domain = default
rgw keystone admin project = service
rgw keystone admin user = swift
rgw keystone admin password = swift_osp_password
rgw s3 auth use keystone = true
rgw s3 auth order = local, external
rgw user default quota max size = -1
rgw swift account in url = true
rgw dynamic resharding = false
rgw bucket resharding = false
rgw enable usage log = true
rgw usage log tick interval = 30
rgw usage log flush threshold = 1024
rgw usage max shards = 32
rgw usage max user shards = 1
rgw verify ssl = false
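
For reference, these are roughly the commands I used to confirm the bucket really exists (the bucket name is taken from the log further below; the endpoint is a placeholder):

# List buckets and show this bucket's metadata directly via radosgw-admin
radosgw-admin bucket list
radosgw-admin bucket stats --bucket=containerA

# The same bucket is reachable as an authenticated user over the S3 API
aws --endpoint-url http://<rgw-host>:8080 s3 ls s3://containerA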

Any thoughts, help, suggestions, or ideas are very much appreciated.

UPD: Here is what I found in the RadosGW log for the faulty public buckets.

For the bucket with public URL (which fails):

2020-08-03T16:26:54.317+0000 7fd4d6c9a700 20 req 115 0s swift:list_bucket rgw::auth::swift::DefaultStrategy: trying rgw::auth::swift::SwiftAnonymousEngine
2020-08-03T16:26:54.317+0000 7fd4d6c9a700 20 req 115 0s swift:list_bucket rgw::auth::swift::SwiftAnonymousEngine granted access
2020-08-03T16:26:54.317+0000 7fd4d6c9a700  2 req 115 0s swift:list_bucket normalizing buckets and tenants
2020-08-03T16:26:54.317+0000 7fd4d6c9a700 10 s->object= s->bucket=containerA
2020-08-03T16:26:54.317+0000 7fd4d6c9a700  2 req 115 0s swift:list_bucket init permissions
2020-08-03T16:26:54.317+0000 7fd4d6c9a700 20 get_system_obj_state: rctx=0x7fd59fe3ab18 obj=default.rgw.meta:root:containerA state=0x55bccaea2e20 s->prefetch_data=0
2020-08-03T16:26:54.317+0000 7fd4d6c9a700 10 cache get: name=default.rgw.meta+root+containerA : expiry miss
2020-08-03T16:26:54.318+0000 7fd4d5c98700 10 cache put: name=default.rgw.meta+root+containerA info.flags=0x0
2020-08-03T16:26:54.318+0000 7fd4d5c98700 10 adding default.rgw.meta+root+containerA to cache LRU end
2020-08-03T16:26:54.318+0000 7fd4d5c98700 10 req 115 0.001000010s init_permissions on :[]) failed, ret=-2002

For the same bucket accessed by a Keystone user (from Horizon):

2020-08-03T16:24:14.853+0000 7fd4f24d1700 20 req 109 0s swift:list_bucket rgw::auth::keystone::TokenEngine granted access
2020-08-03T16:24:14.853+0000 7fd4f24d1700 20 get_system_obj_state: rctx=0x7fd59fe3b778 obj=default.rgw.meta:users.uid:7c0fddbf5297463e9364ee3aed681077$7c0fddbf5297463e9364ee3aed681077 state=0x55bcca5cc0a0 s->prefetch_data=0
2020-08-03T16:24:14.853+0000 7fd4f24d1700 10 cache get: name=default.rgw.meta+users.uid+7c0fddbf5297463e9364ee3aed681077$7c0fddbf5297463e9364ee3aed681077 : hit (requested=0x6, cached=0x7)
2020-08-03T16:24:14.853+0000 7fd4f24d1700 20 get_system_obj_state: s->obj_tag was set empty
2020-08-03T16:24:14.853+0000 7fd4f24d1700 10 cache get: name=default.rgw.meta+users.uid+7c0fddbf5297463e9364ee3aed681077$7c0fddbf5297463e9364ee3aed681077 : hit (requested=0x1, cached=0x7)
2020-08-03T16:24:14.853+0000 7fd4f24d1700  2 req 109 0s swift:list_bucket normalizing buckets and tenants
2020-08-03T16:24:14.853+0000 7fd4f24d1700 10 s->object= s->bucket=7c0fddbf5297463e9364ee3aed681077/containerA
2020-08-03T16:24:14.853+0000 7fd4f24d1700  2 req 109 0s swift:list_bucket init permissions
2020-08-03T16:24:14.853+0000 7fd4f24d1700 20 get_system_obj_state: rctx=0x7fd59fe3ab18 obj=default.rgw.meta:root:7c0fddbf5297463e9364ee3aed681077/containerA state=0x55bcca5cc0a0 s->prefetch_data=0
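
For completeness, the failing request is just an anonymous GET against the container's public Swift URL, roughly like this (host, port and project id are placeholders; the exact path layout is my assumption based on rgw swift account in url = true):

# Anonymous request to the public container URL generated by Horizon
curl -i "http://<rgw-host>:8080/swift/v1/AUTH_<project_id>/containerA"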

Any ideas, please?


There are 2 answers below.

Answer 1:

Nautilus v14.2.12 added a negative cache for system objects (https://github.com/ceph/ceph/pull/37460),

then a bug was found (https://tracker.ceph.com/issues/48632),

and Octopus v15.2.9 fixes this (https://github.com/ceph/ceph/pull/38971).

Answer 2:

We had the same issue; it was related to a bug whose fix was backported to Octopus v15.2.8 and Nautilus v14.2.12, so make sure to run that or a later version.
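
If it helps, a quick way to confirm which version your RGW daemons are actually running (assuming admin access to the cluster):

# Per-daemon version breakdown; the "rgw" section shows the running radosgw version
ceph versions
# Version of the locally installed radosgw binaries
radosgw-admin --version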