How to Provision Application S3 Buckets Using Rook-Ceph
This guide walks you through the steps to provision S3-compatible buckets in a Rook-Ceph environment.
These tasks are intended to be completed as an administrator.
Prerequisites
- Rook cluster deployed with Ceph Object Storage (RGW)
- At least 3 OSD pods on separate nodes
- `kubectl` configured to interact with your cluster
Step 1: Create a CephObjectStore
Apply the CephObjectStore YAML configuration:

```console
kubectl apply -f ceph-object-store.yaml
```

Your `ceph-object-store.yaml` should look similar to this:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
  gateway:
    port: 80
    instances: 1
```

Once the object store is created, the Rook operator deploys the RADOS Gateway (RGW). Run the following command to confirm the RGW pods are up:

```console
kubectl -n rook-ceph get pod -l app=rook-ceph-rgw
```
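As an additional check before moving on, you can watch the object store resource itself. The exact status columns vary between Rook versions, but it should eventually report a ready or connected phase:

```console
# Watch the CephObjectStore until the Rook operator reports it healthy
# (status column names differ slightly across Rook versions).
kubectl -n rook-ceph get cephobjectstore my-store -w
```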
Step 2: Create a Storage Class
Create a storage class to enable clients to provision buckets.
Apply the StorageClass YAML configuration:

```console
kubectl apply -f storage-class.yaml
```

Your `storage-class.yaml` should look similar to this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-bucket
provisioner: rook-ceph.ceph.rook.io/bucket
parameters:
  objectStoreName: my-store
  objectStoreNamespace: rook-ceph
```
If your Rook operator runs in a different namespace, change the namespace prefix in the `provisioner` value (`rook-ceph` above) to match it.
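With the storage class in place, an application can request a bucket by creating an ObjectBucketClaim that references it. A minimal sketch is shown below; the claim name `my-bucket` and the `default` namespace are placeholders for your application's own values:

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-bucket          # placeholder claim name
  namespace: default       # the application's namespace
spec:
  # Rook generates a unique bucket name from this prefix;
  # use bucketName instead if you need an exact name.
  generateBucketName: my-bucket
  storageClassName: rook-ceph-bucket
```

When the claim is bound, Rook creates a ConfigMap and a Secret with the same name as the claim in that namespace, containing the bucket's endpoint details and S3 credentials for the application to consume.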
Verifying Your Setup
To ensure everything is set up correctly, run the following:
```console
kubectl get storageclass
kubectl -n rook-ceph describe CephObjectStore my-store
```
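If you created an ObjectBucketClaim as sketched in Step 2, you can also confirm that its ConfigMap and Secret were generated in the claim's namespace (`default` and `my-bucket` are the placeholder values from that example):

```console
# The ConfigMap holds the bucket name, host, and port;
# the Secret holds the S3 access key and secret key.
kubectl -n default get configmap my-bucket
kubectl -n default get secret my-bucket
```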
FAQs
How can I specify different failure domains?
You can modify the `failureDomain` field in the `CephObjectStore` YAML.
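For example, on a single-node test cluster where all OSDs share one host, you might relax the failure domain to the OSD level (illustrative only; keep `host` or wider in production):

```yaml
metadataPool:
  failureDomain: osd
  replicated:
    size: 3
```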
What if I have more than 3 OSD Pods?
You can specify a larger `size` in the `replicated` section of the `CephObjectStore`.
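For example, with five OSDs on five separate hosts you could raise the replica count for the metadata pool like this (illustrative values):

```yaml
metadataPool:
  failureDomain: host
  replicated:
    size: 5
```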
Additional Resources
By following these steps, you'll be able to provision application S3 buckets using Rook-Ceph. For additional configurations and advanced setups, refer to the official Rook and Ceph documentation.