
Velero backups on Oracle Cloud Object Storage

🗓️ Date: 2025-01-09 · 🗺️ Word count: 905 · ⏱️ Reading time: 5 minutes

Velero supports generic S3 locations for its backups, provided that they are compatible with the AWS implementation. Oracle Cloud Object Storage (especially in the free tier) is a convenient way to experiment with object storage for various use cases.

In this post, Velero will be configured to use Oracle Cloud S3-compatible storage for backing up both regular resources and PV content. If your storage implementation supports disk snapshots, use those: they are faster and more reliable. In all other cases, Velero can create backups of PV content using the “File System Backup” (FSB) feature.

FSB Limitations

The FSB feature can be used to back up almost any PV, independently of its storage class or implementation. However, it comes with some important caveats: the most troublesome is that data is copied from the live file system, so the backup is not guaranteed to be consistent if the application keeps writing while it runs.

Prerequisites

  • An Oracle Cloud account (the free tier works fine).
  • A Kubernetes cluster where Velero will be installed.

Steps

Oracle Cloud setup

  • Generate a customer secret key in your OCI User settings. Name it something easy to identify and note down the id and the secret values.
  • Create a file on your local machine following the model below, for instance /home/user/velero-credentials. Replace the id and the secret values.
[default]
aws_access_key_id=[KEY_ID]
aws_secret_access_key=[KEY_SECRET]
  • Obtain your OCI home region and storage namespace values. From the Oracle Cloud portal, open a Cloud Shell. When it’s ready, run
# region
oci iam region-subscription list | jq -r '.data[0]."region-name"' 

# storage namespace
oci os ns get | jq -r .data
  • Create a private bucket, using default settings.
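
If you prefer to stay in the Cloud Shell for this last step as well, the bucket can also be created from the OCI CLI. A minimal sketch, assuming a placeholder bucket name and compartment OCID (replace both with your own values):

# create a bucket from the Cloud Shell; default settings make it private
oci os bucket create \
--name bucket-velero-demo \
--compartment-id [COMPARTMENT_OCID]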

Velero installation

Velero can be installed either via the velero install CLI command or via the Helm chart. In this tutorial, the CLI command will be used.

  • Download the Velero CLI on your local machine.
  • Export the KUBECONFIG variable so that it points to your cluster's kubeconfig file.
  • Install the Velero server component in the cluster running:
velero install \
--provider aws \
--bucket [BUCKET_NAME] \
--prefix velero \
--use-volume-snapshots=false \
--secret-file [path/to/file/credentials] \
--plugins velero/velero-plugin-for-aws:v1.0.0 \
--use-node-agent \
--backup-location-config region=[HOME_REGION],s3ForcePathStyle="true",s3Url=https://[STORAGE_NAMESPACE].compat.objectstorage.[HOME_REGION].oraclecloud.com

Notes:

  • The flag --use-node-agent enables the FSB feature.
  • In this tutorial, path/to/file/credentials should be /home/user/velero-credentials.
  • Use version v1.0.0 of the AWS plugin: newer versions did not work in my tests.
  • The value of --prefix is the folder name in the bucket where the backups will be stored.

Example:

In my case, the command used was the following.

velero install \
--provider aws \
--bucket bucket-lab-k3s-velero-demo \
--prefix velero \
--use-volume-snapshots=false \
--secret-file /home/lorenzo/k3s/velero/velero-credentials \
--plugins velero/velero-plugin-for-aws:v1.0.0 \
--use-node-agent \
--backup-location-config region=eu-zurich-1,s3ForcePathStyle="true",s3Url=https://zrwilsgxxxxx.compat.objectstorage.eu-zurich-1.oraclecloud.com

Then, check if all is running fine:

# check if the credentials are correct
kubectl get secret -n velero cloud-credentials -oyaml | yq '.data.cloud' | base64 -d

[default]
aws_access_key_id=xxx...xxx
aws_secret_access_key=xxx...xxx
kubectl get pods -n velero

NAME                                                              READY   STATUS      RESTARTS        AGE
node-agent-h4s6s                                                  1/1     Running     1 (3h39m ago)   26h
node-agent-j29ql                                                  1/1     Running     1 (3h40m ago)   26h
node-agent-j48qh                                                  1/1     Running     1 (3h40m ago)   26h
node-agent-q9b9c                                                  1/1     Running     1 (3h39m ago)   26h
node-agent-xzwbp                                                  1/1     Running     1 (3h39m ago)   26h
velero-d7856dd4d-pd6wv                                            1/1     Running     1 (3h39m ago)   25h
velero get backup-locations

NAME      PROVIDER   BUCKET/PREFIX                            PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default   aws        bucket-lab-k3s-velero-demo/velero        Available   2025-01-09 12:57:27 +0100 CET   ReadWrite     true
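
If the backup location shows Unavailable instead, the Velero server logs are the first place to look; wrong credentials or a mistyped s3Url usually show up there. A quick way to check:

# inspect the Velero server logs for S3 or credential errors
kubectl logs deployment/velero -n velero | grep -i error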

Test

For the test, I will create a Postgres database in a new namespace and insert some data into it. The namespace will be backed up and then deleted. A restore will then be created from the backup, and I will check whether the previously inserted data is still present.

  1. Postgres installation

Postgres will be installed via Helm using the Bitnami chart. No read-only replicas are required and only a small PVC will be used.

primary:
  persistence:
    size: 200Mi
readReplicas:
  replicaCount: 0
auth:
  postgresPassword: pippo
  username: pippo
  password: pippo
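
If the Bitnami chart repository is not already configured on the machine where you run helm, add it first (standard Bitnami repository URL):

# add and refresh the Bitnami chart repository (skip if already present)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update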
kubectl create namespace postgres
helm install my-postgresql bitnami/postgresql --version 16.3.5 -f values.yaml --namespace postgres
k get all -n postgres

NAME                  READY   STATUS    RESTARTS   AGE
pod/my-postgresql-0   1/1     Running   0          179m

NAME                       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/my-postgresql      ClusterIP   10.43.85.58   <none>        5432/TCP   179m
service/my-postgresql-hl   ClusterIP   None          <none>        5432/TCP   179m

NAME                             READY   AGE
statefulset.apps/my-postgresql   1/1     179m
kubectl get pvc -n postgres

NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
data-my-postgresql-0   Bound    pvc-23e5dab6-cebf-46be-94f2-f763c9bfcfbf   200Mi      RWO            longhorn       <unset>                 177m
  2. Generate some Postgres data

Connect to Postgres via a local port-forward, then use a utility on your local machine, such as HeidiSQL, to connect to localhost:5432.

k port-forward -n postgres svc/my-postgresql 5432:5432

Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432

Bonus: if the machine on which you are running kubectl port-forward is not your local machine but one you are connected to via SSH, you can forward the port again to your local machine over SSH:

# ssh -L [LOCAL_PORT]:127.0.0.1:[REMOTE_PORT] -N [REMOTE_USER]@[REMOTE_HOST]
ssh -L 8080:127.0.0.1:5432 -N 192.168.1.103
# then connect locally to 127.0.0.1:8080 to reach Postgres in the Kubernetes cluster

Then, insert some dummy data.
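
For example, with the port-forward active, a throwaway table can be created with psql. The table and its rows are purely illustrative, and the postgres superuser password comes from the values.yaml above:

# connect through the forwarded port as the postgres superuser
psql "host=127.0.0.1 port=5432 user=postgres password=pippo dbname=postgres"

-- hypothetical test table, just to have something to verify after the restore
CREATE TABLE velero_test (id serial PRIMARY KEY, note text);
INSERT INTO velero_test (note) VALUES ('row before backup'), ('another row');
SELECT count(*) FROM velero_test;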

  3. Create the backup
velero backup create postgres0 --include-namespaces=postgres --default-volumes-to-fs-backup

velero backup describe postgres0 --details
...
velero get backup

NAME              STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
postgres0         Completed   0        0          2025-01-08 11:48:46 +0100 CET   28d       default            <none>

Check using the Oracle Cloud portal if the bucket contains the backup data.
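
As an alternative to the portal, the objects can also be listed from the Cloud Shell; the bucket name below is the one from the example above:

# list the backup objects stored under the velero prefix
oci os object list --bucket-name bucket-lab-k3s-velero-demo --prefix velero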

  4. Delete the namespace
kubectl scale sts -n postgres my-postgresql --replicas=0
kubectl delete namespace postgres
  5. Restore the backup
velero restore create --from-backup postgres0
velero restore describe postgres0-20250109101704 --details
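
Once the restore reports Completed, the last step of the test is to verify that the data is back. A quick check, reusing the port-forward and the hypothetical velero_test table from the earlier sketch:

# wait for the postgres pod to be ready again, then query the restored data
kubectl get pods -n postgres
kubectl port-forward -n postgres svc/my-postgresql 5432:5432 &
psql "host=127.0.0.1 port=5432 user=postgres password=pippo dbname=postgres" \
-c "SELECT * FROM velero_test;"

If the rows inserted before the backup come back, the FSB backup and restore of the PV content worked end to end.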