Velero

Overview

The source code and default configuration of the Building Block are available on code.syseleven.de. Information on release notes and new features: Release notes Velero

Velero

Velero is a commonly used backup and restore tool to back up your Kubernetes resources. You can also use Velero to migrate data from one MetaKube cluster to another, which is
an easy way to replicate an existing cluster.

It can also be used for disaster recovery. This can be done using the Snapshot-Controller.

If you need a backup tool for your application, Velero can store your application
data in an AWS S3 bucket or in Azure Blob Storage. There are further providers you can use together with your Velero Building Block.

Prerequisites for Velero

You need to provide a storage provider for this Building Block. Additionally, if you want to use Velero for snapshots, you need to install a Snapshot-Controller. Proceed with the following prerequisite description to use the Velero Building Block out of the box.
If you have not already created an S3 bucket, please refer to how to create an Object Storage.

Your pipeline configuration should provide the following variables to properly use an S3 bucket (S3_BACKUP_ACCESS_KEY and S3_BACKUP_SECRET_KEY, see below).
Go to code.syseleven.de -> the project where your Building Block pipeline resides -> Settings -> CI/CD -> "Variables".

The following command line tools might be helpful (git and kubectl should already be installed):

  • s3cmd
  • openstack cli
  • helm

To obtain the correct values that enable the pipeline to access the S3 bucket, simply invoke

openstack ec2 credentials list

and use these values for S3_BACKUP_ACCESS_KEY and S3_BACKUP_SECRET_KEY in your GitLab project under Settings -> CI/CD -> "Variables".
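
If the list is empty, you can create a set of EC2 credentials first. The optional s3cmd call at the end is only a sketch that assumes the fes S3 endpoint and requires the placeholder values to be filled in; adjust the endpoint to your region.

# Create EC2 credentials if none exist yet
openstack ec2 credentials create

# List credentials; the "Access" column maps to S3_BACKUP_ACCESS_KEY,
# the "Secret" column to S3_BACKUP_SECRET_KEY
openstack ec2 credentials list

# Optional: verify that the credentials can reach the S3 endpoint
s3cmd --access_key=<ACCESS> --secret_key=<SECRET> \
      --host=s3.fes.cloud.syseleven.net \
      --host-bucket='%(bucket)s.s3.fes.cloud.syseleven.net' ls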

A recommended resource overview is listed in the table below.

CPU / vCPU | Memory
1          | 1000MiB + 512-1024MiB per worker node

No further activities need to be carried out in advance.

Installing a Snapshot-Controller

If you need not only a full backup but also snapshots of your data, e.g. for disaster recovery purposes, you can use the external-snapshotter.
Velero uses the Custom Resource Definitions (CRDs) of snapshot.storage.k8s.io. To correctly manage this endpoint, Kubernetes needs a controller.

To install it minimally, use the kubeconfig of your cluster and execute the following commands:

git clone https://github.com/kubernetes-csi/external-snapshotter.git
cd external-snapshotter
# install the VolumeSnapshot CRDs
kubectl kustomize client/config/crd | kubectl create -f -
# install the common snapshot controller
kubectl kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
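
To verify that the snapshot CRDs are registered before continuing, you can check the API resources; a quick sketch:

kubectl api-resources --api-group=snapshot.storage.k8s.io
# expect volumesnapshotclasses, volumesnapshotcontents and volumesnapshots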

Also, you need to create a VolumeSnapshotClass. The following template can be used:

#snapshotClass.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: default
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: cinder.csi.openstack.org
deletionPolicy: Retain

To deploy it, invoke the following command:

`kubectl apply -f snapshotClass.yaml`
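
To confirm that the class was created and carries the Velero label, you can check it with kubectl, for example:

kubectl get volumesnapshotclass default --show-labels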

Adding the Building Block

We assume that you are familiar with how to set up your control repo. If you would like to learn more about control repos, please refer to Setup a control repo.
Make sure that you also have the KUBECONFIG of your MetaKube cluster set appropriately.

If you already have a control repo and would like to add the Velero Building Block, just proceed with the following steps.
In the following, you need to change the GitLab URL to fit your situation.

## this is just an example. This repo below might not be available.
git clone git@code.syseleven.de:/mka-realm-test/control-repo-testing/mkatutorials
cd mkatutorials
mkdir syseleven-velero
cd syseleven-velero

In this case we have a root .gitlab-ci.yml and, for each Building Block, a separate .gitlab-ci.yml that is responsible for that Building Block.

Add a .gitlab-ci.yml to the directory syseleven-velero with the following content:


include:
- project: syseleven/building-blocks/helmfiles/velero
  file: JobDevelopment.yaml
  ref: 5.8.0
- project: syseleven/building-blocks/helmfiles/velero
  file: JobStaging.yaml
  ref: 5.8.0
- project: syseleven/building-blocks/helmfiles/velero
  file: JobProduction.yaml
  ref: 5.8.0

Your directory should look like this:

.
+-- .git
+-- .gitlab-ci.yml
+-- syseleven-velero
|   +-- .gitlab-ci.yml
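
The root .gitlab-ci.yml has to pick up the Building Block pipeline. A minimal sketch, assuming you wire it up with a local include (your control repo may already handle this differently):

#.gitlab-ci.yml (root of the control repo)
include:
  - local: syseleven-velero/.gitlab-ci.yml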

Push your changes to your control repository.

Required configuration

Since version 4 it is mandatory to configure the S3 region in values-velero.yaml. It has to be reflected in the following keys (see the example after the list):

  • configuration.backupStorageLocation.config.region (e.g. fes)
  • configuration.backupStorageLocation.config.s3Url (e.g. https://s3.fes.cloud.syseleven.net)
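
For example, for a cluster in the fes region the corresponding snippet in values-velero.yaml could look like this (adjust region and URL to your environment):

#values-velero.yaml
configuration:
  backupStorageLocation:
    config:
      region: fes
      s3Url: https://s3.fes.cloud.syseleven.net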

You can follow the configuration description of the Velero Building Block and stick to the best practice section in the README.

Enable Snapshots

If you followed the description above, your MetaKube cluster is already equipped with the Snapshot-Controller.

You need to create a configuration file that follows the naming convention of the MetaKube Accelerator.
The global configuration file is named values-velero.yaml. Add the necessary configuration:

#values-velero.yaml

snapshotsEnabled: true
configuration:
  volumeSnapshotLocation:
    name: default
    bucket: BUCKET_NAME
    config:
      region: cbk
  features: EnableCSI

The BUCKET_NAME follows the convention velero-backup-ENVIRONMENT-CLUSTERID, where ENVIRONMENT is the used pipeline context - development, staging or production - and CLUSTERID is the ID assigned to the cluster during its creation.
For example, for your development cluster it might look like velero-backup-development-xxc2hw9g9k.
You can either use helm to find out the correct bucketName after deployment with

helm get values velero -n syseleven-velero -o json | jq -r '.configuration.backupStorageLocation.bucket'
#output e.g
#velero-backup-development-xxc2hw9g9k

or reconstruct the name by looking at the template used in syseleven-velero/.gitlab-ci.yml and retrieving the cluster ID with kubectl.
An example output would be:

kubectl get nodes -o jsonpath='{.items[0].metadata.labels.system/cluster}{"\n"}' #newline for pprint
#output e.g
#xxc2hw9g9k
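
To verify that CSI snapshots are actually taken, you can trigger a test backup directly via a Velero Backup resource. This is only a sketch; the backup name is arbitrary and YOUR_NAMESPACE stands for a namespace that contains persistent volumes:

#test-backup.yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup
  namespace: syseleven-velero
spec:
  includedNamespaces:
    - YOUR_NAMESPACE
  snapshotVolumes: true

Apply it and watch for the resulting VolumeSnapshot objects in the application namespace:

kubectl apply -f test-backup.yaml
kubectl get volumesnapshots -n YOUR_NAMESPACE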

Monitoring

The monitoring of this service is configured via the metrics.serviceMonitor.* properties.
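
A minimal sketch for values-velero.yaml, assuming the Building Block passes these values through to the upstream Velero Helm chart:

#values-velero.yaml
metrics:
  enabled: true
  serviceMonitor:
    enabled: true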

You can find more information in the section "SysEleven Best Practices default values" of the Velero Building Block README.md.
Alerting can be set up via the Prometheus rules extension.

Limitations

As the Velero Building Block is one of our curated Building Blocks, you do not suffer from the following limitations:

  • You do not have to keep track of compatible versions of Kubernetes, Velero and Helm yourself.
  • The bucket comes with the Building Block; regional provisioning of the bucket is guaranteed.
  • You do not have to plan for different provider credentials; the provided ones are used.

As we take care of the above-mentioned limitations, you can focus on the backup strategy itself.

Scaling Setup

Scaling does not make much sense for this Building Block and does not address consistency aspects of backups.

Release-Notes

Please find more information on release notes and new features here: Release notes Velero