This version of the RabbitMQ Building Block is the next iteration and no longer ships the container image formerly provided by Bitnami. It replaces the legacy RabbitMQ Building Block.
RabbitMQ is a lightweight message broker which can be deployed on Metakube Core.
The source code and a default configuration can be found on code.syseleven.de. For information on release notes and new features, please follow Release notes rabbitmq-cluster.
A recommended resource overview is listed in the table below.
| CPU/vCPU | Memory |
|---|---|
| 3 | 6000MiB |
A default deployment consists of three nodes.
No further activities need to be carried out in advance.
The recommended general cluster configuration is sufficient to meet the recommended configuration of the RabbitMQ-Cluster.
An existing control repository is mandatory to proceed. If you would like to learn more about control repos, please refer to Setup a control repo.
Make sure that you also have the KUBECONFIG from your Metakube Cluster set appropriately.
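For example, you can point the KUBECONFIG environment variable at the kubeconfig file of your cluster before running any kubectl commands. The file path below is a hypothetical placeholder; use the location where you saved the kubeconfig downloaded for your cluster:

```shell
# Hypothetical path to the kubeconfig downloaded for your Metakube Cluster
export KUBECONFIG="$HOME/.kube/metakube-cluster.yaml"

# Confirm the variable is set before running kubectl commands
echo "$KUBECONFIG"
```

With the variable set, kubectl and the commands in the following steps will target the correct cluster.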
For example:

```shell
git clone git@code.syseleven.de:/<path2yourControlRepo>
cd <YourWorkspace>
mkdir syseleven-rabbitmq-cluster
cd syseleven-rabbitmq-cluster
```
In your control repository's root `.gitlab-ci.yml`, make sure the stages and the include of the Building Block directory are present:

```yaml
stages:
  - diff
  - deploy

include: syseleven-rabbitmq-cluster/.gitlab-ci.yml
```
In each Building Block directory, the `.gitlab-ci.yml` is responsible for the Building Block itself.
Add a .gitlab-ci.yml to the directory syseleven-rabbitmq-cluster with the following content:
```yaml
include:
  - project: syseleven/building-blocks/helmfiles/rabbitmq-cluster
    file: JobDevelopment.yaml
    ref:
  - project: syseleven/building-blocks/helmfiles/rabbitmq-cluster
    file: JobStaging.yaml
    ref:
  - project: syseleven/building-blocks/helmfiles/rabbitmq-cluster
    file: JobProduction.yaml
    ref:
```
Your directory structure should look like this:
```
.
+-- .git
+-- .gitlab-ci.yml
+-- syseleven-rabbitmq-cluster
|   +-- .gitlab-ci.yml
```
Push your changes to your control repository. Your Building Block will be deployed to your Metakube Cluster. Once the GitLab pipeline has finished successfully, you can check the running Building Block with the following command:
```shell
% kubectl get pods -n syseleven-rabbitmq-cluster
NAME                 READY   STATUS    RESTARTS   AGE
rabbitmq-cluster-0   1/1     Running   0          1d20h
rabbitmq-cluster-1   1/1     Running   0          1d20h
rabbitmq-cluster-2   1/1     Running   0          1d20h
```
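As a quick sanity check, you can count the Running pods. The snippet below sketches this against the sample listing above, since the real command requires access to your cluster:

```shell
# Sample listing as produced by the kubectl command above (inlined for illustration)
pods='rabbitmq-cluster-0   1/1   Running   0   1d20h
rabbitmq-cluster-1   1/1   Running   0   1d20h
rabbitmq-cluster-2   1/1   Running   0   1d20h'

# All three nodes of the default deployment should report Running
running=$(printf '%s\n' "$pods" | grep -c 'Running')
echo "$running"
```

Against a live cluster, the same filter can be piped directly after `kubectl get pods -n syseleven-rabbitmq-cluster`.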
The RabbitMQ-Cluster is now ready to use. To access the management interface, a few prerequisites must be fulfilled. First, retrieve the default admin password from the cluster's ConfigMap:

```shell
kubectl -n syseleven-rabbitmq-cluster get cm rabbitmq-cluster-config -o jsonpath='{.data.rabbitmq\.conf}' | grep default_pass
```
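To illustrate what the grep above extracts, here is the same filter applied to a minimal sample of the rabbitmq.conf content. The password shown is a made-up placeholder; the real value comes from the ConfigMap:

```shell
# Minimal sample of the rabbitmq.conf data stored in the ConfigMap
conf='listeners.tcp.default = 5672
default_user = user
default_pass = CHANGEME-example'

# grep isolates the line that carries the default admin password
printf '%s\n' "$conf" | grep default_pass
```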
Before you open the admin URL of the RabbitMQ-Cluster, create a port forward:

```shell
kubectl port-forward --namespace syseleven-rabbitmq-cluster svc/rabbitmq-cluster 15672:15672
```
Now you can access the management UI at http://127.0.0.1:15672. Log in with the admin password you retrieved with the command above.

In the Overview section of the management interface you can export and import definitions. You can export the current configuration as a JSON file or import an existing configuration from other RabbitMQ clusters, see: Schema Definition Export and Import.
An overview of possible configuration options is available on RabbitMQ-Cluster BuildingBlock.
RabbitMQ provides a number of CLI tools, all of which are documented in detail under Command Line Tools.
You can use the CLI tools in any rabbitmq pod.
Example:

```shell
kubectl exec -n syseleven-rabbitmq-cluster rabbitmq-cluster-0 -- rabbitmqctl --help
```
This prints usage information, including a list of the available commands.
Refer to the official RabbitMQ documentation to learn more about scaling and further configuration options.
Besides the following known issues, please always keep an eye on the official announcements from the upstream project documentation.
IMPORTANT: Some of these procedures can lead to data loss. Always make a backup beforehand!
The RabbitMQ cluster can tolerate multiple node failures, but in a situation in which all nodes are brought down at the same time, the cluster might not be able to self-recover.
This happens if the pod management policy of the StatefulSet is not `Parallel` and the last pod still running was not the first pod of the StatefulSet. If that happens, update the pod management policy to recover a healthy state:
```shell
$ kubectl delete statefulset STATEFULSET_NAME --cascade=false
$ helm upgrade RELEASE_NAME groundhog2k/rabbitmq \
    --set podManagementPolicy=Parallel \
    --set replicaCount=NUMBER_OF_REPLICAS \
    --set authentication.password.value=PASSWORD \
    --set authentication.erlangCookie.value=ERLANG_COOKIE
```
If the steps above do not recover the cluster to a healthy state, it is possible that none of the RabbitMQ nodes is aware of its last state. In that case, you can force the boot of the nodes by specifying the clustering.forceBoot=true parameter (which will execute rabbitmqctl force_boot in each pod):
```shell
$ helm upgrade RELEASE_NAME groundhog2k/rabbitmq \
    --set podManagementPolicy=Parallel \
    --set clustering.forceBoot=true \
    --set replicaCount=NUMBER_OF_REPLICAS \
    --set authentication.password.value=PASSWORD \
    --set authentication.erlangCookie.value=ERLANG_COOKIE
```
Please refer to the official RabbitMQ documentation for more details on this topic.
More information on release notes and new features can be found in Release notes RabbitMQ-Cluster.