[kubernetes] Deploy RabbitMQ on a Kubernetes cluster

juniordevops.pl, 5 years ago

Hello, today I will show you how to run RabbitMQ in your local test environment, of course on a Kubernetes cluster.

First of all, we have to install a tool called Helm. What is Helm? Helm is a package manager running atop Kubernetes. It lets you describe an application's structure through convenient charts and manage it with simple commands.

The easiest way to do this is to use a prepared script and run it inside the test VM.

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

That will install the Helm client. Now we need to run the command:

helm init

This command installs Tiller inside the Kubernetes cluster. Tiller is Helm's server-side component, the mechanism through which Helm interacts with k8s.
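To check that Tiller actually came up, you can look for its pod; a quick sanity check (the label selector below matches what a default `helm init` deploys):

```shell
# Tiller runs as a deployment in the kube-system namespace
kubectl get pods --namespace kube-system -l app=helm

# Once Tiller is up, both client and server versions are reported
helm version
```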
Next we use a ready-made chart with RabbitMQ. You can browse this repository to see which files are inside:
https://github.com/helm/charts/tree/master/stable/rabbitmq/templates
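Before installing, it is also worth looking at the chart's configurable defaults; the standard `helm inspect` subcommand prints them:

```shell
# Print the chart's default values.yaml; any of these keys
# can be overridden later with --set
helm inspect values stable/rabbitmq
```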

Before we use helm, we need to deploy a StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

This manifest says that we will be using a local path on the machine's disk. In the future, we can connect external storage to keep our data safe. Apply the file using:

kubectl apply -f storage-local.yml --namespace=citizen-budget

You can see that I provide an additional flag, --namespace. A namespace is a logical grouping that keeps the project better organized.
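Note that the namespace has to exist before you can deploy into it; if it does not yet, create it first (a one-liner, using the citizen-budget name from the commands in this post):

```shell
kubectl create namespace citizen-budget
```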
Next step:

helm install \
  --set persistence.size=1Gi \
  --set persistence.storageClass=local-storage \
  --set rabbitmq.clustering.k8s_domain=develop-server \
  --name rabbitmqhelm --namespace=citizen-budget stable/rabbitmq

As you can see, I override a couple of settings: the volume size, the storage class, and the clustering domain.

If you now look into the k8s dashboard, you will see that the PVCs (persistent volume claims) from the helm deployment are in a Pending state. We have to provide a PV (persistent volume) for the PVC to bind to.
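The same check can be done from the command line, with no dashboard needed:

```shell
# STATUS will show Pending until a matching PV exists
kubectl get pvc --namespace=citizen-budget
```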

PersistentVolume.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: rabbitmq.persistent.volume
spec:
  capacity:
    storage: 1Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - develop-server

Run:

kubectl apply -f PersistentVolume.yml --namespace=citizen-budget
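Two small notes here: the /mnt/data directory has to exist on the node before the pod starts, and PersistentVolumes are cluster-scoped, so the --namespace flag is simply ignored for them. A quick check:

```shell
# Create the backing directory on the develop-server node
sudo mkdir -p /mnt/data

# PVs are cluster-scoped, so no namespace flag is needed
kubectl get pv rabbitmq.persistent.volume
```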

Go to the k8s dashboard, to the Persistent Volume Claims section, and edit the helm PVC. You have to find the "spec": line and, somewhere inside it, add "volumeName": "rabbitmq.persistent.volume",. That piece of the file should look like this:

"spec": { "accessModes": [ "ReadWriteOnce" ], "resources": { "requests": { "storage": "1Gi" } }, "volumeName": "rabbitmq.persistent.volume", "storageClassName": "local-storage", "volumeMode": "Filesystem" }

The helm PVC status should change to Bound. This is almost done.
When you enter the RabbitMQ pod, you will probably see an error in the log:

ERROR: epmd error for host rabbitmqhelm-0.rabbitmqhelm-headless.citizen-budget.svc.cluster.local: nxdomain (non-existing domain)
That means that the internal DNS cannot resolve the name to an IP. Edit the rabbitmqhelm StatefulSet manifest, find the corresponding lines, and replace them like this:

"name": "K8S_ADDRESS_TYPE", "value": "ip" }, { "name": "CLUSTER_FORMATION.K8S.HOST", "value": "10.152.183.1" }, { "name": "CLUSTER_FORMATION.K8S.PORT", "value": "16443" }, { "name": "RABBITMQ_NODENAME", "value": "rabbit@$(MY_POD_NAME)" }, { "name": "K8S_HOSTNAME_SUFFIX", "value": ".$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).default.svc.cluster.local" }

You can find the values for CLUSTER_FORMATION.K8S.HOST and CLUSTER_FORMATION.K8S.PORT with the command:

kubectl describe service kubernetes

That gives you a result like this:

Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.152.183.1
Port:              https  443/TCP
TargetPort:        16443/TCP
Endpoints:         172.18.6.110:16443
Session Affinity:  None
Events:            <none>
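If you want to grab these two values in a script rather than by eye, something like this works. It is a small sketch, shown here on the canned output above; on a live cluster you can pipe `kubectl describe service kubernetes` straight into the same awk calls:

```shell
# Sample of the relevant lines from `kubectl describe service kubernetes`;
# on a live cluster, replace the variable with the real command's output.
describe_output='IP:                10.152.183.1
TargetPort:        16443/TCP'

# ClusterIP: second whitespace-separated field of the IP: line
host=$(printf '%s\n' "$describe_output" | awk '/^IP:/ {print $2}')

# TargetPort: strip the trailing /TCP by splitting on ":", "/" and spaces
port=$(printf '%s\n' "$describe_output" | awk -F'[:/ ]+' '/^TargetPort:/ {print $2}')

echo "$host $port"
```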

The last step is to edit the RabbitMQ ConfigMap and switch it to the same IP/port as above.

## username and password
default_user=user
default_pass=CHANGEME
## Clustering
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
cluster_formation.k8s.host = 172.18.6.110
cluster_formation.k8s.port = 16443
cluster_formation.node_cleanup.interval = 10
cluster_formation.node_cleanup.only_log_warning = true
cluster_partition_handling = autoheal
# queue master locator
queue_master_locator=min-masters
# enable guest user
loopback_users.guest = false
#disk_free_limit.absolute = 50MB
#management.load_definitions = /app/load_definition.json
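RabbitMQ only reads this file at startup, so after editing the ConfigMap the pod has to be recreated; the StatefulSet controller brings it back up automatically:

```shell
# Delete the pod; the StatefulSet recreates it with the new config
kubectl delete pod rabbitmqhelm-0 --namespace=citizen-budget
```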

After these changes, everything should work correctly.
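To confirm the broker is healthy, you can run rabbitmqctl inside the pod and expose the management UI locally. These are standard kubectl subcommands; the pod and service names follow the chart's conventions used throughout this post:

```shell
# Check that the node is running and formed a cluster
kubectl exec rabbitmqhelm-0 --namespace=citizen-budget -- rabbitmqctl cluster_status

# Forward the management UI to http://localhost:15672 (log in with user / CHANGEME)
kubectl port-forward svc/rabbitmqhelm --namespace=citizen-budget 15672:15672
```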
