Fun with PKS, K8s, MongoDB Helm Charts and vSAN

I’ve been spending a bit of time lately looking at our Heptio Velero product and how it works with various cloud native applications. The next application on my list is MongoDB, another NoSQL database. I looked at various deployment mechanisms for MongoDB, and it seems that using Helm charts is the most popular approach. This led me to the Bitnami MongoDB Stack Chart GitHub repo. At this point, I did spin my wheels a little trying to get MongoDB stood up, so in this post I’ll talk through some of the gotchas I encountered. Once again, my environment is vSphere 6.7 and vSAN 6.7. I am using Pivotal Container Service/PKS 1.3, which includes the vSphere Cloud Provider for Kubernetes, and I already have a 4-node Kubernetes cluster running v1.12.4. Helm is also installed and initialized. In a further post, I’ll look at Velero backup/restore of this MongoDB deployment.


OK – so the first thing you need to do is download the values-production.yaml file. This contains all of the configuration settings that you will need when deploying the MongoDB Helm chart. I spent a lot of time adding --set options to the command line of my helm install, when I should simply have been referencing this file with my own tuned values. However, I think it is worth showing the sorts of issues I encountered, and what I went through to configure the MongoDB Helm chart in order to get a working MongoDB environment. Note that before doing anything, you will need a StorageClass for the Persistent Volumes that we will use for this deployment. Here is my tried and trusted StorageClass YAML file, which is referenced when dynamic Persistent Volumes are required – it places the PVs on my vsanDatastore (as VMDKs) with a vSAN policy of gold.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mongo-sc
provisioner: kubernetes.io/vsphere-volume
parameters:
    diskformat: thin
    storagePolicyName: gold
    datastore: vsanDatastore

A simple ‘kubectl create‘ of my StorageClass YAML file, and we are good to proceed. One other thing I wanted to do was place MongoDB into its own namespace. Thus a very quick ‘kubectl create ns mongodb‘ and I was ready to proceed with the Helm chart deployments. If you are new to Helm and Tiller, the Helm quickstart guide is a good place to get started.


Attempt 1 – no values-production.yaml file

Here is the helm command that I used. Don’t use this unless you really want to see the issues I encountered; the proper command appears further on.

$ helm install --name mymongodb --namespace mongodb --set service.type=NodePort \
  --set persistence.storageClass=mongo-sc --set replicaSet.enabled=true \
  stable/mongodb

Helm displays a bunch of deployment status after the initial command is run. You can also get updated status by running helm status <name>, as follows:

$ helm status mymongodb
LAST DEPLOYED: Thu Mar 28 11:45:17 2019
NAMESPACE: mongodb
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                   READY  STATUS   RESTARTS  AGE
mymongodb-arbiter-0    0/1    Running  0         94s
mymongodb-primary-0    0/1    Running  0         94s
mymongodb-secondary-0  0/1    Running  0         94s

==> v1/Secret
NAME       TYPE    DATA  AGE
mymongodb  Opaque  2     94s

==> v1/Service
NAME                TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)          AGE
mymongodb           NodePort   10.100.200.76  <none>       27017:30184/TCP  94s
mymongodb-headless  ClusterIP  None           <none>       27017/TCP        94s

==> v1/StatefulSet
NAME                 READY  AGE
mymongodb-arbiter    0/1    94s
mymongodb-primary    0/1    94s
mymongodb-secondary  0/1    94s

==> v1beta1/PodDisruptionBudget
NAME                 MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
mymongodb-arbiter    1              N/A              0                    94s
mymongodb-primary    1              N/A              0                    94s
mymongodb-secondary  1              N/A              0                    94s

In this case, my pods never entered a ready state. I decided to run a ‘kubectl describe‘ on the pods to see if I could get any further clues. Here is a snippet of the describe output, which also displays pod events at the end.

Liveness: exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=6
Readiness: exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=5s period=10s #success=1 #failure=6

    Mounts:
      /bitnami/mongodb from datadir (rw)

Warning Unhealthy 2m35s kubelet, 91920344-b3a7-4979-a100-c156db235b6d Readiness probe failed: MongoDB shell version v4.0.6
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
2019-03-28T11:45:45.346+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:343:13
@(connect):1:6
exception: connect failed
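For reference, the probe timings visible in the describe output translate back into chart-values form along these lines. This is just a sketch of what those settings correspond to; the exact key names in the Bitnami chart's values file are an assumption on my part:

```yaml
# Probe settings corresponding to the describe output above (sketch;
# key names assumed to follow the chart's livenessProbe/readinessProbe values)
livenessProbe:
  initialDelaySeconds: 30   # delay=30s
  timeoutSeconds: 5         # timeout=5s
  periodSeconds: 10         # period=10s
  successThreshold: 1       # #success=1
  failureThreshold: 6       # #failure=6
readinessProbe:
  initialDelaySeconds: 5    # delay=5s
  timeoutSeconds: 5
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 6
```

With a failureThreshold of 6 and a period of 10 seconds, the readiness probe has about a minute to succeed before the pod is marked unready, which is why the pods above sat at 0/1.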

So there are obviously some problems with trying to connect to the database, as well as the liveness and readiness checks failing (presumably because they too are trying to connect to the DB). Next, I decided to log in to one of the pods and see if I could get any more information by running the mongo client and mongod daemon.

$ kubectl get po -n mongodb
NAME                   READY  STATUS   RESTARTS  AGE
mymongodb-arbiter-0    0/1    Running  4         9m58s
mymongodb-primary-0    0/1    Running  4         9m58s
mymongodb-secondary-0  0/1    Running  4         9m58s

$ kubectl exec -it mymongodb-primary-0 -n mongodb -- bash

I have no name!@mymongodb-primary-0:/$ mongo
MongoDB shell version v4.0.6
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
2019-03-28T11:56:14.087+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:343:13
@(connect):1:6
exception: connect failed

I have no name!@mymongodb-primary-0:/$ mongod
2019-03-28T11:56:16.971+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-03-28T11:56:16.975+0000 I CONTROL [initandlisten] MongoDB starting : pid=141 port=27017 dbpath=/data/db 64-bit host=mymongodb-primary-0
2019-03-28T11:56:16.975+0000 I CONTROL [initandlisten] db version v4.0.6
2019-03-28T11:56:16.975+0000 I CONTROL [initandlisten] git version: caa42a1f75a56c7643d0b68d3880444375ec42e3
2019-03-28T11:56:16.975+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.0j 20 Nov 2018
2019-03-28T11:56:16.975+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-03-28T11:56:16.975+0000 I CONTROL [initandlisten] modules: none
2019-03-28T11:56:16.975+0000 I CONTROL [initandlisten] build environment:
2019-03-28T11:56:16.975+0000 I CONTROL [initandlisten] distmod: debian92
2019-03-28T11:56:16.975+0000 I CONTROL [initandlisten] distarch: x86_64
2019-03-28T11:56:16.975+0000 I CONTROL [initandlisten] target_arch: x86_64
2019-03-28T11:56:16.975+0000 I CONTROL [initandlisten] options: {}
2019-03-28T11:56:16.979+0000 I STORAGE [initandlisten] exception in initAndListen: NonExistentPath: Data directory /data/db not found., terminating
2019-03-28T11:56:16.979+0000 I NETWORK [initandlisten] shutdown: going to close listening sockets...
2019-03-28T11:56:16.979+0000 I NETWORK [initandlisten] removing socket file: /tmp/mongodb-27017.sock
2019-03-28T11:56:16.979+0000 I CONTROL [initandlisten] now exiting
2019-03-28T11:56:16.979+0000 I CONTROL [initandlisten] shutting down with code:100
I have no name!@mymongodb-primary-0:/$

I have no name!@mymongodb-primary-0:/$ cd /opt/bitnami/mongodb/
I have no name!@mymongodb-primary-0:/opt/bitnami/mongodb$ ls
LICENSE-Community.txt MPL-2 README THIRD-PARTY-NOTICES bin conf data licenses logs tmp
I have no name!@mymongodb-primary-0:/opt/bitnami/mongodb$ ls data/
db
I have no name!@mymongodb-primary-0:/opt/bitnami/mongodb$

It would appear that the biggest issue is that the data directory /data/db was not found. Now, if I look for the PV mount point, I can see that it is not under the / (root) folder, but under /opt/bitnami/mongodb. OK – there seems to be a mismatch between where mongod expects to find the data/db folder and where the chart actually places it. Was I missing another --set value on the command line?


Attempt 2 – using the values-production.yaml file

It was at this point that my colleague Myles recommended the values-production.yaml file. The appropriate changes were found in the ConfigMap section, which has the entries for the MongoDB config file. I uncommented the following lines:

from:

# Entries for the MongoDB config file
configmap:
#  # Where and how to store data.
#  storage:
#    dbPath: /opt/bitnami/mongodb/data/db
#    journal:
#      enabled: true
#    #engine:
#    #wiredTiger:
#  # where to write logging data.
#  systemLog:
#    destination: file
#    logAppend: true
#    path: /opt/bitnami/mongodb/logs/mongodb.log

to:

# Entries for the MongoDB config file
configmap:
  # Where and how to store data.
  storage:
    dbPath: /opt/bitnami/mongodb/data/db
    journal:
      enabled: true
    #engine:
    #wiredTiger:
  # where to write logging data.
  systemLog:
    destination: file
    logAppend: true
    path: /opt/bitnami/mongodb/logs/mongodb.log

With this file saved, I relaunched my helm install command, but this time referencing the values-production.yaml file.

$ helm install --name mymongodb -f values-production.yaml --namespace mongodb \
  --set service.type=NodePort --set persistence.storageClass=mongo-sc \
  --set replicaSet.enabled=true stable/mongodb
$ helm status mymongodb
LAST DEPLOYED: Thu Mar 28 12:22:24 2019
NAMESPACE: mongodb
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME       DATA  AGE
mymongodb  1     55s

==> v1/Pod(related)
NAME                   READY  STATUS   RESTARTS  AGE
mymongodb-arbiter-0    0/1    Running  0         55s
mymongodb-primary-0    2/2    Running  0         55s
mymongodb-secondary-0  2/2    Running  0         55s

==> v1/Secret
NAME       TYPE    DATA  AGE
mymongodb  Opaque  2     55s

==> v1/Service
NAME                TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)             AGE
mymongodb           NodePort   10.100.200.199  <none>       27017:30269/TCP     55s
mymongodb-headless  ClusterIP  None            <none>       27017/TCP,9216/TCP  55s

==> v1/StatefulSet
NAME                 READY  AGE
mymongodb-arbiter    0/1    55s
mymongodb-primary    1/1    55s
mymongodb-secondary  1/1    55s

==> v1beta1/PodDisruptionBudget
NAME                 MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
mymongodb-arbiter    1              N/A              0                    55s
mymongodb-primary    1              N/A              0                    55s
mymongodb-secondary  1              N/A              0                    55s

Wow – this looks better already. Let’s see if I can now run the mongo client on the pods, which I could not do before.

cormac@pks-cli:~/mongodb-helm$ kubectl exec -it mymongodb-primary-0 -n mongodb -- bash
Defaulting container name to mongodb-primary.
Use 'kubectl describe pod/mymongodb-primary-0 -n mongodb' to see all of the containers in this pod.

I have no name!@mymongodb-primary-0:/$ mongo
MongoDB shell version v4.0.7
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("90b2c3b3-a6e6-4d85-ad6d-626efb008246") }
MongoDB server version: 4.0.7
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
2019-03-28T12:24:54.912+0000 I STORAGE [main] In File::open(), ::open for '//.mongorc.js' failed with Unknown error
Server has startup warnings:
2019-03-28T12:22:36.914+0000 I CONTROL [initandlisten]
2019-03-28T12:22:36.914+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-03-28T12:22:36.914+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2019-03-28T12:22:36.914+0000 I CONTROL [initandlisten]
2019-03-28T12:22:36.914+0000 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
2019-03-28T12:22:36.914+0000 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.
2019-03-28T12:22:36.914+0000 I CONTROL [initandlisten] ** Start the server with --bind_ip <address> to specify which IP
2019-03-28T12:22:36.914+0000 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to
2019-03-28T12:22:36.914+0000 I CONTROL [initandlisten] ** bind to all interfaces. If this behavior is desired, start the
2019-03-28T12:22:36.914+0000 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.
2019-03-28T12:22:36.914+0000 I CONTROL [initandlisten]
2019-03-28T12:22:36.914+0000 I CONTROL [initandlisten]
2019-03-28T12:22:36.914+0000 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 15664 processes, 65536 files. Number of processes should be at least 32768 : 0.5 times number of files.
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---

> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
>

There are a couple of interesting items in the output from the mongo client, even though it appears to be working. The first relates to access control. I’m not too concerned about that, as this is my own private lab. However, I do want to be able to use an application like Compass to manage the database from my desktop, so I do not want the servers to be bound to localhost. That means another change needs to be made in the values-production.yaml file. First, delete the current deployment.

$ helm delete --purge mymongodb
release "mymongodb" deleted

Next, make the following changes to the values-production file, from:

#  # network interfaces
#  net:
#    port: 27017
#    bindIp: 0.0.0.0
#    unixDomainSocket:
#      enabled: true
#      pathPrefix: /opt/bitnami/mongodb/tmp

to:

  # network interfaces
  net:
    port: 27017
    bindIp: 0.0.0.0
    unixDomainSocket:
      enabled: true
      pathPrefix: /opt/bitnami/mongodb/tmp

And reinstall the Helm chart as before. Now, if you log in to the pod and run the mongo client, the message about being bound to localhost should be gone.

Finally, we can see if we can manage the database from Compass. In my case, I do not have a load balancer front-end, so I simply connect to the IP address of one of my K8s nodes, along with the port on which my MongoDB is accessible externally. You can get the port in a number of ways, but the simplest is to look at your services. In the output below, the external port is 30133.

$ kubectl get svc -n mongodb
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)              AGE
mymongodb            NodePort    10.100.200.3   <none>        27017:30133/TCP      2m17s
mymongodb-headless   ClusterIP   None           <none>        27017/TCP,9216/TCP   2m17s
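If you want to pull that NodePort out programmatically rather than reading it off the table, a jsonpath query against the service will do it on a live cluster (shown commented below; it assumes the release name mymongodb used throughout this post). As a self-contained illustration, here is the same extraction done on the PORT(S) column value from the output above:

```shell
# On a live cluster (assumes the 'mymongodb' release in the 'mongodb' namespace):
#   kubectl get svc mymongodb -n mongodb -o jsonpath='{.spec.ports[0].nodePort}'
#
# Offline illustration: extract the node port from the PORT(S) column above.
PORTS='27017:30133/TCP'
NODEPORT=${PORTS#*:}      # strip the service port and colon -> '30133/TCP'
NODEPORT=${NODEPORT%%/*}  # strip the protocol suffix       -> '30133'
echo "$NODEPORT"
```

Either way, it is the node port (30133 here), not the cluster port 27017, that Compass needs to reach from outside the cluster.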

With details about the K8s node and MongoDB port, we can now connect to it via Compass. Since I have not bothered with credentials, none are needed.


Excellent – we are up and running. Now I can fold a number of additional --set options (NodePort service type, storage class, etc.) into the values file and make my command line to install the Helm chart for MongoDB a lot simpler. You can play with this yourselves. Come back soon, as I plan to populate the DB with some useful information, and then go through the process of backing it up and restoring it with Heptio Velero.
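As a sketch of that simplification (my own consolidation, not taken verbatim from the chart docs), the three --set flags used throughout this post map onto the following keys, which can be merged into values-production.yaml:

```yaml
# Equivalent of the --set flags used earlier, folded into values-production.yaml
service:
  type: NodePort
persistence:
  storageClass: mongo-sc
replicaSet:
  enabled: true
```

With those in place, the install command shrinks to: helm install --name mymongodb -f values-production.yaml --namespace mongodb stable/mongodb.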
