I’m working on setting up a Kubernetes cluster for my company, where we will run the tests and environments of our application.
After spending some time installing a cluster myself (which I did, but with some pain), I received an ad from DigitalOcean about their Kubernetes offering. After looking at their marketing page, we decided to give it a shot.
GitLab CI and Kubernetes
Initially I kept my GitLab CI script running the tests with the docker command (using the Docker-in-Docker image). Everything was working fine, even though it’s not recommended, but I eventually hit an issue: Zalenium wasn’t able to reach my application.
Imagine what is happening: the GitLab runner, which runs in a Pod, executes the build script, which creates containers inside the dind container, while Zalenium runs in other Pods, in another namespace (and potentially on another Node). Zalenium couldn’t access the app simply because of all these layers. I would have had to open ports at several levels… something was wrong with this approach.
After figuring this out, I realized I had to use kubectl instead of Docker, which means creating YAML files to deploy the services and running the required commands.
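To give an idea, here is a minimal sketch of such a manifest, with a Service giving other Pods (like Zalenium) a stable DNS name to reach the app. The names, image, and ports are hypothetical placeholders, and $IMAGE_TAG is a variable meant to be substituted before applying:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-test
  template:
    metadata:
      labels:
        app: myapp-test
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:$IMAGE_TAG   # substituted before kubectl apply
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-test
spec:
  selector:
    app: myapp-test
  ports:
    - port: 80
      targetPort: 8080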
kubectl to connect to the DigitalOcean Kubernetes cluster
As I mentioned in my post Configure kubectl for DigitalOcean Kubernetes, we are using the DigitalOcean Kubernetes product.
First, I changed the Docker image of the test stage to roffe/kubectl, which provides the kubectl command line but also the envsubst command line we’re using to replace variables in the Kubernetes YAML files.
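In the gitlab-ci.yml file, that change looks roughly like this (the stage layout and the manifest path are illustrative, not my exact configuration):

test:
  stage: test
  image: roffe/kubectl
  script:
    # Replace $VARIABLES in the manifest, then apply it to the cluster
    - envsubst < deploy/app.yaml | kubectl apply -f -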
Then comes the kubectl configuration needed to connect to the cluster. I first tried to add a variable in the GitLab CI settings with the content of the kubeconfig file from DigitalOcean, but it didn’t work, so I ended up using the kubectl config commands instead.
In the gitlab-ci.yml file I used the following commands to configure kubectl:
kubectl config set-cluster one-ci --server=$KUBERNETES_URL
kubectl config set-credentials ci --token=$KUBERNETES_TOKEN
kubectl config set-context default-context --cluster=one-ci --user=ci
kubectl config use-context default-context
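$KUBERNETES_URL and $KUBERNETES_TOKEN are variables defined in the GitLab CI settings. As a sketch of where such a token can come from, assuming a dedicated service account (the name is hypothetical, and this relies on older clusters creating a token Secret automatically for each service account):

# Create a service account for the CI (it also needs RBAC permissions to deploy)
kubectl create serviceaccount gitlab-ci
# Find the token Secret generated for it and decode the token
kubectl get secret $(kubectl get serviceaccount gitlab-ci -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d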
But running these commands failed with the error:
x509: certificate signed by unknown authority
Some people use --insecure-skip-tls-verify=true, which sounds wrong to me. Ideally you pass the Kubernetes CA to the kubectl config set-cluster command with the --certificate-authority flag, but it only accepts a file, and I didn’t want to write the CA to a file just to be able to pass it there. That’s a useless step from my point of view.
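For reference, this is what the file-based approach I wanted to avoid would look like, assuming a $KUBERNETES_CA variable holding the base64-encoded data taken from the kubeconfig:

# Write the CA to a temporary file just so the flag can consume it
echo "$KUBERNETES_CA" | base64 -d > /tmp/ca.crt
kubectl config set-cluster one-ci --server=$KUBERNETES_URL --certificate-authority=/tmp/ca.crt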
Finally, I found in a GitHub issue that you can set the certificate authority data directly with the following command:
kubectl config set clusters.<name-of-cluster>.certificate-authority-data $CADATA
The problem is that when I tried it with the value from the ~/.kube/config file, it wasn’t working because of the base64 encoding: I wasn’t able to pass the CA and get it base64-encoded correctly.
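The root cause is that the certificate-authority-data value stored in the kubeconfig is already base64-encoded PEM, so it must not be encoded a second time. A quick way to check, assuming $KUBERNETES_CA holds the value copied from the kubeconfig:

# The value in ~/.kube/config is already base64-encoded
grep certificate-authority-data ~/.kube/config
# Decoding it should reveal the PEM certificate
echo "$KUBERNETES_CA" | base64 -d | head -n 1
# -----BEGIN CERTIFICATE-----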
After reading the Kubernetes documentation, I found the --set-raw-bytes flag, which allowed me to control the base64 encoding and disable it:
kubectl config set clusters.one-ci.certificate-authority-data $KUBERNETES_CA --set-raw-bytes=false
This solved the issue and made the kubectl command work fine, without any certificate validation error 🎉.
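Putting it all together, the kubectl configuration part of my CI script ends up roughly like this:

kubectl config set-cluster one-ci --server=$KUBERNETES_URL
kubectl config set clusters.one-ci.certificate-authority-data $KUBERNETES_CA --set-raw-bytes=false
kubectl config set-credentials ci --token=$KUBERNETES_TOKEN
kubectl config set-context default-context --cluster=one-ci --user=ci
kubectl config use-context default-context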