Running Kubernetes in multimaster mode

0 votes

I have set up a Kubernetes (version 1.6.1) cluster with three servers in the control plane. The API server is running with the following config:

/usr/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
  --advertise-address=x.x.x.x \
  --allow-privileged=true \
  --audit-log-path=/var/lib/k8saudit.log \
  --authorization-mode=ABAC \
  --authorization-policy-file=/var/lib/kubernetes/authorization-policy.jsonl \
  --bind-address=0.0.0.0 \
  --etcd-servers=https://kube1:2379,https://kube2:2379,https://kube3:2379 \
  --etcd-cafile=/etc/etcd/ca.pem \
  --event-ttl=1h \
  --insecure-bind-address=0.0.0.0 \
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \
  --kubelet-https=true \
  --service-account-key-file=/var/lib/kubernetes/ca-key.pem \
  --service-cluster-ip-range=10.32.0.0/24 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --token-auth-file=/var/lib/kubernetes/token.csv \
  --v=2 \
  --apiserver-count=3 \
  --storage-backend=etcd2

Now I am running the kubelet with the following config:

/usr/bin/kubelet \
  --api-servers=https://kube1:6443,https://kube2:6443,https://kube3:6443 \
  --allow-privileged=true \
  --cluster-dns=10.32.0.10 \
  --cluster-domain=cluster.local \
  --container-runtime=docker \
  --network-plugin=kubenet \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --serialize-image-pulls=false \
  --register-node=true \
  --cert-dir=/var/lib/kubelet \
  --tls-cert-file=/var/lib/kubernetes/kubelet.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubelet-key.pem \
  --hostname-override=node1 \
  --v=2

This works great as long as kube1 is running. But if I take kube1 down, the node stops talking to the API entirely; it never fails over to kube2 or kube3.

Sep 6, 2018 in Kubernetes by lina

2 answers to this question.

0 votes
I think putting a load balancer in front of the three API servers is the way to go here, and then pointing the kubelet at that single address.

I used nginx for this and it worked like a charm for me.

Try it out and let me know if it works.
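A minimal sketch of what that can look like, using nginx's `stream` module for plain TCP pass-through so the API servers still terminate TLS themselves. The `kube1`-`kube3` hostnames and port 6443 come from the question; everything else (failure thresholds, the balancer's listen port) is an assumption you should tune:

```nginx
# Hypothetical fragment for /etc/nginx/nginx.conf on the load balancer host.
# TCP (stream) proxying, not HTTP: TLS is passed through untouched.
stream {
    upstream kube_apiserver {
        # Take a backend out of rotation for 10s after 3 failed connects
        server kube1:6443 max_fails=3 fail_timeout=10s;
        server kube2:6443 max_fails=3 fail_timeout=10s;
        server kube3:6443 max_fails=3 fail_timeout=10s;
    }

    server {
        listen 6443;
        proxy_pass kube_apiserver;
        proxy_connect_timeout 2s;
    }
}
```

Then point the kubelet (its `--api-servers` flag and the server field in its kubeconfig) at the balancer's address instead of listing the three masters. Note that the API server certificate must include the balancer's hostname or IP as a subject alternative name, or the kubelet will reject the TLS handshake.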
answered Sep 6, 2018 by Kalgi
0 votes
I had the same issue, and using nginx as a load balancer saved me.
answered Sep 6, 2018 by Nilesh