Created cluster on AWS but master node showing not ready


I created a cluster on AWS, but the master node is showing NotReady. Could you please suggest a fix?

root@kubemaster:~# kubectl get nodes
NAME              STATUS     ROLES    AGE   VERSION
ip-172-31-11-68   NotReady   master   12h   v1.12.2
ip-172-31-4-65    Ready      <none>   11h   v1.12.2
Dec 20, 2018 in Kubernetes by Ali

1 answer to this question.


This error usually occurs when your cluster hasn't been created properly. Check the following and let me know (see the commands sketched after this list):

  • Check whether your cluster was created.
  • Check whether your node has joined the cluster.
  • If the cluster was created successfully, check whether all your pods are running.
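
A minimal way to run those checks from the master; these are standard kubectl commands, and kube-system is the namespace where kubeadm puts the control-plane and networking pods:

root@kubemaster:~# kubectl cluster-info                    # is the API server reachable?
root@kubemaster:~# kubectl get nodes                       # has each node joined?
root@kubemaster:~# kubectl get pods -n kube-system         # are the control-plane and CNI pods Running?
root@kubemaster:~# kubectl describe node ip-172-31-11-68   # Conditions/Events usually say why a node is NotReady

One common cause of a NotReady master right after kubeadm init is that no pod network add-on has been applied yet; kubectl describe node then reports a "network plugin is not ready" condition.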
answered Dec 20, 2018 by Eric
Hey @Eric, thanks man, my cluster hadn't been created. I ran kubeadm init again and joined the node to the cluster. Now it seems fine.
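
For reference, a rough sketch of the kubeadm flow described above; the master address, token, and hash below are placeholders, not values from this cluster:

root@kubemaster:~# kubeadm init                                       # bootstrap the control plane on the master
root@kubemaster:~# mkdir -p $HOME/.kube
root@kubemaster:~# cp /etc/kubernetes/admin.conf $HOME/.kube/config   # point kubectl at the new cluster (running as root)
# then, on each worker, run the join command that kubeadm init printed:
root@kubenode:~# kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>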

