Kubernetes Pods in Pending State

+2 votes

I just followed everything as mentioned in the link below:

https://www.edureka.co/blog/install-kubernetes-on-ubuntu

I'm not sure what went wrong; my pods are stuck in the Pending state.

NAMESPACE     NAME                                       READY   STATUS     RESTARTS   AGE   IP              NODE      NOMINATED NODE
kube-system   calico-kube-controllers-6b48bc8d68-7mj7r   0/1     Pending    0          70m   <none>          <none>    <none>
kube-system   coredns-576cbf47c7-6tjh5                   0/1     Pending    0          70m   <none>          <none>    <none>
kube-system   coredns-576cbf47c7-9khk2                   0/1     Pending    0          70m   <none>          <none>    <none>
kube-system   etcd-kmaster                               0/1     Pending    0          1s    <none>          kmaster   <none>
kube-system   kube-apiserver-kmaster                     0/1     Pending    0          1s    <none>          kmaster   <none>
kube-system   kube-controller-manager-kmaster            0/1     Pending    0          1s    <none>          kmaster   <none>
kube-system   kube-proxy-qgw78                           1/1     NodeLost   1          75m   172.19.19.176   kmaster   <none>
kube-system   kube-scheduler-kmaster                     0/1     Pending    0          1s    <none>          kmaster   <none>
kube-system   kubernetes-dashboard-77fd78f978-zd5xq      0/1     Pending    0          67m   <none>          <none>    <none>
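A quick way to pull just the Pending pods out of a listing like the one above is an awk filter on the STATUS column. The sketch below feeds awk an excerpt of the paste so it is self-contained; in practice you would pipe `kubectl get pods --all-namespaces` straight in.

```shell
# With --all-namespaces, the pod name is column 2 and STATUS is column 4;
# print the names of all Pending pods.
sample='kube-system   coredns-576cbf47c7-6tjh5   0/1   Pending    0   70m
kube-system   kube-proxy-qgw78           1/1   NodeLost   1   75m'
echo "$sample" | awk '$4 == "Pending" { print $2 }'
# prints: coredns-576cbf47c7-6tjh5
```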

I have allocated enough resources as well:

Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Tue, 30 Oct 2018 17:32:35 +0530   Tue, 30 Oct 2018 16:22:09 +0530   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Tue, 30 Oct 2018 17:32:35 +0530   Tue, 30 Oct 2018 16:22:09 +0530   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 30 Oct 2018 17:32:35 +0530   Tue, 30 Oct 2018 16:22:09 +0530   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 30 Oct 2018 17:32:35 +0530   Tue, 30 Oct 2018 16:22:09 +0530   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Tue, 30 Oct 2018 17:32:35 +0530   Tue, 30 Oct 2018 16:22:09 +0530   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
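The Ready condition above is the key line: the kubelet reports the CNI config as uninitialized. A small check script makes the state explicit (the directory is the kubelet's default CNI config path; adjust if yours differs):

```shell
#!/bin/sh
# Report whether any CNI network config exists in the kubelet's default
# config directory. An empty or missing directory matches the
# "cni config uninitialized" condition shown above.
CNI_DIR=${CNI_DIR:-/etc/cni/net.d}
if [ -d "$CNI_DIR" ] && [ -n "$(ls -A "$CNI_DIR" 2>/dev/null)" ]; then
    echo "CNI config present in $CNI_DIR:"
    ls "$CNI_DIR"
else
    echo "No CNI config in $CNI_DIR -- install a pod network add-on (e.g. Calico or flannel)"
fi
```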


Regards,

Shyam

Oct 30, 2018 in Kubernetes by Shyam
• 180 points

recategorized Oct 30, 2018 by Vardhan • 6,108 views
Hey @Shyam, have your nodes joined the cluster?
Hi,

I am also facing the same issue.

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-694687c474-6lqzq   0/1     Pending   0          17m   <none>           <none>    <none>           <none>
kube-system   coredns-86c58d9df4-9ffb8                   0/1     Pending   0          28m   <none>           <none>    <none>           <none>
kube-system   coredns-86c58d9df4-h7v4l                   0/1     Pending   0          23m   <none>           <none>    <none>           <none>
kube-system   etcd-kmaster                               1/1     Running   0          27m   10.223.126.202   kmaster   <none>           <none>
kube-system   kube-apiserver-kmaster                     1/1     Running   0          27m   10.223.126.202   kmaster   <none>           <none>
kube-system   kube-controller-manager-kmaster            1/1     Running   0          28m   10.223.126.202   kmaster   <none>           <none>
kube-system   kube-proxy-qmxdz                           1/1     Running   0          28m   10.223.126.202   kmaster   <none>           <none>
kube-system   kube-scheduler-kmaster                     1/1     Running   0          28m   10.223.126.202   kmaster   <none>           <none>
 

Please share if anyone has a resolution.

Hey @Ravikiran, check if you've given enough resources. Also, describe those pending pods and share the output.

Use the following command for describing the pods:

kubectl describe pod <pod-name> -n kube-system
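The Events section at the bottom of the describe output usually names exactly why a pod is stuck (no nodes ready, taints, failed image pull, and so on). One way to jump straight to it, using a pod name from the paste above:

$ kubectl describe pod coredns-86c58d9df4-9ffb8 -n kube-system | sed -n '/^Events:/,$p'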

2 answers to this question.

0 votes

No, not yet. I am in the process of getting the master node up and running; I will then join the worker node to it.


answered Oct 31, 2018 by Shyam
• 180 points
Which network plugin are you using?

As per the document, I am using the Calico network plugin.

https://www.edureka.co/blog/install-kubernetes-on-ubuntu

+2 votes

Hey @Shyam, you get this error because no CNI network has been defined in /etc/cni/net.d, and you're apparently using the CNI network plugin. If you're following the blog, Calico is used there, so make sure you've executed the following command:

$ kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
answered Oct 31, 2018 by Kalgi
• 52,360 points
I agree. When I say that I followed the document, I mean that the command you've given is already part of that document and I have already executed it there.

Alright, try removing $KUBELET_NETWORK_ARGS from /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.

That argument is not in my file. Please see below:

root@node:/home/ubuntu# vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
Environment=”cgroup-driver=systemd/cgroup-driver=cgroupfs”
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
Try using flannel or weave instead of calico. Hopefully that works. Give me some time, I'll try the same and let you know.
Thank you. Will try with flannel. Do you have a command to do that?

For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.

Then execute this command:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

Looks like I need to do a fresh install, because the ports are already in use and I'm getting some errors.

Yup. You need to start it all from scratch. Reset kubeadm and perform all the steps again.

$ sudo kubeadm reset
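Leftover CNI and kubeconfig state from the previous attempt can also interfere with the new init, so these cleanup steps are commonly paired with the reset (the paths below are the kubeadm/kubelet defaults; adjust if yours differ):

$ sudo rm -rf /etc/cni/net.d
$ rm -rf $HOME/.kube/config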
This looks better than before :-) But will it take much time to create the containers?

ubuntu@node:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                    READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE
kube-system   coredns-576cbf47c7-6xm4z                0/1     ContainerCreating   0          8m52s   <none>          node   <none>
kube-system   coredns-576cbf47c7-mwvgj                0/1     ContainerCreating   0          8m52s   <none>          node   <none>
kube-system   etcd-node                               1/1     Running             0          8m1s    172.19.19.177   node   <none>
kube-system   kube-apiserver-node                     1/1     Running             0          8m1s    172.19.19.177   node   <none>
kube-system   kube-controller-manager-node            1/1     Running             0          8m19s   172.19.19.177   node   <none>
kube-system   kube-flannel-ds-amd64-llhpw             1/1     Running             0          7m40s   172.19.19.177   node   <none>
kube-system   kube-proxy-lsvf4                        1/1     Running             0          8m52s   172.19.19.177   node   <none>
kube-system   kube-scheduler-node                     1/1     Running             0          8m7s    172.19.19.177   node   <none>
kube-system   kubernetes-dashboard-77fd78f978-jtlxz   0/1     ContainerCreating   0          7m12s   <none>          node   <none>
Are those two pods still at ContainerCreating, or did they get created?
Current status

NAMESPACE     NAME                                    READY   STATUS             RESTARTS   AGE   IP              NODE   NOMINATED NODE
kube-system   coredns-576cbf47c7-6xm4z                1/1     Running            0          46m   10.244.0.162    node   <none>
kube-system   coredns-576cbf47c7-mwvgj                1/1     Running            0          46m   10.244.0.163    node   <none>
kube-system   etcd-node                               1/1     Running            1          45m   172.19.19.177   node   <none>
kube-system   kube-apiserver-node                     1/1     Running            1          45m   172.19.19.177   node   <none>
kube-system   kube-controller-manager-node            1/1     Running            1          45m   172.19.19.177   node   <none>
kube-system   kube-flannel-ds-amd64-llhpw             1/1     Running            1          45m   172.19.19.177   node   <none>
kube-system   kube-proxy-lsvf4                        1/1     Running            1          46m   172.19.19.177   node   <none>
kube-system   kube-scheduler-node                     1/1     Running            1          45m   172.19.19.177   node   <none>
kube-system   kubernetes-dashboard-77fd78f978-jtlxz   0/1     ImagePullBackOff   0          44m   <none>          node   <none>
ubuntu@node:~$ kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
ubuntu@node:~$ sudo su

Did you run the following commands as a normal user?

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Yes, I did, using a normal user only.

Can you try executing these commands again as a non-root user and let me know if it still doesn't work?

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Do you mean a non-root user?
Yup. Execute them as a non-root user.
NAMESPACE     NAME                                    READY   STATUS             RESTARTS   AGE   IP              NODE   NOMINATED NODE
kube-system   coredns-576cbf47c7-6xm4z                1/1     Running            0          76m   10.244.0.162    node   <none>
kube-system   coredns-576cbf47c7-mwvgj                1/1     Running            0          76m   10.244.0.163    node   <none>
kube-system   etcd-node                               1/1     Running            1          75m   172.19.19.177   node   <none>
kube-system   kube-apiserver-node                     1/1     Running            1          75m   172.19.19.177   node   <none>
kube-system   kube-controller-manager-node            1/1     Running            1          76m   172.19.19.177   node   <none>
kube-system   kube-flannel-ds-amd64-llhpw             1/1     Running            1          75m   172.19.19.177   node   <none>
kube-system   kube-proxy-lsvf4                        1/1     Running            1          76m   172.19.19.177   node   <none>
kube-system   kube-scheduler-node                     1/1     Running            1          75m   172.19.19.177   node   <none>
kube-system   kubernetes-dashboard-77fd78f978-jtlxz   0/1     ImagePullBackOff   0          74m   10.244.0.38     node   <none>
NAMESPACE     NAME                                    READY   STATUS         RESTARTS   AGE   IP              NODE   NOMINATED NODE
kube-system   coredns-576cbf47c7-6xm4z                1/1     Running        0          76m   10.244.0.162    node   <none>
kube-system   coredns-576cbf47c7-mwvgj                1/1     Running        0          76m   10.244.0.163    node   <none>
kube-system   etcd-node                               1/1     Running        1          75m   172.19.19.177   node   <none>
kube-system   kube-apiserver-node                     1/1     Running        1          75m   172.19.19.177   node   <none>
kube-system   kube-controller-manager-node            1/1     Running        1          75m   172.19.19.177   node   <none>
kube-system   kube-flannel-ds-amd64-llhpw             1/1     Running        1          74m   172.19.19.177   node   <none>
kube-system   kube-proxy-lsvf4                        1/1     Running        1          76m   172.19.19.177   node   <none>
kube-system   kube-scheduler-node                     1/1     Running        1          75m   172.19.19.177   node   <none>
kube-system   kubernetes-dashboard-77fd78f978-jtlxz   0/1     ErrImagePull   0          74m   10.244.0.38     node   <none>
I have done a reset again and am trying that, but now I am getting a different error:

ubuntu@node:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE     IP              NODE     NOMINATED NODE
kube-system   coredns-576cbf47c7-44sln       0/1     Pending   0          3m27s   <none>          <none>   <none>
kube-system   coredns-576cbf47c7-8z6b2       0/1     Pending   0          3m27s   <none>          <none>   <none>
kube-system   etcd-node                      1/1     Running   0          2m33s   172.19.19.177   node     <none>
kube-system   kube-apiserver-node            1/1     Running   0          2m51s   172.19.19.177   node     <none>
kube-system   kube-controller-manager-node   1/1     Running   0          2m54s   172.19.19.177   node     <none>
kube-system   kube-proxy-kps2h               1/1     Running   0          3m27s   172.19.19.177   node     <none>
kube-system   kube-scheduler-node            1/1     Running   0          2m38s   172.19.19.177   node     <none>
ubuntu@node:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
ubuntu@node:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
ubuntu@node:~$ sudo su
root@node:/home/ubuntu# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
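One way to tell a plain network failure apart from a kubeconfig problem is to fetch the manifest URL directly, bypassing kubectl. If curl also fails, the box simply can't reach raw.githubusercontent.com (check DNS and any proxy settings); if curl succeeds, the problem is on the kubectl side:

$ curl -fsSI https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml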
Hi @Shyam, did all your pods start running? If they've started, execute the join command to join the nodes to the cluster, and then execute the dashboard command.
No, Hannah, they didn't start. Presently I am getting the error below:

ubuntu@node:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?

The error is the same whether I am a root or non-root user :-(

The current status is:

NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE
kube-system   coredns-576cbf47c7-4rgfz       0/1     Pending   0          43m   <none>          <none>   <none>
kube-system   coredns-576cbf47c7-8ck77       0/1     Pending   0          43m   <none>          <none>   <none>
kube-system   etcd-node                      1/1     Running   0          43m   172.19.19.177   node     <none>
kube-system   kube-apiserver-node            1/1     Running   0          43m   172.19.19.177   node     <none>
kube-system   kube-controller-manager-node   1/1     Running   0          43m   172.19.19.177   node     <none>
kube-system   kube-proxy-bz8sd               1/1     Running   0          43m   172.19.19.177   node     <none>
kube-system   kube-scheduler-node            1/1     Running   0          43m   172.19.19.177   node     <none>

I just tried creating a Kubernetes cluster using the same blog and it worked fine for me. The reason for the following error is that there might be no admin.conf file, or you might not have KUBECONFIG=/root/admin.conf set:

The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
Reset your cluster, on the master as well as all the nodes, with the following commands:

sudo su
kubeadm reset

Then create the cluster again. I'm using flannel and it's working, so just stick to flannel.

Now execute the kubeadm init command, only on the master:

sudo su
kubeadm init --apiserver-advertise-address=<your_ip_addr> --pod-network-cidr=10.244.0.0/16

You'll see the init output, which ends with a kubeadm join command; save it for your nodes. Then execute the following commands:
$ exit
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
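Once the flannel pod is Running, the CoreDNS pods should leave Pending within a minute or so; one way to watch the rollout:

$ kubectl get pods -n kube-system -o wide -w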

Execute the join command on your nodes.

Excellent support, team! My cluster is up and running!

ubuntu@kmaster:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE     IP              NODE      NOMINATED NODE
kube-system   coredns-576cbf47c7-45kvm                1/1     Running   0          13m     10.244.0.2      kmaster   <none>
kube-system   coredns-576cbf47c7-jw7cf                1/1     Running   0          13m     10.244.0.5      kmaster   <none>
kube-system   etcd-kmaster                            1/1     Running   1          12m     172.19.19.184   kmaster   <none>
kube-system   kube-apiserver-kmaster                  1/1     Running   1          12m     172.19.19.184   kmaster   <none>
kube-system   kube-controller-manager-kmaster         1/1     Running   1          12m     172.19.19.184   kmaster   <none>
kube-system   kube-flannel-ds-amd64-999zt             1/1     Running   0          12m     172.19.19.184   kmaster   <none>
kube-system   kube-proxy-m4rfn                        1/1     Running   1          13m     172.19.19.184   kmaster   <none>
kube-system   kube-scheduler-kmaster                  1/1     Running   1          12m     172.19.19.184   kmaster   <none>
kube-system   kubernetes-dashboard-77fd78f978-n8glh   1/1     Running   0          7m46s   10.244.0.3      kmaster   <none>
Thank you so much! I am trying that now and will get back to you.
I'm glad we could help :)
You're welcome, really glad to help :)
Team! One more query...

I would like to access this Kubernetes dashboard from a public network. How can I do that?

Accessing it through the VM console every time is not OK, and I can't show the dashboard full screen.

So is there any way to make it work if I link a public IP to it?

So @Shyam, are you trying to access the dashboard from outside the cluster (but with the same IP address) or using a different IP address?

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

I am using the above URL, but just replacing localhost with the IP of that server.

Do I need to add a public IP for that and then access it?

The dashboard works when I access it from the console of the master VM, but I don't want to use it like that.
Unfortunately, you can only access it from within the node if you're using a VM.

Maybe if you've hosted it in the cloud, then you can use the IP and port number.
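A common lab-only workaround (not something to expose publicly as-is, since it bypasses the proxy's host checks) is to make kubectl proxy listen on all interfaces instead of only 127.0.0.1, then browse from another machine using the VM's IP; double-check the flags with kubectl proxy --help on your version:

$ kubectl proxy --address='0.0.0.0' --accept-hosts='^.*$'

Then open http://<vm-ip>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ from the other machine. For anything beyond a lab, exposing the dashboard through a NodePort Service or an Ingress is the usual route.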
Thanks @Shyam,

I had the same issue and your steps worked for me as well.
