I'm getting a CoreDNS CrashLoopBackOff error when installing Kubernetes on an Ubuntu VM. I've followed the steps described in the Edureka blog

+4 votes

I followed the installation process described in the blog for installing Kubernetes on an Ubuntu VM.

I am getting the pod status below; the CoreDNS pods are failing.

NAMESPACE     NAME                              READY   STATUS             RESTARTS   AGE
kube-system   coredns-68fb79bcf6-24wr9          0/1     CrashLoopBackOff   7          15m
kube-system   coredns-68fb79bcf6-6gzmt          0/1     CrashLoopBackOff   7          15m
kube-system   etcd-kmaster                      1/1     Running            0          16m
kube-system   kube-apiserver-kmaster            1/1     Running            0          17m
kube-system   kube-controller-manager-kmaster   1/1     Running            0          17m
kube-system   kube-flannel-ds-amd64-q49vc       1/1     Running            0          16m
kube-system   kube-proxy-mfqj7                  1/1     Running            0          17m
kube-system   kube-scheduler-kmaster            1/1     Running            0          16m
Nov 23, 2018 in Kubernetes by Raj
• 160 points

edited Jan 11 by Vardhan
Hey @Raj, are you using Calico or flannel?

Hi Kalgi. I tried with both calico and flannel.

I have installed Ubuntu 16.04.5 Desktop in a VM and tried the procedure outlined in the blog.

---------------------

The installation works fine when I install using flannel and also change the upstream proxy to 8.8.8.8 with:

     kubectl edit cm coredns -n kube-system

My concern is whether changing the proxy this way has any implications.
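For reference, that edit changes the proxy line in the Corefile held by the coredns ConfigMap, so CoreDNS forwards queries straight to Google's resolver instead of the node's /etc/resolv.conf (which can loop when the host's resolver points back at itself). A sketch of the relevant fragment, assuming the default kubeadm-generated Corefile for CoreDNS 1.3.x (newer CoreDNS releases use the forward plugin instead of proxy):

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . 8.8.8.8      # was: proxy . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```

Functionally this is fine for pod DNS (names outside cluster.local just go to 8.8.8.8), but it bypasses any internal or corporate DNS the host relies on, so internal hostnames would stop resolving inside pods.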

Alright, execute this command and post the output:

kubectl -n kube-system describe pod <coredns-pod-name>
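The CoreDNS container logs usually show the crash reason too; for example (substituting the actual pod name from your pod listing):

```
kubectl -n kube-system logs <coredns-pod-name>
# and the log of the previous crashed instance:
kubectl -n kube-system logs --previous <coredns-pod-name>
```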
Hey @Raj, any progress with this?
Sorry Kalgi. Thanks for the follow-up.

I was shifted to another project, so I couldn't continue with Kubernetes for now.

Thanks a ton.
Happy to help @Raj, come back whenever you're stuck :)
Hey @ashutosh, can you please tell the steps you took to get to this stage?
@Kalgi please answer this issue

2 answers to this question.

0 votes
I have the same issue right now with an error on CoreDNS. The dashboard is also running, but I'm getting a connection refused error.
answered Feb 15 by RANGANATHAN BALAJI
Hey @Balaji, can you share your error log?
Also, is your CoreDNS in CrashLoopBackOff or Pending state?
+1 vote
I followed this tutorial on Hyper-V using an external switch:

https://www.edureka.co/blog/install-kubernetes-on-ubuntu

After the kubeadm init command, this is the cluster I got.

Any help would be appreciated.
answered Jun 27 by ashutosh
Are you missing something in your post? I can't see what cluster you got after running the command.
@aysha @vardhan @kalgi

this is the info

Name:                 coredns-5c98db65d4-6v659
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 kmaster/172.22.195.99
Start Time:           Thu, 27 Jun 2019 15:32:48 +0530
Labels:               k8s-app=kube-dns
                      pod-template-hash=5c98db65d4
Annotations:          <none>
Status:               Running
IP:                   10.244.0.2
Controlled By:        ReplicaSet/coredns-5c98db65d4
Containers:
  coredns:
    Container ID:  docker://2df7746edaefdb9702d07347bfe25c967c5b1459d59fe02e484de830fdd1dfb4
    Image:         k8s.gcr.io/coredns:1.3.1
    Image ID:      docker-pullable://k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 27 Jun 2019 16:20:27 +0530
      Finished:     Thu, 27 Jun 2019 16:20:28 +0530
    Ready:          False
    Restart Count:  14
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8080/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-s7zr4 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-s7zr4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-s7zr4
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  52m (x301 over 82m)    default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Normal   Pulled            48m (x3 over 49m)      kubelet, kmaster   Container image "k8s.gcr.io/coredns:1.3.1" already present on machine
  Normal   Created           48m (x3 over 49m)      kubelet, kmaster   Created container coredns
  Normal   Started           48m (x3 over 49m)      kubelet, kmaster   Started container coredns
  Warning  DNSConfigForming  24m (x139 over 49m)    kubelet, kmaster   Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.50.50.50 10.50.10.50 2001:4898::1050:1050
  Warning  BackOff           4m23s (x217 over 49m)  kubelet, kmaster   Back-off restarting failed container

Hey @Ashutosh, you have a warning message:

Warning  FailedScheduling  52m (x301 over 82m)    default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

You don't have any worker nodes connected to your master, so the pods can only land on the master, which carries the NoSchedule taint.

A shortcut would be to remove the taint. Try this:

kubectl taint nodes --all node-role.kubernetes.io/master-
You will face this issue when your /etc/resolv.conf is not pointing to Google's DNS servers.

In your /etc/resolv.conf file you should have an entry like the one below:

nameserver 8.8.8.8

Add the above line and restart the network-manager service using the command below:

sudo service network-manager restart
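This also lines up with the DNSConfigForming warning in the events above: the host listed more nameservers than the limit of three, so kubelet dropped some. After trimming, a minimal /etc/resolv.conf could look like:

```
# /etc/resolv.conf -- at most three nameserver lines are honoured
nameserver 8.8.8.8
```

One caveat: if NetworkManager or systemd-resolved manages this file, manual edits may be overwritten on restart, so make the change through that tool's own configuration where possible.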
