Error saying "no endpoints available for service kubernetes-dashboard" while getting the dashboard up

0 votes

I’m trying to start the dashboard, so I followed the instructions on the official page:

kubectl proxy
Starting to serve on 127.0.0.1:8001

Browsing to that address serves <h3>Unauthorized</h3>, so I append /ui to the URI. 127.0.0.1:8001/ui gets redirected to http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard and returns:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}



$ kubectl cluster-info
Kubernetes master is running at https://172.17.4.101:443
Heapster is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.3.0.1     <none>        443/TCP   36m

When everything above is reported as running, why does it say no endpoints available for service "kubernetes-dashboard"?

Sep 25, 2018 in Kubernetes by Hannah • 18,570 points • 24,082 views
Execute the kubectl get nodes command and check if all the worker nodes have joined the cluster.
It does not show any worker nodes, so I think that’s the issue. But I don’t know why the nodes haven’t joined the cluster yet.

2 answers to this question.

0 votes

If you check kubectl get nodes, you’ll see that no worker nodes have joined the cluster. There is no node for the dashboard pod to land on and run, so the service has no endpoints and the proxy returns a 503.
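To confirm this from the symptom side, check whether any nodes have registered and whether the dashboard Service has endpoints. A quick sketch (the output shown is what you'd expect in this state, not taken from your cluster):

$ kubectl get nodes
No resources found.

$ kubectl -n kube-system get endpoints kubernetes-dashboard
NAME                   ENDPOINTS   AGE
kubernetes-dashboard   <none>      36m

An empty ENDPOINTS column is exactly what makes the API proxy return the "no endpoints available" 503. Once a worker joins the cluster (for example with the kubeadm join command printed by kubeadm init, if that's how the cluster was set up), the dashboard pod gets scheduled and the endpoint shows up.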


answered Sep 25, 2018 by Kalgi
• 52,360 points
+1 vote

I’m not able to access the dashboard; it says:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}

kube-master@kmaster:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE   IP              NODE      NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-694687c474-vk74d   0/1     Pending   0          33m   <none>          <none>    <none>           <none>
kube-system   coredns-86c58d9df4-qxgs6                   0/1     Pending   0          36m   <none>          <none>    <none>           <none>
kube-system   coredns-86c58d9df4-s7kfr                   0/1     Pending   0          36m   <none>          <none>    <none>           <none>
kube-system   etcd-kmaster                               1/1     Running   0          36m   172.30.250.79   kmaster   <none>           <none>
kube-system   kube-apiserver-kmaster                     1/1     Running   0          36m   172.30.250.79   kmaster   <none>           <none>
kube-system   kube-controller-manager-kmaster            1/1     Running   0          36m   172.30.250.79   kmaster   <none>           <none>
kube-system   kube-proxy-bfp5c                           1/1     Running   0          36m   172.30.250.79   kmaster   <none>           <none>
kube-system   kube-scheduler-kmaster                     1/1     Running   0          36m   172.30.250.79   kmaster   <none>           <none>
kube-system   kubernetes-dashboard-57df4db6b-rqrj5       0/1     Pending   0          30m   <none>          <none>    <none>           <none>

kube-master@kmaster:~$ kubectl cluster-info
Kubernetes master is running at https://172.30.250.79:6443
KubeDNS is running at https://172.30.250.79:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

answered Jan 10, 2019 by vishal

Hey @Vishal, your dashboard isn't working because your coredns, calico and dashboard pods haven't started yet; they are stuck in the Pending state.
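A quick way to see why they're Pending is to describe one of them: with no pod network installed the node stays NotReady, and you should see an Unschedulable event like the one below (a sketch only; the pod name is taken from your listing and the output is abridged):

$ kubectl -n kube-system describe pod kubernetes-dashboard-57df4db6b-rqrj5
...
Events:
  Warning  FailedScheduling  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

That taint is cleared automatically once a network add-on is running and the node reports Ready.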

Execute this command and share the logs:

kubectl cluster-info dump > kubernetes-dump.log
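The dump can get quite large; since it is plain text/JSON, one way to pull out just the dashboard pod's section for a first look is with grep (a suggestion, not required):

$ grep -n -A 5 'kubernetes-dashboard-57df4db6b' kubernetes-dump.log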

Run the command below to deploy the pod network:

export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"

Now check the status as above and try accessing the dashboard again.
Thanks! Will try and update you.
Please check the logs below (an excerpt from kubernetes-dump.log):

},
            "spec": {
                "volumes": [
                    {
                        "name": "kubeconfig",
                        "hostPath": {
                            "path": "/etc/kubernetes/scheduler.conf",
                            "type": "FileOrCreate"
                        }
                    }
                ],
                "containers": [
                    {
                        "name": "kube-scheduler",
                        "image": "k8s.gcr.io/kube-scheduler:v1.13.1",
                        "command": [
                            "kube-scheduler",
                            "--address=127.0.0.1",
                            "--kubeconfig=/etc/kubernetes/scheduler.conf",
                            "--leader-elect=true"
                        ],
                        "resources": {
                            "requests": {
                                "cpu": "100m"
                            }
                        },
                        "volumeMounts": [
                            {
                                "name": "kubeconfig",
                                "readOnly": true,
                                "mountPath": "/etc/kubernetes/scheduler.conf"
                            }
                        ],
                        "livenessProbe": {
                            "httpGet": {
                                "path": "/healthz",
                                "port": 10251,
                                "host": "127.0.0.1",
                                "scheme": "HTTP"
                            },
                            "initialDelaySeconds": 15,
                            "timeoutSeconds": 15,
                            "periodSeconds": 10,
                            "successThreshold": 1,
                            "failureThreshold": 8
                        },
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File",
                        "imagePullPolicy": "IfNotPresent"
                    }
                ],
                "restartPolicy": "Always",
                "terminationGracePeriodSeconds": 30,
                "dnsPolicy": "ClusterFirst",
                "nodeName": "kmaster",
                "hostNetwork": true,
                "securityContext": {},
                "schedulerName": "default-scheduler",
                "tolerations": [
                    {
                        "operator": "Exists",
                        "effect": "NoExecute"
                    }
                ],
                "priorityClassName": "system-cluster-critical",
                "priority": 2000000000,
                "enableServiceLinks": true
            },
            "status": {
                "phase": "Running",
                "conditions": [
                    {
                        "type": "Initialized",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2019-01-11T03:22:12Z"
                    },
                    {
                        "type": "Ready",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2019-01-11T03:22:14Z"
                    },
                    {
                        "type": "ContainersReady",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2019-01-11T03:22:14Z"
                    },
                    {
                        "type": "PodScheduled",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2019-01-11T03:22:12Z"
                    }
                ],
                "hostIP": "172.30.250.79",
                "podIP": "172.30.250.79",
                "startTime": "2019-01-11T03:22:12Z",
                "containerStatuses": [
                    {
                        "name": "kube-scheduler",
                        "state": {
                            "running": {
                                "startedAt": "2019-01-11T03:22:13Z"
                            }
                        },
                        "lastState": {
                            "terminated": {
                                "exitCode": 255,
                                "reason": "Error",
                                "startedAt": "2019-01-10T09:06:41Z",
                                "finishedAt": "2019-01-11T03:22:09Z",
                                "containerID": "docker://11d35d2c6d2e4763256acb0945742fb1a10685737e9302635e30fe3c03faab41"
                            }
                        },
                        "ready": true,
                        "restartCount": 1,
                        "image": "k8s.gcr.io/kube-scheduler:v1.13.1",
                        "imageID": "docker-pullable://k8s.gcr.io/kube-scheduler@sha256:4165e5f0d569b5b5e3bd90d78c30c5408b2c938d719939490299ab4cee9a9c0f",
                        "containerID": "docker://be764f479d372c1450868c2e0d43292b02b4881b0a66dc07b2a3276f5c89a7cf"
                    }
                ],
                "qosClass": "Burstable"
            }
        },
        {
            "metadata": {
                "name": "kubernetes-dashboard-57df4db6b-rqrj5",
                "generateName": "kubernetes-dashboard-57df4db6b-",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/pods/kubernetes-dashboard-57df4db6b-rqrj5",
                "uid": "1641e487-14b8-11e9-a1e0-080027838d49",
                "resourceVersion": "1076",
                "creationTimestamp": "2019-01-10T09:14:09Z",
                "labels": {
                    "k8s-app": "kubernetes-dashboard",
                    "pod-template-hash": "57df4db6b"
                },
                "ownerReferences": [
                    {
                        "apiVersion": "apps/v1",
                        "kind": "ReplicaSet",
                        "name": "kubernetes-dashboard-57df4db6b",
                        "uid": "1631243d-14b8-11e9-a1e0-080027838d49",
                        "controller": true,
                        "blockOwnerDeletion": true
                    }
                ]
            },
            "spec": {
                "volumes": [
                    {
                        "name": "kubernetes-dashboard-certs",
                        "secret": {
                            "secretName": "kubernetes-dashboard-certs",
                            "defaultMode": 420
                        }
                    },
                    {
                        "name": "tmp-volume",
                        "emptyDir": {}
                    },
                    {
                        "name": "kubernetes-dashboard-token-pbm52",
                        "secret": {
                            "secretName": "kubernetes-dashboard-token-pbm52",
                            "defaultMode": 420
                        }
                    }
                ],
                "containers": [
                    {
                        "name": "kubernetes-dashboard",
                        "image": "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1",
                        "args": [
                            "--auto-generate-certificates"
                        ],
                        "ports": [
                            {
                                "containerPort": 8443,
                                "protocol": "TCP"
                            }
                        ],
                        "resources": {},
                        "volumeMounts": [
                            {
                                "name": "kubernetes-dashboard-certs",
                                "mountPath": "/certs"
                            },
                            {
                                "name": "tmp-volume",
                                "mountPath": "/tmp"
                            },
                            {
                                "name": "kubernetes-dashboard-token-pbm52",
                                "readOnly": true,
                                "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
                            }
                        ],
                        "livenessProbe": {
                            "httpGet": {
                                "path": "/",
                                "port": 8443,
                                "scheme": "HTTPS"
                            },
                            "initialDelaySeconds": 30,
                            "timeoutSeconds": 30,
                            "periodSeconds": 10,
                            "successThreshold": 1,
                            "failureThreshold": 3
                        },
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File",
                        "imagePullPolicy": "IfNotPresent"
                    }
                ],
                "restartPolicy": "Always",
                "terminationGracePeriodSeconds": 30,
                "dnsPolicy": "ClusterFirst",
                "serviceAccountName": "kubernetes-dashboard",
                "serviceAccount": "kubernetes-dashboard",
                "securityContext": {},
                "schedulerName": "default-scheduler",
                "tolerations": [
                    {
                        "key": "node-role.kubernetes.io/master",
                        "effect": "NoSchedule"
                    },
                    {
                        "key": "node.kubernetes.io/not-ready",
                        "operator": "Exists",
                        "effect": "NoExecute",
                        "tolerationSeconds": 300
                    },
                    {
                        "key": "node.kubernetes.io/unreachable",
                        "operator": "Exists",
                        "effect": "NoExecute",
                        "tolerationSeconds": 300
                    }
                ],
                "priority": 0,
                "enableServiceLinks": true
            },
            "status": {
                "phase": "Pending",
                "conditions": [
                    {
                        "type": "PodScheduled",
                        "status": "False",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2019-01-10T09:14:09Z",
                        "reason": "Unschedulable",
                        "message": "0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate."
                    }
                ],
                "qosClass": "BestEffort"
            }
        }
    ]
}
==== START logs for container calico-kube-controllers of pod kube-system/calico-kube-controllers-694687c474-vk74d ====
==== END logs for container calico-kube-controllers of pod kube-system/calico-kube-controllers-694687c474-vk74d ====
==== START logs for container coredns of pod kube-system/coredns-86c58d9df4-qxgs6 ====
==== END logs for container coredns of pod kube-system/coredns-86c58d9df4-qxgs6 ====
==== START logs for container coredns of pod kube-system/coredns-86c58d9df4-s7kfr ====
==== END logs for container coredns of pod kube-system/coredns-86c58d9df4-s7kfr ====
==== START logs for container etcd of pod kube-system/etcd-kmaster ====
2019-01-11 03:22:13.892970 I | etcdmain: etcd Version: 3.2.24
2019-01-11 03:22:13.893421 I | etcdmain: Git SHA: 420a45226
2019-01-11 03:22:13.893426 I | etcdmain: Go Version: go1.8.7
2019-01-11 03:22:13.893430 I | etcdmain: Go OS/Arch: linux/amd64
2019-01-11 03:22:13.893435 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2019-01-11 03:22:13.958147 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2019-01-11 03:22:13.958327 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true
2019-01-11 03:22:13.999723 I | embed: listening for peers on https://172.30.250.79:2380
2019-01-11 03:22:13.999922 I | embed: listening for client requests on 127.0.0.1:2379
2019-01-11 03:22:13.999985 I | embed: listening for client requests on 172.30.250.79:2379
2019-01-11 03:22:14.029015 I | etcdserver: name = kmaster
2019-01-11 03:22:14.029092 I | etcdserver: data dir = /var/lib/etcd
2019-01-11 03:22:14.029108 I | etcdserver: member dir = /var/lib/etcd/member
2019-01-11 03:22:14.029117 I | etcdserver: heartbeat = 100ms
2019-01-11 03:22:14.029243 I | etcdserver: election = 1000ms
2019-01-11 03:22:14.029273 I | etcdserver: snapshot count = 10000
2019-01-11 03:22:14.029292 I | etcdserver: advertise client URLs = https://172.30.250.79:2379
2019-01-11 03:22:14.538548 I | etcdserver: restarting member 6cc328837e81a84f in cluster 18e28d31f4d5a732 at commit index 7459
2019-01-11 03:22:14.539300 I | raft: 6cc328837e81a84f became follower at term 2
2019-01-11 03:22:14.539538 I | raft: newRaft 6cc328837e81a84f [peers: [], term: 2, commit: 7459, applied: 0, lastindex: 7459, lastterm: 2]
2019-01-11 03:22:14.887046 I | mvcc: restore compact to 5966
2019-01-11 03:22:14.963195 W | auth: simple token is not cryptographically signed
2019-01-11 03:22:15.188665 I | etcdserver: starting server... [version: 3.2.24, cluster version: to_be_decided]
2019-01-11 03:22:15.383967 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true
2019-01-11 03:22:15.407736 I | etcdserver/membership: added member 6cc328837e81a84f [https://172.30.250.79:2380] to cluster 18e28d31f4d5a732
2019-01-11 03:22:15.408041 N | etcdserver/membership: set the initial cluster version to 3.2
2019-01-11 03:22:15.408280 I | etcdserver/api: enabled capabilities for version 3.2
2019-01-11 03:22:16.850518 I | raft: 6cc328837e81a84f is starting a new election at term 2
2019-01-11 03:22:16.850780 I | raft: 6cc328837e81a84f became candidate at term 3
2019-01-11 03:22:16.850999 I | raft: 6cc328837e81a84f received MsgVoteResp from 6cc328837e81a84f at term 3
2019-01-11 03:22:16.851125 I | raft: 6cc328837e81a84f became leader at term 3
2019-01-11 03:22:16.851199 I | raft: raft.node: 6cc328837e81a84f elected leader 6cc328837e81a84f at term 3
2019-01-11 03:22:16.854681 I | etcdserver: published {Name:kmaster ClientURLs:[https://172.30.250.79:2379]} to cluster 18e28d31f4d5a732
2019-01-11 03:22:16.854966 I | embed: ready to serve client requests
2019-01-11 03:22:16.855550 I | embed: ready to serve client requests
2019-01-11 03:22:16.856015 I | embed: serving client requests on 127.0.0.1:2379
2019-01-11 03:22:16.856475 I | embed: serving client requests on 172.30.250.79:2379
2019-01-11 03:25:02.050454 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-86c58d9df4-qxgs6\" " with result "range_response_count:1 size:1340" took too long (150.80435ms) to execute
2019-01-11 03:25:02.051212 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-86c58d9df4-qxgs6\" " with result "range_response_count:1 size:1340" took too long (171.723504ms) to execute
2019-01-11 03:25:02.056133 W | etcdserver: read-only range request "key:\"/registry/pods/\" range_end:\"/registry/pods0\" " with result "range_response_count:9 size:18378" took too long (174.424417ms) to execute
2019-01-11 03:25:06.464215 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-86c58d9df4-qxgs6\" " with result "range_response_count:1 size:1340" took too long (123.660771ms) to execute
2019-01-11 03:25:06.464671 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-86c58d9df4-qxgs6\" " with result "range_response_count:1 size:1340" took too long (124.013841ms) to execute
2019-01-11 03:32:18.976409 I | mvcc: store.index: compact 7077
2019-01-11 03:32:19.157702 I | mvcc: finished scheduled compaction at 7077 (took 175.851829ms)
==== END logs for container etcd of pod kube-system/etcd-kmaster ====
==== START logs for container kube-apiserver of pod kube-system/kube-apiserver-kmaster ====
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0111 03:22:14.669565       1 server.go:557] external host was not specified, using 172.30.250.79
I0111 03:22:14.669874       1 server.go:146] Version: v1.13.1
I0111 03:22:15.394952       1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0111 03:22:15.394969       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0111 03:22:15.397464       1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0111 03:22:15.397479       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0111 03:22:16.958393       1 master.go:228] Using reconciler: lease
W0111 03:22:17.836374       1 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0111 03:22:17.966945       1 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0111 03:22:17.976004       1 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0111 03:22:17.985408       1 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0111 03:22:18.054003       1 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2019/01/11 03:22:18 log.go:33: [restful/swagger] listing is available at https://172.30.250.79:6443/swaggerapi
[restful] 2019/01/11 03:22:18 log.go:33: [restful/swagger] https://172.30.250.79:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2019/01/11 03:22:19 log.go:33: [restful/swagger] listing is available at https://172.30.250.79:6443/swaggerapi
[restful] 2019/01/11 03:22:19 log.go:33: [restful/swagger] https://172.30.250.79:6443/swaggerui/ is mapped to folder /swagger-ui/
I0111 03:22:19.461049       1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0111 03:22:19.461070       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0111 03:22:22.567896       1 secure_serving.go:116] Serving securely on [::]:6443
I0111 03:22:22.567944       1 controller.go:84] Starting OpenAPI AggregationController
I0111 03:22:22.568161       1 available_controller.go:283] Starting AvailableConditionController
I0111 03:22:22.568220       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0111 03:22:22.568777       1 crd_finalizer.go:242] Starting CRDFinalizer
I0111 03:22:22.577951       1 autoregister_controller.go:136] Starting autoregister controller
I0111 03:22:22.578066       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0111 03:22:22.586171       1 apiservice_controller.go:90] Starting APIServiceRegistrationController
I0111 03:22:22.586366       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0111 03:22:22.586431       1 customresource_discovery_controller.go:203] Starting DiscoveryController
I0111 03:22:22.586481       1 naming_controller.go:284] Starting NamingConditionController
I0111 03:22:22.586528       1 establishing_controller.go:73] Starting EstablishingController
I0111 03:22:22.586577       1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0111 03:22:22.586619       1 controller_utils.go:1027] Waiting for caches to sync for crd-autoregister controller
I0111 03:22:22.768436       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0111 03:22:22.780314       1 cache.go:39] Caches are synced for autoregister controller
I0111 03:22:22.786558       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0111 03:22:22.786808       1 controller_utils.go:1034] Caches are synced for crd-autoregister controller
I0111 03:22:23.587988       1 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0111 03:22:42.657236       1 controller.go:608] quota admission added evaluator for: endpoints
==== END logs for container kube-apiserver of pod kube-system/kube-apiserver-kmaster ====
==== START logs for container kube-controller-manager of pod kube-system/kube-controller-manager-kmaster ====
Flag --address has been deprecated, see --bind-address instead.
I0111 03:22:15.407154       1 serving.go:318] Generated self-signed cert in-memory
I0111 03:22:16.135367       1 controllermanager.go:151] Version: v1.13.1
I0111 03:22:16.136850       1 secure_serving.go:116] Serving securely on [::]:10257
I0111 03:22:16.137333       1 deprecated_insecure_serving.go:51] Serving insecurely on 127.0.0.1:10252
I0111 03:22:16.137730       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...
E0111 03:22:22.666171       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
I0111 03:22:42.659978       1 leaderelection.go:214] successfully acquired lease kube-system/kube-controller-manager
I0111 03:22:42.661208       1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"194a8a04-14b7-11e9-a1e0-080027838d49", APIVersion:"v1", ResourceVersion:"6590", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kmaster_1856277d-1550-11e9-8add-080027838d49 became leader
I0111 03:22:42.679359       1 plugins.go:103] No cloud provider specified.
I0111 03:22:42.681542       1 controller_utils.go:1027] Waiting for caches to sync for tokens controller
W0111 03:22:42.717556       1 garbagecollector.go:649] failed to discover preferred resources: the cache has not been filled yet
I0111 03:22:42.717967       1 controllermanager.go:516] Started "garbagecollector"
I0111 03:22:42.718462       1 garbagecollector.go:133] Starting garbage collector controller
I0111 03:22:42.718471       1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
I0111 03:22:42.718484       1 graph_builder.go:308] GraphBuilder running
I0111 03:22:42.741431       1 controllermanager.go:516] Started "job"
I0111 03:22:42.741614       1 job_controller.go:143] Starting job controller
I0111 03:22:42.741854       1 controller_utils.go:1027] Waiting for caches to sync for job controller
I0111 03:22:42.751651       1 controllermanager.go:516] Started "replicaset"
I0111 03:22:42.751908       1 replica_set.go:182] Starting replicaset controller
I0111 03:22:42.752047       1 controller_utils.go:1027] Waiting for caches to sync for ReplicaSet controller
I0111 03:22:42.765366       1 controllermanager.go:516] Started "statefulset"
I0111 03:22:42.767008       1 stateful_set.go:151] Starting stateful set controller
I0111 03:22:42.767054       1 controller_utils.go:1027] Waiting for caches to sync for stateful set controller
I0111 03:22:42.780188       1 controllermanager.go:516] Started "clusterrole-aggregation"
I0111 03:22:42.780407       1 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
I0111 03:22:42.780440       1 controller_utils.go:1027] Waiting for caches to sync for ClusterRoleAggregator controller
I0111 03:22:42.781905       1 controller_utils.go:1034] Caches are synced for tokens controller
I0111 03:22:42.793460       1 controllermanager.go:516] Started "pv-protection"
I0111 03:22:42.793710       1 pv_protection_controller.go:81] Starting PV protection controller
I0111 03:22:42.793971       1 controller_utils.go:1027] Waiting for caches to sync for PV protection controller
I0111 03:22:42.805329       1 controllermanager.go:516] Started "endpoint"
I0111 03:22:42.806015       1 endpoints_controller.go:149] Starting endpoint controller
I0111 03:22:42.806138       1 controller_utils.go:1027] Waiting for caches to sync for endpoint controller
I0111 03:22:42.828922       1 controllermanager.go:516] Started "namespace"
I0111 03:22:42.829262       1 namespace_controller.go:186] Starting namespace controller
I0111 03:22:42.829315       1 controller_utils.go:1027] Waiting for caches to sync for namespace controller
I0111 03:22:42.844989       1 controllermanager.go:516] Started "disruption"
I0111 03:22:42.845165       1 disruption.go:288] Starting disruption controller
I0111 03:22:42.845183       1 controller_utils.go:1027] Waiting for caches to sync for disruption controller
I0111 03:22:42.868445       1 controllermanager.go:516] Started "cronjob"
W0111 03:22:42.868468       1 core.go:155] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
W0111 03:22:42.868473       1 controllermanager.go:508] Skipping "route"
I0111 03:22:42.868510       1 cronjob_controller.go:92] Starting CronJob Manager
I0111 03:22:43.163562       1 controllermanager.go:516] Started "attachdetach"
I0111 03:22:43.163619       1 attach_detach_controller.go:315] Starting attach detach controller
I0111 03:22:43.163722       1 controller_utils.go:1027] Waiting for caches to sync for attach detach controller
I0111 03:22:43.323592       1 controllermanager.go:516] Started "deployment"
I0111 03:22:43.323742       1 deployment_controller.go:152] Starting deployment controller
I0111 03:22:43.323843       1 controller_utils.go:1027] Waiting for caches to sync for deployment controller
I0111 03:22:43.467090       1 controllermanager.go:516] Started "csrapproving"
I0111 03:22:43.467142       1 certificate_controller.go:113] Starting certificate controller
I0111 03:22:43.467149       1 controller_utils.go:1027] Waiting for caches to sync for certificate controller
I0111 03:22:43.622621       1 node_lifecycle_controller.go:272] Sending events to api server.
I0111 03:22:43.622925       1 node_lifecycle_controller.go:312] Controller is using taint based evictions.
I0111 03:22:43.623253       1 taint_manager.go:175] Sending events to api server.
I0111 03:22:43.623580       1 node_lifecycle_controller.go:378] Controller will taint node by condition.
I0111 03:22:43.623829       1 controllermanager.go:516] Started "nodelifecycle"
I0111 03:22:43.623880       1 node_lifecycle_controller.go:423] Starting node controller
I0111 03:22:43.623887       1 controller_utils.go:1027] Waiting for caches to sync for taint controller
I0111 03:22:43.768175       1 controllermanager.go:516] Started "replicationcontroller"
I0111 03:22:43.768225       1 replica_set.go:182] Starting replicationcontroller controller
I0111 03:22:43.768232       1 controller_utils.go:1027] Waiting for caches to sync for ReplicationController controller
I0111 03:22:44.363285       1 controllermanager.go:516] Started "horizontalpodautoscaling"
I0111 03:22:44.363461       1 horizontal.go:156] Starting HPA controller
I0111 03:22:44.363584       1 controller_utils.go:1027] Waiting for caches to sync for HPA controller
E0111 03:22:44.518938       1 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0111 03:22:44.518961       1 controllermanager.go:508] Skipping "service"
I0111 03:22:44.678565       1 controllermanager.go:516] Started "persistentvolume-binder"
I0111 03:22:44.678622       1 pv_controller_base.go:271] Starting persistent volume controller
I0111 03:22:44.678629       1 controller_utils.go:1027] Waiting for caches to sync for persistent volume controller
I0111 03:22:44.819219       1 controllermanager.go:516] Started "serviceaccount"
I0111 03:22:44.819325       1 serviceaccounts_controller.go:115] Starting service account controller
I0111 03:22:44.819393       1 controller_utils.go:1027] Waiting for caches to sync for service account controller
I0111 03:22:44.963910       1 controllermanager.go:516] Started "csrcleaner"
I0111 03:22:44.964106       1 cleaner.go:81] Starting CSR cleaner controller
I0111 03:22:45.117856       1 controllermanager.go:516] Started "bootstrapsigner"
W0111 03:22:45.117882       1 controllermanager.go:508] Skipping "ttl-after-finished"
I0111 03:22:45.118187       1 controller_utils.go:1027] Waiting for caches to sync for bootstrap_signer controller
I0111 03:22:45.271599       1 controllermanager.go:516] Started "ttl"
I0111 03:22:45.271681       1 ttl_controller.go:116] Starting TTL controller
I0111 03:22:45.271744       1 controller_utils.go:1027] Waiting for caches to sync for TTL controller
I0111 03:22:45.413506       1 node_ipam_controller.go:99] Sending events to api server.
I0111 03:22:55.421936       1 range_allocator.go:78] Sending events to api server.
I0111 03:22:55.422184       1 range_allocator.go:99] No Service CIDR provided. Skipping filtering out service addresses.
I0111 03:22:55.422314       1 range_allocator.go:108] Node kmaster has CIDR 172.30.0.0/24, occupying it in CIDR map
I0111 03:22:55.422506       1 controllermanager.go:516] Started "nodeipam"
I0111 03:22:55.422767       1 node_ipam_controller.go:168] Starting ipam controller
I0111 03:22:55.422850       1 controller_utils.go:1027] Waiting for caches to sync for node controller
I0111 03:22:55.431764       1 controllermanager.go:516] Started "pvc-protection"
I0111 03:22:55.431885       1 pvc_protection_controller.go:99] Starting PVC protection controller
I0111 03:22:55.431893       1 controller_utils.go:1027] Waiting for caches to sync for PVC protection controller
I0111 03:22:55.440576       1 controllermanager.go:516] Started "podgc"
I0111 03:22:55.440685       1 gc_controller.go:76] Starting GC controller
I0111 03:22:55.440692       1 controller_utils.go:1027] Waiting for caches to sync for GC controller
I0111 03:22:55.466355       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
I0111 03:22:55.466615       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0111 03:22:55.466757       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I0111 03:22:55.466879       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
I0111 03:22:55.467091       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I0111 03:22:55.467338       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I0111 03:22:55.467494       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions
I0111 03:22:55.467647       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0111 03:22:55.467706       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0111 03:22:55.467952       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I0111 03:22:55.468099       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
I0111 03:22:55.468237       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
I0111 03:22:55.468375       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I0111 03:22:55.468535       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0111 03:22:55.468765       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
I0111 03:22:55.469016       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
I0111 03:22:55.469325       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I0111 03:22:55.469443       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I0111 03:22:55.469641       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
W0111 03:22:55.469688       1 shared_informer.go:311] resyncPeriod 55660433504067 is smaller than resyncCheckPeriod 81156028471685 and the informer has already started. Changing it to 81156028471685
I0111 03:22:55.469957       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0111 03:22:55.470206       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
I0111 03:22:55.470437       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
E0111 03:22:55.470573       1 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0111 03:22:55.470663       1 controllermanager.go:516] Started "resourcequota"
I0111 03:22:55.470739       1 resource_quota_controller.go:276] Starting resource quota controller
I0111 03:22:55.470969       1 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
I0111 03:22:55.471085       1 resource_quota_monitor.go:301] QuotaMonitor running
I0111 03:22:55.482674       1 controllermanager.go:516] Started "csrsigning"
I0111 03:22:55.482915       1 certificate_controller.go:113] Starting certificate controller
I0111 03:22:55.482928       1 controller_utils.go:1027] Waiting for caches to sync for certificate controller
I0111 03:22:55.492895       1 controllermanager.go:516] Started "persistentvolume-expander"
I0111 03:22:55.493285       1 expand_controller.go:153] Starting expand controller
I0111 03:22:55.493339       1 controller_utils.go:1027] Waiting for caches to sync for expand controller
I0111 03:22:55.513775       1 controllermanager.go:516] Started "daemonset"
I0111 03:22:55.513880       1 daemon_controller.go:269] Starting daemon sets controller
I0111 03:22:55.513886       1 controller_utils.go:1027] Waiting for caches to sync for daemon sets controller
I0111 03:22:55.522606       1 controllermanager.go:516] Started "tokencleaner"
W0111 03:22:55.522891       1 controllermanager.go:508] Skipping "root-ca-cert-publisher"
I0111 03:22:55.522833       1 tokencleaner.go:116] Starting token cleaner controller
I0111 03:22:55.525667       1 controller_utils.go:1027] Waiting for caches to sync for token_cleaner controller
I0111 03:22:55.549649       1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
I0111 03:22:55.595890       1 controller_utils.go:1034] Caches are synced for certificate controller
I0111 03:22:55.595993       1 controller_utils.go:1034] Caches are synced for expand controller
I0111 03:22:55.596386       1 controller_utils.go:1034] Caches are synced for PV protection controller
I0111 03:22:55.610127       1 controller_utils.go:1034] Caches are synced for endpoint controller
I0111 03:22:55.621120       1 controller_utils.go:1034] Caches are synced for bootstrap_signer controller
I0111 03:22:55.621624       1 controller_utils.go:1034] Caches are synced for service account controller
I0111 03:22:55.623927       1 controller_utils.go:1034] Caches are synced for deployment controller
I0111 03:22:55.628486       1 controller_utils.go:1034] Caches are synced for token_cleaner controller
I0111 03:22:55.629998       1 controller_utils.go:1034] Caches are synced for namespace controller
I0111 03:22:55.632269       1 controller_utils.go:1034] Caches are synced for PVC protection controller
I0111 03:22:55.642588       1 controller_utils.go:1034] Caches are synced for job controller
I0111 03:22:55.643494       1 controller_utils.go:1034] Caches are synced for GC controller
I0111 03:22:55.654809       1 controller_utils.go:1034] Caches are synced for ReplicaSet controller
I0111 03:22:55.664411       1 controller_utils.go:1034] Caches are synced for HPA controller
I0111 03:22:55.667613       1 controller_utils.go:1034] Caches are synced for certificate controller
I0111 03:22:55.667691       1 controller_utils.go:1034] Caches are synced for stateful set controller
I0111 03:22:55.668833       1 controller_utils.go:1034] Caches are synced for ReplicationController controller
I0111 03:22:55.680770       1 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
I0111 03:22:55.749792       1 controller_utils.go:1034] Caches are synced for disruption controller
I0111 03:22:55.749924       1 disruption.go:296] Sending events to api server.
W0111 03:22:55.933924       1 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="kmaster" does not exist
I0111 03:22:55.964000       1 controller_utils.go:1034] Caches are synced for attach detach controller
I0111 03:22:55.971692       1 controller_utils.go:1034] Caches are synced for resource quota controller
I0111 03:22:55.971989       1 controller_utils.go:1034] Caches are synced for TTL controller
I0111 03:22:55.978885       1 controller_utils.go:1034] Caches are synced for persistent volume controller
I0111 03:22:56.014755       1 controller_utils.go:1034] Caches are synced for daemon sets controller
I0111 03:22:56.023297       1 controller_utils.go:1034] Caches are synced for node controller
I0111 03:22:56.023538       1 range_allocator.go:157] Starting range CIDR allocator
I0111 03:22:56.023553       1 controller_utils.go:1027] Waiting for caches to sync for cidrallocator controller
I0111 03:22:56.023988       1 controller_utils.go:1034] Caches are synced for taint controller
I0111 03:22:56.024051       1 taint_manager.go:198] Starting NoExecuteTaintManager
I0111 03:22:56.024099       1 node_lifecycle_controller.go:1222] Initializing eviction metric for zone:
I0111 03:22:56.024417       1 event.go:221] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kmaster", UID:"174b814f-14b7-11e9-a1e0-080027838d49", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kmaster event: Registered Node kmaster in Controller
W0111 03:22:56.038609       1 node_lifecycle_controller.go:895] Missing timestamp for Node kmaster. Assuming now as a timestamp.
I0111 03:22:56.118771       1 controller_utils.go:1034] Caches are synced for garbage collector controller
I0111 03:22:56.118852       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0111 03:22:56.124239       1 controller_utils.go:1034] Caches are synced for cidrallocator controller
I0111 03:22:56.149809       1 controller_utils.go:1034] Caches are synced for garbage collector controller
E0111 03:22:56.960104       1 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0111 03:23:01.043898       1 node_lifecycle_controller.go:1222] Initializing eviction metric for zone:
[... the same "Initializing eviction metric for zone:" line repeats every 5 seconds until 03:35:21; the remainder of the dump is cut off here ...]
I0111 03:35:26.433401       1 node_lifecycle_controller.go:1222] Initializing eviction metric for zone:
I0111 03:35:31.433845       1 node_lifecycle_controller.go:1222] Initializing eviction metric for zone:
==== END logs for container kube-controller-manager of pod kube-system/kube-controller-manager-kmaster ====
==== START logs for container kube-proxy of pod kube-system/kube-proxy-bfp5c ====
W0111 03:22:25.308564       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
I0111 03:22:25.352952       1 server_others.go:148] Using iptables Proxier.
I0111 03:22:25.353338       1 server_others.go:178] Tearing down inactive rules.
I0111 03:22:25.632414       1 server.go:464] Version: v1.13.1
I0111 03:22:25.648937       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0111 03:22:25.648986       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0111 03:22:25.650554       1 conntrack.go:83] Setting conntrack hashsize to 32768
I0111 03:22:25.659840       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0111 03:22:25.660120       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0111 03:22:25.661722       1 config.go:102] Starting endpoints config controller
I0111 03:22:25.661738       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0111 03:22:25.661772       1 config.go:202] Starting service config controller
I0111 03:22:25.661777       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0111 03:22:25.763620       1 controller_utils.go:1034] Caches are synced for service config controller
I0111 03:22:25.763620       1 controller_utils.go:1034] Caches are synced for endpoints config controller
==== END logs for container kube-proxy of pod kube-system/kube-proxy-bfp5c ====
==== START logs for container kube-scheduler of pod kube-system/kube-scheduler-kmaster ====
I0111 03:22:15.211942       1 serving.go:318] Generated self-signed cert in-memory
W0111 03:22:15.852888       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0111 03:22:15.852908       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0111 03:22:15.852917       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0111 03:22:15.860444       1 server.go:150] Version: v1.13.1
I0111 03:22:15.860487       1 defaults.go:210] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0111 03:22:15.862281       1 authorization.go:47] Authorization is disabled
W0111 03:22:15.862297       1 authentication.go:55] Authentication is disabled
I0111 03:22:15.862304       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on 127.0.0.1:10251
I0111 03:22:15.873170       1 secure_serving.go:116] Serving securely on [::]:10259
E0111 03:22:22.675926       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0111 03:22:22.676195       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0111 03:22:22.676310       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0111 03:22:22.676433       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0111 03:22:22.676516       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0111 03:22:22.676603       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0111 03:22:22.676668       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0111 03:22:22.683564       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0111 03:22:22.683564       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0111 03:22:22.683811       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0111 03:22:24.577304       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0111 03:22:24.677520       1 controller_utils.go:1034] Caches are synced for scheduler controller
I0111 03:22:24.677574       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-scheduler...
I0111 03:22:43.119664       1 leaderelection.go:214] successfully acquired lease kube-system/kube-scheduler
I0111 03:25:01.869576       1 trace.go:76] Trace[22457868]: "Scheduling kube-system/coredns-86c58d9df4-qxgs6" (started: 2019-01-11 03:25:01.70765014 +0000 UTC m=+166.366306429) (total time: 152.679723ms):
Trace[22457868]: [152.679723ms] [152.66517ms] END
==== END logs for container kube-scheduler of pod kube-system/kube-scheduler-kmaster ====
==== START logs for container kubernetes-dashboard of pod kube-system/kubernetes-dashboard-57df4db6b-rqrj5 ====
==== END logs for container kubernetes-dashboard of pod kube-system/kubernetes-dashboard-57df4db6b-rqrj5 ====
{
    "kind": "EventList",
    "apiVersion": "v1",
    "metadata": {
        "selfLink": "/api/v1/namespaces/default/events",
        "resourceVersion": "7777"
    },
    "items": [
        {
            "metadata": {
                "name": "kmaster.1578ad38df4e92ad",
                "namespace": "default",
                "selfLink": "/api/v1/namespaces/default/events/kmaster.1578ad38df4e92ad",
                "uid": "1d182f38-1550-11e9-b1c8-080027838d49",
                "resourceVersion": "6537",
                "creationTimestamp": "2019-01-11T03:22:24Z"
            },
            "involvedObject": {
                "kind": "Node",
                "name": "kmaster",
                "uid": "kmaster"
            },
            "reason": "Starting",
            "message": "Starting kubelet.",
            "source": {
                "component": "kubelet",
                "host": "kmaster"
            },
            "firstTimestamp": "2019-01-11T03:22:11Z",
            "lastTimestamp": "2019-01-11T03:22:11Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "kmaster.1578ad38ea798e35",
                "namespace": "default",
                "selfLink": "/api/v1/namespaces/default/events/kmaster.1578ad38ea798e35",
                "uid": "1d199cbe-1550-11e9-b1c8-080027838d49",
                "resourceVersion": "6563",
                "creationTimestamp": "2019-01-11T03:22:24Z"
            },
            "involvedObject": {
                "kind": "Node",
                "name": "kmaster",
                "uid": "kmaster"
            },
            "reason": "NodeHasSufficientMemory",
            "message": "Node kmaster status is now: NodeHasSufficientMemory",
            "source": {
                "component": "kubelet",
                "host": "kmaster"
            },
            "firstTimestamp": "2019-01-11T03:22:11Z",
            "lastTimestamp": "2019-01-11T03:22:12Z",
            "count": 8,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "kmaster.1578ad38ea79c3a1",
                "namespace": "default",
                "selfLink": "/api/v1/namespaces/default/events/kmaster.1578ad38ea79c3a1",
                "uid": "1d1a2133-1550-11e9-b1c8-080027838d49",
                "resourceVersion": "6560",
                "creationTimestamp": "2019-01-11T03:22:24Z"
            },
            "involvedObject": {
                "kind": "Node",
                "name": "kmaster",
                "uid": "kmaster"
            },
            "reason": "NodeHasNoDiskPressure",
            "message": "Node kmaster status is now: NodeHasNoDiskPressure",
            "source": {
                "component": "kubelet",
                "host": "kmaster"
            },
            "firstTimestamp": "2019-01-11T03:22:11Z",
            "lastTimestamp": "2019-01-11T03:22:12Z",
            "count": 7,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "kmaster.1578ad38ea79dc0d",
                "namespace": "default",
                "selfLink": "/api/v1/namespaces/default/events/kmaster.1578ad38ea79dc0d",
                "uid": "1d1a9acb-1550-11e9-b1c8-080027838d49",
                "resourceVersion": "6562",
                "creationTimestamp": "2019-01-11T03:22:24Z"
            },
            "involvedObject": {
                "kind": "Node",
                "name": "kmaster",
                "uid": "kmaster"
            },
            "reason": "NodeHasSufficientPID",
            "message": "Node kmaster status is now: NodeHasSufficientPID",
            "source": {
                "component": "kubelet",
                "host": "kmaster"
            },
            "firstTimestamp": "2019-01-11T03:22:11Z",
            "lastTimestamp": "2019-01-11T03:22:12Z",
            "count": 8,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "kmaster.1578ad38edd19b5a",
                "namespace": "default",
                "selfLink": "/api/v1/namespaces/default/events/kmaster.1578ad38edd19b5a",
                "uid": "1d1ec885-1550-11e9-b1c8-080027838d49",
                "resourceVersion": "6544",
                "creationTimestamp": "2019-01-11T03:22:24Z"
            },
            "involvedObject": {
                "kind": "Node",
                "name": "kmaster",
                "uid": "kmaster"
            },
            "reason": "NodeAllocatableEnforced",
            "message": "Updated Node Allocatable limit across pods",
            "source": {
                "component": "kubelet",
                "host": "kmaster"
            },
            "firstTimestamp": "2019-01-11T03:22:12Z",
            "lastTimestamp": "2019-01-11T03:22:12Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "kmaster.1578ad3c1ac2cfe0",
                "namespace": "default",
                "selfLink": "/api/v1/namespaces/default/events/kmaster.1578ad3c1ac2cfe0",
                "uid": "1e06a94a-1550-11e9-b1c8-080027838d49",
                "resourceVersion": "6555",
                "creationTimestamp": "2019-01-11T03:22:25Z"
            },
            "involvedObject": {
                "kind": "Node",
                "name": "kmaster",
                "uid": "kmaster"
            },
            "reason": "Starting",
            "message": "Starting kube-proxy.",
            "source": {
                "component": "kube-proxy",
                "host": "kmaster"
            },
            "firstTimestamp": "2019-01-11T03:22:25Z",
            "lastTimestamp": "2019-01-11T03:22:25Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "kmaster.1578ad432c93dfa9",
                "namespace": "default",
                "selfLink": "/api/v1/namespaces/default/events/kmaster.1578ad432c93dfa9",
                "uid": "301d1ebb-1550-11e9-b1c8-080027838d49",
                "resourceVersion": "6614",
                "creationTimestamp": "2019-01-11T03:22:56Z"
            },
            "involvedObject": {
                "kind": "Node",
                "name": "kmaster",
                "uid": "174b814f-14b7-11e9-a1e0-080027838d49"
            },
            "reason": "RegisteredNode",
            "message": "Node kmaster event: Registered Node kmaster in Controller",
            "source": {
                "component": "node-controller"
            },
            "firstTimestamp": "2019-01-11T03:22:56Z",
            "lastTimestamp": "2019-01-11T03:22:56Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        }
    ]
}
{
    "kind": "ReplicationControllerList",
    "apiVersion": "v1",
    "metadata": {
        "selfLink": "/api/v1/namespaces/default/replicationcontrollers",
        "resourceVersion": "7777"
    },
    "items": []
}
{
    "kind": "ServiceList",
    "apiVersion": "v1",
    "metadata": {
        "selfLink": "/api/v1/namespaces/default/services",
        "resourceVersion": "7777"
    },
    "items": [
        {
            "metadata": {
                "name": "kubernetes",
                "namespace": "default",
                "selfLink": "/api/v1/namespaces/default/services/kubernetes",
                "uid": "177156f5-14b7-11e9-a1e0-080027838d49",
                "resourceVersion": "33",
                "creationTimestamp": "2019-01-10T09:07:01Z",
                "labels": {
                    "component": "apiserver",
                    "provider": "kubernetes"
                }
            },
            "spec": {
                "ports": [
                    {
                        "name": "https",
                        "protocol": "TCP",
                        "port": 443,
                        "targetPort": 6443
                    }
                ],
                "clusterIP": "10.96.0.1",
                "type": "ClusterIP",
                "sessionAffinity": "None"
            },
            "status": {
                "loadBalancer": {}
            }
        }
    ]
}
{
    "kind": "DaemonSetList",
    "apiVersion": "apps/v1",
    "metadata": {
        "selfLink": "/apis/apps/v1/namespaces/default/daemonsets",
        "resourceVersion": "7777"
    },
    "items": []
}
{
    "kind": "DeploymentList",
    "apiVersion": "apps/v1",
    "metadata": {
        "selfLink": "/apis/apps/v1/namespaces/default/deployments",
        "resourceVersion": "7777"
    },
    "items": []
}
{
    "kind": "ReplicaSetList",
    "apiVersion": "apps/v1",
    "metadata": {
        "selfLink": "/apis/apps/v1/namespaces/default/replicasets",
        "resourceVersion": "7777"
    },
    "items": []
}
{
    "kind": "PodList",
    "apiVersion": "v1",
    "metadata": {
        "selfLink": "/api/v1/namespaces/default/pods",
        "resourceVersion": "7777"
    },
    "items": []
}
Cluster info dumped to standard output
Thanks for the quick response.

The issue has been resolved now, but when I try to get the token with the command below, it returns nothing.

kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode

Below is the command and its result:

kube-master@kmaster:~$ kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
kube-master@kmaster:~$
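
A likely reason the command prints nothing rather than an error is that the service account being queried ("dashboard") does not match the account the dashboard deployment actually uses; by default that is a service account named kubernetes-dashboard in the kube-system namespace. A minimal check, assuming those default names:

# List service accounts in kube-system and confirm the dashboard account exists
kubectl -n kube-system get serviceaccounts

# If it exists, inspect it to see which secret, if any, is attached to it
kubectl -n kube-system get serviceaccount kubernetes-dashboard -o yaml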

Check the existing secrets in the kube-system namespace:

kubectl -n kube-system get secret

NAME                                     TYPE                                  DATA      AGE
attachdetach-controller-token-xw1tw      kubernetes.io/service-account-token   3         10d
bootstrap-signer-token-gz8qp             kubernetes.io/service-account-token   3         10d
bootstrap-token-f46476                   bootstrap.kubernetes.io/token         5         10d
certificate-controller-token-tp34m       kubernetes.io/service-account-token   3         10d
daemon-set-controller-token-fqvwx        kubernetes.io/service-account-token   3         10d
kubernetes-dashboard-token-7qmbc         kubernetes.io/service-account-token   3         6d

This will list all of them. Then use the following command to describe the secret you need, which will print the token:

kubectl -n kube-system describe secrets kubernetes-dashboard-token-7qmbc

Name: kubernetes-dashboard-token-7qmbc
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=kubernetes-dashboard
kubernetes.io/service-account.uid=d0d93741-96c5-11e7-8245-901b0e532516

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpX--------------
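
If you would rather not copy the secret name by hand, the token can also be extracted and decoded in one line. This is just a convenience sketch assuming the secret name shown above (kubernetes-dashboard-token-7qmbc); substitute whatever name your own "kubectl -n kube-system get secret" output lists:

# Extract and decode the dashboard token directly from the secret (replace the secret name with yours)
kubectl -n kube-system get secret kubernetes-dashboard-token-7qmbc -o jsonpath="{.data.token}" | base64 --decode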

Hey @Vishal, reset your cluster and create it again, keeping these things in mind:

Master node and worker nodes:

at least 2 GB RAM and 2 CPU cores each

Use Flannel or Weave instead of Calico as your CNI network (a deployment sketch follows below).
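
As a rough sketch of that suggestion, this is how a Flannel network is typically brought up with kubeadm; the pod CIDR and the manifest URL below are assumptions based on the Flannel defaults commonly used with Kubernetes 1.13, so verify the manifest location against the current Flannel documentation:

# Initialize the control plane with the pod CIDR that Flannel expects by default
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Apply the Flannel CNI manifest (URL assumed; check the Flannel repo for the current path)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml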

According to the logs, you have this error:

1 node(s) had taints that the pod didn't tolerate.

You get that when your scheduler is unable to schedule the pod, which is to be expected here because your pods aren't ready yet.

You can remove the taint with the command below, but treat this as your last option, as it's not good practice: it will start scheduling pods on your master instead of your worker nodes, which we do not want.

kubectl taint nodes --all node-role.kubernetes.io/master-
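
Before and after removing the taint, you can confirm what taints the master actually carries; a quick check, assuming the node is named kmaster as in the output above:

# Show the taints currently set on the master node
kubectl describe node kmaster | grep -i taints
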
How did you resolve the issue?
