No pods created after launching a DaemonSet

0 votes

I'm trying to create a DaemonSet on a 6-node cluster. It reports that the DaemonSet has been deployed successfully, but no pods are being scheduled on the nodes:

> ic describe ds
Name:       dd-agent
apiVersion: extensions/v1beta1
Image(s):   datadog/docker-dd-agent:kubernetes
Selector:   app=dd-agent,name=dd-agent,version=v1
Node-Selector:  <none>
Labels:     release=stable,tech=datadog,tier=backend
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Misscheduled: 0
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No events.
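For context, a DaemonSet manifest that would produce the selector and labels shown above might look roughly like this (a sketch: the apiVersion, image, labels, and selector are taken from the output above; everything else is assumed):

```yaml
# Sketch reconstructed from the describe output; fields not shown there
# (ports, resources, volumes) are omitted or assumed.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: dd-agent
  labels:
    release: stable
    tech: datadog
    tier: backend
spec:
  template:
    metadata:
      labels:
        app: dd-agent
        name: dd-agent
        version: v1
    spec:
      containers:
      - name: dd-agent
        image: datadog/docker-dd-agent:kubernetes
```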

I'm using AWS for this and created the cluster using kube-aws. I'm already running about 30 pods across these 6 nodes.

  • CoreOS alpha (891.0.0)
  • Kubernetes server v1.1.2
  • Updated the /etc/kubernetes/manifest/kube-apiserver.manifest to enable DaemonSets by adding --runtime-config=extensions/v1beta1/daemonsets=true
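That flag lives in the apiserver's static-pod manifest; the relevant fragment looks roughly like this (a sketch with surrounding fields omitted — the hyperkube command form is an assumption, so match it against your existing invocation):

```yaml
# Fragment of /etc/kubernetes/manifest/kube-apiserver.manifest (sketch;
# keep your existing flags and only add the runtime-config entry):
spec:
  containers:
  - name: kube-apiserver
    command:
    - /hyperkube
    - apiserver
    - --runtime-config=extensions/v1beta1/daemonsets=true
    # ...existing flags unchanged...
```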

I've tried restarting the kubelet and even used the daemon-reload command, but to no effect. Please help.

Dec 28, 2018 in Kubernetes by Damon Salvatore
• 5,980 points

1 answer to this question.

0 votes
Restarting the kubelet won't help here: DaemonSet pods are created by the controller manager, not by the kubelet. So first check that the feature is actually enabled in the apiserver — that is, make sure the running apiserver process was started with the extensions/v1beta1 runtime-config flag — and then restart the controller manager so it picks up the DaemonSet.
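Concretely, the checks and the controller-manager restart might look like this (a sketch — the paths and manifest file name assume a CoreOS/kube-aws setup with static-pod manifests under /etc/kubernetes/manifest, so adjust to your layout):

```shell
# 1. Confirm the *running* apiserver has the flag (the manifest file alone
#    isn't enough if the pod was never recreated):
ps aux | grep '[k]ube-apiserver' | tr ' ' '\n' | grep runtime-config

# 2. Confirm the extensions API group actually serves DaemonSets:
kubectl get daemonsets --namespace=default

# 3. Restart the controller manager. For a static pod, moving the manifest
#    out and back makes the kubelet tear it down and recreate it
#    (file name is an assumption -- check your manifest directory):
sudo mv /etc/kubernetes/manifest/kube-controller-manager.manifest /tmp/
sleep 10
sudo mv /tmp/kube-controller-manager.manifest /etc/kubernetes/manifest/
```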
answered Dec 28, 2018 by ajs3033
• 7,280 points
