What impact does upgrading kubelet leave on the Pods on the worker nodes?
Restarting kubelet, which has to happen for an upgrade, will cause all the Pods on the node to stop and be started again.
It’s generally better to drain the node first, because that way Pods can be migrated gracefully and things like PodDisruptionBudgets can be honored.
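For example, a drain before the upgrade might look like this (the node name `node-1` is just a placeholder):

```sh
# Cordon the node and gracefully evict its Pods, honoring PodDisruptionBudgets.
# --ignore-daemonsets is usually needed because DaemonSet Pods can't be evicted.
$ kubectl drain node-1 --ignore-daemonsets
```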
The problem is that `kubelet` keeps track of the state of all running Pods, so when it goes away the containers don’t necessarily die, but as soon as it comes back up they are all killed so `kubelet` can start from a clean slate.
Because the kubelet communicates with the apiserver, if something goes wrong partway through the upgrade, Pods may be rescheduled and health checks may fail in the meantime.
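If you want to see this for yourself, you can watch the Pods on the node while the kubelet restarts (again, `node-1` is a placeholder):

```sh
# Watch Pods scheduled on this node; any restarts or status changes caused
# by the kubelet restart will show up here.
$ kubectl get pods --all-namespaces --field-selector spec.nodeName=node-1 --watch
```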
During the restart, the kubelet will stop querying the API, so it won’t start/stop containers, and Heapster won’t be able to fetch system metrics from cAdvisor.
Just make sure it’s not down for too long, or the node will be marked NotReady and its Pods may eventually be evicted!
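Once the upgrade is done, a rough sketch of the wrap-up looks like this (assuming a kubeadm-style node; the package manager and node name are assumptions, not something from the question):

```sh
# On the node: upgrade and restart kubelet (apt is an assumption;
# use whatever package manager your distro provides).
$ sudo apt-get update && sudo apt-get install -y kubelet
$ sudo systemctl restart kubelet

# From the control plane: confirm the node comes back as Ready,
# then allow Pods to be scheduled on it again.
$ kubectl get nodes
$ kubectl uncordon node-1
```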