Daemon Sets in Kubernetes

Shubham Agarwal
4 min read · Jan 21, 2022


DaemonSets are similar to ReplicaSets in Kubernetes: both help us deploy multiple instances of a Pod, but they do so in different ways.

Where a ReplicaSet helps us deploy a specified number of Pod replicas as per our requirements, a DaemonSet ensures that one copy of the Pod is always present on each node of our Kubernetes cluster.

Whenever a new node is added to the cluster, a replica of the Pod is automatically added to that node and when a node is removed, the Pod also gets removed automatically.

Kubernetes DaemonSet

Use Cases of DaemonSets

A question may come to mind: if a ReplicaSet can already fulfill our requirement of running multiple instances of an application, why are DaemonSets required?

Let’s say you want to deploy a monitoring agent, such as Splunk or Humio or any other logging application, on each node in the cluster so you can monitor the cluster more effectively.

Here, a ReplicaSet does not guarantee that the logging application will be present on each node, but a DaemonSet ensures that the logging Pod is always present on every single node.

That’s why DaemonSets are perfect for deploying a monitoring agent as a Pod on all the nodes in your cluster.

Another use case is kube-proxy. Since it is a required Kubernetes component on every node in the cluster, kube-proxy can be deployed as a DaemonSet.

We can also use DaemonSets to run networking Pods (like Flannel) on every node in the cluster.

DaemonSets definition file:

Creating a DaemonSet is similar to creating a ReplicaSet: it has a nested Pod specification under the template section and selectors to link the DaemonSet to its Pods.

We just need to change the kind to DaemonSet instead of ReplicaSet in the definition file.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: splunk-monitoring-agent
spec:
  selector:
    matchLabels:
      app: logging
  template:
    metadata:
      labels:
        app: logging
    spec:
      containers:
      - name: splunk-monitoring-agent
        image: splunk:latest
      # Optional: restrict the DaemonSet to specific nodes by name
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchFields:
              - key: metadata.name
                operator: In
                values:
                - target-host-name

Once the definition file is ready, create the DaemonSet using the command below:

# kubectl create -f daemonset-definition.yaml

To list and verify the DaemonSet in the Kubernetes cluster:

# kubectl get daemonset -n kube-system
or
# kubectl get ds -n kube-system
NAMESPACE     NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   kube-proxy   12        12        12      12           12          kubernetes.io/os=linux   2d

How does a DaemonSet work?

How does a DaemonSet schedule a Pod on each node, and how does it ensure that every node has one?

A DaemonSet ensures that all eligible nodes run a copy of a Pod. Normally, the node that a Pod runs on is selected by the Kubernetes scheduler. However, DaemonSet pods are created and scheduled by the DaemonSet controller instead. That introduces the following issues:

  • Inconsistent Pod behavior: Normal Pods waiting to be scheduled are created in the Pending state, but DaemonSet Pods are not created in the Pending state. This is confusing to the user.
  • Pod preemption is handled by the default scheduler. When preemption is enabled, the DaemonSet controller makes scheduling decisions without considering Pod priority and preemption.

ScheduleDaemonSetPods allows you to schedule DaemonSets using the default scheduler instead of the DaemonSet controller, by adding the NodeAffinity term to the DaemonSet pods, instead of the .spec.nodeName term.

The default scheduler is then used to bind the Pod to the target host. If a node affinity already exists on the DaemonSet Pod, it is replaced (the original node affinity was taken into account before selecting the target host). The DaemonSet controller performs these operations only when creating or modifying DaemonSet Pods; no changes are made to the spec.template of the DaemonSet.
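As a rough sketch, the affinity term the DaemonSet controller injects into each Pod looks something like the fragment below (the node name "node-1" is a placeholder, not from any real cluster):

```yaml
# Sketch of the node affinity the DaemonSet controller adds to a Pod,
# pinning it to one target node by name ("node-1" is illustrative):
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchFields:
        - key: metadata.name
          operator: In
          values:
          - node-1
```

Because the pinning is expressed as node affinity rather than .spec.nodeName, the default scheduler can apply its normal checks (taints, resources, preemption) before binding the Pod.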

Updating a DaemonSet:

If node labels are changed, the DaemonSet will promptly add Pods to newly matching nodes and delete Pods from newly not-matching nodes.
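For example, if the DaemonSet's Pod template uses a nodeSelector (the label key and value below are illustrative, not part of the manifest above), labeling or unlabeling a node adds or removes the Pod on that node:

```yaml
# Hypothetical fragment: the DaemonSet schedules Pods only onto
# nodes carrying the label monitoring=enabled
spec:
  template:
    spec:
      nodeSelector:
        monitoring: enabled
```

Running `kubectl label node <node-name> monitoring=enabled` would then cause the DaemonSet to add a Pod to that node, and removing the label would remove it.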

You can modify the Pods that a DaemonSet creates. However, Pods do not allow all fields to be updated. Also, the DaemonSet controller will use the original template the next time a node (even with the same name) is created.

You can delete a DaemonSet. If you specify --cascade=orphan with kubectl, then the Pods will be left on the nodes. If you subsequently create a new DaemonSet with the same selector, the new DaemonSet adopts the existing Pods. If any Pods need replacing the DaemonSet replaces them according to its updateStrategy.
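For reference, the update behavior is controlled by the DaemonSet's updateStrategy field; a minimal sketch (the maxUnavailable value is illustrative) looks like this:

```yaml
# Illustrative updateStrategy: replace DaemonSet Pods one node at a time
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
```

With RollingUpdate (the default), changing the Pod template rolls Pods out gradually; the alternative type, OnDelete, replaces a Pod only after you delete the old one manually.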

Thanks for reading!!!

Refer to the following articles for more insights on Kubernetes:

Node Selectors in Kubernetes

Storage Drivers in Docker

How kubectl apply command works?

Kubernetes Services for Absolute Beginners — NodePort

Kubernetes Services for Absolute Beginners — ClusterIP

Kubernetes Services for Absolute Beginners — LoadBalancer

Labels and Selectors in Kubernetes

Kubernetes workflow for Absolute Beginners

Special Thanks to Mumshad Mannambeth
