Node Affinity In Kubernetes

Shubham Agarwal
4 min read · Jan 8, 2022


The primary purpose of the node affinity feature is to ensure that pods are hosted on particular nodes. It provides us with advanced capabilities to limit pod placement to specific nodes.

For example, let’s say we have different kinds of workloads running in our cluster, and we would like to dedicate the data-processing workloads that require more horsepower to the nodes configured with large or medium resources. This is where node affinity comes into play.

Figure 1.1: pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dbapp
spec:
  containers:
  - name: dbapp
    image: db-processor
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - large
            - medium

So in the pod definition file, under the spec section, we have affinity and then nodeAffinity. Under that, we have the property “requiredDuringSchedulingIgnoredDuringExecution”, and then nodeSelectorTerms, which is an array where we specify the key-value pairs.

The key-value pairs are in the form of key, operator and values, where the operator here is “In”. The In operator ensures that the pod will be placed on a node whose size label has any of the values in the specified list.

To label a node, run the following command:

kubectl label nodes <node-name> <label-key>=<label-value>

In our case:

kubectl label nodes node-01 size=large
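
To confirm the label was applied, we can list the node’s labels or filter nodes by the label:

kubectl get nodes node-01 --show-labels
kubectl get nodes -l size=large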

We can also use the “NotIn” operator to say something like size “NotIn” small, where node affinity will match nodes whose size label is not set to small.

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: size
          operator: NotIn
          values:
          - small

As per figure 1.1 above, we have only set the label on the large and medium nodes; the smaller node doesn’t have the label set at all. So we don’t really have to check the value of the label: as long as we are sure we never set the size label on the smaller nodes, using the “Exists” operator will give us the same result as “NotIn” in our case.

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: size
          operator: Exists

The “Exists” operator simply checks whether the label “size” exists on the nodes; we don’t need the values section for it, as it does not compare values.

The node affinity syntax supports the following operators: In, NotIn, Exists, DoesNotExist, Gt and Lt. Refer to assign-pod-node for more details.
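
For instance, Gt and Lt compare the label value as an integer. A small sketch, assuming the nodes carry a hypothetical numeric label such as cpu-count, could look like this:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: cpu-count       # hypothetical label, not part of the example above
          operator: Gt
          values:
          - "8"                # Gt/Lt expect a single integer written as a string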

Now that we understand all of this, we are comfortable creating a pod with specific affinity rules. When the pods are created, these rules are considered and the pods are placed onto the right nodes.
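
Once the pod is running, we can quickly confirm which node it was placed on (the NODE column in the wide output):

kubectl get pod dbapp -o wide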

There are currently two types of node affinity available:

requiredDuringSchedulingIgnoredDuringExecution

preferredDuringSchedulingIgnoredDuringExecution

And there is an additional type of node affinity planned:

requiredDuringSchedulingRequiredDuringExecution

Let’s understand the 2 available affinity types:

There are two states in the lifecycle of a pod when considering node affinity: during scheduling and during execution.

During scheduling is the state where a pod does not exist yet and is being created for the first time. When it is first created, the affinity rules specified are considered to place the pod on the right node.

Now, what if nodes with matching labels are not available, or we forgot to label the nodes as large (in our case)? This is where the type of node affinity comes into play.

If we select the required type (the first affinity), the scheduler will mandate that the pod be placed on a node matching the given affinity rules. If it cannot find a matching node, the pod will not be scheduled. This type is used in cases where the placement of the pod is crucial.
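
In that situation the pod stays in the Pending state; a quick way to see why (using standard kubectl commands) is to check the pod’s events, which will show a FailedScheduling warning:

kubectl get pod dbapp        # STATUS shows Pending
kubectl describe pod dbapp   # the Events section explains why scheduling failed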

But let’s say the pod placement is less important than running the workload itself. In that case, we can set the affinity to the preferred type (the second affinity), and if a matching node is not found, the scheduler will simply ignore the node affinity rules and place the pod on any available node. The preferred type is a way of telling the scheduler: hey, try your best to place the pod on a matching node, but if you really cannot find one, just place it anywhere.
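
As a sketch, the preferred type additionally takes a weight between 1 and 100 that the scheduler uses to rank candidate nodes; the rest of the structure mirrors the earlier examples:

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: size
          operator: In
          values:
          - large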

The second state is during execution. It is the state where a pod has been running and a change is made in the environment that affects node affinity, such as a change in the label of a node.

For example, an administrator removes the label we set earlier from the node (size=large). Now, what would happen to the pods that are already running on that node?
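
For reference, removing a label uses the same kubectl label command with a trailing dash:

kubectl label nodes node-01 size-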

With both of the currently available node affinity types, the pods will continue to run; any change in node affinity will not impact them once they are scheduled.

The new type expected in the future differs only in the during-execution state. It introduces a new option, RequiredDuringExecution, which will evict any pod running on a node that no longer meets the affinity rules.

Thanks for reading!!!

Refer to the following articles for more insights on Kubernetes:

Storage Drivers in Docker

How kubectl apply command works?

Kubernetes Services for Absolute Beginners — NodePort

Kubernetes Services for Absolute Beginners — ClusterIP

Kubernetes Services for Absolute Beginners — LoadBalancer

labels-and-selectors-in-kubernetes

Kubernetes workflow for Absolute Beginners

Written by Shubham Agarwal

Site Reliability Engineer with 5 years of experience in IT support and operations.
