Version: Early Access

Scheduling Pods with Node Selectors, Node Affinity and Tolerations

Kubernetes provides flexible mechanisms to control pod scheduling across your cluster nodes. This guide explores three powerful options for scheduling pods to specific nodes: node selectors, node affinity, and tolerations. Each approach offers different levels of control and complexity to help you optimize your workload placement.

Option 1 - Node Selector

Node Selector provides the simplest way to constrain pods to nodes with specific labels. It works by matching a single key-value pair label assigned to nodes.

To add a label to a node (for example, node1) with key name and value node1:

kubectl label nodes node1 name=node1

A common convention is to prefix the key with the organization or project name, followed by a descriptive name, such as digital.ai/name=node1. The prefixes kubernetes.io/ and k8s.io/ are reserved for Kubernetes system components; for example, the label kubernetes.io/hostname reflects the node's hostname.
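To confirm that the label was applied, or to remove it later, you can use standard kubectl commands (shown here for the node1 example above; these require access to a running cluster):

```shell
# List all nodes with their labels
kubectl get nodes --show-labels

# List only the nodes that carry the label name=node1
kubectl get nodes -l name=node1

# Remove the label again (the trailing dash deletes the key)
kubectl label nodes node1 name-
```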

caution

It is not recommended to modify or rely on default labels like kubernetes.io/hostname since they are managed by Kubernetes and any manual changes could cause unexpected behavior.

Example

nodeSelector:
  name: "node1"

When to Use

Use Node Selector when you have a simple requirement to schedule pods onto a specific set of nodes based on a single label. The labels must already exist on the cluster nodes before they can be referenced in your Kubernetes manifests.
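As a minimal sketch, a complete Pod manifest using the label from the example above might look like this (the pod name and container image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: selector-demo       # illustrative pod name
spec:
  nodeSelector:
    name: "node1"           # must match a label already set on the node
  containers:
  - name: app
    image: nginx            # illustrative image
```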

Option 2 - Node Affinity

Node Affinity offers a more advanced and flexible way of controlling pod placement. It supports:

  • requiredDuringSchedulingIgnoredDuringExecution: Pods must be placed on nodes matching the specified criteria
  • preferredDuringSchedulingIgnoredDuringExecution: Pods prefer nodes matching the criteria, but it's not mandatory

Example

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - "node1"
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: region
          operator: In
          values:
          - "us-east-1"

When to Use

Use Node Affinity for complex scheduling requirements, such as matching multiple labels or expressing preferences for nodes.
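Beyond the In operator shown above, matchExpressions supports operators such as NotIn, Exists, DoesNotExist, Gt, and Lt, which can be combined in a single term. As an illustrative sketch (the disktype and region keys are assumed labels, not defaults):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype          # assumed label: node must have this key
          operator: Exists
        - key: region            # assumed label: node must NOT be in this region
          operator: NotIn
          values:
          - "us-west-2"
```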

Option 3 - Tolerations

Tolerations allow pods to run on nodes with specific matching taints. The pods "tolerate" the taints applied to nodes. For example, a pod will be allowed to be scheduled on nodes that have taint key1=value1:NoSchedule if it has the corresponding toleration.
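Taints themselves are applied to nodes with kubectl. For example, to add, inspect, and later remove the key1=value1:NoSchedule taint referenced above on node1 (these commands require a running cluster):

```shell
# Add the taint: pods without a matching toleration will not be scheduled here
kubectl taint nodes node1 key1=value1:NoSchedule

# Inspect the node's current taints
kubectl describe node node1 | grep -i taints

# Remove the taint again (the trailing dash deletes it)
kubectl taint nodes node1 key1=value1:NoSchedule-
```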

Example

tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"

When to Use

Use taints on nodes, together with matching tolerations on pods, when you want to allow only specific pods to be scheduled on those nodes, such as nodes with specialized hardware or nodes undergoing maintenance.
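If a pod should tolerate any value of a given taint key, the Exists operator can be used instead of Equal; in that case no value field is specified:

```yaml
tolerations:
- key: "key1"
  operator: "Exists"   # matches any taint with key "key1", regardless of value
  effect: "NoSchedule"
```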

Example Configuration

Here's how you can configure node selectors, node affinity, and tolerations in your cr.yaml:

spec:
  nodeSelector:
    name: "node1"

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - "node1"
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: region
            operator: In
            values:
            - "us-east-1"

  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"

You can specify these configurations for other components as well, including PostgreSQL, RabbitMQ, NGINX Ingress Controller, and HAProxy Ingress.
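After applying the configuration, you can verify where the pods actually landed; the pod name below is a placeholder:

```shell
# Show each pod together with the node it was scheduled on
kubectl get pods -o wide

# Inspect scheduling events (for example FailedScheduling) for a specific pod
kubectl describe pod <pod-name>
```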

note

While you can use all three methods together for fine-grained control over pod scheduling, typically one method is sufficient for most use cases.