
Schedule GPUs

Kubernetes includes experimental support for managing NVIDIA GPUs spread across nodes. This page describes how users can consume GPUs and the current limitations.

Before you begin

  1. Nvidia drivers must be pre-installed on the Kubernetes nodes; the kubelet will not detect Nvidia GPUs otherwise. If the kubelet fails to expose Nvidia GPUs as part of Node Capacity, try re-installing the drivers. After installing the drivers, run nvidia-docker-plugin to confirm that they have been loaded.
  2. The alpha feature gate Accelerators has to be set to true across the system: --feature-gates="Accelerators=true" (see the sketch after this list).
  3. Nodes must use the Docker engine as the container runtime.
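
The feature gate in item 2 needs to reach the kubelet's command line. Below is a minimal sketch of enabling it and then checking that GPUs show up in node capacity; the /etc/default/kubelet file mirrors the labeling example later on this page and may differ on your distribution, and the kubectl check is just one convenient way to inspect node capacity.

# Append the alpha feature gate to the kubelet flags (illustrative file location).
source /etc/default/kubelet
KUBELET_OPTS="$KUBELET_OPTS --feature-gates=Accelerators=true"
echo "KUBELET_OPTS=$KUBELET_OPTS" > /etc/default/kubelet
# After restarting the kubelet, confirm that GPUs appear in node capacity.
kubectl get nodes -o yaml | grep "alpha.kubernetes.io/nvidia-gpu"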

The nodes will automatically discover and expose all Nvidia GPUs as a schedulable resource.

API

Nvidia GPUs can be consumed via container-level resource requirements using the resource name alpha.kubernetes.io/nvidia-gpu.

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: gpu-container-1
    image: gcr.io/google_containers/pause:2.0
    resources:
      limits:
        alpha.kubernetes.io/nvidia-gpu: 2 # requesting 2 GPUs
  - name: gpu-container-2
    image: gcr.io/google_containers/pause:2.0
    resources:
      limits:
        alpha.kubernetes.io/nvidia-gpu: 3 # requesting 3 GPUs
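
Assuming the manifest above is saved as gpu-pod.yaml (an illustrative file name), it can be created and inspected as follows. Note that all containers of a pod land on the same node, so this pod is only scheduled onto a node with at least 5 allocatable GPUs (2 + 3).

kubectl create -f gpu-pod.yaml
kubectl describe pod gpu-pod   # stays Pending if no node can satisfy the GPU request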

If your nodes contain different types of GPUs, use Node Labels and Node Selectors to schedule pods onto nodes with the appropriate GPU type. The following is an illustration of this workflow:

As part of your Node bootstrapping, identify the GPU hardware type on your nodes and expose it as a node label.

# Read the model name of the first GPU and replace spaces with dashes
# so the result is a valid label value (e.g. "Tesla K80" becomes "Tesla-K80").
NVIDIA_GPU_NAME=$(nvidia-smi --query-gpu=gpu_name --format=csv,noheader --id=0 | sed -e 's/ /-/g')
# Append the label to the kubelet flags (here kept in /etc/default/kubelet).
source /etc/default/kubelet
KUBELET_OPTS="$KUBELET_OPTS --node-labels='alpha.kubernetes.io/nvidia-gpu-name=$NVIDIA_GPU_NAME'"
echo "KUBELET_OPTS=$KUBELET_OPTS" > /etc/default/kubelet

Specify the GPU types a pod can use via Node Affinity rules.

kind: Pod
apiVersion: v1
metadata:
  name: gpu-pod
  annotations:
    scheduler.alpha.kubernetes.io/affinity: >
      {
        "nodeAffinity": {
          "requiredDuringSchedulingIgnoredDuringExecution": {
            "nodeSelectorTerms": [
              {
                "matchExpressions": [
                  {
                    "key": "alpha.kubernetes.io/nvidia-gpu-name",
                    "operator": "In",
                    "values": ["Tesla-K80", "Tesla-P100"]
                  }
                ]
              }
            ]
          }
        }
      }
spec:
  containers:
  - name: gpu-container-1
    image: gcr.io/google_containers/pause:2.0
    resources:
      limits:
        alpha.kubernetes.io/nvidia-gpu: 2
This ensures that the pod is scheduled onto a node that has a Tesla K80 or a Tesla P100 Nvidia GPU.
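
To see which nodes would satisfy this rule, you can query nodes by the same label. The dashed values assume the labeling script above, which replaces spaces in the GPU name with dashes:

kubectl get nodes -l 'alpha.kubernetes.io/nvidia-gpu-name in (Tesla-K80,Tesla-P100)'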

Warning

The API presented here will change in an upcoming release to better support GPUs, and hardware accelerators in general, in Kubernetes.

Access to CUDA libraries

As of now, CUDA libraries are expected to be pre-installed on the nodes.

Depending on how the driver was installed, the library directories may have restrictive permissions; to mitigate this, you can copy the libraries to a more permissive folder under /var/lib/ or change the permissions directly. (Future releases are expected to perform this step automatically.)
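
A minimal sketch of that workaround, assuming the driver placed its libraries under /usr/lib/nvidia-375 (the exact directory depends on the installed driver version):

# Option 1: copy the libraries to a world-readable location under /var/lib/.
mkdir -p /var/lib/nvidia
cp -r /usr/lib/nvidia-375/. /var/lib/nvidia/
# Option 2: relax permissions on the original directory instead.
chmod -R a+rX /usr/lib/nvidia-375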

Pods can access the libraries using hostPath volumes.

kind: Pod
apiVersion: v1
metadata:
  name: gpu-pod
spec:
  containers:
  - name: gpu-container-1
    image: gcr.io/google_containers/pause:2.0
    resources:
      limits:
        alpha.kubernetes.io/nvidia-gpu: 1
    volumeMounts:
    - mountPath: /usr/local/nvidia/bin
      name: bin
    - mountPath: /usr/lib/nvidia
      name: lib
  volumes:
  # Host paths assume Nvidia driver version 375; adjust them to match your driver.
  - hostPath:
      path: /usr/lib/nvidia-375/bin
    name: bin
  - hostPath:
      path: /usr/lib/nvidia-375
    name: lib
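
Once the pod above is running, a quick way to check that the host libraries are visible inside the container (pod name and mount paths taken from the example):

kubectl exec gpu-pod -- ls /usr/lib/nvidia /usr/local/nvidia/bin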
