This page covers how to get started with deploying Kubernetes on vSphere and details for how to configure the vSphere Cloud Provider.
Kubernetes ships with the vSphere Cloud Provider, a cloud provider for vSphere that allows Kubernetes Pods to use enterprise-grade vSphere storage.
To deploy Kubernetes on vSphere and use the vSphere Cloud Provider, see Kubernetes-Anywhere.
Detailed steps can be found on the getting started with Kubernetes-Anywhere on vSphere page.
The vSphere Cloud Provider allows Kubernetes to use vSphere-managed, enterprise-grade storage, including volumes, persistent volumes, and storage classes with dynamic volume provisioning.
For more detail visit vSphere Storage for Kubernetes Documentation.
Documentation for how to use vSphere managed storage can be found in the persistent volumes user guide and the volumes user guide.
Examples can be found here.
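As an illustration, dynamic provisioning with the vSphere Cloud Provider is typically driven by a StorageClass that uses the `kubernetes.io/vsphere-volume` provisioner. A minimal sketch follows; the class name, `diskformat` value, and datastore name are example values:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast                  # example class name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick     # thin, zeroedthick, or eagerzeroedthick
  datastore: SharedDatastore1 # example; defaults to the datastore in vsphere.conf if omitted
```

A PersistentVolumeClaim referencing this class will then have its VMDK provisioned on the specified datastore.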
If a Kubernetes cluster has not been deployed using Kubernetes-Anywhere, follow the instructions below to enable the vSphere Cloud Provider. These steps are not needed when using Kubernetes-Anywhere; they are performed as part of the deployment.
Step-1 Create a VM folder and move Kubernetes Node VMs to this folder.
Step-2 Make sure Node VM names comply with the regex `[a-z](([-0-9a-z]+)?[0-9a-z])?(\.[a-z0-9](([-0-9a-z]+)?[0-9a-z])?)*`. If Node VMs do not comply with this regex, rename them to be compliant.

Node VM name constraints:

- VM names cannot begin with a number.
- VM names cannot contain capital letters, or any special characters except `.` and `-`.
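As a quick sanity check, a candidate VM name can be tested against the pattern with `grep -E` before renaming; the name below is a made-up example:

```shell
# Check an example Node VM name against the required pattern.
regex='^[a-z](([-0-9a-z]+)?[0-9a-z])?(\.[a-z0-9](([-0-9a-z]+)?[0-9a-z])?)*$'
name='k8s-node-01'   # example name; substitute your VM's name
if printf '%s\n' "$name" | grep -Eq "$regex"; then
  echo "compliant"
else
  echo "rename required"
fi
```

A name such as `K8sNode` would print `rename required`, since capital letters are not allowed.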
Step-3 Enable disk UUID on Node virtual machines.
The `disk.EnableUUID` parameter must be set to `TRUE` for each Node VM. This step is necessary so that the VMDK always presents a consistent UUID to the VM, allowing the disk to be mounted properly.
For each of the virtual machine nodes that will participate in the cluster, follow the steps below using the govc tool.
Set up the GOVC environment:

```shell
export GOVC_URL='vCenter IP OR FQDN'
export GOVC_USERNAME='vCenter User'
export GOVC_PASSWORD='vCenter Password'
export GOVC_INSECURE=1
```
Find the Node VM paths:

```shell
govc ls /datacenter/vm/<vm-folder-name>
```
Set `disk.EnableUUID` to true for all VMs:

```shell
govc vm.change -e="disk.enableUUID=1" -vm='VM Path'
```
Note: If Kubernetes Node VMs are created from a template VM, then `disk.EnableUUID=1` can be set on the template VM. VMs cloned from this template will automatically inherit this property.
Step-4 Create and assign Roles to the vSphere Cloud Provider user and vSphere entities.
Note: If you want to use the Administrator account, this step can be skipped.
The vSphere Cloud Provider requires the following minimal set of privileges to interact with vCenter. Please refer to the vSphere Documentation Center for the steps to create a custom role, a user, and a role assignment.
| Roles | Privileges | Entities | Propagate to Children |
|---|---|---|---|
| manage-k8s-node-vms | Resource.AssignVMToPool, System.Anonymous, System.Read, System.View, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.RemoveDisk, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.Delete | Cluster, Hosts, VM Folder | Yes |
| manage-k8s-volumes | Datastore.AllocateSpace, Datastore.FileManagement, System.Anonymous, System.Read, System.View | Datastore | No |
| k8s-system-read-and-spbm-profile-view | StorageProfile.View, System.Anonymous, System.Read, System.View | vCenter | No |
| ReadOnly | System.Anonymous, System.Read, System.View | Datacenter, Datastore Cluster, Datastore Storage Folder | No |
Step-5 Create the vSphere cloud config file (`vsphere.conf`). A cloud config template can be found here.
This config file needs to be placed in a shared directory accessible from the kubelet container, the controller-manager pod, and the API server pod.
`vsphere.conf` for the Master Node:

```
[Global]
user = "vCenter username for cloud provider"
password = "password"
server = "IP/FQDN for vCenter"
port = "443" # Optional
insecure-flag = "1" # Set to 1 if the vCenter uses a self-signed certificate
datacenter = "Datacenter name"
datastore = "Datastore name" # Datastore to use for provisioning volumes using storage classes/dynamic provisioning
working-dir = "vCenter VM folder path in which node VMs are located"
vm-name = "VM name of the Master Node" # Optional
vm-uuid = "UUID of the Node VM" # Optional
[Disk]
scsicontrollertype = pvscsi
```
Note: The `vm-name` parameter was introduced in the 1.6.4 release. Both `vm-uuid` and `vm-name` are optional parameters. If `vm-name` is specified, then `vm-uuid` is not used. If neither is specified, the kubelet will get the vm-uuid from `/sys/class/dmi/id/product_serial` and query vCenter to find the Node VM's name.
`vsphere.conf` for Worker Nodes (only applicable to releases 1.6.4 and above; for older releases, this file should have all the parameters specified in the Master Node's `vsphere.conf` file):

```
[Global]
vm-name = "VM name of the Worker Node"
```
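For concreteness, a filled-in Master Node `vsphere.conf` might look like the sketch below; every value here is a made-up example and must be replaced with your environment's details:

```
[Global]
user = "k8s-vcp@vsphere.local"   # example cloud provider user
password = "ExamplePassword"
server = "10.0.0.1"              # example vCenter address
port = "443"
insecure-flag = "1"
datacenter = "Datacenter1"
datastore = "SharedDatastore1"
working-dir = "kubernetes"       # example VM folder containing the node VMs
[Disk]
scsicontrollertype = pvscsi
```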
Below is a summary of the supported parameters in the `vsphere.conf` file:

- `user` is the vCenter username for the vSphere Cloud Provider.
- `password` is the password for the vCenter user specified with `user`.
- `server` is the vCenter Server IP or FQDN.
- `port` is the vCenter Server port. Defaults to 443 if not specified.
- `insecure-flag` is set to 1 if vCenter uses a self-signed certificate.
- `datacenter` is the name of the datacenter in which the Node VMs are deployed.
- `datastore` is the default datastore used for provisioning volumes using storage classes/dynamic provisioning. If the datastore is located in a storage folder or is a member of a datastore cluster, specify the full datastore path, and make sure the vSphere Cloud Provider user has the Read privilege on the datastore cluster or storage folder so it can find the datastore. For a datastore located in a datastore cluster, specify it as `datastore = "DatastoreCluster/datastore1"`; for a datastore located in a storage folder, specify it as `datastore = "DatastoreStorageFolder/datastore1"`.
- `vm-name` is a recently added, optional configuration parameter. When it is present, the `vsphere.conf` file on the worker node does not need vCenter credentials. Note: `vm-name` was added in release 1.6.4; prior releases do not support this parameter.
- `working-dir` can be set to empty (`working-dir = ""`) if the Node VMs are located in the root VM folder.
- `vm-uuid` is the VM instance UUID of the virtual machine. It can be set to empty (`vm-uuid = ""`); if empty, it will be retrieved from the `/sys/class/dmi/id/product_serial` file on the virtual machine (requires root access). `vm-uuid` needs to be set in this format: `423D7ADC-F7A9-F629-8454-CE9615C810F1`. It can be retrieved from the Node virtual machines using the following command (the value will be different on each node VM):

```shell
cat /sys/class/dmi/id/product_serial | sed -e 's/^VMware-//' -e 's/-/ /' | awk '{ print toupper($1$2$3$4 "-" $5$6 "-" $7$8 "-" $9$10 "-" $11$12$13$14$15$16) }'
```
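To see what the `product_serial` pipeline above does, it can be run against a sample serial string (the value below is a made-up example); it strips the `VMware-` prefix and rewrites the spaced hex bytes into the canonical UUID format:

```shell
# Simulate /sys/class/dmi/id/product_serial with a made-up sample value.
echo 'VMware-42 3d 7a dc f7 a9 f6 29-84 54 ce 96 15 c8 10 f1' \
  | sed -e 's/^VMware-//' -e 's/-/ /' \
  | awk '{ print toupper($1$2$3$4 "-" $5$6 "-" $7$8 "-" $9$10 "-" $11$12$13$14$15$16) }'
# Prints: 423D7ADC-F7A9-F629-8454-CE9615C810F1
```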
Step-6 Add flags to the controller-manager, API server, and kubelet to enable the vSphere Cloud Provider. Add the following flags to the kubelet running on every node and to the controller-manager and API server pod manifest files:

```
--cloud-provider=vsphere
--cloud-config=<Path of the vsphere.conf file>
```
Manifest files for the API server and controller-manager are generally located at `/etc/kubernetes/manifests`.
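In a static pod manifest, the two flags are appended to the container's command. The fragment below sketches the relevant portion of a controller-manager manifest; the file path, config path, and surrounding fields are illustrative, and other flags are omitted:

```yaml
# Illustrative fragment of /etc/kubernetes/manifests/kube-controller-manager.yaml
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --cloud-provider=vsphere
    - --cloud-config=/etc/kubernetes/vsphere.conf  # example path to the shared config
```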
Step-7 Restart the kubelet on all nodes:

```shell
systemctl daemon-reload
systemctl restart kubelet.service
```
Note: After enabling the vSphere Cloud Provider, Node names will be set to the VM names from the vCenter Inventory.
Please visit known issues for the list of major known issues with the Kubernetes vSphere Cloud Provider.
For quick support, please join VMware Code Slack (kubernetes) and post your question.
| IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level |
|---|---|---|---|---|---|---|
| VMware vSphere | Kube-anywhere | Photon OS | Flannel | docs | | Community (@abrarshivani), (@kerneltime), (@BaluDontu), (@luomiao), (@divyenpatel) |
If you identify any issues/problems using the vSphere cloud provider, you can create an issue in our repo - VMware Kubernetes.
For support level information on all solutions, see the Table of solutions chart.