My world is usually vSphere based, by which I mean my Kubernetes deployments, Java / Spring apps and virtual machines all sit on the same hypervisor backed by the same consistent storage and network regardless of the higher levels of abstraction. However, sometimes I need to venture out into the cloud to do things…
Luckily, deploying a Tanzu Kubernetes cluster is much the same whether the target is vSphere or AWS. The control plane is built in for vSphere 7, and on AWS it is an easy deploy using the TKG CLI or web wizard. Once the control plane is in place, it's just a case of
```shell
tkg create cluster my-cluster -p prod -c 3 -w 5
```
And grab a coffee. Now that I have my Kubernetes control plane, everything is smooth sailing and abstracted... well, not entirely: for a K8S cluster to be useful it really needs some form of data persistence. I use vSAN and the VMware CNS provider for my home lab, but on AWS it is a little different; luckily, not too complicated.
AWS IAM permissions
If I were deploying OSS K8S directly onto EC2 instances, there would be some complexity in attaching the appropriate IAM permissions to both the worker and control plane EC2 instances. For reference, this is what would be required. Luckily I'm using TKG, and this is handled for me as part of the cluster setup.
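As a rough illustration (abridged, not the full policy the cloud provider documents), the worker-node side of that IAM policy needs the EC2 volume actions so Kubernetes can attach and detach EBS volumes:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeVolumes",
        "ec2:CreateVolume",
        "ec2:DeleteVolume",
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    }
  ]
}
```

The control plane nodes need a broader set of permissions again (load balancers, routes and so on); with TKG none of this needs to be hand-crafted.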
Creating a StorageClass
A K8S StorageClass lets the Kubernetes admin describe the different 'classes' of storage available to a given Kubernetes cluster. These could map to whether the storage is, say, encrypted at rest or high performance, or perhaps has some kind of automatic backup applied. For vSphere CNS, the storage class would also map to a vSphere Storage Policy. On AWS the standard type of storage is gp2, which is a type of AWS EBS (Elastic Block Store) volume.
Here’s an example of defining EBS:
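A minimal sketch of that StorageClass, using the in-tree AWS EBS provisioner and the gp2 volume type (the class name `standard` is the one the rest of this walkthrough assumes):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2      # standard general-purpose EBS volume type
  fsType: ext4   # filesystem created on the volume when first mounted
```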
Once I have a StorageClass, I can define a Persistent Volume. A PV is an actual piece of storage provisioned on the backing 'storage array' (represented by the StorageClass). This is what gets mounted into a Kubernetes POD: it has a name that an application can explicitly ask for (or the app can pick up the default storage class for the K8S cluster), a size, and an access mode (ReadWriteOnce / ReadWriteMany / ReadOnlyMany). EBS only supports ReadWriteOnce, meaning the volume can be mounted read-write by a single worker node.
If I make a PVC without a PV defined, AWS will dynamically create an EBS volume for me. This is mostly fine unless I want to explicitly set things like IOPS or encryption, in which case I need to create the EBS volume in AWS first and then define the PV inside my cluster. I'm using the dynamic method, but for reference, here is a sample yaml to create the PV.
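A sketch of a hand-created PV pointing at a pre-existing EBS volume (the volume ID is a placeholder, and the `usage: mysql-pv` label is an illustrative name a claim could select on):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    usage: mysql-pv          # label a PVC selector can match on
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce          # the only mode EBS supports
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # placeholder: the EBS volume created in AWS
    fsType: ext4
```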
A PVC is the part of my application definition that makes a claim on that storage, using either the dynamic provisioner or a pre-defined PV. In my spec section, I'm using the storage class name, which results in dynamic provisioning of the volume on EBS.
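A sketch of that dynamic claim (the claim name is illustrative); naming the `standard` StorageClass is enough to have an EBS volume provisioned on demand:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: standard   # triggers dynamic provisioning on EBS
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi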
Sample using a selector, which maps to the persistent volume example:
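A sketch of the selector variant, assuming a pre-created PV carrying a `usage: mysql-pv` label (an illustrative label name):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  selector:
    matchLabels:
      usage: mysql-pv   # bind to the PV carrying this label
```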
Either way, I now have a StorageClass, Persistent Volume (maybe) and a Persistent Volume Claim for my application to store data onto.
It is worth mentioning that a PV's lifecycle is not tied to a POD's lifecycle: if a POD is re-deployed, the PVC is simply reattached to the re-scheduled POD.
Now for a quick test... what else but a temporary WordPress site. I've wrapped a few things into the two deployment files: front-end and back-end are separate yaml files, and because I'm using the dynamic provisioner you would need to clean up the PVs if you want to re-deploy (the PVs are set up to retain data rather than delete it).
Make sure we set up the MySQL password:
```shell
kubectl create secret generic mysql-pass --from-literal=password='fred'
```
Now let's deploy the backend:
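A sketch of what that backend file could contain, loosely following the well-known Kubernetes WordPress tutorial (image tag and resource names are illustrative): a headless Service, the dynamic PVC, and a MySQL Deployment reading its root password from the `mysql-pass` secret created above.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None              # headless: pods reach MySQL by DNS name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: standard   # dynamically provisions an EBS volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate             # RWO volume: never run two replicas at once
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6     # illustrative version
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
```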
Once that's up, we can do the front-end:
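The front-end file follows the same shape (again a sketch with illustrative names and image tag): a LoadBalancer Service for outside access, a second dynamic PVC, and the WordPress Deployment pointed at the MySQL Service by name.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer           # on AWS this provisions an ELB
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:5.4-apache   # illustrative version
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql    # the backend Service name
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim
```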
One of the nice things about K8S is being able to combine several configuration files (.yaml) into a single file if you want to. Equally, you can break them out into separate files and simply perform a kubectl apply -f ./ on the whole subdirectory.
Check it's all working:
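A few commands along these lines will confirm the claims bound, the pods are running, and give you the ELB address to browse to:

```shell
kubectl get pvc,pv          # claims should show Bound against dynamically provisioned volumes
kubectl get pods            # mysql and wordpress pods should be Running
kubectl get svc wordpress   # grab the EXTERNAL-IP (ELB hostname) and open it in a browser
```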
… Okay.. So how would I do all this on vSphere with Tanzu Kubernetes Grid?
Simple: remember that storage class we defined right at the top of this article? Change just that file, keep the class name (standard), and use the VMware CNS provisioner instead of the AWS one; job done.
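A sketch of the swapped-in class, using the vSphere CSI provisioner (the storage policy name is an example; it maps to whatever vSphere Storage Policy you want backing this class):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard                          # same class name, so nothing else changes
provisioner: csi.vsphere.vmware.com       # VMware CNS/CSI instead of kubernetes.io/aws-ebs
parameters:
  storagepolicyname: "vSAN Default Storage Policy"   # example vSphere Storage Policy
```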