The acronym stands for ‘virtual Storage Area Network’. It is a storage service provided by the in-kernel communication of ESXi cluster hosts with locally attached disks. Data is stored as policy-based storage objects in the vSAN datastore for consumers like virtual machines and containers.
I love vSAN, especially because of its versatility. I first came into contact with this forward-looking solution in 2014, right after its inception and the end of life of the vSphere Storage Appliance (VSA), and I have thoroughly enjoyed learning the ins and outs of its capabilities ever since. Soon thereafter, we identified the first use case within the company; more on that later.
The company provided health services all over Europe under strict SLAs. Our team, the system operations center, was responsible for the IT infrastructure of many locations in Germany and for the main data centers.
… consequently, my journey into production with this technology began.
vSphere with Tanzu, the easy and integrated way to run Kubernetes in enterprise environments, is currently gaining a lot of traction. One of the main benefits of this solution is the transparent way it consumes already existing storage resources.
So, this article describes the different possibilities and essential features that enable you to consume persistent storage in your Kubernetes-based container applications.
The Tanzu Way
Tanzu comes in different editions. vSphere Enterprise Plus is mandatory for your base ESXi cluster; on top of that, an add-on license in one of the currently three available Tanzu editions — Basic, Standard, or Advanced — makes everything possible. With the licensing in place, you enable Tanzu Workload Management in vCenter.
There are some prerequisites, such as a supported and configured networking and load-balancing solution. Furthermore, a lot of different architectural options and design decisions have to be settled.
Either way, you need storage resources to provide persistent storage for the Supervisor Cluster on the one hand and, on the other, for the workload clusters that host your modern application landscape.
Tanzu also means you can operate virtual machines alongside Kubernetes clusters with the same interface, resources, and transparency you have known for years. Ultimately, this is the way to your on-premises hybrid cloud environment.
vSphere Storage Resources
Basically, all types of shared storage in vSphere are also supported in Tanzu. On the one hand there are NFS shares (NAS), FC or iSCSI LUNs (SAN), and the more exotic vVols (SAN/NAS); on the other, the fully integrated way via vSAN (HCI) with special features on top.
Properly configured storage policies are mandatory for using storage in Tanzu. Depending on the type of storage, you can utilize various adjustable policy-based features like IOPS limits.
Of course, you can create countless different storage policies and define your own schema to match your requirements exactly. A popular convention is to name the tiers Gold, Silver, and Bronze, depending on the performance and availability demands.
Provisioning Storage for Tanzu Guest Cluster
The consumption of storage in Kubernetes is straightforward through the abstraction and automatic conversion of storage policies to storage classes.
Storage classes are what you consume in Kubernetes to provide your persistent volumes through persistent volume claims.
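As a rough sketch of how this looks from the developer's side: a persistent volume claim simply references one of the published storage classes. The class name `vsan-gold-storage-policy` below is made up for illustration; the actual names are derived from the storage policies your vSphere admin assigned, and `kubectl get storageclass` in the guest cluster shows what is really available.

```yaml
# Hypothetical PVC against a storage class generated from a
# vSphere storage policy; adjust storageClassName to what
# `kubectl get storageclass` reports in your guest cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce            # block-backed volumes are single-writer
  storageClassName: vsan-gold-storage-policy
  resources:
    requests:
      storage: 5Gi             # size request is carved out of the policy-backed datastore
```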
vSphere provides an effortless way to group workload clusters into vSphere Namespaces. The vSphere admin retains full governance and furnishes these namespaces with the appropriate resources for the developers.
Besides access policies through vSphere Single Sign-On (SSO), you also attach your storage policies to the vSphere Namespaces, and you are ready to rock.
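From there on, consuming the storage is ordinary Kubernetes. A minimal sketch, assuming a claim named `demo-data` (a hypothetical name) has already been bound to a policy-backed storage class in the namespace:

```yaml
# Hypothetical pod mounting a persistent volume that was
# provisioned through a storage-policy-derived storage class.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: web
      image: nginx:stable
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # PV appears as a normal mount
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-data                 # assumed to exist in this namespace
```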
Maximum Integration with vSAN
Maximum integration and availability through awesome features that come with vSphere and vSAN 7 U3!
vSAN is now capable of supplying NFS and SMB file services in an easy and automated way. These file services are now fully integrated in vSphere with Tanzu and provide ReadWriteMany (RWX) volumes for container services.
This is a giant leap forward that makes life easier for both the vSphere admin and the developer. Different containers can read from and write to the same persistent volume (PV).
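Sketched as a claim, the only change compared to a block-backed volume is the access mode. This assumes a file-service-backed storage class is published in the namespace; the class name below is invented for illustration:

```yaml
# Hypothetical RWX claim served by vSAN file services (NFS).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-content
spec:
  accessModes:
    - ReadWriteMany            # RWX: multiple pods can mount read-write
  storageClassName: vsan-file-services   # assumed class name, check your cluster
  resources:
    requests:
      storage: 20Gi
```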
Moreover, vSAN stretched cluster / fault domain functionality is partially supported for Kubernetes environments. VMware’s R&D is working heavily in the background, designing and delivering new features as soon as possible.
Media, Resources, and Call to Action
Do you want to hear more? In September 2021, we launched our Podcast (German):
Thanks to the best colleagues in the world — Jan Philip Hoepfner, Daniel Rusche, and many more from Medialine Group — for making that possible.
To sum up, Tanzu is one of our core topics, and we have already recorded several episodes, with more planned and on the way. A new episode is guaranteed every 14 days.
Finally, we appreciate your feedback, comments, and your thumbs up on our various platforms.
Furthermore, you can see my great colleagues and me speaking at our free interface workshop in April 2022 — in Dresden and Berlin, with remote attendance possible. :)
vSphere 7 U2 was released on the 9th of March 2021 and brought a bunch of nice features for various use cases.
Soon after the release I was able to upgrade my lab vCenter from 7 U1 to U2 — easy through the VAMI: https://vcenter.fqdn:5480. Then I deployed a couple of nested ESXi 7 U2 hosts and got acquainted with the new interface.
If you break the update down, improvements for vCenter, ESXi, and vSAN emerge.
Now let’s start with an overview and proceed to my top features and their content.
Finally, five years after the release of vSphere 6 in 2015, the time has come for the next level of the first-class enterprise hypervisor ESXi and its management environment vCenter Server. After several announcements, the new version 7.0 went GA and became publicly downloadable at the beginning of April 2020. Meanwhile, there has been plenty of time to stage the first upgrades and evaluate the new features. Read on and check out my insights…
In reference to my prior article vSAN – High Available Solution, this article will give a brief overview of what is possible with vSAN.
My favorite features:
Easy to install and manage: this kernel-based solution requires an existing vSphere environment, or you follow the greenfield approach.
Scalability – assign or remove single disks to/from disk groups – scale up to 64 hosts per cluster.
SPBM (Storage Policy Based Management) offers a rich set of options assignable to individual sets of VMs. It defines redundancy of up to n+3, and on top of it a stretched cluster is possible. A configurable IOPS limit keeps the noisy neighbor away.
In addition, performance – all-flash is included even with the smallest license.
Compression, deduplication, and encryption are sometimes required and can be activated on the vSAN datastore layer.
NVMe – the super-fast PCIe-based technology is supported; simply use this kind of SSD for read/write caching.
Moreover, utilize standard x86 server hardware from any vendor; no complex SAN is required for high availability. Compute, storage, and even network … all in one box.
Small=Big – start with an all-flash two-node direct-connect design. No expensive 10 Gbit+ switches have to be attached.
Certainly, it integrates with multicloud and automation – VMware Cloud on AWS runs the vSphere stack.
In conclusion, this architecture is the answer to various kinds of requirements: availability, performance, manageability, security, recoverability, and compliance. However, as always, it depends.
I would appreciate hearing your top feature in the comments!
Next:
Subsequently, I will write about the “long walk” again. Attendance at Mammutmarsch Berlin – 25.05.2019, 24 h, 100 km – is confirmed.
After one pass and one fail (Mammutmarsch/Megamarsch Munich), the sails have been set for another exhausting “why in the hell do I do this again” event xD.
In reference to my prior article vSAN – High Available Solution, this article will explain the basic architecture and give you some insight into the technical details.
I have got a new topic for you called VMware vSAN, which stands for virtual storage area network. This article is an introduction to give you a picture of the matter.
In the next part of this series I plan to write about all the features that come with it. Afterwards, the architecture will be explained with a real-life example. In the end I plan to really dive deep: a best-practice series will follow that summarizes my experience and hopefully helps some desperate souls.
A different approach:
So to start: in the old days, SAN was a very complex architecture consisting of storage arrays, network switches, Fibre Channel fabrics (a redundant array of SAN switches), storage controllers, disk shelves, and a lot of cabling. These silos had their own complex management, and you needed a bunch of experts to implement the custom solution. In the end, all you got was some (highly available) mass storage that you could access via different protocols like CIFS, NFS, and iSCSI over Ethernet, or Fibre Channel over a dedicated storage network. Mostly there was no quality of service, and it was very expensive. Furthermore, it was the era of spinning disks, which were power-hungry, high-latency, and error-prone.
Besides the “old” dedicated storage approach, another solution was invented. Nowadays you get not only storage but the whole stack — compute, network, and storage — on just a set of conventional servers with local (flash) disks. All components are abstracted and managed by software. The term software-defined data center — or in our case, software-defined storage — emerged. This package offers a new idea of an economical, scalable, low-latency, high-performance, compact, easy-to-administer, maintainable, secure, feature-rich, and future-proof solution.
This solution is called hyper-converged infrastructure, and familiar software/hardware vendors that I have come into contact with are Dell/VMware, Nutanix, and SimpliVity.
As mentioned above I will start with the features of vSAN 6.6 in the next blog article so stay tuned.