After nearly 8 years at NetApp, I decided to leave and take up a pre-sales position with Tintri Inc in Australia/New Zealand, starting on the 17th of February. (For those who don't know Tintri, I will cover them later in the post.)
During my time at NetApp I made some lifelong friends and had the opportunity to work with some of the best people in the industry, but for me it was time to move on and seek out new challenges.
I am very excited about the opportunity and about joining at a time when Tintri is starting out in ANZ. Lots to do in the first few weeks, including building a demo environment once my shiny new Tintri T620 arrives.
So, who are Tintri? If you’re a vExpert you will have probably received a new Polo from them every year.
Tintri started out in 2008 in Mountain View, CA with the vision of creating a storage platform that was able to provide a better synergy between virtualisation and storage.
Using the latest advancements in flash, processing and networking, they came up with 'VMstore', which is still the industry's only VM-aware storage appliance for VMware (with KVM and Hyper-V to follow soon).
Being both a virtualisation and storage guy this is a very attractive proposition to me.
Having come from NetApp and worked with other storage vendors in the past (including EMC) one of the main challenges is designing, configuring and tuning traditional storage arrays for virtual environments. If you think about it, these arrays were designed for a very different world where workloads were hosted on physical servers. Now, I’m not saying they don’t work, they just need a lot more management, tuning & tweaking to handle the ever increasing virtual workloads.
Pretty much all storage arrays see the world in terms of volumes and LUNs and have no visibility or awareness of the workloads being placed on them; it's just IO. This is where the disconnect lies. How are you meant to get the best performance and capacity utilisation if you have no real visibility of what the VMs and applications are doing?
Storage should understand and operate at the VM level.
As a storage guy I have spent many hours thinking about how to lay out my datastores (LUNs or volumes) on the available storage. NetApp makes it a little easier as you don't need to think about RAID groups, but it's still complex regardless.
The bottom line is that virtualisation has changed the way data is managed.
The Tintri VMstore is a VM-aware storage architecture. This means it understands and operates on VMs and virtual disks, instead of conventional storage objects such as volumes and LUNs. All storage operations such as snapshots, clones and replication are done at the VM level, which eliminates the need to deal with the underlying complexity of traditional storage.
Where has all the storage complexity gone? Simple. The VMstore is presented to the hypervisor as a single large NFS datastore (currently up to 33.5TB usable per unit). If you need more, just add a new VMstore and away you go. No more LUNs, RAID configuration and so on.
The functionality underlying Tintri VMstore™ can be categorised in three sections:
1. Storage intelligence: Delivering performance, density and scalability without the complexity.
2. Infrastructure insight: Delivering a complete picture of virtualised workloads.
3. VM control: Delivering VM-granular data management and automation.
Tintri VMstore's approach automatically ensures every VM gets the performance it needs. Expanding storage is simple, as each VMstore appliance appears as an additional, high-capacity datastore in VMware vCenter. This makes it easy to scale and manage each node as part of a VMware Storage DRS cluster, and lets you add capacity without downtime.
Let's see how:
Tintri FlashFirst™ design: VMstore is a hybrid storage solution. It uses a combination of flash-based solid-state devices (SSDs) and high-capacity disk drives for storage. Tintri's patented FlashFirst design incorporates algorithms for inline deduplication, compression and working set analysis to service more than 99 percent of all IO from flash, for very high levels of throughput and consistent sub-millisecond latencies for both read and write operations.
The FlashFirst design minimises swap between SSD and HDD by leveraging data reduction, in the form of deduplication and compression, to increase the amount of data that can be stored in flash. This is complemented by detailed profiling of all the active VM IO to ensure metadata and active data are kept in high-performance flash. Only cold data is evicted to disk, which does not impact application performance. The design takes advantage of the fact that each VM has an active working set that is only a fraction of the overall VM. A flash-only approach, by contrast, means all data must be stored on high-performance (and expensive) flash, whether it needs to be there or not.
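To make the working-set idea concrete, here is a toy Python sketch (my own illustration, not Tintri's actual implementation) of a flash-first hybrid tier: blocks are deduplicated by content hash, the hot working set stays in flash in recency order, and only the coldest blocks are evicted to disk.

```python
import hashlib
from collections import OrderedDict

class FlashFirstSketch:
    """Toy model of a flash-first hybrid tier (illustrative only)."""

    def __init__(self, flash_capacity_blocks):
        self.capacity = flash_capacity_blocks
        self.flash = OrderedDict()   # hash -> block data (hot working set)
        self.disk = {}               # hash -> block data (cold data)

    def write(self, block: bytes):
        key = hashlib.sha256(block).hexdigest()
        if key in self.flash:
            self.flash.move_to_end(key)   # dedup hit: just refresh recency
            return key
        self.disk.pop(key, None)          # promote if it was previously cold
        self.flash[key] = block
        while len(self.flash) > self.capacity:
            cold_key, cold_block = self.flash.popitem(last=False)
            self.disk[cold_key] = cold_block   # evict coldest block to disk
        return key

    def read(self, key):
        if key in self.flash:
            self.flash.move_to_end(key)   # served from flash
            return self.flash[key], "flash"
        return self.disk[key], "disk"

tier = FlashFirstSketch(flash_capacity_blocks=2)
a = tier.write(b"block-A")
b = tier.write(b"block-B")
tier.write(b"block-A")        # dedup: no new flash space consumed
tier.write(b"block-C")        # capacity exceeded: coldest block (B) evicted
print(tier.read(a)[1])        # flash
print(tier.read(b)[1])        # disk
```

The point of the sketch is the asymmetry: the active working set is served entirely from flash, while cold data quietly ages out to cheap disk.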
Unlike flash-only products, 100 percent of the operational flash capacity on a Tintri VMstore can be used without concern about running out of space and having applications come to a screeching halt. In addition, the Tintri VMstore is operationally far simpler and more cost-effective than flash-only products.
Traditional storage systems often incorporate flash to an existing disk-based architecture, using it as a cache or bolt-on tier, while continuing to use disk IO as part of the basic data path. In comparison, VMstore services 99 percent IO requests directly from flash, thereby achieving dramatically lower flash-level latencies, while delivering the cost advantages of disk storage.
Tintri's innovative FlashFirst design addresses the MLC flash problems that previously made it unsuitable for enterprise environments. Flash suffers from high levels of write amplification due to the asymmetry between the size of the blocks being written and the size of flash erasure blocks. Unchecked, this reduces random write throughput by more than 100 times, introduces latency spikes and dramatically reduces flash lifetime. The FlashFirst design uses a variety of techniques, including deduplication, compression, IO analysis, wear levelling, garbage-collection algorithms, SMART (Self-Monitoring, Analysis and Reporting Technology) monitoring of flash devices and dual-parity RAID 6, to handle write amplification, ensure longevity and safeguard against failures.
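To put the "more than 100 times" figure in context, a quick back-of-the-envelope calculation (with hypothetical but typical block sizes, not Tintri-specific numbers) shows how the asymmetry between write size and erase-block size drives worst-case amplification:

```python
# Illustrative only: why small random writes amplify on MLC flash.
write_size_kib = 4       # a random 4 KiB write from a VM
erase_block_kib = 512    # flash can only erase in large blocks

# Worst case with no mitigation: rewriting a single 4 KiB page can force
# a read-modify-erase-write cycle of an entire erase block.
worst_case_amplification = erase_block_kib / write_size_kib
print(worst_case_amplification)  # 128.0 -> "more than 100 times"
```

Techniques such as coalescing writes via garbage collection exist precisely to keep real-world amplification far below this worst case.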
VM QoS: Tintri VMstore is designed to support a mixed workload of hundreds of VMs, each with a unique IO configuration. As volumes of traffic ebb and flow, the FlashFirst design analyses and tracks the IO for each VM. This enables the system to isolate VMs and to queue and allocate critical resources, such as networking, flash/SSDs and system processing, to individual VMs. The QoS capability is complementary to VMware's performance management capability.
The result is consistent performance where it is needed. All of the VM QoS functionality is transparent, so there is no need to manually tune the array or perform any administrative work.
QoS is critical when storage must support high-performance databases generating plenty of IO alongside latency-sensitive virtual desktops. This is commonly referred to as the noisy-neighbour problem, and it plagues traditional and flash-only storage architectures that lack VM-granular QoS. Tintri VMstore ensures database IO does not starve the virtual desktops, making it possible to serve thousands of VMs from the same storage system.
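As a rough illustration of the idea (my own sketch, not Tintri's actual algorithm), a weighted fair-share allocator shows how VM-granular QoS stops a noisy database from starving small desktop workloads:

```python
# Minimal weighted fair-share sketch of per-VM QoS (illustrative only).
def fair_share(total_iops, demands, weights):
    """Allocate IOPS by weight, capped at each VM's demand; capacity a VM
    does not need is redistributed to still-hungry VMs."""
    alloc = {vm: 0 for vm in demands}
    active = set(demands)
    remaining = total_iops
    while active and remaining > 0:
        total_w = sum(weights[vm] for vm in active)
        next_active = set()
        spent = 0
        for vm in active:
            share = remaining * weights[vm] / total_w
            grant = min(share, demands[vm] - alloc[vm])
            alloc[vm] += grant
            spent += grant
            if alloc[vm] < demands[vm]:
                next_active.add(vm)   # still wants more IOPS
        remaining -= spent
        if spent == 0:
            break
        active = next_active
    return alloc

# Hypothetical workloads: one greedy database, two small desktops.
demands = {"db": 100_000, "desktop1": 500, "desktop2": 500}
weights = {"db": 1, "desktop1": 1, "desktop2": 1}
alloc = fair_share(30_000, demands, weights)
print(alloc)  # desktops get their full 500 IOPS; the db gets the rest
```

The desktops are fully satisfied no matter how much the database demands, which is exactly the isolation property the noisy-neighbour problem lacks.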
Scaling storage: Scaling storage beyond the performance and capacity of a single system is as simple as adding another VMstore system, a task that takes less than two minutes. This building-block approach effectively adds another datastore that can be managed by the virtualisation layer. To tackle the challenge of managing individual VMstore systems, Tintri has created Tintri Global Center. Built on a solid architectural foundation capable of supporting over one million VMs, Tintri Global Center is an intuitive, centralised control platform that lets administrators monitor and administer multiple VMstore systems as one.
When I logged into the management UI for the first time, I have to say I was very surprised (in a good way). It's very clean and uncluttered. A great deal of thought has gone into its design and layout.
Essentially the Dashboard is designed to help you draw quick conclusions about your VMstore’s health, identify problems, and help you make informed resource management decisions. Dashboard numbers are refreshed every 10 seconds, and performance reserves and space changers are refreshed every 10 minutes.
Space changers and performance reserve changers are the VMs that have experienced the largest change in space and reserves within the last week.
You can drill down on a VM, examine its historic or real-time trends, and correlate the time of any IO spikes to suspicious activity in the business application.
In addition, when the datastore is unexpectedly running low on space, refer to the space changers list to find potential culprits.
As I have mentioned before, traditional storage systems provide a performance view from the LUN, volume or file-system standpoint, but cannot isolate VM performance or provide insight into VM-level performance characteristics.
It is difficult for administrators to understand situations such as the impact of a new VM workload, without access to relevant VM performance metrics. In addition, identifying the cause of performance bottlenecks is a time consuming, frustrating and sometimes inconclusive process that requires iteratively gathering data, analysing the data to form a hypothesis and then testing the hypothesis. In large enterprises, this process often involves coordination between several people and departments, typically spanning many days or even weeks.
Tintri provides a complete, comprehensive view of VMs including end-to-end tracking and visualisation of performance across the entire data center infrastructure. This ensures that administrators can procure the critical statistics they need. The goal is ensuring storage performance stays at acceptable levels with minimal latency.
Tintri VMstore monitors every IO request at the vdisk and VM level and can determine whether latency is being incurred at the hypervisor, network or storage level. For each VM and vdisk stored on the system, enterprise IT teams can use VMstore to instantly visualise where potential performance issues may exist, whether on the host, the network or the storage. The latency statistics are displayed in an intuitive format. In an instant, administrators can see the bottleneck, rather than trying to deduce it from indirect measurements and time-consuming detective work.
The hypervisor latencies are obtained using vCenter APIs, while the network, file system and disk latencies are provided by Tintri VMstore, which knows the identity of the corresponding VM for each IO request.
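Conceptually, this is a decomposition of each VM's end-to-end latency into per-layer contributions, where the largest term points at the bottleneck. A small sketch with made-up numbers (the layer names mirror the breakdown described above; the values are hypothetical):

```python
# Hypothetical per-VM latency breakdown, in milliseconds.
vm_latency_ms = {
    "hypervisor": 0.4,   # obtained via vCenter APIs
    "network": 0.2,      # measured by the VMstore
    "filesystem": 0.3,   # measured by the VMstore
    "disk": 2.1,         # e.g. a cold-data read hitting HDD
}

total = sum(vm_latency_ms.values())
bottleneck = max(vm_latency_ms, key=vm_latency_ms.get)
print(f"total={total:.1f} ms, bottleneck={bottleneck}")
# -> total=3.0 ms, bottleneck=disk
```

Because the storage system knows which VM each IO belongs to, this breakdown can be produced per VM and per vdisk rather than per LUN.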
Administrators can detect trends with this data from the VMstore and individual VMs, all without the added complexity of installing and maintaining separate software. This built-in insight can reduce costs and simplify planning activities — especially around virtualising IO-intensive critical applications and end-user desktops.
Tintri VMstore allows all data management operations, such as snapshots, clones and replication, to operate at the VM level. This enables managing large-scale virtual environments down to the vdisk level with complete, granular control. I will cover each of these areas in a further blog post soon.
In addition, all VAAI capabilities for NFS are fully supported, and a management console for the vSphere Web Client will be available very soon.