In follow-up to my colleague’s recent blog (link here), I thought I would add a few words on the subject.
In a Converged Infrastructure stack, storage is one of the areas most subject to change, and it has the greatest impact on the ability to run workloads.
As mentioned in the previous blog post, CIs were developed to help simplify a complex solution stack. How so? Once it’s built, you still need people who understand all layers and are experts in Compute, Network and Storage. Let me tell you from experience, managing a NetApp platform running Clustered ONTAP is not a simple thing (I don’t mean to single NetApp out here, but this was my background for many years).
Having a Reference Architecture is great, but it is only the beginning. How do you adapt your infrastructure as workloads and business requirements change?
If you are a service provider or offer a Private Cloud service to the business, how does your CI need to change to take on additional workloads or when existing workloads change?
This results in having to re-design compute and storage. Compute is the simpler of the two, as you can just add more blades or servers. Storage is not so easy: it has been specifically designed around known quantities (IOPS, throughput, etc.), which translate into spindles, RAID groups, volumes, LUNs, datastores and so on. Take a look at a typical storage layout for a VDI solution in the diagram below.
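To see why this design work is brittle, consider the up-front math a traditional array forces on you. The sketch below uses common rule-of-thumb values (RAID write penalties, per-spindle IOPS), not any vendor’s specs, to show how a workload’s IOPS requirement becomes a spindle count — and why a change in the workload means re-running the whole design:

```python
import math

# Typical RAID write penalties (writes amplified by parity/mirroring).
RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def spindles_required(total_iops, read_fraction, raid="RAID5", disk_iops=180):
    """Estimate backend spindles for a workload.

    total_iops:    front-end IOPS the workload generates
    read_fraction: fraction of IOPS that are reads (0.0 - 1.0)
    disk_iops:     sustainable IOPS per spindle (~180 for a 15K drive
                   is a common rule of thumb)
    """
    reads = total_iops * read_fraction
    writes = total_iops * (1 - read_fraction)
    # Every front-end write costs multiple backend I/Os on parity RAID.
    backend_iops = reads + writes * RAID_WRITE_PENALTY[raid]
    # Round up: you can't buy a fraction of a disk.
    return math.ceil(backend_iops / disk_iops)

# A 5,000 IOPS, 70% read workload on RAID5 needs ~53 spindles.
print(spindles_required(5000, 0.7, raid="RAID5"))
```

Shift that same workload to 50% read, or change the RAID level, and the spindle count — and therefore the RAID groups, volumes and LUNs layered on top — has to be reworked.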
As an administrator how do you know what storage resource is available to run additional VM workloads and how do you get this information?
If a workload is seeing poor performance, how do you go about troubleshooting and isolating the issue?
If I need to add more workload, can I do so without impacting other VMs?
How many different UIs and tools would you need from NetApp and EMC to answer the questions above?
So, how do we make a Converged Infrastructure really live up to the expectations it sells? Make it simple. By this I don’t just mean having a blueprint document to follow that builds something. It needs to be truly simple to deploy and manage.
Make the stack easy for a general IT administrator to manage without needing any in-depth storage skills.
This is where Tintri shines. It eliminates the need to follow any special best practices. Alongside all the advanced functionality comes self-learning technology that implements and allocates resources based on the requirements, so one doesn’t have to do the pre-work in terms of design.
All you have to figure out is the capacity. Once that’s done, the storage can almost be ignored. After deployment it continues to provide customers with deep analytics across the infrastructure, helping them make the right decisions faster.
Unlike a reference architecture, it is dynamic and adjusts to the needs of the applications and VMs, maintaining its benefits across the lifecycle of the deployment. It is self-tuning, and it exposes REST APIs in case customers want to bring in some automation. The automation that can be achieved using the Tintri REST APIs is unmatched, given the kind of analytics at the customer’s disposal.
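As a flavor of what REST-driven automation over per-VM analytics can look like, here is a minimal sketch. The base URL, endpoint path, field names and bearer-token auth below are illustrative assumptions for the example, not Tintri’s documented API; the point is the pattern — pull per-VM stats, then act on them programmatically:

```python
import json
import urllib.request

BASE = "https://vmstore.example.com/api"  # hypothetical appliance address

def fetch_vm_stats(session_token):
    """Fetch per-VM stats from a hypothetical REST endpoint."""
    req = urllib.request.Request(
        f"{BASE}/vm/stats",
        headers={"Authorization": f"Bearer {session_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # expects a list of per-VM stat dicts

def latency_outliers(vm_stats, threshold_ms=10.0):
    """Return VMs whose end-to-end latency exceeds the threshold, worst first."""
    slow = [vm for vm in vm_stats if vm["latency_ms"] > threshold_ms]
    return sorted(slow, key=lambda vm: vm["latency_ms"], reverse=True)

# Example against sample data (the shape fetch_vm_stats is assumed to return):
sample = [
    {"name": "vdi-01", "latency_ms": 2.1},
    {"name": "sql-02", "latency_ms": 18.4},
    {"name": "web-03", "latency_ms": 11.0},
]
for vm in latency_outliers(sample):
    print(vm["name"], vm["latency_ms"])
```

Because the analytics are already broken down per VM, a script like this can feed alerting, chargeback or rebalancing decisions without an administrator touching a storage UI.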
So, I would conclude by stating that Reference Architectures and Converged Infrastructures are great, but what matters is what the Converged Infrastructure is made of, and how those components simplify not only the initial deployment but ongoing operations as well.