The IT landscape is dramatically changing. What was once available only to the largest webscale companies, like Facebook and Google, is now available to any organization looking to implement a simple, affordable, and scalable hyper-converged infrastructure.
What’s made this possible is a dramatic new software architecture for an elastic infrastructure that delivers fast deployment, simple management, pay-as-you-scale flexibility, optimal price/performance and low risk.
Hyper-Converged Elastic Infrastructure Purpose-Built for Microsoft Workloads and the Private Cloud
Gridstore Hyper-Converged Infrastructure Includes:
- HyperConverged Appliances
Gridstore’s 2U Appliance includes both compute and storage. Each appliance has up to four compute/storage nodes, and can be expanded with additional Appliances and/or Storage Nodes up to 256 total nodes. Both all-flash and hybrid versions are available.
- Windows Servers
Any Microsoft Windows Server, with or without Hyper-V, running the Gridstore vController: a software driver that interfaces with Gridstore Storage Nodes and provides simple management and end-to-end control of I/O from server to storage, eliminating the "I/O blender effect" and allowing performance to scale linearly with capacity.
- Storage Nodes
Gridstore offers All-Flash and Hybrid Storage Nodes that start at 4TB per node and scale to up to 256 nodes. The Hybrid Storage Nodes include PCIe flash for caching to deliver high performance. Both work with Windows Servers and HyperConverged Appliances.
What Makes Gridstore Different
Gridstore has solved the hard problem of delivering predictable performance in a virtual environment, something that first-generation Converged and Hyper-Converged systems have not been able to do. By doing this with its unique software architecture, Gridstore is able to deliver an all-flash solution at an affordable price.
End to End
Gridstore HyperConverged Appliance
A Gridstore HyperConverged Appliance delivers tunable, auto-optimized compute and storage resources together in a single system. The Gridstore HyperConverged Appliances improve operational and application performance while saving on both OpEx and CapEx, and offer customers a simple, easy-to-deploy system. HyperConverged Appliances are based on standard x86 servers and come in two versions:
All-Flash HyperConverged Appliance: The all-flash appliance provides up to 5TB of flash capacity and 1TB of flash cache per node to deliver highly deterministic performance for applications such as low-latency VDI and tier-one workloads. Each appliance includes up to four compute/storage nodes:
- Hot-Swap Intel Servers
- 24 Hot-Swap SSDs, 2 Shared PSUs
- Dual Intel Xeon E5-2690 (24 cores, 50MB cache)
- up to 256GB RAM, 2x10GbE
- up to 6TB SSD Capacity
- WS 2012 R2 Data Center + Gridstore
Hybrid HyperConverged Appliance: The hybrid appliance includes flash for cache and SATA drives for capacity. Each appliance includes up to four compute/storage nodes:
- Hot-Swap Intel Servers
- 4 Hot-Swap SSDs + 20 Hot-Swap HDDs
- 2 Shared PSUs
- Dual Intel Xeon E5-2690 (24 cores, 50MB cache)
- Up to 256GB RAM, 2x10GbE
- 5TB HDD Capacity + 1TB SSD Cache
- WS 2012 R2 Data Center + Gridstore
HyperConverged Appliances are particularly well suited to several pressing contemporary IT challenges. A few examples:
- VDI: where each HyperConverged Appliance can support up to 600 desktops in a typical deployment. Since VDI scales in uniform chunks based on the number of desktops being added, a hyper-converged solution is a fast and easy way to scale these deployments.
- Initial Hyper-V deployment: where a HyperConverged Appliance can be installed and up and running the same day, ready for application deployment or migration from existing environments.
- Remote Office/Branch Office (ROBO): deployment of a complete compute and storage solution in a box for those remote or branch offices that need turnkey, easy to deploy and manage systems.
Gridstore storage nodes start at 4TB per node.
Price Performance with Hybrid Storage Nodes
Gridstore’s Hybrid Storage Nodes deliver solid performance through the addition of a write-back cache using a PCIe card with over 500GB of flash, leveraging 2 x 10 GbE connections (or 4 x 1 GbE NICs) to accelerate I/O even further. These Hybrid Storage Nodes include both the PCIe flash and SATA drive capacity for solid performance alongside the capacity you need. Click here for detailed specifications.
Designed for high-performance I/O requirements, Hybrid nodes are ideal for mixed-workload environments, such as multi-tenant deployments with many VMs:
- Exchange or SQL Server databases
- Mixed workloads
- Hyper-V (server virtualization)
A Gridstore solution includes three storage nodes at a minimum, and can scale to 256. You can start with as little as 4TB per node and scale on a node-by-node basis.
Problem: Virtualization Breaks Traditional Storage
For over 60% of enterprises surveyed [1], the number one issue they face is poor application performance in virtualized environments.
The problem at hand is an architectural mismatch between traditional storage and virtualization. In the world of physical servers, the standard practice of ganging a set of disk drives (of the right type and speed) into a RAID set to create a LUN served us well for decades. Each LUN was created with a particular application in mind and was associated with all the appropriate storage services (replication, compression, snapshots, etc.) that the application warranted, based on its importance to the enterprise. All well and good. Later we began serving several applications from the same LUN, and even then, unless we overdid it or the applications were highly erratic, the LUN could serve multiple applications acceptably. If an application was important enough, it got its own LUN and associated services, even if utilization of capacity or performance was sometimes less than ideal. [2]
But then we entered the era of server virtualization, and all hell broke loose. One or a few LUNs serving a multitude of VMs (perhaps several hosts, each with tens of VMs or more, representing a variety of applications and workloads) simply couldn't cut it. The infamous "I/O blender" effect is now well understood: the precisely tuned LUN of yesteryear was wrestling with totally random I/O, arriving in a mishmash from a large number of VMs, overwhelming the storage controllers and bringing the performance of every application to its knees.
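A toy simulation makes the blender effect concrete: three VMs each issue perfectly sequential requests, yet once the hypervisor interleaves them, the offsets arriving at the shared LUN jump between distant disk regions and look random to the array. This is purely illustrative Python, not vendor code; the offsets and block sizes are invented.

```python
# Illustrative sketch (not Gridstore code): how per-VM sequential
# streams blend into random I/O at a shared LUN.

def vm_stream(vm_id, start, count, block=8):
    """Sequential requests from one VM: contiguous block offsets."""
    return [(vm_id, start + i * block) for i in range(count)]

# Three VMs, each writing sequentially to its own region of the LUN.
streams = [vm_stream(vm, vm * 10_000, 4) for vm in range(3)]

# The hypervisor interleaves the streams round-robin before they
# reach storage -- this is the "blender".
blended = [req for batch in zip(*streams) for req in batch]

offsets = [off for _, off in blended]
# Offsets hop between distant regions: effectively random to the array.
print(offsets)  # [0, 10000, 20000, 8, 10008, 20008, ...]
```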
Serious problems produced by the architectural mismatch include:
- Blended I/O causing performance loss for applications
- Lack of visibility into the I/O path making it nearly impossible to resolve performance problems
- All VMs being treated equally making it impossible to prioritize the I/O from one VM over another
Gridstore solves the problem by aligning the storage architecture with virtualization using a patented Server-Side Virtual Controller Technology.
The solution to the problem caused by this architectural mismatch is simple. Design storage using the same principles of virtualization to reestablish the 1:1 relationship between a virtual server and its underlying storage while managing the storage functionality on a per VM basis rather than a LUN, and preserving the capabilities that make virtualization so beneficial. Gridstore achieves this through a patented Server-Side Virtual Controller Technology (SVCT) which follows the same pattern of virtualization. Architectural differentiators of SVCT include:
Virtualized Storage Resource Pool (vPool) – A vPool is a pool of storage resources such as IOPS, bandwidth, and capacity that is expanded as storage nodes are added to the network similar to the way that server resources such as CPU and memory are pooled into a shared resource pool. With each storage node added, their resources are virtualized and added to the vPool.
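The pooling model described above can be sketched in a few lines: each node contributes its IOPS, bandwidth, and capacity to a shared aggregate as it joins, mirroring how CPU and memory are pooled on the compute side. Class names and per-node figures below are illustrative assumptions, not Gridstore specifications.

```python
# Sketch of a vPool as a running aggregate of per-node resources.
# All names and numbers are illustrative.

class StorageNode:
    def __init__(self, iops, gbps, capacity_tb):
        self.iops, self.gbps, self.capacity_tb = iops, gbps, capacity_tb

class VPool:
    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        # Each new node's resources are virtualized into the shared pool.
        self.nodes.append(node)

    @property
    def totals(self):
        return {"iops": sum(n.iops for n in self.nodes),
                "gbps": sum(n.gbps for n in self.nodes),
                "capacity_tb": sum(n.capacity_tb for n in self.nodes)}

pool = VPool()
for _ in range(3):  # a minimum grid of three nodes
    pool.add_node(StorageNode(50_000, 10, 4))
print(pool.totals)  # {'iops': 150000, 'gbps': 30, 'capacity_tb': 12}
```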
Virtual Storage Controller (vController™) – Unlike traditional controllers that operate inside the array, a vController transparently presents a local SCSI device to the hypervisor and operates inside each hypervisor. There is no change to the hypervisor or guest VMs. By operating inside the hypervisor, the vController can isolate I/O from each of the VMs and channel it through a set of virtual storage stacks that map back to each VM. This reestablishes the 1:1 relationship between a virtual machine and its underlying storage (a virtual storage stack) all the way from the VM through to storage resources in a vPool.
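Conceptually, the per-VM isolation works like the sketch below: each VM's requests land in their own queue (a stand-in for a virtual storage stack), so one VM's stream never blends with another's on the way to storage. The class and method names are illustrative, not the actual vController interface.

```python
from collections import defaultdict

# Sketch (assumed names, not the real vController API): isolating I/O
# per VM into its own queue, restoring a 1:1 VM-to-storage-stack mapping.

class VController:
    def __init__(self):
        # One virtual storage stack (here, a simple queue) per VM.
        self.stacks = defaultdict(list)

    def submit(self, vm_id, request):
        # I/O is tagged and isolated at the source, before leaving the host.
        self.stacks[vm_id].append(request)

    def drain(self, vm_id):
        # Each VM's stream stays contiguous end to end.
        reqs, self.stacks[vm_id] = self.stacks[vm_id], []
        return reqs

vc = VController()
vc.submit("vm-a", ("write", 0))
vc.submit("vm-b", ("write", 512))
vc.submit("vm-a", ("write", 8))
print(vc.drain("vm-a"))  # [('write', 0), ('write', 8)] -- no blending
```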
The Gridstore vController enables storage I/O to be isolated, optimized, and prioritized per VM. Distributed vControllers and pools of virtualized storage resources (vPools) allow simple scaling that takes advantage of massively parallel processing to deliver performance that scales linearly as resources are added to the pool. Together, vPools and vControllers provide three core storage capabilities:
Optimized Virtual Storage (vmOptimized™ Storage) – SVCT creates a virtual storage stack for each VM. A virtual storage stack isolates and dynamically optimizes the I/O for each VM. This optimization is real-time and dynamic, without the complexity of manually tuning storage for every application. By isolating I/O for each VM before it leaves the host, a vController creates the optimal I/O flow for all VMs. vmOptimized storage also provides end-to-end visibility into each VM's storage stack: if there is an issue with a particular VM, you can detect it and fix it without spending hours guessing at the cause.
Quality of Service (TrueQoS™) – SVCT delivers the industry’s first end-to-end storage QoS that works at the per-VM level. Unlike array-based QoS that operates on LUNs and only prioritizes I/O after it has reached the controller, a vController prioritizes I/O per VM before it leaves the host. With LUN-based QoS operating in the array, it is simply too late to be effective if a bottleneck is building at the controller; and with potentially hundreds of VMs storing VHDs in a single LUN, there is no way to differentiate and prioritize I/O between VMs. TrueQoS provides per-VM I/O granularity and prioritizes I/O on both sides of the network: in the vController and on the storage resources. By operating at both ends of the network pipe, TrueQoS prioritizes I/O end-to-end for each VM regardless of which host it runs on. TrueQoS is only possible by first moving vControllers across the network to the source of the I/O. SVCT leverages an intelligent grid fabric to create a virtual storage stack for each VM, and the resources available to each stack can be dialed up or down with fine-grained control. To simplify deployment, TrueQoS can be applied to groups of VMs through a policy: when VMs are created, they can be added to one of these groups to receive higher or lower priority. Alternatively, individual VMs can be given IOPS Limits (Max) or Reserves (Min) for fine-grained, guaranteed control per VM.
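The Reserve/Limit model above can be sketched as a simple allocator: satisfy every VM's reserve first, then share the leftover IOPS among VMs that are still below their cap. This is an illustrative sketch under assumed inputs, not Gridstore's actual scheduling algorithm; the VM names and IOPS figures are invented.

```python
# Sketch of per-VM QoS with IOPS Reserves (Min) and Limits (Max).
# Illustrative only -- not the TrueQoS implementation.

def allocate_iops(total_iops, policies):
    """policies: {vm: {"min": reserve, "max": cap}} -> {vm: grant}."""
    # Step 1: every VM gets its guaranteed reserve.
    grants = {vm: p["min"] for vm, p in policies.items()}
    remaining = total_iops - sum(grants.values())
    # Step 2: share leftover capacity evenly, respecting each cap.
    while remaining > 0:
        eligible = [vm for vm, p in policies.items() if grants[vm] < p["max"]]
        if not eligible:
            break
        share = max(1, remaining // len(eligible))
        for vm in eligible:
            give = min(share, policies[vm]["max"] - grants[vm], remaining)
            grants[vm] += give
            remaining -= give
            if remaining == 0:
                break
    return grants

policies = {"sql":   {"min": 4000, "max": 10000},   # high-priority tier
            "vdi":   {"min": 1000, "max": 3000},
            "batch": {"min": 0,    "max": 2000}}    # best-effort tier
print(allocate_iops(10_000, policies))
```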
Parallel Performance Scaling (GridScale™) – SVCT offers another major innovation designed to accelerate I/O and simplify scaling storage: GridScale. Unlike clustered storage that replicates I/O between members of the cluster, GridScale uses Direct I/O from vControllers in the host directly to the underlying storage resources in the grid. Most commercial clusters range from as few as 4 nodes to 32 for a large cluster, because cluster performance bottlenecks as the node count grows: replicated data on the backplane grows geometrically faster than inbound data on the front-side network. This property limits the scaling of a cluster, wastes significant IOPS, and adds latency with the extra hops of replicating data between nodes.
GridScale uses Direct I/O to generate massively parallel performance scaling. SVCT eliminates the need for clustering because it protects data before it leaves the server and writes parallel stripes of encoded data (slices of the original I/O) directly across a number of storage nodes. Data is protected to the same N+2 level (equivalent to two full replicas) and is written in parallel to N storage nodes with zero inter-node traffic. Direct I/O gains performance through parallel network I/O, while 100% of a storage node's IOPS are utilized for primary I/O (there is no replica). The result: the more nodes you add to a grid, the more bandwidth and IOPS are available for I/O, with none wasted on replicas.
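The traffic asymmetry described above can be made concrete with back-of-the-envelope arithmetic. The sketch below compares the bytes a replicating cluster moves (front-side write plus backplane copies) with those moved by direct striping of encoded slices; the 3-way replication and 4+2 layout are assumed examples, not Gridstore's exact parameters.

```python
# Illustrative traffic math: replication-based clustering vs
# GridScale-style direct striping with N+2 encoding.

def replicated_cluster_traffic(write_bytes, replicas=3):
    # The client writes once; the cluster then copies the data
    # (replicas - 1) more times across its backplane.
    frontside = write_bytes
    backplane = write_bytes * (replicas - 1)
    return frontside, backplane

def gridscale_traffic(write_bytes, n_data=4, n_parity=2):
    # The vController encodes in the host and writes slices directly:
    # one data slice per data node plus two parity slices,
    # with zero inter-node hops on the storage side.
    slice_size = write_bytes / n_data
    frontside = slice_size * (n_data + n_parity)  # 1.5x for a 4+2 layout
    backplane = 0
    return frontside, backplane

print(replicated_cluster_traffic(1000))  # (1000, 2000)
print(gridscale_traffic(1000))           # (1500.0, 0)
```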
By eliminating the scaling constraints of a cluster, GridScale allows you to seamlessly scale from 3 to 256 storage nodes per pool, incrementally, one node at a time. Each node adds its full capacity, IOPS, and bandwidth to the pool, with none wasted.
[1] Taneja Group, “Top-Level Findings: Storage Acceleration and Performance Multi-Client Study,” March 2013.
[2] Arun Taneja, “Goodbye LUN technology, you served us well,” SearchStorage.techtarget.com.