Overview of Converged Infrastructure

The IT landscape is changing dramatically. What was once available only to the largest web-scale companies, like Facebook and Google, is now available to any organization looking to implement a scalable, tunable, auto-optimized converged infrastructure.

What’s made this possible is a dramatic new software architecture: an elastic converged infrastructure that delivers fast deployment, simple management, pay-as-you-scale flexibility, optimal price/performance, and low risk.


Converged Elastic Infrastructure Purpose-Built for Hyper-V

Gridstore Converged Infrastructure Includes:

  • HyperConverged Appliances
    Gridstore’s single system that includes both compute and storage. Each appliance has up to four compute/storage nodes and can be expanded with additional Appliances and/or Storage Nodes, up to 250 total nodes. Both all-flash and hybrid versions are available.
  • Compute Nodes
    Any Microsoft Windows Server, with or without Hyper-V, to which the Gridstore vController is added. The vController is a software driver that interfaces with Gridstore Storage Nodes and provides simple management and end-to-end control of I/O from server to storage, eliminating the “I/O blender effect” and allowing performance to scale linearly with capacity.
  • Storage Nodes
    Gridstore offers both Hybrid Storage Nodes and Capacity Storage Nodes that scale from 12TB to 12PB with up to 250 nodes. The Hybrid Storage Nodes include PCIe Flash for caching to deliver high performance. These work with both Compute Nodes and HyperConverged Appliances.

What Makes Gridstore Different


Native Hyper-V Integration
POWER ON AND PROVISION
Gridstore is the only converged architecture that offers native Windows kernel integration. The Gridstore vController is a software driver that installs at the Windows kernel level, so there is no guest-VM complexity, and it provides the tightest possible integration with Windows management and functionality.

End-to-End I/O Control per VM
PREDICTABLE PERFORMANCE PER VM
Only Gridstore delivers end-to-end I/O control per VM. Granular visibility and QoS control per VM enable precise I/O for each VM based on user-defined priority: no more noisy neighbors, and consistently top performance for the most critical VMs.

Efficient Elastic Scaling
50% LOWER TCO
Gridstore allows you to pay as you grow while reducing TCO by up to 50%. Instead of 3-way replicas, Gridstore uses erasure coding, which eliminates the wasted capacity of replicas and delivers up to 50% better CPU utilization. With up to 250 nodes per pool, performance continues to increase linearly as you add nodes.
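As a rough illustration of the capacity math behind eliminating replicas (a minimal Python sketch; the stripe width of 6 data slices is an assumed example, not a Gridstore specification):

    # Rough capacity math: 3-way replication vs. N+2 erasure coding.
    usable_tb = 100                        # logical data to store

    # 3-way replication keeps three full copies of every block.
    replica_raw_tb = usable_tb * 3         # 300 TB of raw capacity

    # N+2 erasure coding stores N data slices plus 2 parity slices and
    # survives any two node failures (same protection as two replicas).
    n_data, n_parity = 6, 2                # assumed stripe width
    ec_raw_tb = usable_tb * (n_data + n_parity) / n_data   # ~133 TB

    savings = 1 - ec_raw_tb / replica_raw_tb
    print(f"replication: {replica_raw_tb} TB raw, erasure coding: {ec_raw_tb:.0f} TB raw")
    print(f"raw-capacity savings: {savings:.0%}")          # ~56%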

End-to-End Flash Architecture
FLASH PERFORMANCE
Gridstore is architected for end-to-end flash performance, with both read and write caches positioned optimally to provide the lowest latency. And because Gridstore has no 3-way replicas, no I/O performance or storage capacity is wasted on replica traffic.

Independent Scaling
SCALE TO FIT YOUR NEEDS
With Gridstore, unlike other converged infrastructure vendors, you can scale compute and storage together or independently. Add more HyperConverged Appliances, Windows servers, or Hybrid/Capacity Storage Nodes – maximum flexibility, whether standalone or fitting into your existing environment.


Gridstore HyperConverged Appliance

A Gridstore HyperConverged Appliance delivers tunable, auto-optimized compute and storage resources together in a single system. Gridstore HyperConverged Appliances improve operational and application performance while saving on both OpEx and CapEx, and they offer customers a simple, easy-to-deploy system. HyperConverged Appliances are based on standard x86 servers and come in two versions:

All-Flash HyperConverged Appliance:  The all-flash appliance provides 5TB of flash capacity and 1TB of flash cache per node to deliver highly deterministic performance for applications such as low-latency VDI and tier-one workloads.  Each appliance includes up to four compute/storage nodes:

All-Flash
2U Appliance – Up to 4 Nodes (Compute | Storage):

  • Hot-Swap Intel Servers
  • 24 Hot-Swap SSDs, 2 Shared PSUs
Each Node (Compute)

  • Dual Intel Xeon E5-2690 (20 cores, 50MB cache)
  • 256GB RAM, 2x10GbE
Each Node (Storage)

  • 5TB SSD Capacity + 1TB SSD Cache (20%)
Each Node (Software)

  • WS 2012 R2 Data Center + Gridstore

Hybrid HyperConverged Appliance: The hybrid appliance includes flash for cache and SATA drives for capacity. Each appliance includes up to four compute/storage nodes:

Hybrid Nodes
2U Appliance – 4 Nodes (Compute | Storage):

  • Hot-Swap Intel Servers
  • 4 Hot-Swap SSDs + 8 Hot-Swap HDDs
  • 2 Shared PSUs
Each Node (Compute)

  • Dual Intel Xeon E5-2690 (20 cores, 50MB cache)
  • 256GB RAM, 2x10GbE
Each Node (Storage)

  • 8TB HDD Capacity + 1TB SSD Cache (12.5%)
Each Node (Software)

  • WS 2012 R2 Data Center + Gridstore

HyperConverged Appliances are particularly well suited to several crucial contemporary IT challenges. A few examples:

  • VDI: where each HyperConverged Appliance can support up to 600 desktops in a typical deployment. Since VDI scales in uniform chunks based on the number of desktops being added, a hyper-converged solution is a fast and easy way to scale these deployments.
  • Hyper-V initial deployment: where a HyperConverged Appliance can be installed and up and running the same day, ready for application deployment or migration from existing environments.
  • Remote Office/Branch Office (ROBO): deployment of a complete compute-and-storage solution in a box for remote or branch offices that need turnkey systems that are easy to deploy and manage.

Storage Nodes

Gridstore delivers its solution via either Hybrid or Capacity Storage Nodes, ranging from 4TB to 48TB per node.

High Performance IOPS with Hybrid Storage Nodes

Gridstore’s Hybrid Storage Nodes deliver high performance through a write-back cache on a PCIe card with over 500GB of flash, leveraging 2 x 10GbE connections (or 4 x 1GbE NICs) to accelerate I/O even further. These Hybrid Storage Nodes combine PCIe flash with SATA drive capacity, delivering both high performance and the capacity you need.
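As a minimal sketch of how a write-back flash cache hides disk latency (illustrative Python, not Gridstore code):

    from collections import OrderedDict

    class WriteBackCache:
        """Write-back cache sketch: writes are acknowledged once they land
        in flash, then destaged to the SATA capacity tier in the background."""

        def __init__(self, flash_blocks: int):
            self.flash = OrderedDict()  # dirty blocks held in flash
            self.flash_blocks = flash_blocks
            self.disk = {}              # backing SATA capacity tier

        def write(self, lba: int, data: bytes) -> None:
            self.flash[lba] = data      # ack at flash latency, not disk latency
            self.flash.move_to_end(lba)
            if len(self.flash) > self.flash_blocks:
                self.destage_one()

        def destage_one(self) -> None:
            # Flush the oldest dirty block down to the capacity tier.
            lba, data = self.flash.popitem(last=False)
            self.disk[lba] = data

    cache = WriteBackCache(flash_blocks=2)
    for lba in range(4):
        cache.write(lba, b"x")
    print(sorted(cache.disk))  # blocks 0 and 1 destaged; 2 and 3 still in flash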

Designed for high-performance I/O requirements, Hybrid nodes are ideal for mixed-workload, multi-tenant environments with many VMs, including:

  • Exchange or SQL Server databases
  • Mixed workloads
  • Hyper-V (server virtualization)

Learn more about Hybrid Storage Nodes

Capacity Storage Nodes

Gridstore’s Capacity Storage nodes allow mid-size and large enterprise businesses to easily deal with rapidly growing capacity needs. Designed for bulk storage or less demanding I/O applications, these nodes are ideal for:

  • VM backup, archive
  • Disaster recovery
  • SMB3 file shares, SharePoint

Learn more about Capacity Storage Nodes

A Gridstore solution includes three storage nodes at a minimum and can scale to 250. You can start with as little as 4TB or as much as 48TB per node and scale up to 12PB (250 nodes × 48TB = 12PB), on a node-by-node basis.


Architecture

Problem: Virtualization Breaks Traditional Storage

For over 60% of enterprises surveyed [1], the number one issue they face is poor application performance in virtualized environments.

The problem at hand is an architectural mismatch between traditional storage and virtualization. In the world of physical servers, the standard practice of ganging up a set of disk drives (of the right type and speed) into a RAID set, thus creating a LUN, served us well for decades. Each LUN was created with the type of application it was to serve in mind, and it was associated with all the appropriate storage services (replication, compression, snapshots, etc.) that the application warranted, based on its importance to the enterprise. All well and good. Eventually we began servicing several applications from the same LUN, and unless we overdid it or the applications were highly erratic, the LUN could still serve multiple applications. If an application was important enough, it got its own LUN and associated services, even if the utilization of either capacity or performance was sometimes less than ideal. [2]

But then we entered the era of server virtualization, and all hell broke loose. One or a few LUNs serving a multitude of VMs, perhaps across several hosts, each with tens of VMs or more representing a variety of applications and workloads, simply couldn’t cut it. The infamous “I/O blender” effect is now well understood: the precisely tuned LUN of yesteryear was wrestling with totally random I/O, arriving in a mishmash from a large number of VMs, overwhelming the storage controllers and bringing the performance of all applications to its knees.

Serious problems produced by the architectural mismatch include:

  • Blended I/O causing performance loss for applications
  • Lack of visibility into the I/O path making it nearly impossible to resolve performance problems
  • All VMs being treated equally making it impossible to prioritize the I/O from one VM over another

Gridstore solves the problem by aligning the storage architecture with virtualization using a patented Server-Side Virtual Controller Technology.

The solution to the problem caused by this architectural mismatch is simple: design storage using the same principles as virtualization, reestablishing the 1:1 relationship between a virtual server and its underlying storage, managing storage functionality per VM rather than per LUN, and preserving the capabilities that make virtualization so beneficial. Gridstore achieves this through its patented Server-Side Virtual Controller Technology (SVCT), which follows the same pattern as virtualization. Architectural differentiators of SVCT include:

Virtualized Storage Resource Pool (vPool) – A vPool is a pool of storage resources – IOPS, bandwidth, and capacity – that expands as storage nodes are added to the network, much as server resources such as CPU and memory are pooled into a shared resource pool. As each storage node is added, its resources are virtualized and added to the vPool.
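As a rough sketch of the pooling idea (a minimal Python sketch; names like StorageNode and VPool are illustrative, not Gridstore APIs):

    from dataclasses import dataclass, field

    @dataclass
    class StorageNode:
        iops: int            # raw IOPS the node contributes
        bandwidth_mbs: int   # MB/s
        capacity_tb: int

    @dataclass
    class VPool:
        nodes: list = field(default_factory=list)

        def add_node(self, node: StorageNode) -> None:
            # Each node's resources are virtualized into the shared pool,
            # so pooled IOPS, bandwidth, and capacity grow with every node.
            self.nodes.append(node)

        @property
        def totals(self) -> dict:
            return {
                "iops": sum(n.iops for n in self.nodes),
                "bandwidth_mbs": sum(n.bandwidth_mbs for n in self.nodes),
                "capacity_tb": sum(n.capacity_tb for n in self.nodes),
            }

    pool = VPool()
    for _ in range(3):  # a Gridstore pool starts with three storage nodes
        pool.add_node(StorageNode(iops=50_000, bandwidth_mbs=1_000, capacity_tb=12))
    print(pool.totals)  # resources scale linearly as nodes join the pool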

Virtual Storage Controller (vController™) – Unlike traditional controllers that operate inside the array, a vController operates inside each hypervisor and transparently presents a local SCSI device to it. There is no change to the hypervisor or guest VMs. By operating inside the hypervisor, the vController can isolate I/O from each of the VMs and channel it through a set of virtual storage stacks that map back to each VM. This reestablishes the 1:1 relationship between a virtual machine and its underlying storage (a virtual storage stack) all the way from the VM through to the storage resources in a vPool.

The Gridstore vController enables storage I/O to be isolated, optimized, and prioritized per VM. Distributed vControllers and pools of virtualized storage resources (vPools) allow simple scaling that takes advantage of massively parallel processing, offering performance that scales linearly as resources are added to the pool. Together, vPools and vControllers provide three core storage capabilities:

Optimized Virtual Storage (vmOptimized™ Storage) – SVCT creates a virtual storage stack for each VM, which isolates and dynamically optimizes that VM’s I/O. This optimization is real-time and dynamic, without the complexity of manually tuning storage for every application. By isolating I/O for each VM before it leaves the host, a vController creates the optimal I/O flow for all VMs. vmOptimized storage also provides end-to-end visibility into each VM’s storage stack: if there is an issue with a particular VM, you can detect it and fix it without spending hours guessing at the cause.

Quality of Service (TrueQoS™) – SVCT delivers the industry’s first end-to-end storage QoS that works at the per-VM level. Unlike array-based QoS, which operates on LUNs and only prioritizes I/O after it has reached the controller, a vController prioritizes I/O per VM before it leaves the host. With LUN-based QoS operating in the array, it is simply too late to be effective if a bottleneck is building at the controller; and with potentially hundreds of VMs storing VHDs in a single LUN, there is no way to differentiate and prioritize I/O between VMs. TrueQoS provides per-VM I/O granularity and prioritizes I/O on both sides of the network – in the vController and on the storage resources. By operating at both ends of the network pipe, TrueQoS prioritizes I/O end-to-end for each VM regardless of which host it runs on. This is only possible by first moving vControllers across the network to the source of the I/O: SVCT leverages an intelligent grid fabric to create a virtual storage stack for each VM, and the resources available to each stack can be dialed up or down with fine-grained control.

To simplify deployment, TrueQoS can be applied to groups of VMs through a policy; as VMs are created, they can be added to one of these groups to receive higher or lower priority. Alternatively, individual VMs can be given IOPS Limits (Max) or Reserves (Min) for fine-grained, guaranteed control per VM.
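One way to picture the per-VM Max limit is a token bucket sitting in front of each VM’s I/O stream. The sketch below is a minimal illustration, not Gridstore’s implementation; honoring a Min reserve would additionally require a scheduler across VMs:

    import time

    class VmIopsPolicy:
        """Per-VM QoS sketch: a token bucket enforcing a Max IOPS limit.
        A Min reserve would be honored by the scheduler that picks which
        VM's queued I/O to service next (not shown here)."""

        def __init__(self, max_iops: int, min_iops: int = 0):
            self.max_iops = max_iops
            self.min_iops = min_iops
            self.tokens = float(max_iops)
            self.last = time.monotonic()

        def admit(self) -> bool:
            # Refill at max_iops tokens/second, capped at one second's worth.
            now = time.monotonic()
            self.tokens = min(self.max_iops,
                              self.tokens + (now - self.last) * self.max_iops)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True   # I/O proceeds immediately
            return False      # defer: this VM is at its Max limit

    gold = VmIopsPolicy(max_iops=10_000, min_iops=2_000)  # high-priority VM
    bronze = VmIopsPolicy(max_iops=500)                   # best-effort VM
    print(gold.admit(), bronze.admit())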

Parallel Performance Scaling (GridScale™) – SVCT offers another major innovation designed to accelerate I/O and simplify storage scaling: GridScale. Unlike clustered storage, which replicates I/O between members of the cluster, GridScale uses Direct I/O from vControllers in the host directly to the underlying storage resources in the grid. Most commercial clusters range from as few as 4 nodes to 32 nodes for a large cluster, because cluster performance bottlenecks as the node count grows: replicated data on the backplane grows geometrically faster than inbound data on the front-side network. This property limits the scaling of a cluster, wastes significant IOPS, and adds latency with the extra hops of replicating data between nodes.

GridScale uses Direct I/O to generate massively parallel performance scaling. SVCT eliminates the need for clustering because it protects data before it leaves the server, writing parallel stripes of encoded data (slices of the original I/O) directly across a number of storage nodes. Data is protected to the N+2 level (the same protection as two full replicas) and is written in parallel to N storage nodes with zero inter-node traffic. Direct I/O gains performance through parallel network I/O, while 100% of a storage node’s IOPS are used for primary I/O – there are no replicas. The result: the more nodes you add to a grid, the more bandwidth and IOPS are available for I/O, without any being wasted on replicas.
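A minimal sketch of the striping idea (illustrative only: a single XOR parity stands in for the two independent parity slices that real N+2 coding, e.g. Reed-Solomon P and Q, would use to tolerate two node failures):

    def xor_parity(slices):
        # XOR all equal-length slices together into one parity slice.
        parity = bytearray(len(slices[0]))
        for s in slices:
            for i, b in enumerate(s):
                parity[i] ^= b
        return bytes(parity)

    def stripe(block: bytes, n: int):
        # Split one write into n equal slices (zero-padded) plus parity.
        size = -(-len(block) // n)  # ceiling division
        slices = [block[i * size:(i + 1) * size].ljust(size, b"\0")
                  for i in range(n)]
        return slices, xor_parity(slices)

    data = b"one VM write, protected before it leaves the host"
    slices, parity = stripe(data, n=4)
    # The 4 data slices and the parity slice each go directly to a
    # different storage node in parallel; no replica traffic crosses
    # a cluster backplane.
    print([len(s) for s in slices], len(parity))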

By eliminating the scaling constraints of a cluster, GridScale allows you to seamlessly scale from 3 to 250 storage nodes per pool, incrementally, one node at a time. Each node adds its full capacity, IOPS, and bandwidth to the pool, with zero waste.

[1] Taneja Group, “Top-Level Findings: Storage Acceleration and Performance Multi-Client Study,” March 2013.

[2] Arun Taneja, “Goodbye LUN technology, you served us well,” SearchStorage.techtarget.com.