This post will cover the hardware side of this innovation, so let's start. There are two models in X2:

Configuration | X-Brick Minimum Raw | X-Brick Maximum Raw | Cluster Size in X-Bricks
------------- | ------------------- | ------------------- | ------------------------
X2-S          | 7.2 TB              | 28.8 TB             | Up to 4
X2-R          | 34.5 TB             | 138.2 TB            | Up to 8

X2-S utilizes 400GB SSD drives and can scale to 4 X-Bricks, while X2-R utilizes 1.92TB drives (larger drives will be supported later on) and can scale to 8 X-Bricks. Question #1: why are we still using 400GB drives in the X2-S model, and why does this model only scale to 4 X-Bricks? The answer is simple: there are customers out there who do NOT need a lot of physical capacity but rather need a lot of LOGICAL capacity, which can be gained through high deduplication (think more than 5:1), for example full-clone VDI customers. There is no need for you, the customer, to pay for extra capacity you will not use, and we want you to have the best TCO out there. Why 4 X-Bricks? Because these customers tend to build VDI PODs, where each “pod” is a fault domain consisting of server, network, and storage. Based on our past experience, these customers will not use a fully populated 8 X-Brick cluster (aka “The Beast”).
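To make that trade-off concrete, here is a minimal sketch in Python. The 5:1 ratio is just an illustrative example of a high-deduplication, full-clone VDI workload, not a guarantee:

```python
# Illustrative only: how much LOGICAL capacity a given RAW capacity can
# serve at a given deduplication ratio.

def logical_capacity_tb(raw_tb: float, dedup_ratio: float) -> float:
    """Logical capacity = raw capacity * deduplication ratio."""
    return raw_tb * dedup_ratio

# A single X2-S X-Brick starts at 7.2 TB raw (see the table above).
# At a 5:1 dedup ratio that already serves:
print(logical_capacity_tb(7.2, 5.0))    # 36.0 TB logical

# A fully populated X2-S X-Brick (28.8 TB raw):
print(logical_capacity_tb(28.8, 5.0))   # 144.0 TB logical
```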

X2-R utilizes 1.92TB drives and can scale to a fully populated 8 X-Brick cluster; this is the model that will see the most use out there.

Regardless of the model, each DAE can now scale to 72 drives, as opposed to the 25 drives X1 accommodated! This is a diagram of the new DAE:

This is a real-world image of how you pull out the DAE (super easy), and to the right you can see a DAE fully populated with 72 drives.

It's important to note that you do NOT need to fully populate the DAE on day 1 (or day 2); you start with 18 drives and then scale in packs of 6 drives. For example, say you bought a single X2 X-Brick with 36 drives and now want to scale UP (a new feature in X2: in X1 you could only scale out, while now you can scale up, out, or both). You simply add another 6 drives, and the procedure is fully non-disruptive; see the sketch below.
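As a sketch of the scale-up rule just described (this little helper is mine, for illustration only, not a real XtremIO tool):

```python
# Hypothetical helper: valid per-DAE drive counts under the X2 scale-up
# rule described above (start at 18 drives, add packs of 6, max out at 72).

MIN_DRIVES, PACK_SIZE, MAX_DRIVES = 18, 6, 72

def is_valid_drive_count(n: int) -> bool:
    return MIN_DRIVES <= n <= MAX_DRIVES and (n - MIN_DRIVES) % PACK_SIZE == 0

print([n for n in range(MIN_DRIVES, MAX_DRIVES + 1) if is_valid_drive_count(n)])
# [18, 24, 30, 36, 42, 48, 54, 60, 66, 72]

# The example from the text: a 36-drive brick scaled up by one pack of 6.
print(is_valid_drive_count(36 + PACK_SIZE))  # True
```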

Below you can see a demo of how the new scale-up feature works.

PCIe NV-RAM

We have removed the battery backup units (BBUs) and are now using NVRAM PCIe cards that reside in each of the storage controllers. In case of a power outage, these are used to destage the data from the storage controllers' RAM to the drives. Again, the question is why?

  • Increased reliability. We had some issues where the BBUs in X1 weren't connected properly; with NVRAM there is no such user error to make, and even if you pull the power cable directly from the storage controller, the card will de-stage all the data until power is restored.
  • Reduced service calls (battery replacement)
  • Improved density: we are now saving 1-2RU per X-Brick
  • Reduced cabling
  • Reduced complexity
  • Allows odd X-Brick configurations. Yes, X2 will support odd configurations (3, 5, 7), whereas in X1 you could only grow from 1 -> 2 -> 4 -> 6 -> 8 X-Bricks.

[Figure: X1 vs. X2 X-Brick comparison]

IB Switch

Due to the amazing inter-node performance, we had to increase the InfiniBand switch performance (these are the switches that connect all the X-Bricks so they act and behave as one entity, i.e., our scale-out value proposition) to 2 x FDR 56Gbps, up from the 2 x 40Gbps we used in X1. The switches are Mellanox based and, as in the past, are unmanaged, so they are not something you, the customer or the partner, need to manage.

X2 Storage Controller

  • 2 x Haswell 12 core 2.5GHz CPUs:
    • X2-S X-Bricks will have 384GB of RDIMM RAM per controller
    • X2-R X-Bricks will have 1TB of LRDIMM RAM per controller
  • 4 x 12Gbps SAS connections to DAE
  • 2 x 56Gbps FDR Infiniband connections to cluster fabric
  • Quad port host interface card:
    • 2 x 16Gbps Fibre Channel plus 2 x 10Gbps iSCSI with hardware offload
    • 4 x 10Gbps iSCSI (with hardware offload on two ports)

As noted before, the only differences between the X2-S and the X2-R storage controllers are the drive sizes they accommodate and the amount of RAM; both now have 2 x 16Gb FC ports (up from 2 x 8Gb FC in X1) and 2 x 10GbE iSCSI ports.

You can guess why we need a port for replication, right?

Hardware is a commodity, and since Intel no longer keeps up with Moore's Law, we had to innovate really hard around the software stack, which is a great segue into part 2 of the series.

Finally, we want to be greener and will support ASHRAE A3, which allows the data center to run at up to 40°C and thus save energy and money.

Density & Raw Capacity

Going greener with ASHRAE A3 was only a starting point; we continue to become more efficient, and density was improved dramatically!

Compare to our existing X1 arrays, which had 40TB raw in 6U: we will now offer 138TB, and later 276TB, raw in 4U. That is up to 2.2PB in one standard rack!
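Here is the back-of-the-envelope raw-density arithmetic behind those numbers (a Python sketch; the assumption that eight 276TB X-Bricks fill one standard rack is mine, inferred from the 4U-per-brick figure):

```python
# Back-of-the-envelope raw density comparison, X1 vs. X2.

x1_tb_per_ru = 40 / 6         # X1: 40 TB raw in 6U    -> ~6.7 TB/RU
x2_tb_per_ru = 138.2 / 4      # X2: 138.2 TB raw in 4U -> ~34.5 TB/RU
x2_later_tb_per_ru = 276 / 4  # with larger drives     -> 69.0 TB/RU

print(x1_tb_per_ru, x2_tb_per_ru, x2_later_tb_per_ru)

# Eight fully populated 276 TB X-Bricks in one standard rack:
print(8 * 276 / 1000)  # -> 2.208, i.e. ~2.2 PB raw per rack
```

Note that the x3.8 and x7.6 per-RU improvement figures below are quoted for usable and effective capacity, which also factor in RAID overhead and data reduction, so they differ from this raw-only math.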


Overall Data Center Efficiency

When we convert the density savings into data center efficiency, the improvement per RU (rack unit) starts at x3.8 and will increase to a factor of x7.6 in usable and effective capacities.

A fully populated system will support around 2PB of usable flash capacity, and more than 11PB effective once we take deduplication, compression, and copy savings into consideration.
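As a quick sanity check (my own arithmetic, not an official figure), those two numbers imply a combined data-reduction factor of roughly 5.5:1:

```python
# Implied combined data-reduction factor (dedup x compression x copy savings).
usable_pb, effective_pb = 2.0, 11.0
print(effective_pb / usable_pb)  # -> 5.5, i.e. roughly 5.5:1 overall
```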

This agility should address the high demand from all our customers when we must meet their consolidation requirements.


Price Drop for Single X-Brick

We understand that being efficient is only a first step; you expect us to improve on price as well. So we did it: more than a 50% price reduction.

Two main factors helped us achieve this goal:

  • Better controller amortization.
    • We can now put up to 72 drives per X-Brick, which results in more than 700TB of effective capacity per brick (roughly 138TB raw at about a 5:1 data-reduction factor)!
  • SSDs optimized for today's workloads.
    • Using SSDs with lower endurance (fewer writes per day) allows us to decrease the price.


Metadata-aware Architecture matters!

Anyone can claim to use cheaper drives, but not everyone can prove that the decision is the right one.

We continuously monitor and analyze the data from our huge install base (more than 7,000 systems)!

When we analyze SSD endurance across our install base, we see that most of our systems don't exceed even 0.5 drive writes per day (DWPD).

Extrapolating that endurance data onto the existing X1 SSDs shows that they won't wear out for the next 19 years.
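To illustrate how such an extrapolation works, here is a small sketch. The rated-endurance and warranty numbers are illustrative assumptions I picked so the math reproduces the 19-year figure; they are not published XtremIO specs:

```python
# Hypothetical endurance extrapolation: how long an SSD's rated write
# budget lasts when the field write rate is far below the rating.

def lifetime_years(rated_dwpd: float, warranty_years: float,
                   observed_dwpd: float) -> float:
    """Rated write budget = rated_dwpd * warranty_years (in drive writes).
    Lifetime at the observed rate = budget / observed_dwpd."""
    return rated_dwpd * warranty_years / observed_dwpd

# Example: a drive rated at 1.9 DWPD over a 5-year warranty, seeing only
# the 0.5 DWPD observed across the install base:
print(lifetime_years(rated_dwpd=1.9, warranty_years=5, observed_dwpd=0.5))
# -> 19.0 years before the rated write budget is exhausted
```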


Agility

When we ask customers what the most important features of an enterprise array are, right after availability and price they mention agility.

Today's customer requirements are changing so fast that being flexible is more important than ever before.

We’ve listened, we’ve promised it, and we did it!

No more adding in pairs: Granular Scale-Out allows any number of bricks from 1 to 8 (1, 2, 3, 4, 5, 6, 7, 8).

But wait, we didn't stop there: we've finally added Capacity Scale-Up.

No more “add compute even if you only need capacity”: we now allow adding SSD drives to the same brick, growing in 6-drive increments.

Being agile doesn't mean you compromise on standards! As you would expect from us, scaling is online and without performance impact!

You can also see below a quick video I took with Ronny Vatelmacher, our hardware product manager.

The other big improvement in X2 is data reduction: from what we have seen so far, the improvement is around 25%!
