The Dell EMC™ Flex family of products is powered by PowerFlex, a scale-out block storage service that enables customers to create a resilient server SAN or hyper-converged infrastructure on x86 server hardware. The Flex family accommodates a wide variety of deployment options, with multiple OS and hypervisor capabilities, and is ideal for applications requiring high performance and ease of management as they scale from small to large. The Flex family currently consists of a rack-level and a node-level offering: Dell EMC VxRack FLEX and the Dell EMC VxFlex Ready Node. This document is primarily relevant to the VxFlex Ready Nodes, but will be of interest to anyone wishing to understand the networking required for a successful VxFlex OS-based storage system.
Dell EMC VxRack FLEX, on the other hand, is a fully engineered, rack-scale hyper-converged system that delivers flexibility, scalability, and performance for the enterprise data center. It is a quick and easy-to-deploy engineered solution in which the networking is already configured and optimized. For other Flex family solutions, one must design and implement an appropriate network. PowerFlex is used to build robust, enterprise storage systems, and one of its key architectural advantages is that it distributes load evenly and symmetrically across all contributing server nodes. This eliminates concerns associated with bottlenecks at storage protocol endpoints. It also frees storage administrators from micromanagement by moving the granularity of operations away from individual components to the clustered infrastructure. Because networks are the central component of data center infrastructure, understanding their relationship to PowerFlex is crucial for a successful deployment. A successful VxFlex Ready Node deployment depends on a properly designed network topology. This guide provides details on network topology choices, network performance, hyper-converged considerations, Ethernet considerations, dynamic IP routing considerations, PowerFlex implementations within a VMware® environment, validation methods, and monitoring recommendations.

PowerFlex Functional Overview
PowerFlex is software that creates a server and IP-based SAN from direct-attached storage to deliver flexible and scalable performance and capacity on demand. As an alternative to a traditional SAN infrastructure, PowerFlex combines HDD, SSD, and NVMe media to create virtual pools of block storage with varying performance tiers. PowerFlex provides enterprise-grade data protection, multi-tenant capabilities, and add-on enterprise features such as QoS, thin provisioning, and snapshots. PowerFlex supports physical and virtualized servers, and has been proven to deliver significant TCO savings vs. traditional SAN. PowerFlex provides the following benefits:
Massive Scale – PowerFlex can scale from three to 1024 nodes. The scalability of performance is linear with regard to the growth of the deployment. As devices or nodes are added, PowerFlex automatically redistributes data evenly, resulting in a fully balanced pool of distributed storage.
Extreme Performance – Every device in a PowerFlex storage pool is used to process I/O operations. This massive I/O parallelism of resources eliminates bottlenecks. Throughput and IOPS scale in proportion to the number of storage devices added to the storage pool. Performance and data protection optimization is automatic. Component loss triggers a rebuild operation to preserve data protection. Addition of a component triggers a rebalance to increase available performance and capacity. Both operations occur in the background with no downtime to applications and users.
Compelling Economics – PowerFlex does not require a Fibre Channel fabric or dedicated components like HBAs. There are no forklift upgrades for outdated hardware. Failed and outdated components are simply removed from the system. PowerFlex can reduce the cost and complexity of the solution and has been proven to result in significant TCO savings vs. traditional SAN.
Unparalleled Flexibility – PowerFlex provides flexible deployment options. In a two-layer deployment, applications and the storage software are installed on separate pools of servers. A two-layer deployment allows compute and storage teams to maintain operational autonomy. In a hyper-converged deployment, applications and storage are installed on a shared pool of servers. This provides the lowest footprint and cost profile. The deployment models can also be mixed to provide independent scaling of compute and storage resources.
Supreme Elasticity – Storage and compute resources can be increased or decreased whenever the need arises. The system automatically rebalances data on the fly. Additions and removals can be done in small or large increments. No capacity planning or complex reconfiguration is required. Rebuild and rebalance operations happen automatically without operator intervention.
Essential Features for Enterprises and Service Providers – With PowerFlex, you can limit the amount of performance (IOPS or bandwidth) that selected customers can consume. Quality of Service allows resource usage to be dynamically managed, addressing any bully workload scenarios. PowerFlex also offers instantaneous, writeable snapshots for data backups and cloning. DRAM caching enables you to improve read performance by using server RAM. Any group of servers hosting storage that may fail together (such as nodes in the same rack) can be grouped in a fault set. Fault sets can be defined to ensure data mirroring occurs outside the failure group, improving business continuity. Volumes can be thin provisioned, providing on-demand storage as well as faster setup and startup times.
PowerFlex also provides multi-tenant capabilities via protection domains and storage pools. Protection Domains allow you to isolate specific servers and data sets. Storage Pools can be used for further data segregation, tiering, and performance management. For example, data that is accessed very frequently can be stored in a flash-only storage pool for the lowest latency, while less frequently accessed data can be stored in a low-cost, high-capacity pool of spinning disks.
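As a rough illustration of the per-customer performance limits described above, a token-bucket model (a common rate-limiting technique, not PowerFlex's published internals) captures the idea of capping IOPS while still allowing short bursts:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: a simple model of a per-client I/O cap."""
    def __init__(self, rate, burst):
        self.rate = float(rate)       # tokens refilled per second (e.g., IOPS limit)
        self.capacity = float(burst)  # maximum burst allowance
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self, cost=1.0):
        """Return True if an operation of the given cost may proceed now."""
        now = time.monotonic()
        # Refill tokens for the elapsed interval, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# A client capped at 1000 IOPS with a burst allowance of 100 operations.
limiter = TokenBucket(rate=1000, burst=100)
granted = sum(1 for _ in range(500) if limiter.allow())
print(granted)  # roughly the burst size passes immediately; the rest must wait for refill
```

The same shape applies to a bandwidth cap by making `cost` the I/O size in bytes rather than a fixed 1 per operation.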

PowerFlex Software Components
PowerFlex fundamentally consists of three types of software components: the Storage Data Server (SDS), the Storage Data Client (SDC), and the Meta Data Manager (MDM).

A logical illustration of a PowerFlex deployment. Each volume available to an SDC is distributed across many systems running the SDS. The Meta Data Managers (MDMs) reside outside the data path, and they are only consulted by SDCs when an SDS fails or when the data layout changes.

The Storage Data Server (SDS) aggregates and serves raw local storage in a server as part of a PowerFlex cluster. The SDS is the server-side software component. A server that takes part in serving data to other nodes has an SDS service installed and running on it.
A collection of SDSs form the PowerFlex persistence layer.
SDSs maintain redundant copies of the user data, protect each other from hardware loss, and reconstruct data protection when hardware components fail. SDSs may leverage SSDs, PCIe-based flash, spinning media, RAID controller write caches, available RAM, or any combination thereof. SDSs may run natively on Windows or Linux, or as a virtual appliance on ESX. A PowerFlex cluster may have up to 1024 nodes, each running an SDS, and each SDS requires only 500 megabytes of RAM. SDS components communicate directly with each other in a full mesh and are optimized for rebuild, rebalance, and I/O parallelism. Data layout between SDS components is managed through storage pools, protection domains, and fault sets.
Client volumes used by the SDCs are placed inside a storage pool. Storage pools are used to logically aggregate types of storage media at drive-level granularity. Storage pools provide varying levels of storage service priced by capacity and performance. Protection from node, device, and network connectivity failure is managed with node-level granularity through protection domains. Protection domains are groups of SDSs where replicas are maintained. Fault sets allow large systems to tolerate multiple simultaneous failures by preventing redundant copies from residing in a single node, rack or chassis.
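The fault-set constraint described above — no two replicas of the same data inside the same failure group — can be sketched as a placement check. The node names, fault-set labels, and selection algorithm here are purely illustrative, not PowerFlex's actual placement logic:

```python
import random

def place_replicas(chunk_id, nodes, copies=2):
    """Pick `copies` nodes for a data chunk so no two replicas share a fault set.

    `nodes` maps node name -> fault-set label (e.g., a rack). This models only
    the placement constraint, not PowerFlex's real distribution algorithm.
    """
    rng = random.Random(chunk_id)   # deterministic per chunk, for illustration
    candidates = list(nodes)
    rng.shuffle(candidates)
    chosen, used_fault_sets = [], set()
    for node in candidates:
        fault_set = nodes[node]
        if fault_set not in used_fault_sets:
            chosen.append(node)
            used_fault_sets.add(fault_set)
        if len(chosen) == copies:
            return chosen
    raise ValueError("not enough distinct fault sets for the requested copies")

# Hypothetical four-node cluster spread across three racks.
nodes = {"sds-1": "rack-A", "sds-2": "rack-A",
         "sds-3": "rack-B", "sds-4": "rack-C"}
replicas = place_replicas(chunk_id=42, nodes=nodes, copies=2)
assert len({nodes[n] for n in replicas}) == 2   # copies land in distinct racks
```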

The Storage Data Client (SDC) allows an operating system or hypervisor to access data served by PowerFlex clusters. The SDC is a client-side software component that can run natively on Windows®, Linux, IBM AIX®, or ESX®. It is analogous to a software initiator, but is optimized to use multiple networks and endpoints in parallel. The SDC provides the operating system or hypervisor running it with access to logical block devices called “volumes”. A volume is analogous to a LUN in a traditional SAN. Each logical block device provides raw storage for a database or a file system.
The SDC knows which Storage Data Server (SDS) endpoints to contact based on block locations in a volume. The SDC consumes distributed storage resources directly from other systems running PowerFlex. SDCs do not share a single protocol target or network endpoint with other SDCs. SDCs distribute load evenly and autonomously.
The SDC is extremely lightweight. SDC to SDS communication is inherently multi-pathed across SDS storage servers, in contrast to approaches like iSCSI, where multiple clients target a single protocol endpoint. This enables much better performance scalability.
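The idea that the SDC computes which SDS to contact from a block's location can be sketched as below. The 1 MiB chunk size and the round-robin stripe are assumptions for illustration, not PowerFlex's documented on-disk layout:

```python
CHUNK_SIZE = 1 << 20  # 1 MiB; an illustrative chunk granularity, not PowerFlex's actual value

def sds_for_offset(offset, chunk_map):
    """Map a byte offset in a volume to the SDS that owns that chunk.

    `chunk_map` stands in for the volume layout the MDM hands to the SDC:
    here, a simple round-robin stripe across SDS names.
    """
    return chunk_map[(offset // CHUNK_SIZE) % len(chunk_map)]

stripe = ["sds-1", "sds-2", "sds-3"]
print(sds_for_offset(0, stripe))               # chunk 0 → sds-1
print(sds_for_offset(5 * CHUNK_SIZE, stripe))  # chunk 5 → sds-3
```

Because each offset resolves directly to an SDS endpoint, reads and writes fan out across many servers at once, which is the multi-pathing contrast with a single iSCSI target drawn in the text above.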

PowerFlex 3.5 Overview

PowerFlex v3.5 introduces Native Asynchronous Replication, a new HTML5-based UI, a new maintenance mode called Protected Maintenance Mode (PMM), Secure Snapshots, SDC Authentication, and other resiliency enhancements.

This release comes with the rebranding of “VxFlex OS” to “PowerFlex”.

PowerFlex 3.5 introduces a new HTML5-based UI that replaces the previous Java GUI. With the new 3.5 Web UI, there are no Java version dependencies to manage.

To access the new PowerFlex Web UI, all a user needs to do is open a supported web browser, enter their credentials, and start monitoring and managing the system.
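For scripted access alongside the Web UI, logins have historically gone through the management gateway's REST endpoint. The sketch below assumes the legacy VxFlex OS pattern (a GET to `/api/login` with HTTP basic auth returning a session token); the gateway address is hypothetical, and the paths should be verified against the REST API reference for your version:

```python
import base64
import json
import urllib.request

GATEWAY = "https://gateway.example.com"   # hypothetical gateway address

def login_request(user, password):
    """Build the login call. The legacy VxFlex gateway exposed GET /api/login
    with HTTP basic auth, returning a session token; confirm against your
    version's REST API reference."""
    req = urllib.request.Request(GATEWAY + "/api/login")
    cred = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + cred)
    return req

def fetch_token(user, password, context=None):
    """Perform the login and return the session token (a JSON string)."""
    with urllib.request.urlopen(login_request(user, password), context=context) as resp:
        return json.loads(resp.read())

# Subsequent calls historically reused the token as the basic-auth password, e.g.:
#   GET /api/types/System/instances   authenticated as  admin:<token>
```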

PowerFlex has a truly distributed architecture and can be deployed as a server SAN or as HCI infrastructure. PowerFlex deployments range from single-digit node counts to clusters of hundreds of nodes. When designing the new GUI, different implementation models and sizes were considered, targeting an optimal user experience for all use cases.

Dashboard – designed for big & small PowerFlex clusters

The Dashboard was designed so the user can quickly determine the cluster’s overall health status. If any critical or major alerts are open, the user can easily access them. The system overview displays all of the physical and logical components of the cluster. In a single glance, the user can see the number of SDS nodes in the cluster and the number of hosts (SDCs) accessing it, as well as the total number of storage devices, volumes, and protection domains.

In the Dashboard, the key performance metrics (latency, IOPS, and bandwidth) are always displayed.

The performance of a PowerFlex cluster depends on the number of nodes (SDSs) comprising it, and there can be a huge performance difference between small and large clusters. The IOPS Performance Gauge was designed to give the user an estimation of the performance potential of a cluster and its current performance usage. The gauge is not intended to provide an exact calculation of performance (which depends not only on cluster resources but also on application usage patterns), but rather to give the administrator an indication of the additional horsepower at hand, or the lack of it.
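The gauge's "usage versus potential" idea reduces to a one-line calculation. The per-node IOPS figure below is a hypothetical planning number, not a PowerFlex constant; real potential depends on media, CPU, and workload pattern:

```python
def gauge(current_iops, node_count, est_iops_per_node=100_000):
    """Return cluster utilization as a fraction of estimated IOPS potential.

    `est_iops_per_node` is an assumed planning figure for illustration only.
    """
    potential = node_count * est_iops_per_node
    return min(current_iops / potential, 1.0)

# A hypothetical 24-node cluster currently serving 1.2M IOPS.
print(f"{gauge(1_200_000, 24):.0%}")  # → 50%
```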

The Performance tab in the dashboard provides a time-series graphical widget that simultaneously displays all of the key performance metrics with their read/write distribution.

As a clustered system, the nodes of a PowerFlex cluster inter-communicate, and the internal bandwidth usage is displayed to the user: internal node communication plus rebuild and rebalance activity in case of a node failure or scale-out operations. With this display, the user can see how PowerFlex resources are being utilized.

The Capacity View displays the key capacity and savings metrics. A user can easily see the volume capacity provisioned, the amount of user data written by the hosts, the net physical resources consumed, and the savings gained with PowerFlex data-savings technologies (compression, snapshots, and thin provisioning).

Web GUI – designed for Scale

A single PowerFlex cluster can scale to hundreds of nodes (yes, we have quite a few customers at such scale). Usually this translates to a system with many deployed workloads and environments, and thousands of provisioned volumes. To support this, every object list in the PowerFlex GUI supports full sort, search, and filter capabilities. For example, a search for a volume-name prefix scans all of the thousands of configured volumes and outputs the matching ones.
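The list behavior described here — prefix search with sorted results over thousands of objects — amounts to a filter like the sketch below (the field names and case-insensitive matching are illustrative assumptions):

```python
def filter_volumes(volumes, prefix, sort_key="name"):
    """Case-insensitive prefix search over a volume list, sorted for display."""
    hits = [v for v in volumes if v["name"].lower().startswith(prefix.lower())]
    return sorted(hits, key=lambda v: v[sort_key])

# A tiny stand-in for a cluster's volume inventory.
volumes = [{"name": "oracle-data-01", "size_gb": 512},
           {"name": "oracle-log-01", "size_gb": 64},
           {"name": "sql-tempdb", "size_gb": 128}]
print([v["name"] for v in filter_volumes(volumes, "oracle")])
# → ['oracle-data-01', 'oracle-log-01']
```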

Easy navigation using the Navigation Tree

Each object has a detailed pane that displays all its information, including its topology view and related objects. This is the detailed view of an SDS Node:

In a single glance, we can see and navigate to the Protection Domain (PD) that the SDS belongs to, the Devices that it is connected to, the Storage Pool (SP) that it is servicing, including its storage volumes.

As another example, this is the detailed view of a selected Volume:

Here we see the topology view of the Volume: which resources service it (SDSs) and which are served by it (SDCs). In addition, we are a click away from tabs that show all the hosts (SDCs) connected to the Volume, and the Vtree and Consistency Group it belongs to.

New 3.5 Feature: Native replication

In version 3.5 we introduced Native Replication, which is itself built on a distributed architecture: SDR (replication) nodes can be added as needed to support the replication load and the requested RPOs.

The WebUI displays a dedicated replication Overview page:

In this Overview, the administrator can easily determine the health and compliance of all the Replication Consistency Groups (RCGs) defined in the system. The user can see the direction of the replication and the key performance metrics driven by replication activity.

If an RCG is selected from the list of RCGs, its full information will be displayed in the details pane.

The administrator can easily view an RCG’s RPO compliance, health status, and key metrics. In addition, a replication visualization displays the active replication direction between the replication peer systems.


The PowerFlex 3.5 WebUI provides an end-to-end storage management and monitoring system. Users can easily determine the health of the system and be alerted to all important events. The system was designed specifically to manage a highly distributed system that can scale to very large clusters with hundreds of nodes and thousands of supported objects.

