Hi,

Today we have a guest post by one of our best TCs – Genady Paker.

 

We have completed the design of a small IT kit based on VMware ESXi and the EMC VNXe3300 storage system.

The purpose of the kit is to suit ROBO (Remote Office / Branch Office) sites with no deep in-house IT expertise. It has to be built from a minimum of components, be scalable, deliver good performance and provide convenient management tools. Additionally, the system must have no single point of failure. In our solution we use tools such as Virtual Storage Integrator for VMware (VSI) and the VNXe built-in configuration wizards for storage provisioning.

So, the basic kit includes a VNXe3300 system and two ESXi servers with direct iSCSI connections, as shown below:

[Diagram: two ESXi servers, each directly attached over iSCSI to both storage processors of the VNXe3300]

One of the biggest challenges was to create a simple and stable network configuration between the ESXi servers and the VNXe storage.

As shown above, each ESXi server has two physical Ethernet links, one to each VNXe storage processor (SP). For failover purposes we made a team of these two NICs.
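Purely as an illustration of such a two-NIC team, the following ESXi shell sketch creates a dedicated vSwitch and adds both uplinks; vSwitch1, vmnic2 and vmnic3 are placeholder names, not values from the original environment:

# Create a dedicated standard vSwitch for the direct iSCSI links
esxcli network vswitch standard add --vswitch-name=vSwitch1

# Add both physical NICs as uplinks: one cabled to SP A, the other to SP B
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch1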

What we found is that during user-initiated reboots the storage system fails over all resources of the rebooted SP (IP, MAC, LUN…), while the SP's Ethernet port itself stays link-up at the BIOS level. To ensure correct NIC team failover behavior, a specific “Network Failover Detection” mode must therefore be set at the ESXi level.

The exact vSwitch and VMkernel configurations are shown below.

The “Beacon Probing” mode verifies whether a link is operational by sending out probe packets and expecting them to be received on the team's other uplinks. In our case it detects that the NIC facing the failed SP is only “semi” alive and sends data packets via the second, standby link.
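Continuing the sketch above (same placeholder names), the failover detection mode of the vSwitch could be switched from the default link-status-only detection to beacon probing, with one uplink active and the other standby as described:

# Use beacon probing instead of link status for failure detection; active/standby assignment is a placeholder
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --failure-detection=beacon --active-uplinks=vmnic2 --standby-uplinks=vmnic3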

So what is beacon probing?

Beacon probing is a network failover detection mechanism that sends out and listens for beacon probes on all NICs in the team and uses this information along with link status to determine link failure. Beacon probing detects failures, such as cable pulls and physical switch power failures on the immediate physical switch and also on the downstream switches.
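In the sketched configuration, the effective policy can be checked afterwards; the output should report beacon probing as the failure detection mechanism:

# Show load balancing, failure detection and active/standby uplinks of the vSwitch
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1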

More information can be found in the following KB article:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005577

ESXi Network Configuration

vSwitch configuration
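As a minimal sketch of the port group side of such a vSwitch (iSCSI-VMkernel is a placeholder name; the actual layout may differ):

# Port group that will carry the iSCSI VMkernel interface
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-VMkernel --vswitch-name=vSwitch1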

VMkernel configuration
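Again as a sketch with assumed names (vmk1, iSCSI-VMkernel, vmhba33 and the IP addresses are placeholders), the VMkernel interface and the software iSCSI adapter could be set up like this:

# VMkernel interface for iSCSI traffic on the port group created above
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-VMkernel
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static

# Enable the software iSCSI adapter and point it at the VNXe iSCSI server
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.20:3260

# (Optional) bind the VMkernel port to the software iSCSI adapter if port binding is used
esxcli iscsi networkportal add --nic=vmk1 --adapter=vmhba33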

The described configuration was tested in several scenarios:

· Physical Network Link disconnection;

· Active SP reboot;

· Active SP pull-out;

All of the above tests passed successfully and proved the stability of the configuration in each scenario.
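For reference, a hedged sketch of shell commands that can be used to watch link and path state while running such tests (vmhba33 again being a placeholder for the software iSCSI adapter):

# Physical link status of the uplinks during cable-pull and SP tests
esxcli network nic list

# iSCSI sessions and storage paths after a failover event
esxcli iscsi session list --adapter=vmhba33
esxcli storage core path list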

Comments

  1. Hi Genady.

    I like this solution very much for our remote offices. I think it could be a cheap and easy solution.

    I would like to test this solution, but I have some doubts.

    Is it necessary to apply the port binding technique to every NIC?

    Is it possible to have two VMFS volumes, one on SP-A and the other on SP-B? Does the VNXe work active-active?

    Could this solution use load balancing? I don’t think so. The performance of a solution with a switch and load balancing would be much better. Have you compared the performance with a switch-based solution?

    Thank you very much.

    Good work.
