It helps when it comes time to configure, and goes a long way toward reducing configuration mistakes.
I quickly jotted down the IP configuration for the various ports on each controller, the IPs I was going to assign to the NICs on the servers, and drew out a quick diagram of how things would connect.
When you do this and configure all the software properly (VMware in my case), you can create a configuration that provides both load balancing and fault tolerance.
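To make the load-balancing-plus-failover idea concrete, here's a minimal Python sketch of round-robin path selection across two controllers. The class and path names are my own illustration, not the MSA 2040's or VMware's actual API; in practice the hypervisor's multipathing layer does this for you.

```python
from itertools import cycle


class MultipathIO:
    """Distribute I/O round-robin across paths; skip any failed path."""

    def __init__(self, paths):
        self.paths = list(paths)      # e.g. one path per controller
        self.failed = set()
        self._ring = cycle(self.paths)

    def fail(self, path):
        self.failed.add(path)

    def next_path(self):
        # Walk the ring until a healthy path is found.
        for _ in range(len(self.paths)):
            p = next(self._ring)
            if p not in self.failed:
                return p
        raise RuntimeError("no healthy paths remain")


mp = MultipathIO(["ControllerA", "ControllerB"])
print([mp.next_path() for _ in range(4)])  # alternates A, B, A, B
mp.fail("ControllerA")                     # simulate a controller loss
print([mp.next_path() for _ in range(2)])  # all I/O now goes to B
```

Under normal operation I/O alternates between controllers (load balancing); when one path is marked failed, everything flows to the survivor (fault tolerance).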
Keep in mind that in the Active/Active design of the MSA 2040, each controller has ownership of its own configured vdisks.
This includes time, management port settings, unit names, friendly names, and, most importantly, host connection settings.
I configured all the host ports for iSCSI and set the applicable IP addresses from the SAN topology document I described above.
Even though we have the four 1Gb iSCSI modules, we aren’t using them to connect to the SAN. To connect the SAN to the servers, we purchased 2 X HPE Dual Port 10Gb Server SFP NICs, one for each server.
The SAN will connect to each server with 2 X 10Gb DAC cables, one going to Controller A, and one going to Controller B.
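The cabling plan above can be captured as data and sanity-checked before touching the hardware. This is a sketch of that idea; every IP address and host name here is a placeholder I made up for illustration, not the real plan from my topology document.

```python
import ipaddress

# server -> list of (controller, controller port IP) DAC connections.
# Hypothetical values for illustration only.
topology = {
    "esxi-host1": [("ControllerA", "10.0.10.1"), ("ControllerB", "10.0.11.1")],
    "esxi-host2": [("ControllerA", "10.0.10.2"), ("ControllerB", "10.0.11.2")],
}


def validate(topo):
    """Return a list of problems; empty means the plan looks sane."""
    problems = []
    seen = set()
    for host, links in topo.items():
        controllers = {c for c, _ in links}
        if controllers != {"ControllerA", "ControllerB"}:
            problems.append(f"{host}: missing a path to one controller")
        for _, ip in links:
            addr = ipaddress.ip_address(ip)  # raises on a typo'd address
            if addr in seen:
                problems.append(f"{host}: duplicate IP {ip}")
            seen.add(addr)
    return problems


print(validate(topology))  # → [] when every host reaches both controllers
```

A few lines like this catch the classic mistakes (a host cabled to only one controller, a fat-fingered or duplicated IP) before they become a troubleshooting session.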
If you have a multipath configuration where your hosts use both controllers, all I/O is passed to the partner controller while the firmware update takes place.
When that update completes, I/O resumes on the updated controller and the firmware update then moves on to the other controller.
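The rolling update described above can be sketched conceptually: each controller is updated in turn while its partner carries the I/O. This only models the sequence; the real process is driven by the array's firmware, not by anything you script.

```python
def rolling_update(controllers):
    """Return (updating, serving_io) steps, one per controller."""
    steps = []
    for target in controllers:
        partners = [c for c in controllers if c != target]
        steps.append((target, partners))  # I/O fails over to the partner(s)
    return steps


for updating, serving in rolling_update(["ControllerA", "ControllerB"]):
    print(f"updating {updating}; I/O served by {', '.join(serving)}")
# updating ControllerA; I/O served by ControllerB
# updating ControllerB; I/O served by ControllerA
```

The key point, and the reason a proper multipath setup matters, is that at every step at least one controller is still serving I/O, so hosts never lose access to their storage during the update.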