Open Dell Storage Manager, go to Storage & Copy Services, right-click the Storage Array name and click Create Disk Group. Give the Disk Group a name, select the RAID type (RAID 5 in my case) and the number of disks you want to use for the Disk Group. You can create multiple Disk Groups if you wish and carve the SAN up however you like, depending on requirements. This box is just for DR and I hope it will never get used, so I will have one big RAID 5 Disk Group. Once complete you then need to create a Virtual Disk: right-click the Free Capacity under Disk Groups and select Create Virtual Disk.
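When sizing the Disk Group it helps to remember that RAID 5 sacrifices one disk's worth of capacity to parity. A minimal sketch of the arithmetic (the 12 x 2 TB disk counts below are hypothetical examples, not my array):

```python
def raid5_usable_tb(disk_count, disk_size_tb):
    """Usable capacity of a RAID 5 group: parity consumes one disk's worth."""
    if disk_count < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return (disk_count - 1) * disk_size_tb

# Hypothetical example: 12 x 2 TB disks in one big RAID 5 Disk Group
print(raid5_usable_tb(12, 2))  # -> 22 (TB usable)
```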
Choose the size and give the Virtual Disk a name. In the picture above you can see I have already created a Virtual Disk called “VD-CAR-HEM-10”.
You will be able to see the initialization progress at the bottom!
Go to the Host Mappings tab, right-click Default Group and select Define New Host.
Give the host a name (I suggest using the same name as the ESXi host, e.g. CARESXi03).
You will then be able to select the host under Known Hosts (it should already be discovered) and create a custom label (again I used the same name as the host, CARESXi03).
Press Next, then Select OS (Windows)
Then press Yes – this host will share access (needed for vMotion; the only time you don't want shared access is when booting from SAN, as the host must then have sole access to the LUN).
Enter a name for the group that will contain the host (you can assign multiple hosts to one group).
Right Click on Host and click “Add LUN Mapping…”
See the settings below (pretty self-explanatory – notice why I put a 10 in the Virtual Disk name, so it matches LUN ID 10 🙂 ).
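The naming convention can be expressed as a tiny helper, purely illustrative: it just parses the trailing number out of the Virtual Disk name so it can be checked against the LUN ID you assign:

```python
def lun_id_from_vd_name(name):
    """Pull the trailing number from a Virtual Disk name, e.g. 'VD-CAR-HEM-10' -> 10."""
    return int(name.rsplit("-", 1)[-1])

print(lun_id_from_vd_name("VD-CAR-HEM-10"))  # -> 10
```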
Go back to vSphere, open Storage and click “Rescan All”.
Then go to “Add Storage”, select Disk/LUN, and you should see the LUN and host mapping you created in the list:
Format it as VMFS-5 and enter a datastore name.
Once complete it will show under Datastores, and to confirm all 8 paths to the LUN you can go to Storage Adapters.
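Where the 8 paths come from: the multipath layer sees one path per initiator/target pair. Assuming two iSCSI ports on the host and four target ports on the array (my assumed topology here; adjust the numbers to match your cabling), that gives 2 x 4 = 8:

```python
def iscsi_path_count(host_initiator_ports, array_target_ports):
    """One path per (host initiator port, array target port) pair."""
    return host_initiator_ports * array_target_ports

# Assumed topology: 2 host iSCSI ports x 4 array target ports
print(iscsi_path_count(2, 4))  # -> 8 paths
```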
Run some tests and pull some cables to see what happens!
You just need to repeat this process on Host#2.
NOTE: you need to enable jumbo frames (MTU 9000) on the iSCSI switches as well!
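A quick way to verify jumbo frames end-to-end is a don't-fragment ping with the largest payload that fits in a 9000-byte MTU: the IPv4 header (20 bytes) and ICMP header (8 bytes) leave 8972 bytes of payload. On ESXi something like `vmkping -d -s 8972 <target>` should do it, but check the syntax on your version:

```python
def max_icmp_payload(mtu=9000, ipv4_header=20, icmp_header=8):
    """Largest unfragmented ICMP payload for a given MTU."""
    return mtu - ipv4_header - icmp_header

print(max_icmp_payload())      # -> 8972 (jumbo frames)
print(max_icmp_payload(1500))  # -> 1472 (standard MTU, for comparison)
```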
Now you just need to configure the Intel NIC for management and normal server traffic.
I like to use two ports for management and two for network traffic, but you can do this any way you like.
I like to configure the management traffic in an Active/Standby configuration, with one cable plugged into the primary switch and the second into the secondary switch for redundancy.
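What Active/Standby gives you, sketched as a toy failover-order function (the vmnic names are placeholders): traffic stays on the active uplink until its link drops, then moves to the standby:

```python
def current_uplink(failover_order, link_up):
    """Return the first uplink in failover order whose link is up."""
    for nic in failover_order:
        if link_up.get(nic):
            return nic
    return None  # all uplinks down

order = ["vmnic0", "vmnic1"]  # vmnic0 active, vmnic1 standby (placeholder names)
print(current_uplink(order, {"vmnic0": True, "vmnic1": True}))   # -> vmnic0
print(current_uplink(order, {"vmnic0": False, "vmnic1": True}))  # -> vmnic1 (failed over)
```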
To configure the above, open vSphere, select the host and go to Configuration then Networking.
Click Properties on the vSwitch that carries the Management Network (in my case vSwitch0).
Click Network Adapters and add the Second NIC:
Go back to Ports and click Edit on the vSwitch, as you need to change the following on both:
Set One of the Adapters as Standby:
NOTE: If you wanted to have a separate Management subnet you can do this and run it under the same virtual switch like this:
Now that's configured, try pulling one of the cables and you should see 5-6 dropped ping responses before the network picks up again.
Next, configure a new vSwitch for LAN traffic and do exactly the same as above, except leave the adapters as Active/Active. Instead of selecting VMkernel, select “Virtual Machine” after you click “Add”.
Your LAN will look like this (ignore the unplugged-cable icon):