UCS FCoE End to End (For real this time)

I’m a big fan of FCoE in converged environments, especially with Cisco UCS. I’ve previously written setup articles for FCoE End-to-End with Cisco UCS (Part 1, Part 2, Part 3); however, in that case one piece of the solution still had to be “pure” FC: the link between the UCS Fabric Interconnects and the upstream switches (Cisco Nexus in my examples). With Cisco UCS firmware 2.1, we can now eliminate that link and run FCoE all the way.

If Google results are to be believed, very few people are doing this in production. I wanted to see how feasible (or difficult) it would be to implement on an existing production setup. The only “real” experience I could find was a few posts on the Cisco support forums where a user reported extreme performance problems with the solution, so that was something I planned to test heavily once I had it set up. Here is the story of getting this configured.

Existing Environment

Below is a simplified version of the live setup before any changes were made:

[Diagram: FCoE end-to-end topology before the change]

Blue links are 10Gb Ethernet (Twinax cabling); green links are 4Gb FC (over fiber cabling). With the exception of the hop between the FIs and the Nexus switches, FC runs encapsulated as FCoE over the same cabling that carries the other network traffic; on that hop, storage still requires the dedicated FC links.

Caveats

According to the Cisco docs, you cannot configure ports on 6100 series Fabric Interconnects as Unified Ports (only as “Server Uplink” or “FCoE Uplink”); Unified Ports require 6200 series FIs. This doesn’t make a lot of sense to me; since the entire UCS architecture is built on the concept of unified fabric, I don’t understand why this restriction exists. Sure enough, the GUI will not let you create Unified port uplinks on a 6100 series (I actually think I discovered a GUI bug that allows it, but for the purposes of this post I’m going to stick to what’s supported).

This means that if you’re running 6100 series FIs, you can still run FCoE, but you will need separate uplinks for network and storage (much like we had before with separate network and FC cables). It’s still an interesting exercise to go through. For this post I am limited by this caveat, since 6100 series gear is what I have access to at the moment.

Cabling

First I ran two 10Gb Twinax cables from two unused ports on each of my FIs to two unused ports on my Nexus switches. I used ports Eth2/3-4 on the FI expansion modules I happen to have, and ports Eth1/19-20 on my Nexus switches.

Nexus Configuration

I created a port channel for the new connections. In this setup, I use the already-configured VLAN 3210 (Fabric A) and 3211 (Fabric B) for my SAN A/B channels, with VSANs 3210 and 3211 mapped to those VLANs. Since this connection will carry only storage traffic, I limit the allowed list to these VLANs. If you were configuring this channel as a Unified channel carrying network traffic as well, you would simply expand your allowed VLAN list appropriately. The switch configuration is simple.

Of note, you will need to define a Virtual Fibre Channel (VFC) port to serve as the FC object running over the Ethernet port channel. As I was using Eth1/19 and 1/20 in my setup, I created vfc1920 to keep things straight in my mind. You need to explicitly add a “no shutdown” to it (as with all FC ports on Nexus switches, it is shut down by default).
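For reference, the FCoE VLAN-to-VSAN mapping was already in place on my switches from the original FCoE setup. Here is a minimal sketch of that prerequisite on Switch A (Switch B is identical with 3211); the vsan database membership for the vfc, added after the vfc below is created, is what keeps it off the default VSAN, so add it if yours isn’t already a member:

vlan 3210
  fcoe vsan 3210

vsan database
  vsan 3210
  vsan 3210 interface vfc1920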

Switch A

interface port-channel202
  description FCoE Downlink to UCS Fabric A
  switchport mode trunk
  switchport trunk allowed vlan 3210
 
interface Ethernet1/19
  description Downlink to UCS Fabric A
  switchport mode trunk
  switchport trunk allowed vlan 3210
  channel-group 202 mode active
 
interface Ethernet1/20
  description Downlink to UCS Fabric A
  switchport mode trunk
  switchport trunk allowed vlan 3210
  channel-group 202 mode active
 
interface vfc1920
  bind interface port-channel202
  no shutdown

Switch B

interface port-channel202
  description FCoE Downlink to UCS Fabric B
  switchport mode trunk
  switchport trunk allowed vlan 3211
 
interface Ethernet1/19
  description Downlink to UCS Fabric B
  switchport mode trunk
  switchport trunk allowed vlan 3211
  channel-group 202 mode active
 
interface Ethernet1/20
  description Downlink to UCS Fabric B
  switchport mode trunk
  switchport trunk allowed vlan 3211
  channel-group 202 mode active
 
interface vfc1920
  bind interface port-channel202
  no shutdown

UCS Configuration

First I created an FCoE Port channel from the SAN tab.

[Screenshot: creating the FCoE port channel from the SAN tab in UCS Manager]

A very important point, which isn’t obvious from the GUI (and the source of the bug I mentioned earlier): there is only one table of port channels on each UCS fabric; there isn’t a separate list for FCoE and network. Some of you may say “duh,” but it’s really not obvious in the GUI. On a 6100 series, if you create an FCoE port channel using the ID of a network port channel that already exists, you will end up with a “Unified” channel containing both the new and old links (and likely disrupt traffic in the process). I did this by accident once and ended up with Unified ports on my 6120s. (I undid it, but I want to go back sometime to see if I can make that work.)

I set the ID to match the number I used on the switch side (202) and selected the ports I wanted to add to the channel (2/3 and 2/4). This process automatically changes the type of the selected ports to “FCoE Uplink Port”.

The only other config step for the port channel itself is to select it on the left and set the VSAN identifier on the right (and click Save Changes). At this point you can see the ports and the FCoE state as UP on the UCS side, and similarly on the Nexus side. You should also be able to run show flogi database on the Nexus switches to see a login from your vfc port.
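A few standard NX-OS show commands are enough to confirm this from the switch side (the interface numbers here match my setup; adjust for yours):

show interface port-channel 202
show interface vfc1920
show flogi database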

One item of note: I saw a fault on this interface stating that the FCoE uplink was down for the default VSAN (2 in my case). This can be ignored, although I’m not currently aware of a way to make the error go away.

Flipping Service from FC to FCoE

After following these steps, I had multiple upstream storage paths that FC traffic could pin to, but all existing traffic was still flowing over the FC cables. Obviously I wanted to see some traffic over the new links, and I definitely wanted to see whether I would run into the same performance issues as my colleague from the Cisco forums.

The first thing I did was create a pair of SAN pin groups for Fabric A/B that point specifically at the FCoE links. Modifying an existing server’s service profile, I set the vHBAs to use these pin groups. My first test was a vSphere 5.1 host that I had placed into Maintenance Mode beforehand (just in case there were fireworks). After applying the settings, nothing appeared to happen on the vSphere side (a good sign!), and I was able to verify on the Nexus switches (via show flogi database) that the server was now sending traffic over the vfc link.

Since I was performing this work on a production system, I went to the extra trouble of doing this one server at a time. It’s never a bad thing to be overly cautious with production setups, but this was overkill. Later on I performed the same steps on a similar setup and was able to flip service over by simply shutting down the old FC uplinks. UCS, true to its nature, seamlessly re-pinned all upstream FC traffic to the new links without a hiccup.
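For that wholesale cutover, the change on the switch side amounts to shutting down the old dedicated FC uplink ports and letting UCS re-pin. A sketch of what that looks like on the Nexus; the port numbers are hypothetical, since the old FC uplinks aren’t shown in the configs above:

interface fc2/1-2
  shutdown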

The Big Worry: Performance?

I’m happy to report that performance has been excellent for me since the switch. I haven’t seen any of the slowdowns reported in the Cisco support forum thread (there, the complaint was that booting a VM over FCoE took more than 20 minutes, while on “regular” FC it finished in seconds). I see the same performance over both types of links.

Conclusion

Basically, it works. If you have a converged setup everywhere else in your UCS config, it’s nice to get rid of the non-converged links between the UCS FIs and the upstream switches. Some may not feel it’s worth the effort on 6100 series systems (because of the inability to officially use “Unified” uplinks), but if nothing else, you will get to tell people you’re running FCoE end to end (which always gets storage people interested :) ).

