What is a Storage Area Network Host and How Does It Work?

The Storage Area Network (SAN) host is the server that actually consumes capacity from the SAN storage system. You can present a virtual disk to the host server, assign a drive letter to it, and format it with a file system appropriate to the host's operating system.

The goal of connecting a host to a SAN is to reach the LUNs defined on the storage array. In a SAN infrastructure, the host always plays the role of the initiator.

To communicate with the storage array, the fabric must be configured so that the host's HBA ports belong to the proper zones. Additionally, if the storage array supports Selective Storage Presentation, the host must be explicitly allowed to access the LUN on the storage array.
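
To make the two checks concrete, here is a minimal Python sketch of the logic, assuming hypothetical zone names, WWPNs, and LUN IDs (this is not any switch or array vendor's actual interface): the host WWPN must share a zone with the array port and must appear in the LUN's presentation list.

```python
# Hypothetical sketch of the two access controls a host must pass to reach a
# LUN: fabric zoning (switch side) and Selective Storage Presentation / LUN
# masking (array side). All WWPNs and names below are made up.

zones = {
    "zone_host1_arrayA": {"10:00:00:00:c9:aa:bb:01",   # host HBA port
                          "50:01:43:80:11:22:33:44"},  # array target port
}

# LUN masking table on the array: LUN id -> WWPNs allowed to see it
lun_presentation = {
    "LUN_0001": {"10:00:00:00:c9:aa:bb:01"},
}

def host_can_reach_lun(host_wwpn, array_wwpn, lun_id):
    zoned = any(host_wwpn in members and array_wwpn in members
                for members in zones.values())
    presented = host_wwpn in lun_presentation.get(lun_id, set())
    return zoned and presented

print(host_can_reach_lun("10:00:00:00:c9:aa:bb:01",
                         "50:01:43:80:11:22:33:44", "LUN_0001"))  # True
```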

How does the host communicate on Fibre Channel?

For hosts to communicate within a SAN, they require a Fibre Channel host bus adapter (HBA). HBAs are available for all major operating systems and hardware architectures.

  • To communicate with Fibre Channel infrastructure, the host requires a host bus adapter (HBA)
  • Each HBA port physically connects to the fabric and becomes visible to the SAN
  • Port behavior depends on the HBA type and driver configuration, and on the configuration of the fabric port

Converged Network Adapter

A converged network adapter (CNA) combines a traditional HBA used in storage networks, a NIC used in Ethernet networks, and two protocols: Fibre Channel over Ethernet (FCoE) and Converged Enhanced Ethernet (CEE).

CNA interfaces are designed to present regular Fibre Channel and NIC interfaces to the host, so regular Fibre Channel and NIC drivers are used. Internally, the CNA uses an FCoE engine to handle traffic; this engine is invisible to the host to which the CNA is connected.

N_Port ID virtualization

What is NPIV?

  • N_Port ID Virtualization (NPIV) is an industry-standard Fibre Channel protocol that provides a means to assign multiple Fibre Channel addresses on the same physical link.
  • NPIV makes a single Fibre Channel port appear as multiple virtual ports, each having its own N_Port ID and virtual WWN.
  • HPE offers an NPIV-based Fibre Channel interconnect option for server blades called Virtual Connect.

NPIV provides a Fibre Channel facility for assigning multiple N_Port IDs to a single N_Port, thereby allowing multiple distinguishable entities on the same physical port. In other words, it makes a single Fibre Channel port appear as multiple virtual ports, each having its own N_Port ID and virtual World Wide Name (WWN).

The NPIV protocol requires an N_Port, which is typically an HBA or any device that acts as an NPIV gateway, and a fabric, which is usually a Fibre Channel switch, so that the N_Port can request and acquire multiple addresses from the fabric.
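
The following Python sketch is a conceptual model of that exchange, not a real driver or switch interface; the WWPNs and the address-assignment scheme are invented. It shows a physical N_Port logging in once and then acquiring additional N_Port IDs, one per virtual WWPN.

```python
# Conceptual model of NPIV: one physical N_Port acquires several N_Port IDs
# from the fabric, one per virtual port / virtual WWPN. All values are
# illustrative, not taken from a real switch.

class Fabric:
    def __init__(self, domain=1, area=0):
        self._next_port = 0
        self.domain, self.area = domain, area
        self.name_server = {}            # N_Port ID -> WWPN

    def assign_n_port_id(self, wwpn):
        nport_id = (self.domain << 16) | (self.area << 8) | self._next_port
        self._next_port += 1
        self.name_server[nport_id] = wwpn
        return nport_id

class PhysicalNPort:
    def __init__(self, fabric, base_wwpn):
        self.fabric = fabric
        # FLOGI: the physical port logs in first with its own WWPN
        self.logins = {base_wwpn: fabric.assign_n_port_id(base_wwpn)}

    def create_virtual_port(self, virtual_wwpn):
        # FDISC: each virtual port requests its own N_Port ID from the fabric
        self.logins[virtual_wwpn] = self.fabric.assign_n_port_id(virtual_wwpn)

fabric = Fabric()
hba = PhysicalNPort(fabric, "10:00:00:00:c9:00:00:01")
hba.create_virtual_port("20:00:00:00:c9:00:00:02")   # e.g. for one VM
hba.create_virtual_port("20:00:00:00:c9:00:00:03")   # e.g. for another VM
for wwpn, nid in hba.logins.items():
    print(f"{wwpn} -> N_Port ID 0x{nid:06x}")
```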

NPIV

NPIV allows a single HBA port (an “N_Port”), or a target port on a storage array, to register multiple World Wide Port Names (WWPNs) and N_Port IDs. This enables each virtual server to present a different World Wide Name to the SAN, which in turn means that each virtual server sees its own storage but no storage belonging to other virtual servers.

Server Virtualization with NPIV

NPIV allows multiple virtual operating system instances on the same physical machine to have individual World Wide Port Names. This means they can be treated as discrete entities by the network devices. In other words, the virtual machines can share a single HBA and switch port while receiving individualized network services such as zoning.

The HBA NPIV implementation virtualizes the physical adapter port, so a single physical Fibre Channel adapter port can function as multiple logical ports. In this implementation, each physical port can support up to 256 virtual ports.

NPIV I/O virtualization enables storage administrators to deploy virtual servers with virtual adapter technologies, creating virtual machines that are more secure and easier to manage.

HPE Virtual Connect Fibre Channel

HPE Virtual Connect is a set of interconnect modules and embedded software for HPE BladeSystem c-Class enclosures that simplifies the setup and administration of server connections. HPE offers the Virtual Connect 4-gigabit and 8-gigabit Fibre Channel Modules, two HPE Virtual Connect 1/10-gigabit Ethernet modules, the Virtual Connect Flex-10 10-gigabit Ethernet Module, and for management, HPE Virtual Connect Manager and HPE Virtual Connect Enterprise Manager.

Although Virtual Connect uses the standard HBAs within the server, it uses a new class of NPIV-based Fibre Channel interconnect modules to simplify the connection of those server HBAs to the data center environment.

Virtual Connect also extends the capability of the standard server HBAs by providing support for securely administering their Fibre Channel WWN addresses.

HPE Virtual Connect FlexFabric

  • Up to four physical functions for each server blade adapter network port
  • The physical function corresponds to the HBA
  • Four physical functions share the 10 Gb link
  • One of the four physical functions can be defined as the Fibre Channel HBA, and the remaining three act as NICs
  • Each physical function has full hardware-level performance, and the bandwidth can be fine-tuned to adapt quickly to virtual server workload demands

Virtual Connect FlexFabric provides up to four physical functions for each blade-server-adapter network port, with the unique ability to fine-tune the bandwidth to adapt to virtual server workload demands quickly.

The system administrator can define all four connections as FlexNICs to support only Ethernet traffic, like with Virtual Connect.

Additionally, one of the physical functions can also be defined as a FlexHBA for Fibre Channel protocol support or as an iSCSI initiator for iSCSI boot protocol support. Each function has complete hardware-level performance and provides the I/O performance needed to take full advantage of multicore processors and to support more virtual machines per physical server.
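
Purely to illustrate the partitioning idea, here is a tiny Python sketch with invented bandwidth figures (this is not Virtual Connect configuration syntax): one FlexHBA and three FlexNICs dividing a 10 Gb link.

```python
# Invented example: partition a 10 Gb link across four physical functions,
# one FlexHBA and three FlexNICs. Not actual Virtual Connect configuration.
functions = {
    "FlexHBA (FC)": 4.0,   # Gb allocated to Fibre Channel traffic
    "FlexNIC 1":    3.0,
    "FlexNIC 2":    2.0,
    "FlexNIC 3":    1.0,
}
assert sum(functions.values()) <= 10.0   # allocations must fit the 10 Gb link
for name, gb in functions.items():
    print(f"{name}: {gb} Gb")
```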

What is Boot from SAN?

The process of booting a server using external storage devices over a SAN

  • Used for server and storage consolidation
  • Minimizes server maintenance and reduces backup time
  • Allows for rapid infrastructure changes

The process of loading installed operating system code from a storage device to the computer memory when the computer is powered on is referred to as the “boot process.” Typically, HPE ProLiant servers boot operating systems from internal SCSI, IDE, SATA, and SAS storage devices.

However, when you boot the operating system from external storage, such as a RAID array reached through a Fibre Channel HBA over a SAN, instead of from server-based internal boot devices, the boot process is referred to as “Boot from SAN.”

Multipath Concept

  • Multipath I/O (MPIO) provides automatic path failover between the server and the disk arrays
  • Some multipath solutions provide load balancing over multiple HBA paths

A redundant SAN design will present your host with multiple paths to the same LUN. Without multipath software, a server would see all of the paths to the LUN defined on the storage array, but it would not understand that the multiple paths lead to a single LUN. With four paths, for example, the server would show four distinct LUNs instead of a single LUN reachable through multiple paths.

A multipath driver enables the server to recognize that multiple paths lead to the same physical device, so the host correctly presents the LUN as a single device.
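
The grouping a multipath driver performs can be sketched in a few lines of Python; the device names and WWIDs below are made up, and a real driver works at the SCSI layer rather than on strings.

```python
# Illustrative only: group paths by the LUN's unique identifier (WWID),
# the way a multipath driver would, so four paths appear as one device.
from collections import defaultdict

paths = [  # (host-side device, WWID reported by the LUN) - values are made up
    ("sdb", "360002ac000000000000000010001"),
    ("sdc", "360002ac000000000000000010001"),
    ("sdd", "360002ac000000000000000010001"),
    ("sde", "360002ac000000000000000010001"),
]

luns = defaultdict(list)
for dev, wwid in paths:
    luns[wwid].append(dev)

for wwid, devs in luns.items():
    print(f"one multipath device for WWID {wwid}: paths {devs}")
```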

What is Path Failover?

  • Failover is handled by MPIO and is supported via services, drivers, and agents
  • It is transparent to the applications
  • The administrator has to configure the primary and alternate paths

One of the benefits of MPIO is support for automatic path failover, which is initiated when one of the data paths fails.

Changes in a SAN configuration are detected by the drivers, services, and agents that are part of the MPIO solution. I/O requests that were using the failed path are redirected to the remaining functioning paths.

The whole procedure is transparent to the application running on the affected host, and all events are logged to the system event database.
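
A minimal failover sketch, assuming a simple active/standby policy with hypothetical path functions (real MPIO drivers track path health and state far more carefully): I/O is retried on the next configured path when the active one fails, without the caller noticing.

```python
# Minimal active/standby failover sketch; path states and error handling are
# drastically simplified compared with a real MPIO driver.
class PathFailedError(Exception):
    pass

class MultipathDevice:
    def __init__(self, paths):
        self.paths = list(paths)   # ordered: primary first, then alternates
        self.active = 0

    def submit_io(self, request):
        for _ in range(len(self.paths)):
            path = self.paths[self.active]
            try:
                return path(request)          # send I/O down the active path
            except PathFailedError:
                # log the event and fail over to the next configured path
                print(f"path {self.active} failed, failing over")
                self.active = (self.active + 1) % len(self.paths)
        raise IOError("all paths failed")

def good_path(req):   return f"completed {req}"
def broken_path(req): raise PathFailedError

dev = MultipathDevice([broken_path, good_path])
print(dev.submit_io("read block 42"))   # transparently retried on path 1
```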

What is Load Balancing?

MPIO load balancing spreads I/O across all installed HBA ports in a server to increase throughput and HBA utilization. You can configure different load-balancing policies.

The availability of these options depends on the multipath software and hardware. Generally, four modes are supported: round robin, least I/O, least bandwidth, and shortest queue; a sketch of two of these follows the list below.

  • MPIO provides load balancing across all installed HBAs (ports) in a server
  • There are various load-balancing policies, depending on the multipath software:
  • Round robin
  • Least I/O
  • Least bandwidth
  • Shortest queue (requests, bytes, service time)
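
As a sketch of how two of these policies choose a path, assuming hypothetical path names and queue depths (a real device-specific module tracks live per-path statistics):

```python
# Illustrative path-selection policies; the paths and outstanding-request
# counts below are hypothetical.
import itertools

paths = ["hba0:port0", "hba0:port1", "hba1:port0", "hba1:port1"]
outstanding = {"hba0:port0": 3, "hba0:port1": 1,
               "hba1:port0": 5, "hba1:port1": 2}

# Round robin: rotate through all healthy paths in turn
rr = itertools.cycle(paths)
print([next(rr) for _ in range(6)])

# Least I/O / shortest queue: pick the path with the fewest outstanding requests
def least_io(paths, outstanding):
    return min(paths, key=lambda p: outstanding[p])

print(least_io(paths, outstanding))   # -> 'hba0:port1'
```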

MPIO solutions consist of two components: drivers developed by Microsoft, and device-specific modules (DSMs) developed by storage vendors to Microsoft standards.

MPIO uses redundant physical path components to eliminate single points of failure between servers and storage. It increases data reliability and availability, reduces bottlenecks, and provides fault tolerance and automatic load balancing of I/O traffic.

Although multipathing and clustering both provide high availability, multipathing by itself does not protect against server hardware or software failures; it only ensures redundancy of the cabling, adapters, and switches along the path, which is what the Microsoft multipathing software natively covers.

Fibre Channel advanced features

Now we will look at some of the advanced features you might find in Fibre Channel environments.

Each port in the switched fabric has its own unique 24-bit address. This 24-bit addressing scheme allows for a smaller frame header, which can speed up the routing process. The compact frame header and routing logic optimize the Fibre Channel fabric for high-speed switching of frames.

The 24-bit addressing scheme also allows for up to 16 million addresses, an address space larger than any practical SAN design in existence today. The 24-bit address has to be mapped to the 64-bit World Wide Name address associated with each device.

Fibre Channel name and address

  • 24-bit addresses are automatically assigned by the topology to remove the overhead of manual administration
  • Unlike the WWN addresses, port addresses are not built-in
  • The switch is responsible for assigning and maintaining the port addresses
  • The switch maintains the correlation between the port address and the WWN address of the device on that port
  • The Name server is a component of the fabric operating system running on the switch

The 24-bit address scheme also removes the overhead of manually administering addresses because it allows the topology itself to assign them. This is unlike World Wide Name addressing, in which the addresses are assigned to the manufacturers by the Institute of Electrical and Electronics Engineers (IEEE) standards committee and then built into the device, like naming a child at birth.

If the topology itself assigns the 24-bit addresses, then something in the fabric has to be responsible for maintaining the mapping from WWN addresses to port addresses.

In the switched fabric environment, the switch itself is responsible for assigning and maintaining the port addresses. When a device with its WWN logs in to the switch on a specific port, the switch assigns the port address to that port, and the switch also maintains the correlation between the port address and the WWN address of the device on that port. This function of the switch is implemented by using a Name server.

The Name server is a component of the fabric operating system, and it runs inside the switch. It is essentially a database in which a fabric-attached device registers its values.

Other benefits of dynamic addressing are that it removes the potential element of human error in address maintenance and it provides more flexibility in additions, moves, and changes in the SAN.

Fibre Channel port address (1)

A 24-bit port address consists of three parts:

  • The domain consists of bits 23 to 16.
  • The area consists of bits 15 to 08.
  • The port, or arbitrated loop physical address (AL_PA), consists of bits 07 to 00 (see the sketch below).
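
A small Python sketch of that layout, using an arbitrary example value, packs and unpacks the domain, area, and port/AL_PA fields with bit operations:

```python
# Pack and unpack the three fields of a 24-bit Fibre Channel port address.
# The example value is arbitrary.

def make_fcid(domain, area, port):
    return (domain << 16) | (area << 8) | port

def split_fcid(fcid):
    return (fcid >> 16) & 0xFF, (fcid >> 8) & 0xFF, fcid & 0xFF

fcid = make_fcid(domain=0x01, area=0x2A, port=0x0F)
print(f"FCID = 0x{fcid:06x}")   # FCID = 0x012a0f
print(split_fcid(fcid))          # (1, 42, 15)
```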

Fibre Channel port address (2)

What is the significance of each part of the port address?

The domain is the most significant byte of the port address. It is the address of the switch itself. One byte allows for up to 256 possible addresses, but because some of these are reserved (like the one for broadcast), only 239 addresses are actually available.

This means that you can have as many as 239 switches in your SAN environment. If you have multiple interconnected switches in your environment, the domain number allows each switch to have a unique identifier.

The area field provides 256 addresses. This part of the address identifies the individual FL_Ports that support loops, or it can be used as the identifier for a group of F_Ports, for example, a card with multiple ports on it. This means that each group of ports has a different area number, even if there is only one port in the group.

The final part of the address identifies the attached N_Ports and NL_Ports. It provides for 256 addresses.

To determine the number of available addresses, multiply the numbers of domains, areas, and ports: 239 domains × 256 areas × 256 ports = 15,663,104 addresses.
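
The same arithmetic as a one-line check in Python:

```python
# 239 usable domains x 256 areas x 256 ports/AL_PAs per area
print(239 * 256 * 256)   # 15663104
```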

Simple Name Server

  • The Name server stores information about all of the devices in the fabric
  • An instance of the Name server runs on every Fibre Channel switch in a SAN
  • A switch service that stores names, addresses, and attributes for up to 15 minutes and provides them as required to other devices in the fabric

When you are connecting a Fibre Channel device to a Fibre Channel switch, that device must register itself with that switch. This registration includes host and storage identifiers such as the device network address and a World Wide Name.

On top of this, communication parameters are also exchanged. The Fibre Channel device registers itself with a Simple Name Server, or SNS, which serves as a database for all Fibre Channel devices attached to the SAN. The Fibre Channel switches perform the SNS function.
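
A toy registry sketch (illustrative only; nothing like a real switch implementation, and the registered values are invented) shows what the SNS essentially does: record each device's assigned address, WWN, and attributes, and answer queries from other members of the fabric.

```python
# Toy Simple Name Server: a dictionary keyed by the assigned port address,
# holding each registered device's WWN and attributes. Values are made up.

class SimpleNameServer:
    def __init__(self):
        self.entries = {}

    def register(self, port_id, wwpn, **attributes):
        self.entries[port_id] = {"wwpn": wwpn, **attributes}

    def query(self, port_id):
        return self.entries.get(port_id)

sns = SimpleNameServer()
sns.register(0x010200, "10:00:00:00:c9:aa:bb:01",
             fc4_type="SCSI-FCP", port_type="N_Port")
print(sns.query(0x010200))
```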

10-bit addressing mode

The number of physical ports on the switch is limited to 256 by the number of bits in the area part of the Fibre Channel address. Director switches such as the Brocade DCX and DCX-4S support Virtual Fabrics, where the number of required ports can easily grow to more than 256.

A 10-bit addressing mode allows support for up to 1,024 F_Ports in a logical switch. This is achieved by borrowing the two most significant bits from the AL_PA field of the Fibre Channel address.

Although this scheme is flexible in supporting a large number of F_Ports, it also reduces the number of NPIV or loop devices supported on a port to 64.
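
A sketch of the bit borrowing, assuming the two most significant AL_PA bits are simply concatenated onto the 8-bit area field to form a 10-bit port index (the exact encoding on a real switch may differ, and the address value is arbitrary):

```python
# 10-bit addressing: the two most significant bits of the AL_PA byte are
# combined with the 8-bit area field, giving a 10-bit port index
# (2**10 = 1024 F_Ports) and leaving 6 bits (64 values) per port.

def ten_bit_port_index(fcid):
    area = (fcid >> 8) & 0xFF
    alpa_high2 = (fcid & 0xFF) >> 6          # the two borrowed bits
    return (area << 2) | alpa_high2

fcid = 0x04C780            # domain 0x04, area 0xC7, AL_PA 0x80 (arbitrary)
print(ten_bit_port_index(fcid))   # 798
print(2 ** 10, 2 ** 6)            # 1024 ports, 64 device addresses per port
```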

Arbitrated loop addressing

Fibre Channel specifies a three-byte field for the address used in routing frames. In an arbitrated loop, only one of the three bytes, containing the least significant eight bits, is used for the arbitrated loop physical address. This address is used in the Source and Destination IDs of the frames transmitted in the loop.

Of the full 24-bit address defined by the Fibre Channel standard, only eight bits are used for the AL_PA. Bits 8 to 23 are used for the FL_Port identifier, and the full 24 bits are used by an N_Port in a fabric switch environment.