
What is a Storage Area Network (SAN) Host and How Does It Work?

The Storage Area Network (SAN) host is the server computer that actually consumes capacity from the SAN storage system. You can mount a virtual drive on the host server, assign a drive letter to it, and format it according to the operating system of the host.


The goal of connecting a host to a SAN is to reach the LUNs defined on the storage array. In a SAN infrastructure, the host always plays the role of the initiator.

To be able to communicate with the storage array, the fabric needs to be configured so that the HBA adapters of the host belong to the proper zones. Additionally, if the storage array supports Selective Storage Presentation, the host must be explicitly allowed to communicate with the LUNs on the storage array.

How does the host communicate on Fibre Channel?

For hosts to communicate within a SAN, they require a Fibre Channel host bus adapter (HBA). HBAs are available for all major operating systems and hardware architectures.

  • To communicate with Fibre Channel infrastructure, the host requires a host bus adapter (HBA)
  • Each HBA port physically connects to the fabric and becomes visible to the SAN
  • Port behavior depends on the HBA driver configuration and type and on the configuration of the fabric port

Converged Network Adapter

A converged network adapter, or CNA, combines a traditional HBA used in storage networks, a NIC used in Ethernet networks, and two protocols: Fibre Channel over Ethernet (FCoE) and Converged Enhanced Ethernet (CEE).

CNA interfaces are designed to present regular Fibre Channel and NIC interfaces to the hosts, so regular Fibre Channel and NIC drivers are used. Internally, a CNA uses an FCoE engine to handle traffic. This FCoE engine is invisible to the host to which the CNA is connected.

N_Port ID virtualization

What is NPIV?

  • N_Port ID Virtualization (NPIV) is an industry-standard Fibre Channel protocol that provides a means to assign multiple Fibre Channel addresses on the same physical link.
  • NPIV makes a single Fibre Channel port appear as multiple virtual ports, each having its own N_Port ID and virtual WWN.
  • HPE offers an NPIV-based Fibre Channel interconnect option for server blades called Virtual Connect.

N_Port ID Virtualization is an industry-standard Fibre Channel protocol that provides a means to assign multiple Fibre Channel addresses on the same physical link.

NPIV provides a Fibre Channel facility for assigning multiple N_Port IDs to a single N_Port, thereby allowing multiple distinguishable entities on the same physical port. In other words, it makes a single Fibre Channel port appear as multiple virtual ports, each having its own N_Port ID and virtual World Wide Name (WWN).

The NPIV protocol requires an N_Port, which is typically an HBA or any device that acts as an NPIV gateway, and a fabric, which is usually a Fibre Channel switch, so that the N_Port can request and acquire multiple addresses from the fabric.

NPIV

NPIV allows a single HBA, called an “N_Port,” to register multiple World Wide Port Names (WWPNs) and N_Port identification numbers.


NPIV allows a single HBA or target port on a storage array to register multiple World Wide Port Names and N_Port IDs. This enables each virtual server to present a different World Wide Name to the SAN, which in turn means that each virtual server will see its own storage, but no storage from other virtual servers.

Server Virtualization with NPIV

NPIV allows multiple virtual operating system instances on the same physical machine to have individual World Wide Port Names. This means they can be treated as discrete entities by the network devices. In other words, the virtual machines can share a single HBA and switch port while receiving individualized network services such as zoning.

The HBA NPIV implementation virtualizes the physical adapter port, so a single physical Fibre Channel adapter port can function as multiple logical ports. In this implementation, each physical port can support up to 256 virtual ports.

NPIV I/O virtualization enables storage administrators to deploy virtual servers with virtual adapter technologies, creating virtual machines that are more secure and easier to manage.
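As a rough illustration of that idea, here is a minimal Python sketch, not any vendor's NPIV implementation, in which several virtual machines share one physical HBA port but each receives its own virtual WWPN. The WWPN values and the class layout are invented for the example; only the 256-virtual-port limit comes from the text above.

```python
# Minimal illustration of the NPIV idea: several VMs share one physical
# HBA port, but each gets its own virtual WWPN so the fabric can zone
# and mask storage per VM. All WWPN values here are made up.

MAX_VPORTS_PER_PHYSICAL_PORT = 256   # limit cited for this HBA implementation

class PhysicalPort:
    def __init__(self, base_wwpn: str):
        self.base_wwpn = base_wwpn
        self.virtual_ports = {}       # vm_name -> virtual WWPN

    def create_vport(self, vm_name: str) -> str:
        if len(self.virtual_ports) >= MAX_VPORTS_PER_PHYSICAL_PORT:
            raise RuntimeError("no free virtual N_Port IDs on this physical port")
        index = len(self.virtual_ports) + 1
        # Hypothetical scheme: vary the last two bytes of an example WWPN.
        vwwpn = "50:01:43:80:aa:bb:{:02x}:{:02x}".format(index >> 8, index & 0xFF)
        self.virtual_ports[vm_name] = vwwpn
        return vwwpn

port = PhysicalPort("50:01:43:80:11:22:33:00")
print(port.create_vport("vm-finance"))   # its own WWPN -> its own zoning
print(port.create_vport("vm-web"))
```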

HPE Virtual Connect Fibre Channel

HPE Virtual Connect is a set of interconnect modules and embedded software for HPE BladeSystem c-Class enclosures that simplifies the setup and administration of server connections. HPE offers the Virtual Connect 4-gigabit and 8-gigabit Fibre Channel Modules, two HPE Virtual Connect 1/10-gigabit Ethernet modules, the Virtual Connect Flex-10 10-gigabit Ethernet Module, and for management, HPE Virtual Connect Manager and HPE Virtual Connect Enterprise Manager.

Although Virtual Connect uses the standard HBAs within the server, it uses a new class of NPIV-based Fibre Channel interconnect modules to simplify the connection of those server HBAs to the data center environment.

Virtual Connect also extends the capability of the standard server HBAs by providing support for securely administering their Fibre Channel WWN addresses.

HPE Virtual Connect FlexFabric

  • Up to four physical functions for each server blade adapter network port
  • The physical function corresponds to the HBA
  • Four physical functions share the 10 Gb link
  • One of the four physical functions can be defined as the Fibre Channel HBA, and the remaining three act as NICs
  • Each physical function has 100% hardware-level performance, but the bandwidth might be fine-tuned to quickly adapt to virtual server workload demands

Virtual Connect FlexFabric provides up to four physical functions for each blade-server-adapter network port, with the unique ability to fine-tune the bandwidth to adapt to virtual server workload demands quickly.

The system administrator can define all four connections as FlexNICs to support only Ethernet traffic, like with Virtual Connect.

Additionally, one of the physical functions can also be defined as a FlexHBA for Fibre Channel protocol support or as an iSCSI initiator for iSCSI boot protocol support. Each function has complete hardware-level performance and provides the I/O performance needed to take full advantage of multicore processors and to support more virtual machines per physical server.

What is Boot from SAN?

The process of booting a server using external storage devices over a SAN

  • Used for server and storage consolidation
  • Minimizes server maintenance and reduces backup time
  • Allows for rapid infrastructure changes

The process of loading installed operating system code from a storage device to the computer memory when the computer is powered on is referred to as the “boot process.” Typically, HPE ProLiant servers boot operating systems from internal SCSI, IDE, SATA, and SAS storage devices.

However, when you boot the operating system using external storage devices such as Fibre Channel HBAs and RAID arrays over a SAN instead of server-based internal boot devices, the boot process is referred to as “Boot from SAN.”

Multipath Concept

  • Multipath I/O (MPIO) provides automatic path failover between the server and the disk arrays
  • Some multipath solutions provide load balancing over multiple HBA paths

A redundant SAN design will present your host with multiple paths to the same LUN. Without multipath software, a server would see all of the paths to the LUN defined on the storage array, but it would not understand that the multiple paths lead to a single LUN. With four paths, for example, the server would show four distinct LUNs instead of a single LUN reachable through multiple paths.

A multipath driver helps the server sense that multiple paths lead to the same physical device, and it enables the host to correctly present the LUN as a single device.
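To picture the coalescing step, here is a short Python sketch that groups discovered paths by the LUN identifier (WWID) they report. The path names and WWID are invented, and real multipath drivers such as MPIO perform this matching in the kernel rather than in user code.

```python
# Group discovered SCSI paths by the LUN identifier (WWID) they lead to,
# so four paths to one LUN are presented as a single multipath device.
from collections import defaultdict

discovered_paths = [
    ("hba0:port0", "wwid-6001438000a1"),
    ("hba0:port1", "wwid-6001438000a1"),
    ("hba1:port0", "wwid-6001438000a1"),
    ("hba1:port1", "wwid-6001438000a1"),
]

luns = defaultdict(list)
for path, wwid in discovered_paths:
    luns[wwid].append(path)

for wwid, paths in luns.items():
    print(f"LUN {wwid}: 1 device, {len(paths)} paths -> {paths}")
```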

What is Path Failover?

Failover is handled by MPIO, and it is supported via services, drivers, and agents

It is transparent to the applications

The administrator has to configure the primary and alternate paths

One of the benefits of MPIO is support for automatic path failover. Automatic path failover is initiated when there is a failure of some of the data paths.

Changes in a SAN configuration are detected by the drivers, services, and agents that are part of the MPIO solution. I/O requests that were using the failed path are redirected to the remaining functioning paths.

The whole procedure is transparent to the application running on the affected host, and all events are logged to the system event database.

What is Load Balancing?

MPIO load balancing works across all installed HBA ports in a server to increase throughput and HBA utilization. You can configure different load-balancing policies.

The availability of these options depends on the multipath software and hardware. Generally, four modes are supported: round robin, least I/O, least bandwidth, and shortest queue. A simplified sketch of these policies appears after the list below.

  • MPIO provides load balancing across all installed HBAs (ports) in a server
  • There are various load-balancing policies, depending on the multipath software:
  • Round robin
  • Least I/O
  • Least bandwidth
  • Shortest queue (requests, bytes, service time)
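The following Python sketch illustrates the selection logic behind those four policies under the simplifying assumption that each path tracks its outstanding I/Os and queued bytes; it is not how any particular multipath product implements them.

```python
# Simplified path-selection policies. Real multipath software implements
# these inside the driver; this only illustrates the selection logic.
import itertools

paths = [
    {"name": "path0", "outstanding_io": 3, "queued_bytes": 64_000},
    {"name": "path1", "outstanding_io": 1, "queued_bytes": 256_000},
    {"name": "path2", "outstanding_io": 5, "queued_bytes": 8_000},
]
_rr = itertools.cycle(paths)

def round_robin():
    return next(_rr)                                   # rotate through all paths

def least_io():
    return min(paths, key=lambda p: p["outstanding_io"])

def least_bandwidth():
    return min(paths, key=lambda p: p["queued_bytes"])

def shortest_queue():
    # could also be weighted by bytes or estimated service time
    return min(paths, key=lambda p: (p["outstanding_io"], p["queued_bytes"]))

print(round_robin()["name"], least_io()["name"],
      least_bandwidth()["name"], shortest_queue()["name"])
```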

MPIO solutions consist of two components: drivers developed by Microsoft, and device-specific modules developed by storage vendors to Microsoft standards.

MPIO uses redundant physical path components to eliminate single points of failure between servers and storage. It increases data reliability and availability, reduces bottlenecks, and provides fault tolerance and automatic load balancing of I/O traffic.

Although multipathing and clustering both provide high availability, multipathing by itself does not protect against server hardware or software failures; it only provides redundancy for the cabling, adapters, and switches along the paths that the multipathing software manages.

Fibre Channel advanced features

Now we will look at some of the advanced features you might find in Fibre Channel environments.

Each port in the switched fabric has its own unique 24-bit address. This 24-bit addressing scheme allows for a small frame header, which speeds up the routing process. The frame header and routing logic optimize the Fibre Channel fabric for high-speed switching of frames.

The 24-bit addressing scheme also allows for up to 16 million addresses, which is an address space larger than any practical SAN design in existence today. The 24-bit address has to be mapped to the 64-bit address associated with World Wide Names.

Fibre Channel name and address

  • 24-bit addresses are automatically assigned by the topology to remove the overhead of manual administration
  • Unlike the WWN addresses, port addresses are not built-in
  • The switch is responsible for assigning and maintaining the port addresses
  • The switch maintains the correlation between the port address and the WWN address of the device on that port
  • The Name server is a component of the fabric operating system running on the switch

The 24-bit address scheme also removes the overhead of manually administering addresses because it allows the topology itself to assign addresses. This is not like World Wide Name addressing, in which the addresses are assigned to the manufacturers by the Institute of Electrical and Electronics Engineers standards committee and then built into the device, like naming a child at birth.

If the topology itself assigns the 24-bit addresses, then something has to be responsible for mapping WWN addresses to port addresses.

In the switched fabric environment, the switch itself is responsible for assigning and maintaining the port addresses. When a device with its WWN logs in to the switch on a specific port, the switch assigns the port address to that port, and the switch also maintains the correlation between the port address and the WWN address of the device on that port. This function of the switch is implemented by using a Name server.

The Name server is a component of the fabric operating system, and it runs inside the switch. It is essentially a database in which a fabric-attached device registers its values.

Other benefits of dynamic addressing are that it removes the potential element of human error in address maintenance and it provides more flexibility in additions, moves, and changes in the SAN.

Fibre Channel port address (1)

A 24-bit port address consists of three parts:

  • The domain consists of bits 23 to 16.
  • The area consists of bits 15 to 8.
  • The port, or arbitrated loop physical address (AL_PA), consists of bits 7 to 0.

Fibre Channel port address (2)

What is the significance of each part of the port address?

The domain is the most significant byte of the port address. It is the address of the switch itself. One byte allows for up to 256 possible addresses, but because some of these are reserved (like the one for broadcast), only 239 addresses are actually available.

This means that you can have as many as 239 switches in your SAN environment. If you have multiple interconnected switches in your environment, the domain number allows each switch to have a unique identifier.

The area field provides 256 addresses. This part of the address identifies the individual FL_Ports that support loops, or it can be used as the identifier for a group of F_Ports, for example, a multi-port card. Each group of ports has a different area number, even if there is only one port in the group.

The final part of the address identifies the attached N_Ports and NL_Ports. It provides for 256 addresses.

To determine the number of available addresses, you can use a simple calculation: multiply the numbers of domains, areas, and ports. The total number of available addresses is therefore 15,663,104, which is the product of 239 domains times 256 areas times 256 ports.
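The field layout and the address-count arithmetic can be checked with a few lines of Python; the example address is arbitrary.

```python
# Decompose a 24-bit Fibre Channel port address into domain, area, and
# port (AL_PA) fields, and verify the total-address arithmetic.
def split_fc_address(addr_24bit: int):
    domain = (addr_24bit >> 16) & 0xFF   # bits 23-16: the switch
    area   = (addr_24bit >> 8)  & 0xFF   # bits 15-8:  port group on the switch
    port   = addr_24bit         & 0xFF   # bits 7-0:   N_Port / AL_PA
    return domain, area, port

print(split_fc_address(0x010400))        # -> (1, 4, 0), an arbitrary example
print(239 * 256 * 256)                   # -> 15663104 usable addresses
```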

Simple Name Server

  • The Name server stores information about all of the devices in the fabric
  • An instance of the Name server runs on every Fibre Channel switch in a SAN
  • A switch service that stores names, addresses, and attributes for up to 15 minutes and provides them as required to other devices in the fabric

When you are connecting a Fibre Channel device to a Fibre Channel switch, that device must register itself with that switch. This registration includes host and storage identifiers such as the device network address and a World Wide Name.

On top of this, communication parameters are also exchanged. The Fibre Channel device registers itself with a Simple Name Server, or SNS, which serves as a database for all Fibre Channel devices attached to the SAN. The Fibre Channel switches perform the SNS function.

10-bit addressing mode

The number of physical ports on the switch is limited to 256 by the number of bits in the area part of the Fibre Channel address. Director switches such as the Brocade DCX and DCX-4S support Virtual Fabrics, where the number of required ports can easily grow to more than 256.

A 10-bit addressing mode allows for the support of up to 1024 F_Ports in a logical switch. This is achieved by borrowing the two most significant bits from the AL_PA field of the Fibre Channel address.

Although this scheme is flexible in supporting a large number of F_Ports, it also reduces the number of NPIV and loop devices supported on a port to 64.

Arbitrated loop addressing

Fibre Channel specifies a three-byte field for the address used in routing frames. In an arbitrated loop, only one of the three bytes, containing the least significant eight bits, is used for the arbitrated loop physical address. This address is used in the Source and Destination IDs of the frames transmitted in the loop.

Of the full 24-bit address defined by the Fibre Channel standard, only eight bits are used for the AL_PA. Bits 8 to 23 are used for the FL_Port identifier, and the full 24 bits are used by an N_Port in a fabric switch environment.

What are Fibre Channel and Fibre Channel over Ethernet, and How Do They Work?

In a Storage Area Network environment, most IT companies use Fibre Channel and Fibre Channel over Ethernet architectures to provide easy access to user data.

https://technoworldnow.com/what-is-fibre-channel-and-fc-over-ethernet/

Fibre Channel communication can be conducted over copper coax cables, twisted pair cables, or optical fiber. This chapter describes the components used to transform electrical signals to optical signals, and vice versa, and the most common types of optical fibers. It also identifies some of the factors that lead to fiber-optic signal losses.


Fibre Channel Function levels


Fibre Channel is structured as a set of five hierarchical function levels.

  • FC-0 is the physical level, which defines connectors, cables, and the electrical characteristics of transmission.
  • FC-1 is the encoding level, which defines the encoding and decoding and the transmission protocol.
  • FC-2 is the signaling and framing protocol level. It determines how the data from the upper level is framed for handling by the transport level, and it incorporates the management of frames, flow control, and cyclic redundancy checks.
  • FC-3 is the common services level, which is open for future implementation.
  • FC-4 is the protocol mapping level. It is usually provided by the device drivers from the different vendors, and it establishes the interface between Fibre Channel and the upper-level protocols.

FC-0—Physical level

Defines the physical link in the Fibre Channel system

  • Transceivers
  • Connection
  • Media type

Available data rates

  • 133 Mbit/s
  • 266 Mbit/s
  • 531 Mbit/s
  • 1062 Mbit/s

The lowest architectural level defines the physical links in the system, including the fiber, connectors, optical, and electrical parameters for a variety of data rates.

The physical level is designed for the use of a large number of technologies to meet the widest range of system requirements. An end-to-end communication route can consist of different link technologies to achieve the maximum performance and price efficiency.

This section takes a closer look at the physical link components.


To be able to transmit data, you need transceivers. The most common way of transmitting data is to use light over optical fiber. The use of electrical signals is the traditional and slower way of transmitting data.

The best modules to use today are the XFP and SFP transceivers. “XFP” stands for “Ten Gigabit Small Form-Factor Pluggable” and “SFP” stands for “Small Form-Factor Pluggable.”

SFP and SFP+ transceivers have the same size and appearance, but they support different standards. As a result, the less expensive SFP supports data rates up to 4.25 Gbit/s and distances up to 150 km, and the SFP+ supports data rates up to 16 Gbit/s and distances up to 80 km.

Fibre Channel connectors

–SFP, SFP+, and XFP transceivers are compatible with the Lucent Connector (LC) type of connectors

–Cables containing LC connectors on both sides are known as LC-LC cables


An optical fiber connector terminates the end of an optical fiber and enables faster connection and disconnection than splicing. The connectors mechanically couple and align the cores of fibers so light can pass through. Better-quality connectors lose little light because of reflection or misalignment of the fibers. In all, about 100 fiber-optic connectors have been introduced to the market.

SFP, SFP+, and XFP transceivers are compatible with the Lucent Connector types of connectors. Cables containing LC connectors on both sides are known as LC-LC cables.

Fibre Channel cabling


Although Fibre Channel was initially designed for use with fiber-optic cable, it also works well over copper cable at the shorter distances found in installations such as storage area networks. In fact, the specification lists several different types of copper media that can support Fibre Channel.

The most common form of copper for Fibre Channel is shielded, twisted-pair cabling using DB-9 connectors—what looks like shielded telephone wire.

However, it is important to understand that copper cable for Fibre Channel needs to meet higher performance standards than conventional telephone wire. Properly specified and installed copper cable works fine for shorter distances, such as within a building, at speeds up to 100 MB/s.

Common optical (glass fiber) cable types include:

  • 62.5-micron multimode,
  • 50-micron multimode, and
  • 9-micron single-mode.

Multimode Fiber

  • Multiple streams of light travel different paths
  • Most popular for networking
  • Fibre Channel uses a single wavelength (for example, 850 nm)


Multimode uses a shortwave laser to emit many different light modes. These reflect off the cable cladding at different angles, which causes dispersion. This dispersion reduces the total distance from which the original signal can be reclaimed.

Multimode fiber has a larger core than single-mode fiber. The larger the core, the greater the dispersion factor, hence the reduction in the distance that data, or light, can travel.

Single-mode Fiber


Single-mode is an optical fiber with a core diameter of less than 10 microns. Used for high-speed transmission over long distances, it provides greater bandwidth than multimode fiber, but its smaller core makes it more difficult to couple light into the fiber.

Increasingly, single-mode fiber is being used for shorter distances. When single-mode fiber is used in shorter distances, such as a campus or metropolitan area network, step-index fiber is used. For longer distances and for transmitting multiple channels, such as with WDM, dispersion-shifted fiber is used.

Single-mode step-index fiber


When moderate-distance transmission cannot be accomplished with multimode fiber and inexpensive multimode light sources, single-mode fiber is used. This type of fiber is most commonly used in private network, campus, and building applications.

Single-mode fiber is designed for use at both the 1310 nm and the 1550 nm wavelength windows.

Because 1310 nm lasers and detectors are less expensive than 1550 nm devices, most of these short-to-moderate distance applications use the 1310 nm wavelength.

Single-mode fiber is the least expensive fiber available, and it is optimized for the lowest dispersion at 1310 nm. It offers the best combination of cost and performance for most short-to-moderate distance private network, campus, and building applications when distances exceed multimode limits.

The information-carrying capacity of single-mode fiber is enormous. Single-mode fiber supports speeds of tens of gigabits per second and can carry many gigabit channels simultaneously. Each channel carries a different wavelength of light without interference.

Fiber-optic signal loss — Attenuation

Attenuation

  • The reduction in power of the light signal as it is transmitted
  • Caused by passive media components such as cables, cable splices, and connectors

The correct functioning of an optical data link depends on modulated light reaching the receiver with enough power to be demodulated correctly. “Attenuation” is the reduction in the power of the light signal as it is transmitted.

Attenuation is caused by passive media components such as cables, cable splices, and connectors. Although attenuation is significantly lower for optical fiber than for other media, it still occurs in both multimode and single-mode transmissions.

An efficient optical data link must have enough light available to overcome attenuation.

Fiber-optic signal loss — Dispersion

Dispersion

  • Spreading of the signal over time
  • Two types of dispersion can affect an optical data link:
  • Chromatic dispersion —Resulting from the different speeds of light rays
  • Modal dispersion—Resulting from the different propagation modes in the fiber

Dispersion is the spreading of the signal over time. Two types of dispersion can affect an optical data link:

  • The first type is “chromatic dispersion,” which refers to the spreading of the signal that results from the different speeds of the light rays.
  • The second type is “modal dispersion,” which refers to the spreading of the signal because of the different propagation modes in the fiber.

For multimode transmission, modal dispersion, rather than chromatic dispersion or attenuation, usually limits the maximum bit rate and link length.

For single-mode transmission, modal dispersion is not a factor; however, at higher bit rates and over longer distances, chromatic dispersion limits the maximum link length.

An efficient optical data link must have enough light to exceed the minimum power that the receiver requires to operate within its specifications.

When chromatic dispersion is at the maximum allowed, its effect can be considered as a power penalty in the power budget.

The optical power budget must allow for the sum of component attenuation, power penalties (including those from dispersion), and a safety margin for unexpected losses.
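As a hedged illustration of that budget check, the following Python sketch adds up the loss terms and compares them with the available budget; every figure is a placeholder rather than a value from any datasheet.

```python
# Toy optical power budget check. Every number here is a placeholder;
# real values come from the transceiver and cable datasheets.
tx_power_dbm       = -5.0     # transmitter launch power
rx_sensitivity_dbm = -17.0    # minimum power the receiver needs

fiber_loss_db      = 0.35 * 10   # attenuation (dB/km) * link length (km)
connector_loss_db  = 0.5 * 2     # loss per connector * number of connectors
splice_loss_db     = 0.1 * 1     # loss per splice * number of splices
dispersion_penalty = 1.0         # chromatic dispersion treated as a power penalty
safety_margin_db   = 3.0         # margin for unexpected losses

budget   = tx_power_dbm - rx_sensitivity_dbm
required = (fiber_loss_db + connector_loss_db + splice_loss_db
            + dispersion_penalty + safety_margin_db)

print(f"budget {budget:.1f} dB, required {required:.1f} dB,",
      "link OK" if budget >= required else "link marginal")
```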

Cable bending and damage


Bending is one of the primary causes of increases in attenuation in optical fibers. Two types of bending are macro bending and micro bending.

The macro bend has a much larger bend diameter than the fiber diameter. Here, the fiber coating has almost no impact on the optical loss because the light is guided in the core, far from the coating.

The coating cannot protect the glass (core and cladding) from being bent because the bend diameter is much larger than the fiber. 

The situation is the opposite for micro bending. Here the bending is local and the coating can protect the glass from external forces applied on the coating surface.

For this reason, many fibers have a two-layer acrylate coating, in which the inner layer is soft and can accommodate external forces acting on the fiber.

Fibers with a thin and hard coating such as polyimide do not have this protection from local bending and must be handled more carefully to avoid micro bending of the glass.

FC-1 coding level

FC-1 8b/10b encode/decode

  • FC-1 defines the transmission protocol, including:
  • Serial encoding and decoding rules
  • Special characters
  • Error control
  • The information transmitted over a fiber is encoded 8 bits at a time into a 10-bit transmission character

Also used in:

  • PCI Express
  • IEEE 1394b
  • Serial ATA
  • SSA
  • Gigabit Ethernet
  • InfiniBand

FC-1 defines the transmission protocol, including serial encoding and decoding rules, special characters, and error control. The information transmitted over a fiber is encoded eight bits at a time into a ten-bit transmission character.

The primary reason for using a transmission code is to improve the transmission characteristic of information across a fiber. The transmission code must be DC balanced to support the electrical requirements of the receiving units.
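Because only 8 of every 10 bits on the wire carry payload, roughly 20 percent of the line rate is encoding overhead, which is why a 1.0625 Gbit/s Fibre Channel link delivers on the order of 100 MB/s. A quick back-of-the-envelope check in Python:

```python
# Effective throughput of an 8b/10b-encoded link: 8 payload bits are sent
# as a 10-bit transmission character, so 20% of the line rate is overhead.
def effective_throughput_mbytes(line_rate_gbaud: float) -> float:
    payload_bits_per_sec = line_rate_gbaud * 1e9 * 8 / 10
    return payload_bits_per_sec / 8 / 1e6    # bytes per second, in MB/s

print(effective_throughput_mbytes(1.0625))   # ~106 MB/s for 1 Gb Fibre Channel
print(effective_throughput_mbytes(4.25))     # ~425 MB/s for 4 Gb Fibre Channel
```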

FC-2 signaling protocol level

The transport mechanism of Fibre Channel

  • Framing rules
  • Payload
  • Service classes and controlled mechanisms
  • Management of the data transfer sequence

Building Blocks

  • Ordered sets
  • Frames
  • Sequences
  • Exchanges

The basic building blocks of a Fibre Channel connection are the frames. The frames contain the information to be transmitted (the payload), the addresses of the source and destination ports, and the link control information. Frames are broadly categorized as data frames and link-control frames.

A sequence is formed by a set of one or more related frames transmitted unidirectionally from one N_Port to another. Each frame within a sequence is uniquely numbered with a sequence count. Error recovery, controlled by an upper protocol layer, is usually performed at sequence boundaries.

An exchange is composed of one or more non-concurrent sequences for a single operation. Exchanges can be unidirectional or bidirectional between two N_Ports.

Within a single exchange, only one sequence can be active at any time, but sequences of different exchanges can be concurrently active.
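The containment hierarchy (an exchange holds sequences, a sequence holds numbered frames) can be sketched with simple data structures; this Python fragment only illustrates the relationships and is not a protocol implementation.

```python
# Sketch of the FC-2 containment hierarchy: an exchange holds one or more
# non-concurrent sequences, and each sequence holds frames numbered by a
# sequence count. Field names are simplified for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    seq_count: int          # position of the frame within its sequence
    payload: bytes          # data to be transmitted

@dataclass
class Sequence:
    seq_id: int
    frames: List[Frame] = field(default_factory=list)

    def add_payload(self, payload: bytes):
        self.frames.append(Frame(seq_count=len(self.frames), payload=payload))

@dataclass
class Exchange:
    exchange_id: int
    sequences: List[Sequence] = field(default_factory=list)

read_op = Exchange(exchange_id=0x1234)
cmd = Sequence(seq_id=0)
cmd.add_payload(b"SCSI READ command")
read_op.sequences.append(cmd)
print(len(read_op.sequences), len(read_op.sequences[0].frames))
```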

FC-3 common services

  • The FC-3 layer covers functions that can span multiple N-ports
  • FC-3 defines the common services necessary for the higher-level capabilities
  • FC-3 provides features such as:
  • Port striping
  • RAID
  • Virtualization
  • Compression
  • Encryption
  • Hunt groups
  • Multicast

The FC-3 level of the Fibre Channel standard is intended to provide the common services required for advanced features such as striping, hunt groups, and multicast.

  • Striping refers to multiplying the bandwidth by using multiple N_Ports in parallel to transmit a single information unit across multiple links.
  • Hunt groups refers to the ability for more than one port to respond to the same alias address. This improves efficiency by decreasing the chance of reaching a busy N_Port.
  • Multicast delivers a single transmission to multiple destination ports. This includes broadcasting to all N_Ports on a fabric and sending to only a subset of the N_Ports on a fabric.

FC-4 ULP mappings

  • Each upper-level protocol supported by the Fibre Channel transport requires a mapping for its Information Units to be presented to the lower levels for transport
  • The FC-4 layer provides these mappings for:
  • SCSI-3
  • IP
  • High-Performance Peripheral Interface (HIPPI)
  • FC-AV—A high-bandwidth video link for video networks, up to 500m
  • FC-VE—Fibre Channel Virtual Interface Architecture
  • FC-AE—Fibre Channel Avionics Environment
  • FICON, IEEE 802.2 LLC, ATM, Link Encapsulation, SBCCS, IPI
  • A Fibre Channel SAN is almost exclusively concerned with using the SCSI-3 mapping

Each upper-level protocol supported by Fibre Channel transport requires a mapping for its information units to be presented to the lower levels for transport.

A Fibre Channel SAN uses the SCSI-3 mapping almost exclusively.

What is Fibre Channel over Ethernet?

Fibre Channel over Ethernet is a computer network technology that encapsulates Fibre Channel frames over Ethernet networks. This chapter describes FCoE and explains the benefits of using Converged Network Adapters, which combine the strengths of the Fibre Channel and Ethernet protocols in modern data centers.

What is Fibre Channel over Ethernet?

  • Fibre Channel over Ethernet is a mapping of Fibre Channel over selected full-duplex IEEE 802.3 networks
  • The goal is to provide I/O consolidation over Ethernet, reducing network complexity in the data center
  • Customer benefits of a unified fabric:
  • Fewer NICs, HBAs, and cables
  • Lower capital expenditures and operating expenses

Fibre Channel over Ethernet transports the SCSI storage data used in Fibre Channel networks. It uses the Fibre Channel protocol stack instead of the TCP/IP stack, and it runs over the Ethernet infrastructure, with its NICs, cables, switches, and so on. The goal is to provide I/O consolidation over Ethernet, reducing network complexity in the data center.

Customer benefits of using a unified fabric include needing fewer NICs, HBAs, and cables, and lowering the capital expenditures and operating expenses.

Fibre Channel Over Ethernet I/O consolidation


I/O consolidation enables Ethernet and Fibre Channel to share the same physical cable and still maintain protocol isolation. It also enables you to use and configure the same type of hardware for either network.

Although simple in concept, this configuration is complex to implement. But the benefits of this approach are significant.

  • By leveraging I/O consolidation, that is, by using a combined network interface card and HBA, organizations free up slots, providing a multifunction network and SAN.
  • The reduced number of cards reduces power consumption, which in the case of PCI Express is 25 watts per card.
  • There is also a reduced number of switch ports.
  • Less power is consumed in the cooling process, which is the primary barrier to data center expansion and a cause of inefficiency at the present time.

Another advantage of I/O consolidation is that it gives enterprise organizations the means to simplify their cable management. At the moment, 20 Gbit/s of bandwidth can be provided by two 4-Gbit/s Fibre Channel connections and twelve 1-Gbit/s Ethernet connections.

Fibre Channel and Ethernet can be combined over two 10-Gigabit Ethernet cables. This maintains the bandwidth but reduces the number of cables being managed by 75 percent. It also results in fewer points of management for administrators to control.

Fibre Channel Over Ethernet mapping

  • Fibre Channel Over Ethernet maps the Fibre Channel commands and data directly into Ethernet frames to create Fibre Channel Over Ethernet
  • Fibre Channel frames are encapsulated in Ethernet frames
  • The mapping is 1:1, meaning there is no segmentation or compression of the Fibre Channel frames

Fibre Channel over Ethernet maps the Fibre Channel commands and data directly into Ethernet frames. The mapping is 1:1, meaning there is no segmentation or compression of the Fibre Channel frames.

But Ethernet is a lossy medium. It provides a single best-effort pipe that drops packets during network congestion. So in Fibre Channel over Ethernet, Fibre Channel is encapsulated and run over a lossless Ethernet infrastructure.
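The 1:1 encapsulation can be pictured with a deliberately simplified Python sketch. It uses the registered FCoE EtherType (0x8906), but it omits the real FCoE header fields (version, reserved bits, and the SOF/EOF delimiters) for brevity, and the MAC addresses and frame contents are stand-ins.

```python
# Deliberately simplified FCoE encapsulation: one whole Fibre Channel frame
# is wrapped, unchanged, in one Ethernet frame (no segmentation, no compression).
import struct

FCOE_ETHERTYPE = 0x8906   # registered EtherType for FCoE

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    assert len(dst_mac) == 6 and len(src_mac) == 6
    ether_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return ether_header + fc_frame        # real FCoE adds its own header and trailer

eth = encapsulate(b"\x0e\xfc\x00\x01\x02\x03",   # example destination MAC
                  b"\x0e\xfc\x00\x0a\x0b\x0c",   # example source MAC
                  b"\x00" * 36)                  # stand-in for an FC frame
print(len(eth), "bytes on the wire")
```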

Fibre Channel Over Ethernet lossless Ethernet Infrastructure

  • Fibre Channel over Ethernet has to create a lossless Ethernet environment to ensure the reliability of large-scale data transportation
  • Two standards enable lossless Ethernet
  • Data Center Bridging (DCB)
  • Converged Enhanced Ethernet (CEE)
  • In addition to DCB and CEE, the standard introduces three enhancements to Ethernet to make it lossless:
  • Priority Flow Control (IEEE 802.1Qbb)
  • Congestion Notification (IEEE 802.1Qau)
  • Enhanced Transmission Selection (IEEE 802.1Qaz)

FCoE has to create a lossless Ethernet environment to ensure the reliability of large-scale storage data transportation. The two standards that enable this are Data Center Bridging and Converged Enhanced Ethernet.

A few of the enhancements that make Ethernet lossless are listed above and described in the following sections.

Priority Flow Control

Priority Flow Control (IEEE 802.1Qbb)

  • IEEE 802.1Qbb is an enhanced QoS service
  • Traffic is classified into 8 lanes, each of which can be assigned a priority level
  • Priority Flow Control issues a “Pause” command to manage and prioritize traffic when there is congestion
  • The administrators can create lossless (virtual) lanes for FCoE traffic and lossy (virtual) lanes for normal IP traffic

The Institute of Electrical and Electronics Engineers defined the means to categorize traffic according to its priority in the Quality of Service standard IEEE 802.1p.

The newer standard, IEEE 802.1Qbb, takes advantage of the earlier standard by first classifying the traffic into eight "lanes," each of which can be assigned a priority level.

Priority Flow Control issues a Pause command that halts FCoE traffic during congestion so the losses can be minimized. It uses the priority level to distinguish FCoE traffic from other types of traffic. This means that administrators can create lossless virtual lanes for FCoE traffic and lossy virtual lanes for normal IP-based traffic.
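A simplified Python picture of that behavior follows: traffic is classified into eight priority lanes, and congestion pauses only the lossless lane carrying FCoE. The mapping of FCoE to priority 3 is an assumption for the example, and this is not the 802.1Qbb wire format.

```python
# Simplified picture of Priority Flow Control: congestion pauses only the
# lane carrying FCoE, leaving ordinary (lossy) IP lanes untouched.
lanes = {p: {"lossless": False, "paused": False} for p in range(8)}
lanes[3]["lossless"] = True          # assume FCoE is mapped to priority 3

def on_congestion(priority: int):
    if lanes[priority]["lossless"]:
        lanes[priority]["paused"] = True     # send a PAUSE for this lane only
    # lossy lanes simply drop packets under congestion instead of pausing

on_congestion(3)
print([p for p, state in lanes.items() if state["paused"]])   # -> [3]
```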

Congestion Notification

Congestion Notification (IEEE 802.1Qau)

  • Congestion is measured at the congestion point, but link rate limiting is taken at the point of origin
  • Example: An aggregation switch can ask an edge switch to stop (or limit) its traffic from a particular port, if congestion occurs

Congestion is measured at the congestion point in the network, wherever it is happening, but the action is taken at the reaction point, which is the originating point.

For example, an aggregation switch can ask an edge switch to stop or limit its traffic from a particular port if congestion is encountered.

Enhanced Transmission Selection

Enhanced Transmission Selection (IEEE 802.1Qaz)

  • High-priority traffic such as FCoE is allocated with a minimum guaranteed bandwidth
  • If the FCoE traffic does not fully utilize its reserved capacity, the extra bandwidth can be used by other types of traffic, and this can be controlled dynamically

High-priority traffic like FCoE can be allocated with a minimum guaranteed bandwidth so that all the other traffic on the network does not congest the path with its high volumes.

However, if the FCoE traffic does not fully utilize the path, its “reserved capacity,” then the extra bandwidth can be used by other types of traffic. The protocol can control this dynamically.
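The redistribution of unused reserved bandwidth can be illustrated with a short Python calculation; the link speed, guaranteed shares, and offered loads below are placeholders, not recommended settings.

```python
# Toy Enhanced Transmission Selection model: each traffic class gets a
# guaranteed share of a 10 Gbit/s link, and whatever a class does not use
# is redistributed to the classes that still want more bandwidth.
LINK_GBPS = 10.0
guaranteed  = {"fcoe": 0.5, "ip": 0.3, "backup": 0.2}   # fractions of the link
demand_gbps = {"fcoe": 2.0, "ip": 6.0, "backup": 4.0}   # current offered load

alloc = {c: min(demand_gbps[c], share * LINK_GBPS) for c, share in guaranteed.items()}
spare = LINK_GBPS - sum(alloc.values())

# hand the spare capacity to classes that still have unmet demand
for c in alloc:
    extra = min(demand_gbps[c] - alloc[c], spare)
    alloc[c] += extra
    spare -= extra

print(alloc)   # FCoE keeps its guarantee; IP and backup absorb the slack
```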

Fibre Channel over Ethernet Components


An FCoE configuration includes several components.

In this diagram, the first key component is the Converged Network Adapter (CNA). The CNA is a single adapter in the server that attaches to a PCI Express slot. It can provide the functionality of both Ethernet NICs and Fibre Channel HBAs virtually.

That means the server still sees two interfaces, and it sends the IP traffic to the NIC and the Fibre Channel traffic to the HBA. The CNA collects traffic from both of them and transports the data over a single Ethernet cable after wrapping all the Fibre Channel frames in Ethernet frames.

The second key component in the diagram is the FCoE link. The FCoE infrastructure uses the same Ethernet infrastructure as the TCP/IP network. It uses UTP copper cables, optical fiber cables, and even the low-cost cables that use the SFP+ interface to carry 10-Gigabit Ethernet over short distances.

The third component identified in the diagram is the set of FCoE switches and network switches that support the FCoE protocol. Fibre Channel SANs only understand the Fibre Channel protocol and only recognize Fibre Channel interfaces, so there needs to be an intermediary that separates the FCoE traffic from the regular IP traffic and that connects to the Fibre Channel SANs directly.

This intermediate functionality is provided by FCoE switches or network switches with Fibre Channel ports that support the FCoE protocol. The CNAs in the servers connect to the FCoE switch, which in turn connects to the SAN using Fibre Channel ports and to the IP network using IP ports.

FCoE advantages and limitations


What are the advantages of FCoE?

  • FCoE simplifies the network by reducing the two individual cables from each server and the two network adapters, which are the HBA for storage connectivity and the NIC for computer network connectivity, to just one.
  • FCoE can carry traffic over the Ethernet medium and uses the familiar and easily available copper UTP cables and optical fiber cables.
  • FCoE uses one network adapter instead of two, which results in some power savings for the server.
  • Some I/O virtualization solutions support FCoE, which enables you to reduce the total number of server adapters for a group of servers by consolidating them onto an I/O virtualization appliance and allowing the servers to share the common pool of adapters. The servers themselves connect to the I/O virtualization appliance through interfaces like PCI Express and the appropriate cables from there. You should note that certain proprietary, vendor-based drivers might have to be installed to complete this setup.
  • The performance of an FCoE network is comparable to that of Fibre Channel and IP networks, with FCoE currently supporting Ethernet speeds of 1, 10, or more Gbit/s. This speed is expanding to 40 Gbit/s and 100 Gbit/s.
  • FCoE can be used in virtualized environments (server virtualization) and is quite advantageous in such circumstances. Unlike iSCSI, FCoE is a reliable storage transport protocol, and it can scale up to thousands of servers.
  • Because FCoE encapsulates the Fibre Channel data onto Ethernet frames for transportation only, all the existing administration tools and workflows for Fibre Channel remain intact.

Hence, the existing investment in Fibre Channel storage is preserved and the reliability of Fibre Channel is also maintained. The support for FCoE from network switch vendors strengthens the case of FCoE. These vendors are offering converged switches with both Ethernet and Fibre Channel ports.

Some disadvantages and limitations of FCoE include:

  • The only Ethernet component that is currently compatible with FCoE is the cables. Everything else has to change to implement FCoE. This means that the actual savings would only be the amount and cost of cables.
  • The cost of a CNA, although it is coming down, might be more than the cost of the HBA and NIC combined.
  • FCoE is currently restricted to access networks only (server-to-switch connections). The distribution and core storage networks are still in Fibre Channel and will continue to be in Fibre Channel until the FCoE technology matures enough that its own FCoE SAN networks can be created.
  • iSCSI proponents might still argue that changing one disparate network into another does not amount to convergence of the storage and network infrastructures.
  • Security on FCoE networks might have to be re-evaluated because the network now runs over Ethernet, which is more easily accessed than Fibre Channel.

What is SAN, NAS and DAS? The Difference Explained

In enterprise storage technology, we mostly have three options to choose from: Storage Area Network (SAN), Network Attached Storage (NAS), and Direct Attached Storage (DAS). All three have their own advantages and disadvantages.


Options for connecting computers to storage have increased dramatically in a short time. This chapter introduces the major storage networking variations: direct attached storage (DAS), network-attached storage (NAS), and the storage area network (SAN).

DAS, NAS and SAN


Businesses can choose among three storage architectures to suit their requirements. Each architecture has certain advantages and disadvantages:

– DAS is a storage device with a dedicated, parallel connection to a server, typically using SCSI.

– NAS storage devices connect directly to the LAN through an Ethernet port. LAN devices use TCP/IP to communicate with their network peers.

– A SAN is a dedicated network that provides storage to enterprise servers. It is typically configured using switches and Fibre Channel connections.

Direct Attached Storage

– The traditional method of locally attaching storage to servers through a dedicated SCSI communication channel between the server and storage

– Storage for each server is managed separately and cannot be shared

– DAS supports disk drives, a RAID subsystem, or another storage device

DAS is the traditional, non-networked method of locally attaching storage to servers through a dedicated communication channel between the server and storage.

The server typically communicates with the storage subsystem using a block-level interface. The file system resides on the server and determines which data blocks are needed from the storage device to complete the file request from an application.

Network-Attached Storage

– NAS provides a file-level access to storage systems

– NAS devices are server-independent

– NAS devices are used to off-load storage traffic to a single, dedicated storage device


NAS servers provide a file-level interface to storage subsystems. Because NAS devices are server-independent, they complement and help ease the burden of overworked file servers by off-loading storage to a single-purpose, dedicated storage device. NAS devices have an operating system that is optimized for file sharing and does not run general server applications, eliminating the major cause of downtime.

NAS devices are perfect for storing unstructured data such as files that are manually created by users.

Fibre Channel Storage Area Network

Dedicated network that provides access to consolidated, block-level data storage

– Special switches are used to connect storage arrays with servers and with each other

– Network communication uses the Fibre Channel protocol, which was specially developed for the transport of storage data

– This protocol is reliable, with speeds up to 16 Gbit/s

– FC SAN components allow for high levels of redundancy and resiliency


The need for a separate network for storage purposes only became evident toward the end of the 1990s. The new storage area network infrastructure consisted of its own cabling and a further development of the SCSI protocol, which was already being used for the connection of devices such as storage arrays or printers to a server. The new development became known as Fibre Channel.

The Fibre Channel protocol was specially developed for the transport of storage data. It is reliable, and it has recently even outperformed Ethernet, with a transport speed of 16 Gbit/s.

By design, a SAN should provide redundancy and resiliency. Redundancy is the duplication of components, up to and including the entire fabric, to prevent a failure of the total SAN solution. Resiliency is the ability of a fabric topology to withstand failures.

What should you know before designing a SAN?

When designing SAN solutions, consider the following:

–Scalability (number of FC ports and expansion capability)

–Storage capacity, efficiency, and cost

–Availability of the fabric, systems, and data

–Performance

–Remote replication of data

When planning and operating a SAN, you need to consider several factors.

–First, a SAN allows for great scalability, but increasing the size of a solution increases its price and complexity. You should consider any future expansion requirements in terms of the number of ports, connected systems, and arrays.

–Second, storage capacity, efficiency, and cost should be balanced to properly match the solution.

–Third, the availability of the fabric, systems, and data should be considered at an early stage of the SAN design. A SAN is often used to achieve no-single-point-of-failure configurations.

Generally, a SAN outperforms NAS and DAS solutions, but the SAN solution should be carefully balanced for the optimal performance.

A SAN plays a crucial role in keeping a business running by providing protection from unpredictable events such as natural disasters or complete site failures. SANs provide the tools, methods, and means to replicate data from a primary site to a secondary, remote site.

Comparing SAN and NAS


The major difference between a SAN and NAS is that a SAN is a separate network, away from the company LAN. The SAN is configured to allow servers to communicate with storage arrays, typically using Fibre Channel. NAS requires a dedicated storage device, typically an optimized server with a number of RAID storage drives that are attached directly to the network.

Both options have their strengths and weaknesses, with the primary advantages of a SAN being the major weakness of a NAS solution, and vice versa.

The benefits of SANs include network speed, reliability, centralization, and data protection.

The main strengths of NAS are interoperability, a lower total cost of ownership, and its relative simplicity.

Comparing DAS, NAS and SAN

Note the position of the network in this diagram. In the case of NAS, the file system resides at the level of the storage device. Because the data is visible in the form of a file system, NAS is good for sharing files between devices and operating systems. File system features make it easy to assign access permissions to the stored files.

In contrast to NAS and DAS, a SAN works at the block level. The file system is created and maintained by the operating system. To the operating system, the storage space that is accessible through the SAN looks like a regular block device such as an internal hard drive or a tape device.

How to choose between SAN, NAS and DAS?


When helping your customer to decide if they should use SAN, NAS, or DAS, it is important to focus on their specific storage needs and their long-term business goals. One of the key criteria to consider is capacity. This is the amount and “type” of data, either file level or block level, that needs to be stored and shared.

Other criteria to consider are:

–The I/O and throughput requirements for performance,

–The scalability and long-term estimates for data growth,

–The storage availability and reliability, especially for mission-critical applications,

–The data protection needed as well as the backup and recovery requirements,

–The quantity and skill level of the available IT staff and resources, and

–Any budget concerns of the customer.

Tiered storage is essentially the assignment of different categories of data to different types of storage devices. These categories can be based on the levels of protection needed, the performance requirements, the frequency of use, the cost, and other considerations that are unique to the business.

The data in a tiered-storage configuration can be moved from high-cost to low-cost storage media, from slow to fast storage media, or from archive to near-online or online storage media.

What are Storage Area Network (SAN) Components?

The physical components of a storage area network can be grouped in a single rack or data center, or they can be connected over long distances. Servers do not provide SAN connectivity out-of-the-box. To connect to the SAN, a server needs a host bus adapter.

This chapter discusses the basic SAN components and their boot order.

Identifying SAN Components

–Host
  –Servers
  –HBAs

–Fabric
  –Hubs or switches
  –Routers
  –SAN software
  –Fibre Channel cables

–Storage
  –Storage devices
  –Backup devices

Fibre Channel SAN environments enable the development of solutions that provide high performance and high availability, which are the fundamental requirements of a storage network.

Fibre Channel devices effectively combat the bandwidth-related problems that generally occur during bulky operations such as backup and restore operations.

A wide range of hardware and software products comprise a SAN. The hardware components offer different features to provide for a range of SAN sizes, from a small SAN to a high-speed, high-volume data center SAN.

The common SAN components are used in four layers:

–The client layer contains the client systems that are using the storage services.

–The host layer includes the servers with their host bus adapters.

–The fabric layer includes Fibre Channel hubs or switches, routers, SAN software, and Fibre Channel cables.

–And the storage layer includes storage and backup devices.

Host Component (Initiator)

Host components consist of servers and other devices that enable servers to connect to the SAN. Generally, servers do not have Fibre Channel ports. Hardware devices that provide the Fibre Channel port and perform digital-to-optical signal conversion are called host bus adapters. HBAs are available in the form of PCI cards for rack-based servers and mezzanine cards for server blades. HBAs often provide more than one Fibre Channel port for SAN connectivity.

The operating system requires the appropriate drivers to support the HBA. HBA drivers are not universal; each hardware manufacturer provides its own drivers for the operating systems its devices support.

The software component that is used to aggregate throughput, provide load balancing, and enable failover in the case of a communication failure is called multipath software. On a Microsoft Windows platform, that software is Microsoft Multipath I/O, or MPIO for short.

HBAs

Servers typically do not have Fibre Channel connectivity embedded. To connect servers to the SAN, you must use dedicated hardware called the host bus adapter.

Fibre Channel HBAs are similar to the network interface cards used in LANs and other non-SAN networks. They replace the traditional SCSI cards used to connect SAN devices such as servers and storage.

HBAs can come in the form of a PCI card for rack- or tower-based servers or a mezzanine card for high-density server blades.

Disk Arrays (Target)

Disk arrays are considered to be targets in a SAN. To communicate over the SAN, disk arrays are equipped with dedicated connection points called “ports.” To increase availability and enhance performance, disk arrays come with a minimum of four Fibre Channel ports.

Disk arrays are designed and built to run for long periods, which is measured as "uptime." The most advanced disk arrays can achieve up to five-nines of uptime during the year, which translates to a little more than five minutes of downtime.

Disk arrays are usually connected to an uninterruptable power supply to protect the system from power outages. But even if the UPS fails, disk arrays are usually equipped with a dedicated battery that preserves the cache content when a power outage occurs. When electricity becomes available again and the disks start spinning, the controllers flush the cached data to the hard drives, preserving the data integrity.

To ease the management and administration of a large number of drives, storage array virtual drive images can be “frozen in time” as snapshots, or seamless copies of those virtual images can be made through cloning. Modern disk arrays can work with hundreds of these snapshots and clones without a performance penalty.

Although disk arrays provide high levels of data availability within a rack, they cannot protect that data from extreme events such as natural disasters or complete site failures.

Other technologies are available to replicate the data to remote locations under those conditions. Disk arrays are designed to facilitate seamless and reliable replication of data over long distances to provide data integrity and disaster recovery.

Interconnect Devices

A Fibre Channel switch is a network switch that is compatible with the Fibre Channel protocol. These switches can be combined to create a fabric that allows many-to-many communication while maintaining throughput and providing redundancy with minimal latency.

Two types of Fibre Channel switches are available:

–Fabric switches are predominantly used to implement the switched fabric topology.

–Directors are the most expensive types of switches, but they offer the best performance and maximum reliability. The average annual downtime for a director is barely five minutes.  

What is SAN Boot Order?

To properly boot SAN components, apply the following boot order:

–First, power on the SAN fabric and wait for the switches to finish booting. If you do not wait for the boot process to finish, the fabric login attempts might be denied.

–Second, power on the storage array and wait for the disk array ports to log in to the fabric.

–Third, boot the host systems, and verify that your target drives are visible.

To shut down a SAN configuration, complete these steps in the opposite order.

Also Learn:

What is a Hard Disk Drive (HDD)? Disk Technology

How to Add iSCSI Storage to Datastore in Vmware ESXi 5.5

How to install HPE 3par Virtual Service Processor 5.0

How to backup NDMP Filer (NetApp Storage) in Backup Exec 20

What is DiskPool in NetBackup?

What is multipathing (SAN multipathing)?

How SANs increase availability and utilization?

What is a Hard Disk Drive (HDD)? Disk Technology explained

In a Storage Area Network (SAN), multiple components work together to form the network, and the storage array is one of the most important of them.

A hard drive, also known as a hard disk drive or HDD, is a fundamental part of modern computers. Functioning as an internal storage device, the HDD enables a computer to house and execute important files and programs.

At HPE, the hard drive family is divided into three categories: Entry, Midline, and Enterprise. These categories meet the needs of different environments for performance, reliability, and a healthy balance of cost and capacity.

Entry drives have the lowest unit cost and give you a basic level of reliability and performance. They are best suited for non-mission-critical environments where the I/O workload is 40 percent or less. They are also used for internal and archival storage or as boot drives for entry-level servers. Entry drives are only available with a Serial ATA (SATA) interface.

Midline drives give you larger capacity and greater reliability than Entry drives. Midline drives are more resistant to rotational and operational vibration, so they are better suited for use in multiple-drive configurations. Midline drives are available with both SATA and Serial Attached SCSI interfaces. Serial Attached SCSI is typically shortened to “SAS”.

Enterprise drives give you maximum reliability, the highest performance, scalability, and error management under the most demanding conditions. They are the only HPE drives designed for use in unconstrained I/O workloads, and they are intended for mission-critical applications such as large databases, email servers, and back-office processing.

Multiple types of HDD and disk technologies are available; the most widely used are described below:

Characteristics of drives

Form factor

– Small form factor (SFF) —2.5-inch

– Large form factor (LFF)—3.5-inch

Drive capacity

– Depends on the number of platters the drive contains, the surface area of each platter, and the areal density (the number of bits that can be stored per unit area)

– Expressed in gigabytes

Disk drive performance

– Depends on the rotational speed of the platters, the seek performance, the mechanical latency, the read/write bandwidth, the queuing strategies, and the interface technologies

Reliability

– Measured in terms of Annual Failure Rates (AFRs)

The basic characteristics of industry-standard drives are form factor, drive capacity, performance, and reliability.

– Regarding the form factor, HPE drives for servers are available in a 2.5-inch small form factor and a 3.5-inch large form factor. In general, SFF drives give you greater power and space savings. These drives can require as little as half the power and generate significantly less heat than LFF drives. LFF drives are better suited for implementations that require large, single-drive capacities and a lower cost per gigabyte.

– Drive capacity depends on the number of platters the drive contains, the surface area of each platter, and the areal density (the number of bits that can be stored per unit area).

– Disk drive performance depends on the rotational speed of the platters, the seek performance, the mechanical latency, the read-write bandwidth, the queuing strategies, and the interface technologies.

– Drive reliability is measured in terms of Annual Failure Rates. The AFR is the percentage of drive failures occurring in a large population of drives operating for one year. With an AFR of 1.5 percent, 100,000 drives would experience approximately 1,500 failures per year (see the short calculation after this list).

–Keep in mind that an AFR calculated from a small number of drives would be subject to large statistical variations that make it less reliable than an AFR from a larger sample.
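
The AFR example above is simple arithmetic; the short calculation below just restates it with the same numbers (1.5 percent AFR, 100,000 drives).

```python
# Expected annual failures for a drive population with a given AFR.
afr = 0.015            # 1.5 percent annual failure rate
population = 100_000   # number of drives in service

expected_failures = afr * population
print(expected_failures)   # 1500.0 failures per year, matching the example
```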

Drive interconnect technologies

The technology to connect one or more drives to a computer system has transitioned from parallel bus data interfaces to serial interfaces

Parallel interfaces:

–ATA—Advanced Technology Attachment

–IDE—Integrated Drive Electronics, also called PATA, Parallel Advanced Technology Attachment

–SCSI—Small Computer System Interface

Serial interfaces:

–SATA—Serial ATA

–SAS—Serial Attached SCSI

The technology used to connect one or more drives to a computer system has transitioned from parallel bus data interfaces such as Advanced Technology Attachment, Integrated Drive Electronics, and the original SCSI interface to the SATA and SAS serial interfaces.

Each drive with a SATA or SAS interface has its own high-speed serial communication channel to the controller.

Parallel SCSI

–A SCSI standard established by ANSI in 1986, but still evolving

–The Common Command Set (CCS) was developed in parallel with the ANSI SCSI-1, SCSI-2, SCSI-3, and SCSI-4 standards

–The SCSI-1 standard was too permissive and allowed too many vendor-specific options

–The result was incompatibility between products from different vendors, which created confusion about:

–Speed and feed: Fast, Ultra, Ultra2, narrow, and wide

–Command sets: Common Command Set, Enhanced Command Set

–Termination: Passive, Active, Forced Perfect Termination

–Ultra320 and Ultra640 (also known as Fast-320) are the last parallel SCSI offerings

In addition to a physical interconnection standard, the Small Computer System Interface, or SCSI, defines a logical command set standard that all drive devices must adhere to. The Common Command Set was developed in parallel with ANSI SCSI-1, SCSI-2, SCSI-3, and SCSI-4, which include the revised CCS as part of the standard. The commands depend on the type of device being used.

SCSI-1 initially defined command sets for six device types; however, the standard was too permissive and allowed too many vendor-specific options. The result was incompatibility between products from different vendors.

A CCS was defined to solve the SCSI compatibility issues. It was a subset of the standard and did not allow for exceptions. With the CCS, SCSI-1 began to penetrate the server disk subsystem and tape backup market in the late 1980s.

SCSI-2 targeted the drawbacks of SCSI-1 and introduced support for 10 device types. SCSI-2 also introduced more efficient command sets that improved functionality by including disconnect and command queuing options.

Serial ATA (SATA)

–Hot-plug and Native Command Queuing (NCQ) support

–Transfer rates up to 300 MB/s for SATA2 and 600 MB/s for SATA3, using half-duplex

–SATA3.1 introduced support for Solid State Disks (SSD) and the Zero-Power Optical Disk Drive

–SATA3.2 combines SATA commands with the PCI Express interface to achieve device speeds up to 16 Gb/s

–Mean Time Between Failures (MTBF) is 1.2 million hours

The Serial ATA, or SATA, standard is a direct replacement for the older Advanced Technology Attachment standard. Compared to ATA, the SATA interface offers a reduced cable size, with only seven conductors rather than the 40 conductors required by the ATA standard, as well as hot-pluggable drives and faster and more efficient data transfer rates through the optional I/O queuing protocol called Native Command Queuing, or NCQ.

The SATA3.1 standard introduced support for Solid State Drives and a Zero-Power Optical Disk Drive. The Zero-Power Optical Disk Drive reduces the power consumption of SATA optical disk drives to zero when the device is idle, conserving energy.

To further increase the transfer speeds, SATA3.2 combines the SATA commands and the PCI Express interface to boost the maximum theoretical data speed to 16 gigabits per second, compared to the 6 Gb/s that is available on current drives.
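
The usable transfer rates quoted above follow from the line rates if you assume the 8b/10b encoding used on SATA links, where each data byte travels as 10 bits on the wire; the quick conversion below is only that approximation.

```python
# Convert SATA line rates to approximate usable throughput, assuming
# 8b/10b encoding (10 bits on the wire per data byte).
def sata_throughput_mb_s(line_rate_gbps):
    bits_per_byte_on_wire = 10
    return line_rate_gbps * 1000 / bits_per_byte_on_wire

for generation, line_rate in [("SATA2", 3), ("SATA3", 6)]:
    print(generation, sata_throughput_mb_s(line_rate), "MB/s")  # 300.0 and 600.0 MB/s
```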

Serial Attached SCSI

–SAS uses the full-duplex architecture, effectively doubling the transfer speeds

–The current SAS standard provides a speed of 12 Gb/s, with a maximum theoretical speed of 16 Gb/s

–The maximum number of attached devices is 128 (compared to 16 for Parallel SCSI)

–A single SAS domain can address up to 65,535 devices using a fanout expander

–The MTBF is increased to 1.6 million hours

Serial ATA uses a half-duplex, serial connection to devices rather than the original parallel connection of ATA. SATA still uses the ATA command set, which is simpler but provides less robust functionality than the SCSI interface used with SAS.

The SATA interface has gone through three major generations:

–The 1.5 Gb/s version was targeted at replacing ATA in the desktop and consumer markets.

–The 1.5 Gb/s version with extensions was targeted at workstations and low-end servers. This generation added Native Command Queuing.

–The 3 Gb/s version was targeted at workstations and low-end servers. This generation increased the data transfer rate.

SATA is the best solution for price-sensitive, low-I/O-workload applications, and it dominates the desktop market because of its low cost and the lighter workloads of desktops.

In contrast, Serial Attached SCSI uses a point-to-point, full-duplex serial connection and the SCSI command set, which has more performance and reliability features than the ATA command set. For example, SAS devices can be dual-port.

This enables the device to access the full bandwidth of a SAS link. These additional features come at a cost, however. SAS devices are more expensive than SATA devices for the equivalent storage capacity.

The first-generation SAS supported a link speed of 3 Gb/s. The current generation supports a link speed of up to 6 Gb/s, or 600 MB/s, in each direction.

SAS is the best solution for mission-critical, high-I/O-workload applications.

Near-line SAS

A SATA drive that uses a SAS interface is called near-line SAS. It provides all of the enterprise features that come with SAS, but it still has the limitations of SATA for disk performance and mean time between failures.

What is Native Command Queuing (NCQ)?

–NCQ is a technology designed to increase the performance of SATA drives.

–Disks are enabled to internally optimize the order in which read/write commands are executed.

–NCQ reduces the amount of unnecessary HDD head movement.

–NCQ is supported on the HPE Smart Array P400, P400i, E500, and P800 disk controllers.

NCQ is a technology designed to increase the performance of SATA hard disk drives by allowing an individual hard disk to internally optimize the order in which received read and write commands are executed. Without NCQ, a drive has to process and complete one command at a time. NCQ increases performance for workloads in which multiple simultaneous read and write requests are outstanding by reducing unnecessary back-and-forth movement of the drive heads. Such workloads most often occur in server and storage applications.

For NCQ to be enabled, it must be supported and turned on in both the controller and the hard drive itself.
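
As a rough illustration of the reordering idea only (real NCQ runs in drive firmware and also accounts for rotational position), the sketch below services queued requests by distance from the current head position instead of arrival order.

```python
# Illustrative sketch of command reordering: service the nearest request
# first instead of strictly in arrival order.
def reorder_nearest_first(head, requested_lbas):
    pending = list(requested_lbas)
    order = []
    while pending:
        nearest = min(pending, key=lambda lba: abs(lba - head))
        order.append(nearest)
        pending.remove(nearest)
        head = nearest                    # head moves to the block just serviced
    return order

arrival_order = [5000, 100, 4900, 150]
print(reorder_nearest_first(head=0, requested_lbas=arrival_order))
# [100, 150, 4900, 5000] -- far less head travel than the arrival order
```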

NCQ performance gains

NCQ provides 8.8 percent faster performance in generic PC HDD read throughput and 9 percent faster performance in generic PC applications over non-NCQ systems.

What are SAS Domains?

Two types of expanders are used in the SAS topology: fanout and edge.

The server-attached storage market will typically use edge expanders, which can address up to 128 SAS addresses or drives in a segment. When a fanout expander is incorporated into the architecture, up to 128 segments can exist within a SAS domain, which allows SAS to address up to 16,384 SAS physical links.

There can be only one fanout expander per SAS domain, but you can have any combination of edge expanders, initiators, or storage devices.
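
The addressing figures above come from straightforward multiplication of the per-segment and per-domain limits; the snippet below simply restates that arithmetic.

```python
# SAS domain addressing, using the figures from the text above.
addresses_per_edge_expander = 128   # SAS addresses/drives per segment
segments_per_fanout_expander = 128  # segments behind one fanout expander

max_links = addresses_per_edge_expander * segments_per_fanout_expander
print(max_links)  # 16384 SAS physical links in the domain
```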

Solid State Drives

Solid State Drives, or SSDs, are made of NAND Flash memory modules that are connected to the host system through an interface chip that uses regular HDD communication protocols. The two types of Flash memories used today are single-level cell and multi-level cell.

The SLC and MLC Flash memory types are similar in their design. MLC Flash devices cost less and allow for higher storage density. SLC Flash devices provide faster write performance and greater reliability, even at temperatures above the operating range of MLC Flash devices.

Single-level Cell

As the name suggests, SLC Flash stores one bit per cell, represented as a voltage level. The bit value is interpreted as a zero or a one.

Because there are only two states, SLC represents only one bit value. Each bit can have a value of “programmed” or “erased.”

Multi-level cell

An MLC cell can represent multiple bit values. In a two-bit-per-cell design, these values are interpreted as four distinct states: 00, 01, 10, or 11.
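
The number of voltage states a cell must distinguish grows as a power of two with the number of bits stored per cell, which is why MLC needs finer margins than SLC; the one-liner below shows the relationship.

```python
# Number of distinguishable voltage states needed per cell.
for name, bits_per_cell in [("SLC", 1), ("MLC", 2)]:
    print(name, 2 ** bits_per_cell, "states")  # SLC: 2 states, MLC: 4 (00, 01, 10, 11)
```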

Comparing SLC and MLC

NAND flash memory using MLC technology has quickly become the predominant Flash technology in the broader consumer market. However, compared to SLC, MLC has some characteristics that make it less desirable for building the higher-performance, high-reliability devices required for server storage.

For example, MLC has higher internal error rates because of the smaller margins separating the cell states, which necessitates stronger error-correcting code (ECC) protection.

It has a significantly shorter life-span in terms of the maximum number of program and erase cycles.

It also has slower read performance and significantly slower write (program) performance than SLC.

MLC NAND Flash therefore has comparatively poor read and write performance. More importantly, the SLC Flash program and erase lifecycle, often referred to as “endurance,” is 10 to 20 times greater than that of MLC Flash.

The higher storage density of MLC will continue to make it the predominant choice for lower-cost, lower-workload consumer devices. The higher performance and better reliability of SLC NAND currently make it the preferred choice for the Solid State Drives that meet the requirements of server storage.

SSD wear leveling

Wear leveling is one of the basic techniques used to increase the overall endurance of NAND-based Solid State Drives.

Because NAND-based SLC Flash supports only 100,000 lifetime write and erase cycles, it is important that no physical NAND block in the memory array be erased and rewritten more than necessary. However, certain logical SCSI blocks of a SAS or SATA device might need to be updated, or rewritten, frequently. Wear leveling resolves this issue by continuously remapping logical SCSI blocks to different physical pages in the NAND array.

Wear leveling ensures that erasures and rewrites remain evenly distributed across the medium, which maximizes the endurance of the SSD. To maximize SSD performance, this logical-to-physical map is maintained as a pointer array in the high-speed DRAM on the SSD controller. It is also maintained algorithmically in the metadata regions in the NAND flash array itself. This ensures that the map can be rebuilt after an unexpected power loss.
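
A minimal sketch of the remapping idea, under a deliberately simplified model in which every rewrite of a logical block is redirected to the least-worn physical block (real SSD controllers track pages, free pools, and metadata far more carefully):

```python
# Simplified wear-leveling model: every rewrite of a logical block is
# redirected to the least-worn physical block, and erase counts are tracked.
class WearLeveledFlash:
    def __init__(self, physical_blocks):
        self.erase_counts = [0] * physical_blocks
        self.logical_to_physical = {}

    def write(self, logical_block, data):
        # Pick the physical block with the fewest erases so far.
        target = min(range(len(self.erase_counts)), key=self.erase_counts.__getitem__)
        self.erase_counts[target] += 1
        self.logical_to_physical[logical_block] = target

flash = WearLeveledFlash(physical_blocks=4)
for _ in range(8):
    flash.write(logical_block=0, data=b"hot data")   # the same "hot" logical block
print(flash.erase_counts)  # [2, 2, 2, 2] -- wear spread evenly, not [8, 0, 0, 0]
```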

SSD over-provisioning

–On high-end SSDs, it is possible to over-provision by 25% above the stated storage capacity

–Distributes the total number of reads and writes across a larger population of NAND blocks and pages over time

–The SSD controller gets additional buffer space for managing page writes and NAND block erases

The overall endurance and performance of an SSD can also be increased by overprovisioning the amount of NAND capacity on the device. On higher end SSDs, NAND can be over-provisioned by as much as 25 percent above the stated storage capacity. Over-provisioning increases the endurance of an SSD by distributing the total number of writes and erases across a larger population of NAND blocks and pages over time. Over-provisioning can also increase SSD performance by giving the SSD controller additional buffer space for managing page writes and NAND block erases.
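
The endurance benefit can be approximated to first order: spreading the same write volume over 25 percent more NAND lowers the average wear per block proportionally. The numbers below are illustrative only.

```python
# First-order estimate of how 25% over-provisioning reduces per-block wear.
stated_capacity_blocks = 100_000          # blocks visible to the host
overprovisioned_blocks = int(stated_capacity_blocks * 1.25)

total_writes = 10_000_000                 # same workload in both cases
wear_without_op = total_writes / stated_capacity_blocks
wear_with_op = total_writes / overprovisioned_blocks

print(round(wear_without_op, 1), round(wear_with_op, 1))  # 100.0 vs 80.0 erases per block
```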

Smart SSD Wear Gauge

Although wear leveling can increase performance levels and prolong the life of NAND, remember that NAND has a limited lifetime of about 100,000 program and erase cycles.

HPE has a utility called the SmartSSD Wear Gauge that can be used to collect information and generate reports on the current usage levels and expected remaining life for Solid State Drives. The SmartSSD Wear Gauge is provided as part of the Array Diagnostic Utilities.

What is a Disk Enclosure?

A disk enclosure is basically a chassis designed to hold and power disk drives and to provide a mechanism to enable them to communicate to one or more separate hosts.

–A disk enclosure is a specialized casing designed to hold and power disk drives while providing a mechanism to allow them to communicate to one or more separate computers

–In enterprise terms, “disk enclosure” refers to a larger physical disk chassis

–Disk enclosures do not have RAID controllers

–Disk enclosures can be connected directly to the hosts

Fault-tolerant cabling

–Fault-tolerant cabling allows any drive enclosure to fail or be removed while maintaining access to other enclosures

–P2000 G3 Modular Storage Array (MSA)

–Two D2700 6Gb enclosures

–The I/O module As on the drive enclosures are shaded green

–The I/O module Bs on the drive enclosures are shaded red

The schematic shows a P2000 G3 MSA System connected to two D2700 6Gb drive enclosures using fault-tolerant cabling.

The I/O module As on the drive enclosures are shaded green. The I/O module Bs on the drive enclosures are shaded red.

Fault-tolerant cabling requires that you connect “P2000 G3 controller A” to “I/O module A” of the first drive enclosure and cascade this connection on to I/O module A of the last drive enclosure (shown in green). Likewise, you must connect “P2000 G3 controller B” to “I/O module B” of the last drive enclosure and cascade this connection on to I/O module B of the first drive enclosure (shown in red).

Straight-through cabling

–Although straight-through cabling can sometimes provide increased performance in the array, it also increases the risk of losing access to one or more enclosures if an enclosure fails or is removed

–P2000 G3 Modular Storage Array (MSA)

–Two D2700 6Gb enclosures

–The I/O module As on the drive enclosures are shaded green

–The I/O module Bs on the drive enclosures are shaded red

The following figure shows a P2000 G3 MSA System connected to two D2700 6Gb drive enclosures using straight-through cabling.

Straight-through cabling requires that you connect P2000 G3 controller A to I/O module A of the first drive enclosure, which in turn is connected to I/O module A of the last drive enclosure (shown in green).

P2000 G3 controller B is connected to I/O module B of the first drive enclosure, which in turn is connected to I/O module B of the last drive enclosure (shown in red).

What is LUN Masking?

Logical unit number masking is an authorization process that makes a LUN available to some hosts and unavailable to other hosts. LUN Masking is implemented primarily at the host bus adapter, or HBA, level. LUN masking implemented at this level is vulnerable to any attack that compromises the HBA.

Selective Storage Presentation is a special kind of LUN masking that is available on HPE 3PAR and EVA Storage Arrays. It lets the user designate which hosts have access to which logical drives. SSP has three advantages over standard LUN masking:

–First, it is enforced at the level of the storage array.

–Second, it is independent of any host vulnerabilities.

–Third, it is applied through the dedicated command-line interface or GUI of the storage array.

–Enables host visibility of LUNs within the storage array

–LUN granularity

–Independent of zoning

–Can be implemented at the host, fabric, or array level

–Used for data security

–Selective Storage Presentation on HPE 3PAR and EVA Arrays
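
A minimal sketch of the LUN-masking idea summarized above, using hypothetical WWPNs and LUN numbers: the array keeps a table of which initiator ports may see which LUNs and filters every inquiry against it.

```python
# Hypothetical LUN-masking table: the array exposes a LUN only to hosts
# whose initiator WWPN is explicitly allowed. WWPNs and LUN IDs are made up.
masking_table = {
    "50:01:43:80:12:34:56:78": {0, 1},   # host A sees LUN 0 and LUN 1
    "50:01:43:80:ab:cd:ef:01": {2},      # host B sees only LUN 2
}

def visible_luns(initiator_wwpn):
    # An unknown initiator sees nothing.
    return sorted(masking_table.get(initiator_wwpn, set()))

print(visible_luns("50:01:43:80:12:34:56:78"))  # [0, 1]
print(visible_luns("50:01:43:80:ff:ff:ff:ff"))  # [] -- masked out
```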

What is Storage Virtualization?

Virtualization is the pooling of physical disks, or parts of the physical disks, into what appears to be a single storage device that is managed from a central console.

Storage virtualization helps the storage administrator perform backup, archiving, and recovery tasks more easily and in less time by disguising the actual complexity of the SAN. With HPE 3PAR Storage Arrays, virtualization improves the availability, reliability, and performance of the array.

Storage virtualization can be implemented with a software application, with dedicated hardware, or with a hybrid software and hardware appliance.

What is Fat (thick) or thin provisioning?

Thin provisioning is a technology that optimizes disk capacity and utilization. It also saves the money normally spent to purchase, power, and cool disks for future needs.

Businesses typically over-allocate storage. For example, if an application needs 5 gigabytes of storage today but will require 20 gigabytes in the future, the business would buy and provision 20 gigabytes up front. And for good reason: reprovisioning storage after a server is up and running can be complex, costly, and time-consuming. It brings down the application and introduces the risk of human error.

Fat provisioning means buying and allocating excess storage in anticipation of the growing needs of an application. You end up paying even more to spin those extra disks idly, cool them down, and give them floor space in the data center.

Thin provisioning ends the cycle of overbuying by enabling you to purchase capacity for the short-term while provisioning it as if you have far more. That means you can dramatically increase storage efficiency while saving money.

Thin-provisioned disks are provisioned for the full capacity (20 gigabytes in this example) but occupy only the space actually required on the disk (5 gigabytes in this example). As the amount of data grows, thin disks grow in size and can eventually reach the size of a fat-provisioned volume.
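
The relationship between provisioned and consumed capacity in the example above can be modeled with a tiny sketch (capacities in gigabytes, purely illustrative):

```python
# Thin-provisioned volume: the provisioned size is what the host sees,
# while consumed space grows only as data is actually written.
class ThinVolume:
    def __init__(self, provisioned_gb):
        self.provisioned_gb = provisioned_gb
        self.consumed_gb = 0

    def write(self, gb):
        # Physical space is allocated on demand, never beyond the provisioned size.
        self.consumed_gb = min(self.provisioned_gb, self.consumed_gb + gb)

vol = ThinVolume(provisioned_gb=20)   # host sees a 20 GB volume
vol.write(5)                          # application stores 5 GB today
print(vol.provisioned_gb, vol.consumed_gb)  # 20 5 -- only 5 GB of real disk consumed
```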

Also Read:

How to install HPE 3par Virtual Service Processor 5.0

How to replace a failed disk in 3par storage?

How to configure B2D storage to Data Protector?