Author: technoworldnow

What is Storage Area Network Host and How does it work? | technoworldnow

The Storage Area Network (SAN) host is the server computer that consumes storage capacity from the SAN storage system. You can mount a virtual drive on the host server, assign a drive letter to it, and format it for the operating system of the host.

Storage Basics and Fundamentals

The goal of connecting a host to a SAN is to reach the LUN defined on the storage array. The host in SAN infrastructure always plays the role of the initiator.

To be able to communicate with the storage array, the fabric needs to be configured so that the HBA adapters of the host belong to the proper zones. Additionally, if the storage array supports Selective Storage Presentation, the host must be allowed to communicate to the LUN on the storage array.

How does the host communicate on Fibre Channel?

For hosts to communicate within a SAN, they require a Fibre Channel host bus adapter (HBA). HBAs are available for all major operating systems and hardware architectures.

  • To communicate with Fibre Channel infrastructure, the host requires a host bus adapter (HBA)
  • Each HBA port physically connects to the fabric and becomes visible to the SAN
  • Port behavior depends on the HBA driver configuration and type and on the configuration of the fabric port

Converged Network Adapter

A converged network adapter, or CNA, combines a traditional HBA used in storage networks, a NIC used in Ethernet networks, and two protocols: the Fibre Channel over Ethernet protocol and the Converged Enhanced Ethernet protocol.

CNA interfaces are designed to provide regular Fibre Channel and NIC interfaces to the hosts, so regular Fibre Channel and NIC drivers are used. Internally, a CNA adapter uses the FCoE engine to handle traffic. This FCoE engine is invisible to the host to which the CNA adapter is connected.

N_Port ID virtualization

What is NPIV?

  • N_Port ID Virtualization (NPIV) is an industry-standard Fibre Channel protocol that provides a means to assign multiple Fibre Channel addresses on the same physical link.
  • NPIV makes a single Fibre Channel port appear as multiple virtual ports, each having its own N_Port ID and virtual WWN.
  • HPE offers an NPIV-based Fibre Channel interconnect option for server blades called Virtual Connect.

N_Port ID Virtualization is an industry-standard Fibre Channel protocol that provides a means to assign multiple Fibre Channel addresses on the same physical link.

NPIV provides a Fibre Channel facility for assigning multiple N_Port IDs to a single N_Port, thereby allowing multiple distinguishable entities on the same physical port. In other words, it makes a single Fibre Channel port appear as multiple virtual ports, each having its own N_Port ID and virtual World Wide Name (WWN).

The NPIV protocol requires an N_Port, which is typically an HBA or any device that acts as an NPIV gateway, and a fabric, which is usually a Fibre Channel switch, so that the N_Port can request and acquire multiple addresses from the fabric.


NPIV allows a single HBA or target port on a storage array to register multiple World Wide Port Names and N_Port IDs. This enables each virtual server to present a different World Wide Name to the SAN, which in turn means that each virtual server will see its own storage, but no storage from other virtual servers.

Server Virtualization with NPIV

NPIV allows multiple virtual operating system instances on the same physical machine to have individual World Wide Port Names. This means they can be treated as discrete entities by the network devices. In other words, the virtual machines can share a single HBA and switch port while receiving individualized network services such as zoning.
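The fan-out of one physical port into per-VM virtual ports can be sketched in Python. The class, the WWPN values, and the limit check are illustrative only, not a vendor API; the 256-port ceiling matches the HBA implementation described in this section.

```python
# Illustrative sketch: one physical HBA port fans out into NPIV virtual
# ports, each with its own virtual WWPN that zoning can target.
# The WWPN values and the PhysicalPort model are hypothetical.

class PhysicalPort:
    def __init__(self, wwpn: str, max_virtual: int = 256):
        self.wwpn = wwpn
        self.max_virtual = max_virtual
        self.virtual_ports = []

    def create_virtual_port(self, vm_name: str, virtual_wwpn: str) -> dict:
        """Register one more virtual N_Port on this physical link."""
        if len(self.virtual_ports) >= self.max_virtual:
            raise RuntimeError("NPIV limit reached for this physical port")
        vport = {"vm": vm_name, "wwpn": virtual_wwpn}
        self.virtual_ports.append(vport)
        return vport

hba = PhysicalPort("50:01:43:80:aa:bb:cc:dd")
hba.create_virtual_port("vm1", "50:01:43:80:aa:bb:cc:01")
hba.create_virtual_port("vm2", "50:01:43:80:aa:bb:cc:02")
# Each VM now presents its own WWPN to the fabric and can be zoned
# independently, even though both share one switch port.
print([v["wwpn"] for v in hba.virtual_ports])
```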

The HBA NPIV implementation virtualizes the physical adapter port, so a single physical Fibre Channel adapter port can function as multiple logical ports. In this implementation, each physical port can support up to 256 virtual ports.

NPIV I/O virtualization enables storage administrators to deploy virtual servers with virtual adapter technologies, creating virtual machines that are more secure and easier to manage.

HPE Virtual Connect Fibre Channel

HPE Virtual Connect is a set of interconnect modules and embedded software for HPE BladeSystem c-Class enclosures that simplifies the setup and administration of server connections. HPE offers the Virtual Connect 4-gigabit and 8-gigabit Fibre Channel Modules, two HPE Virtual Connect 1/10-gigabit Ethernet modules, the Virtual Connect Flex-10 10-gigabit Ethernet Module, and for management, HPE Virtual Connect Manager and HPE Virtual Connect Enterprise Manager.

Although Virtual Connect uses the standard HBAs within the server, it uses a new class of NPIV-based Fibre Channel interconnect modules to simplify the connection of those server HBAs to the data center environment.

Virtual Connect also extends the capability of the standard server HBAs by providing support for securely administering their Fibre Channel WWN addresses.

HPE Virtual Connect FlexFabric

  • Up to four physical functions for each server blade adapter network port
  • The physical function corresponds to the HBA
  • Four physical functions share the 10 Gb link
  • One of the four physical functions can be defined as the Fibre Channel HBA, and the remaining three will act as NICs
  • Each physical function has 100% hardware-level performance, but the bandwidth might be fine-tuned to quickly adapt to virtual server workload demands

Virtual Connect FlexFabric provides up to four physical functions for each blade-server-adapter network port, with the unique ability to fine-tune the bandwidth to adapt to virtual server workload demands quickly.

The system administrator can define all four connections as FlexNICs to support only Ethernet traffic, like with Virtual Connect.

Additionally, one of the physical functions can also be defined as a FlexHBA for Fibre Channel protocol support or as an iSCSI initiator for iSCSI boot protocol support. Each function has complete hardware-level performance and provides the I/O performance needed to take full advantage of multicore processors and to support more virtual machines per physical server.

What is Boot from SAN?

The process of booting a server using external storage devices over a SAN

  • Used for server and storage consolidation
  • Minimizes server maintenance and reduces backup time
  • Allows for rapid infrastructure changes

The process of loading installed operating system code from a storage device to the computer memory when the computer is powered on is referred to as the “boot process.” Typically, HPE ProLiant servers boot operating systems from internal SCSI, IDE, SATA, and SAS storage devices.

However, when you boot the operating system using external storage devices such as Fibre Channel HBAs and RAID arrays over a SAN instead of server-based internal boot devices, the boot process is referred to as “Boot from SAN.”

Multipath Concept

  • Multipath I/O (MPIO) provides automatic path failover between the server and the disk arrays
  • Some multipath solutions provide load balancing over multiple HBA paths

A redundant SAN design will present your host with multiple paths to the same LUN. Without multipath software, a server would see all of the paths to the LUN defined on the storage array, but it would not understand that the multiple paths lead to a single LUN. That would lead to a situation where the server showed four distinct LUNs instead of a single LUN reached through multiple paths.

A multipath server driver helps the server to sense that multiple paths are leading to the same physical device, and it enables a host to correctly present the LUN as a single device.
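The grouping a multipath driver performs can be illustrated with a few lines of Python. The path records and the WWID value below are made up; real drivers read the LUN's unique identifier from a SCSI inquiry.

```python
# Sketch of the multipath idea: without grouping, four paths look like four
# disks; grouping by the LUN's unique identifier (WWID) collapses them into
# one logical device. The path records below are hypothetical.
from collections import defaultdict

paths = [
    {"device": "sda", "wwid": "360002ac000000000000000010000abcd"},
    {"device": "sdb", "wwid": "360002ac000000000000000010000abcd"},
    {"device": "sdc", "wwid": "360002ac000000000000000010000abcd"},
    {"device": "sdd", "wwid": "360002ac000000000000000010000abcd"},
]

luns = defaultdict(list)
for p in paths:
    luns[p["wwid"]].append(p["device"])

# One LUN, four usable paths -- not four separate disks.
for wwid, devs in luns.items():
    print(f"LUN {wwid}: {len(devs)} paths via {devs}")
```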

What is Path Failover?

  • Failover is handled by MPIO and is supported via services, drivers, and agents
  • It is transparent to the applications
  • The administrator has to configure the primary and alternate paths

One of the benefits of MPIO is support for automatic path failover. Automatic path failover is initiated when one or more of the data paths fail.

Changes in a SAN configuration are detected by the drivers, services, and agents that are part of the MPIO solution. I/O requests that were using the failed path are redirected to the remaining functioning paths.

The whole procedure is transparent to the application running on the affected host, and all events are logged to the system event database.

What is Load Balancing?

MPIO load balancing goes across all installed HBA ports in a server to increase throughput and HBA utilization. You can configure different load balancing policies.

The availability of these options depends on the multipath software and hardware. Generally, four modes are supported: round robin, least I/O, least bandwidth, and shortest queue.

  • MPIO provides load balancing across all installed HBAs (ports) in a server
  • There are various load-balancing policies, depending on the multipath software:
  • Round robin
  • Least I/O
  • Least bandwidth
  • Shortest queue (requests, bytes, service time)
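Two of these policies can be sketched in Python. The Path model and its statistics are hypothetical; real MPIO drivers track this state in the kernel.

```python
# Minimal sketch of two load-balancing policies: round robin and
# shortest queue (requests). Path names and counters are illustrative.
import itertools

class Path:
    def __init__(self, name):
        self.name = name
        self.outstanding_io = 0   # requests currently in flight

paths = [Path("hba0:port0"), Path("hba0:port1"),
         Path("hba1:port0"), Path("hba1:port1")]

# Round robin: rotate through all healthy paths in order.
rr = itertools.cycle(paths)

def round_robin():
    return next(rr)

# Shortest queue (requests): pick the path with the fewest in-flight I/Os.
def shortest_queue():
    return min(paths, key=lambda p: p.outstanding_io)

first_four = [round_robin().name for _ in range(4)]
print(first_four)             # each path chosen once per cycle
paths[0].outstanding_io = 5
print(shortest_queue().name)  # any path other than the busy hba0:port0
```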

MPIO solutions consist of two components: drivers developed by Microsoft, and device-specific modules developed by storage vendors to Microsoft standards.

MPIO uses redundant physical path components to eliminate single points of failure between servers and storage. It increases data reliability and availability, reduces bottlenecks, and provides fault tolerance and automatic load balancing of I/O traffic.

Although multipathing and clustering both improve availability, multipathing by itself does not protect against host hardware or software failures; it provides redundancy only for the cabling, adapters, and switches along the I/O path.

Fibre Channel advanced features

Now we will look at some of the advanced features you might find in Fibre Channel environments.

Each port in the switched fabric has its own unique 24-bit address. With this 24-bit addressing scheme comes a smaller frame header, and this can speed up the routing process. This frame header and routing logic optimizes the Fibre Channel fabric for high-speed switching of frames.

The 24-bit addressing scheme also allows for up to 16 million addresses, which is an address space larger than any practical SAN design in existence today. The 24-bit address has to be correlated with the 64-bit address associated with World Wide Names.

Fibre Channel name and address

  • 24-bit addresses are automatically assigned by the topology to remove the overhead of manual administration
  • Unlike the WWN addresses, port addresses are not built-in
  • The switch is responsible for assigning and maintaining the port addresses
  • The switch maintains the correlation between the port address and the WWN address of the device
    on that port
  • The Name server is a component of the fabric operating system running on the switch

The 24-bit address scheme also removes the overhead of manually administering addresses because it allows the topology itself to assign addresses. This is not like World Wide Name addressing, in which the addresses are assigned to the manufacturers by the Institute of Electrical and Electronics Engineers standards committee and then built into the device, like naming a child at birth.

If the topology itself assigns the 24-bit addresses, then something has to be responsible for maintaining the mapping from WWN addresses to port addresses.

In the switched fabric environment, the switch itself is responsible for assigning and maintaining the port addresses. When a device with its WWN logs in to the switch on a specific port, the switch assigns the port address to that port, and the switch also maintains the correlation between the port address and the WWN address of the device on that port. This function of the switch is implemented by using a Name server.

The Name server is a component of the fabric operating system, and it runs inside the switch. It is essentially a database in which a fabric-attached device registers its values.

Other benefits of dynamic addressing are that it removes the potential element of human error in address maintenance and it provides more flexibility in additions, moves, and changes in the SAN.
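The login-and-register flow can be sketched as follows. The domain/area/port layout follows the port-address breakdown given in this article; the WWN, class, and method names are illustrative.

```python
# Sketch of the Name server idea: when a device logs in on a switch port,
# the switch assigns a 24-bit port address and records the WWN-to-address
# mapping. The NameServer class and the WWN value are illustrative.

class NameServer:
    def __init__(self, domain: int):
        self.domain = domain          # this switch's domain ID
        self.db = {}                  # WWN -> assigned 24-bit port address

    def login(self, wwn: str, area: int, port: int) -> int:
        """Fabric login: build the address from domain/area/port, register it."""
        address = (self.domain << 16) | (area << 8) | port
        self.db[wwn] = address
        return address

    def lookup(self, wwn: str) -> int:
        return self.db[wwn]

switch = NameServer(domain=1)
addr = switch.login("50:06:0b:00:00:c2:62:00", area=4, port=0x1A)
print(f"{addr:06x}")   # 01041a: domain 01, area 04, port 1a
```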

Fibre Channel port address (1)

A 24-bit port address consists of three parts:

  • The domain consists of bits 23 to 16.
  • The area consists of bits 15 to 08.
  • The port, or arbitrated loop physical address (AL_PA), consists of bits 07 to 00.

Fibre Channel port address (2)

What is the significance of each part of the port address?

The domain is the most significant byte of the port address. It is the address of the switch itself. One byte allows for up to 256 possible addresses, but because some of these are reserved (like the one for broadcast), only 239 addresses are actually available.

This means that you can have as many as 239 switches in your SAN environment. If you have multiple interconnected switches in your environment, the domain number allows each switch to have a unique identifier.

The area field provides 256 addresses. This part of the address identifies the individual FL_Ports that are supporting loops, or it can be used as the identifier for a group of F_Ports; for example, a card with more ports on it. This means that each group of ports has a different area number, even if there is only one port in the group.

The final part of the address identifies the attached N_Ports and NL_Ports. It provides for 256 addresses.

To determine the number of available addresses, you can use a simple calculation, where you multiply the numbers of domains, areas, and ports: 239 domains × 256 areas × 256 ports = 15,663,104 addresses in total.
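The address split and the total-address arithmetic above can be checked with a few lines of Python; the sample address value is arbitrary.

```python
# The three-part split described above, as a worked example. Bit positions
# follow the text: domain 23-16, area 15-08, AL_PA 07-00.

def split_fc_address(address: int) -> tuple[int, int, int]:
    domain = (address >> 16) & 0xFF
    area = (address >> 8) & 0xFF
    al_pa = address & 0xFF
    return domain, area, al_pa

print(split_fc_address(0x0A12EF))   # (10, 18, 239)

# Total fabric address space: 239 usable domains x 256 areas x 256 ports.
print(239 * 256 * 256)              # 15663104
```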

Simple Name Server

  • The Name server stores information about all of the devices in the fabric
  • An instance of the Name server runs on every Fibre Channel switch in a SAN
  • A switch service that stores names, addresses, and attributes for up to 15 minutes and provides them as required to other devices in the fabric

When you are connecting a Fibre Channel device to a Fibre Channel switch, that device must register itself with that switch. This registration includes host and storage identifiers such as the device network address and a World Wide Name.

On top of this, communication parameters are also exchanged. The Fibre Channel device registers itself with a Simple Name Server, or SNS, which serves as a database for all Fibre Channel devices attached to the SAN. The Fibre Channel switches perform the SNS function.

10-bit addressing mode

The number of physical ports on the switch is limited to 256 by the number of bits in the area part of the Fibre Channel address. Director switches such as the Brocade DCX and DCX-4 support Virtual Fabrics, where the number of required ports can easily grow beyond 256.

A 10-bit addressing mode allows for the support of up to 1024 F_Ports in a logical switch. This is achieved by borrowing the two most significant bits from the AL_PA field of the Fibre Channel address.

Although this scheme is flexible in supporting a large number of F_Ports, it also reduces the number of NPIV or loop devices supported on a port to 64.
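The arithmetic behind the bit borrowing can be sketched as follows; the split function is illustrative, built from the field layout described in this section.

```python
# Sketch of the 10-bit area interpretation: the area field grows from 8 to
# 10 bits by taking the top two bits of the AL_PA field, so a logical
# switch can address 1024 F_Ports while each port keeps only 6 AL_PA bits
# (64 values) for NPIV/loop devices.

AREA_BITS_10 = 10
ALPA_BITS_REMAINING = 8 - 2

print(2 ** AREA_BITS_10)          # 1024 F_Ports per logical switch
print(2 ** ALPA_BITS_REMAINING)   # 64 NPIV/loop devices per port

def split_10bit(address: int) -> tuple[int, int, int]:
    """Split a 24-bit address using the 10-bit area interpretation."""
    domain = (address >> 16) & 0xFF
    area = (address >> 6) & 0x3FF    # 10-bit area
    device = address & 0x3F          # remaining 6 AL_PA bits
    return domain, area, device
```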

Arbitrated loop addressing

Fibre Channel specifies a three-byte field for the address used in routing frames. In an arbitrated loop, only one of the three bytes, containing the least significant eight bits, is used for the arbitrated loop physical address. This address is used in the Source and Destination IDs of the frames transmitted in the loop.

Of the full 24-bit address defined by the Fibre Channel standard, only eight bits are used for the AL_PA. Bits 8 to 23 are used for the FL_Port identifier, and the full 24 bits are used by an N_Port in a fabric switch environment.

What is Fibre Channel and Fibre Channel over Ethernet and How does it work?

In a Storage Area Network environment, most IT companies use Fibre Channel and Fibre Channel over Ethernet architectures to give users easy access to their data.

Fibre Channel communication can be conducted over copper coax cables, twisted pair cables, or optical fiber. This chapter describes the components used to transform electrical signals to optical signals, and vice versa, and the most common types of optical fibers. It also identifies some of the factors that lead to fiber-optic signal losses.

Fibre Channel Function levels

Fibre Channel is structured as a set of five hierarchical function levels.

  • FC-0 is the physical level that defines connectors, cables, and the electrical characteristics of transmission.
  • FC-1 is the encoding level, which defines the encoding and decoding and the transmission protocol.
  • FC-2 is the signaling and framing protocol level. It determines how the data from the upper level is framed for handling by the transport level, and it incorporates the management of frames, flow control, and cyclic redundancy checks.
  • FC-3 is the common services level, which is open for future implementation.
  • FC-4 is the protocol mapping level. It is usually provided by the device drivers from the different vendors, and it establishes the interface between Fibre Channel and the upper-level protocols.

FC-0—Physical level

Defines the physical link in the Fibre Channel system

  • Transceivers
  • Connection
  • Media type

Available data rates

  • 133 Mbit/s
  • 266 Mbit/s
  • 531 Mbit/s
  • 1062 Mbit/s

The lowest architectural level defines the physical links in the system, including the fiber, connectors, and the optical and electrical parameters for a variety of data rates.

The physical level is designed for the use of a large number of technologies to meet the widest range of system requirements. An end-to-end communication route can consist of different link technologies to achieve the maximum performance and price efficiency.

This section takes a closer look at the physical link components.

To be able to transmit data, you need transceivers. The most common way of transmitting data is to use light-based fiber optics; the use of electrical signals is the traditional, slower way of transmitting data.

The best modules to use today are the XFP and SFP transceivers. “XFP” stands for “Ten Gigabit Small Form-Factor Pluggable” and “SFP” stands for “Small Form-Factor Pluggable.”

SFP and SFP+ transceivers have the same size and appearance, but they support different standards. As a result, the less expensive SFP supports data rates up to 4.25 Gbit/s and distances up to 150 km, and the SFP+ supports data rates up to 16 Gbit/s and distances up to 80 km.

Fibre Channel connectors

  • SFP, SFP+, and XFP transceivers are compatible with the Lucent Connector (LC) type of connectors
  • Cables containing LC connectors on both sides are known as LC-LC cables

An optical fiber connector terminates the end of an optical fiber and enables faster connection and disconnection than splicing. The connectors mechanically couple and align the cores of fibers so light can pass through. Better-quality connectors lose little light because of reflection or misalignment of the fibers. In all, about one hundred fiber optic connectors have been introduced to the market.

SFP, SFP+, and XFP transceivers are compatible with the Lucent Connector types of connectors. Cables containing LC connectors on both sides are known as LC-LC cables.

Fibre Channel cabling

Although Fibre Channel was initially designed for use with fiber-optic cable, it also works well over copper cable at shorter distances in installations such as storage area networks. In fact, the specification lists several types of copper media that can support Fibre Channel.

The most common form of copper for Fibre Channel is shielded, twisted-pair cabling using DB-9 connectors—what looks like shielded telephone wire.

However, it is important to understand that copper cable for Fibre Channel needs to meet higher performance standards than conventional telephone wire. Properly specified and installed copper cable works fine for shorter distances, such as within a building, at speeds up to 100 MB/s.

Common optical (glass fiber) cable types include:

  • 62.5-micron multimode,
  • 50-micron multimode, and
  • 9-micron single-mode.

Multimode Fiber

  • Allows multiple streams of light to travel different paths
  • Most popular fiber type for networking
  • Fibre Channel uses a single wavelength (for example, 850 nm)

Multimode uses a shortwave laser to emit many different light modes. These reflect off the cable cladding at different angles, which causes dispersion. This dispersion reduces the total distance from which the original signal can be reclaimed.

Multimode fiber has a larger core than single-mode fiber. The larger the core, the greater the dispersion factor, hence the reduction in the distance that data, or light, can travel.

Single-mode Fiber

Single-mode is an optical fiber with a core diameter of less than ten microns. Used for high-speed transmission over long distances, it provides greater bandwidth than multimode fiber, but its smaller core makes it more difficult to couple light into the fiber.

Increasingly, single-mode fiber is being used for shorter distances. When single-mode fiber is used in shorter distances, such as a campus or metropolitan area network, step-index fiber is used. For longer distances and for transmitting multiple channels, such as with WDM, dispersion-shifted fiber is used.

Single-mode step-index fiber

When moderate-distance transmission cannot be accomplished with multimode fiber and inexpensive multimode light sources, single-mode fiber is used. This type of fiber is most commonly used in private network, campus, and building applications.

Single-mode fiber is designed for use at both the 1310 nm and the 1550 nm wavelength windows.

Because 1310 nm lasers and detectors are less expensive than 1550 nm devices, most of these short-to-moderate distance applications use the 1310 nm wavelength.

Single-mode fiber is the least expensive fiber available, and is optimized for the lowest dispersion at 1310 nm. It offers the best combination of cost and performance for most short-to-moderate distance private network, campus, and building applications when distances exceed multimode limits.

The information-carrying capability of single-mode fiber is enormous. Single-mode fiber supports speeds of tens of gigabits per second and can carry many gigabit channels simultaneously, with each channel on a different wavelength of light and no interference between them.

Fiber-optic cable signal loss — Attenuation


  • The reduction in power of the light signal as it is transmitted
  • Caused by passive media components such as cables, cable splices, and connectors

The correct functioning of an optical data link depends on modulated light reaching the receiver with enough power to be demodulated correctly. “Attenuation” is the reduction in the power of the light signal as it is transmitted.

Attenuation is caused by passive media components such as cables, cable splices, and connectors. Although attenuation is significantly lower for optical fiber than for other media, it still occurs in both multimode and single-mode transmissions.

An efficient optical data link must have enough light available to overcome attenuation.

Fiber-optic cable signal loss — Dispersion


  • Spreading of the signal over time
  • Two types of dispersion can affect an optical data link:
  • Chromatic dispersion — resulting from the different speeds of light rays
  • Modal dispersion — resulting from the different propagation modes in the fiber

Dispersion is the spreading of the signal over time. Two types of dispersion can affect an optical data link:

  • The first type is “chromatic dispersion,” which refers to the spreading of the signal that results from the different speeds of the light rays.
  • The second type is “modal dispersion,” which refers to the spreading of the signal because of the different propagation modes in the fiber.

For multimode transmission, modal dispersion, rather than chromatic dispersion or attenuation, usually limits the maximum bit rate and link length.

For single-mode transmission, modal dispersion is not a factor; however, at higher bit rates and over longer distances, chromatic dispersion limits the maximum link length.

An efficient optical data link must have enough light to exceed the minimum power that the receiver requires to operate within its specifications.

When chromatic dispersion is at the maximum allowed, its effect can be considered as a power penalty in the power budget.

The optical power budget must allow for the sum of component attenuation, power penalties —including those from dispersion, and a safety margin for unexpected losses.
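A worked example of such a power budget follows; all dB figures here are illustrative, not from any particular transceiver datasheet.

```python
# Worked example of the power-budget rule above: the available optical
# power must cover cable attenuation, connector/splice losses, dispersion
# penalties, and a safety margin. All figures are illustrative.

tx_power_dbm = -3.0          # transmitter launch power
rx_sensitivity_dbm = -17.0   # minimum power the receiver needs

budget_db = tx_power_dbm - rx_sensitivity_dbm   # 14 dB available

losses_db = {
    "fiber (10 km x 0.4 dB/km)": 4.0,
    "connectors (4 x 0.5 dB)": 2.0,
    "splices (2 x 0.1 dB)": 0.2,
    "dispersion power penalty": 1.0,
    "safety margin": 3.0,
}

total_loss_db = sum(losses_db.values())   # 10.2 dB required
print(f"budget {budget_db} dB, required {total_loss_db} dB")
print("link closes" if budget_db >= total_loss_db else "link fails")
```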

Cable bending and damage

Bending is one of the primary causes of increases in attenuation in optical fibers. Two types of bending are macro bending and micro bending.

The macro bend has a much larger bend diameter than the fiber diameter. Here, the fiber coating has almost no impact on the optical loss because the light is guided in the core, far from the coating.

The coating cannot protect the glass (core and cladding) from being bent because the bend diameter is much larger than the fiber. 

The situation is the opposite for micro bending. Here the bending is local and the coating can protect the glass from external forces applied on the coating surface.

For this reason, many fibers have a two-layer acrylate coating, where the inner layer is soft and can accommodate external forces acting on the fiber.

Fibers with a thin and hard coating such as polyimide do not have this protection from local bending and must be handled more carefully to avoid micro bending of the glass.

Fibre Channel FC-1 coding layer

FC-1 8b/10b encode/decode

  • FC-1 defines the transmission protocol, including:
  • Serial encoding and decoding rules
  • Special characters
  • Error control
  • The information transmitted over a fiber is encoded 8 bits at a time into a 10-bit transmission character

Also used in:

  • PCI Express
  • IEEE 1394b
  • Serial ATA
  • SSA
  • Gigabit Ethernet
  • Infiniband

FC-1 defines the transmission protocol, including serial encoding and decoding rules, special characters, and error control. The information transmitted over a fiber is encoded eight bits at a time into a ten-bit transmission character.

The primary reason for using a transmission code is to improve the transmission characteristics of information across a fiber. The transmission code must be DC-balanced to support the electrical requirements of the receiving units.
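The 8b/10b split into a 5-bit and a 3-bit sub-block also gives the familiar Dxx.y character names. This sketch derives only the name; the actual 10-bit code tables and the running-disparity rules are omitted.

```python
# The 8b/10b code splits each byte into a 5-bit and a 3-bit sub-block,
# encoded as 6 and 4 bits respectively (hence 8b -> 10b). Characters are
# conventionally named Dxx.y (or Kxx.y for control characters) from those
# sub-blocks; only the naming is derived here.

def char_name(byte: int, control: bool = False) -> str:
    xx = byte & 0x1F          # low 5 bits -> 5b/6b sub-block
    y = (byte >> 5) & 0x07    # high 3 bits -> 3b/4b sub-block
    return f"{'K' if control else 'D'}{xx}.{y}"

print(char_name(0xBC, control=True))   # K28.5, the comma character
print(char_name(0x4A))                 # D10.2
```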

FC-2 signaling protocol level

The transport mechanism of Fibre Channel

  • Framing rules
  • Payload
  • Service classes and control mechanisms
  • Management of the data transfer sequence

Building Blocks

  • Ordered sets
  • Frames
  • Sequences
  • Exchanges

The basic building blocks of a Fibre Channel connection are the frames. The frames contain the information to be transmitted (the payload), the addresses of the source and destination ports, and the link control information. Frames are broadly categorized as data frames and link-control frames.

A sequence is formed by a set of one or more related frames transmitted unidirectionally from one N_Port to another. Each frame within a sequence is uniquely numbered with a sequence count. Error recovery, controlled by an upper protocol layer, is usually performed at sequence boundaries.

An exchange is composed of one or more non-concurrent sequences for a single operation. Exchanges can be unidirectional or bidirectional between two N_Ports.

Within a single exchange, only one sequence can be active at any time, but sequences of different exchanges can be concurrently active.
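The frame/sequence/exchange hierarchy can be sketched with a few Python classes. These are illustrative data structures, not a real FC-2 implementation.

```python
# Sketch of the FC-2 building blocks described above: an exchange is made
# of sequences, and a sequence of frames numbered by a sequence count.
from dataclasses import dataclass, field

@dataclass
class Frame:
    seq_cnt: int          # position within its sequence
    payload: bytes

@dataclass
class Sequence:
    frames: list = field(default_factory=list)

    def add_payload(self, payload: bytes) -> None:
        self.frames.append(Frame(seq_cnt=len(self.frames), payload=payload))

@dataclass
class Exchange:
    sequences: list = field(default_factory=list)

    def open_sequence(self) -> Sequence:
        seq = Sequence()
        self.sequences.append(seq)
        return seq

xchg = Exchange()
seq = xchg.open_sequence()        # only one sequence active at a time
for chunk in (b"read", b"data", b"done"):
    seq.add_payload(chunk)
print([f.seq_cnt for f in seq.frames])   # [0, 1, 2]
```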

FC-3 common services

  • The FC-3 layer covers functions that can span multiple N_Ports
  • FC-3 defines the common services necessary for the higher-level capabilities
  • FC-3 provides features such as:
  • Port striping
  • RAID
  • Virtualization
  • Compression
  • Encryption
  • Hunt groups
  • Multicast

The FC-3 level of the Fibre Channel standard is intended to provide the common services required for advanced features such as striping, hunt groups, and multicast.

  • Striping refers to multiplying the bandwidth by using multiple N_Ports in parallel to transmit a single information unit across multiple links.
  • Hunt groups refers to the ability for more than one port to respond to the same alias address. This improves efficiency by decreasing the chance of reaching a busy N_Port.
  • Multicast delivers a single transmission to multiple destination ports. This includes broadcasting to all N_Ports on a fabric and sending to only a subset of the N_Ports on a fabric.

FC-4 ULP mappings

  • Each upper-level protocol supported by the Fibre Channel transport requires a mapping for its Information Units to be presented to the lower levels for transport
  • The FC-4 layer provides these mappings for:
  • SCSI-3
  • IP
  • High-Performance Peripheral Interface (HIPPI)
  • FC-AV—A high-bandwidth video link for video networks, up to 500m
  • FC-VI—Fibre Channel Virtual Interface Architecture
  • FC-AE—Fibre Channel Avionics Environment
  • FICON, IEEE 802.2 LLC, ATM, Link Encapsulation, SBCCS, IPI
  • A Fibre Channel SAN is almost exclusively concerned with using the SCSI-3 mapping

Each upper-level protocol supported by Fibre Channel transport requires a mapping for its information units to be presented to the lower levels for transport.

A Fibre Channel SAN uses the SCSI-3 mapping almost exclusively.

What is Fibre Channel over Ethernet?

Fibre Channel over Ethernet is a computer network technology that encapsulates Fibre Channel frames over Ethernet networks. This chapter describes FCoE and explains the benefits of using Converged Network Adapters, which combine the strengths of the Fibre Channel and Ethernet protocols in modern data centers.

What is Fibre Channel over Ethernet?

  • Fibre Channel over Ethernet is a mapping of Fibre Channel over selected full-duplex IEEE 802.3 networks
  • The goal is to provide I/O consolidation over Ethernet, reducing network complexity in the data center
  • Customer benefits of a unified fabric:
  • Fewer NICs, HBAs, and cables
  • Lower capital expenditures and operating expenses

Fibre Channel Over Ethernet transports the SCSI storage data used in Fibre Channel networks. It uses the Fibre Channel Protocol stack instead of the TCP/IP stack, and it runs over the Ethernet infrastructure: the NICs, cables, switches, and so on. The goal is to provide I/O consolidation over Ethernet, reducing network complexity in the data center.

Customer benefits of using a unified fabric include needing fewer NICs, HBAs, and cables, and lowering the capital expenditures and operating expenses.

Fibre Channel Over Ethernet I/O consolidation

I/O consolidation enables Ethernet and Fibre Channel to share the same physical cable and still maintain protocol isolation. It also enables you to use and configure the same type of hardware for either network.

Although the concept is simple, the configuration is complex. The benefits of this idea, however, are significant.

  • By leveraging I/O consolidation, that is, by using a combined network interface card and HBA, organizations free up slots, providing a multifunction network and SAN.
  • The reduced number of cards reduces power consumption, which in the case of PCI Express is twenty-five watts per card.
  • There is also a reduced number of switch ports.
  • Less power is consumed in cooling, which is currently a primary barrier to data center expansion and a cause of inefficiency.

Another advantage of I/O consolidation is that it gives enterprise organizations the means to simplify their cable management. At the moment, twenty gigabits of bandwidth can be provided by two four-gigabit Fibre Channel connections and twelve one-gigabit Ethernet connections, fourteen cables in total.

Fibre Channel and Ethernet can be combined over just two ten-Gigabit-Ethernet cables. This maintains the bandwidth but reduces the number of cables being managed from fourteen to two, a reduction of roughly eighty-six percent. This also results in fewer points of management that administrators have to control.
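
The cable-count arithmetic above can be sketched as a quick check (the link counts and speeds are the ones from the example in the text):

```python
# Quick check of the I/O-consolidation arithmetic described above.

def total_bandwidth_gbps(links):
    """Sum bandwidth over a list of (cable_count, gbps_per_cable) pairs."""
    return sum(count * gbps for count, gbps in links)

# Before: 2 x 4 Gb/s Fibre Channel + 12 x 1 Gb/s Ethernet
before = [(2, 4), (12, 1)]
# After: 2 x 10 Gb/s converged Ethernet links
after = [(2, 10)]

bw_before = total_bandwidth_gbps(before)       # 20 Gb/s
bw_after = total_bandwidth_gbps(after)         # 20 Gb/s
cables_before = sum(n for n, _ in before)      # 14 cables
cables_after = sum(n for n, _ in after)        # 2 cables
reduction = 1 - cables_after / cables_before   # ~0.86
```

The bandwidth stays at twenty gigabits while the number of cables drops from fourteen to two.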

Fibre Channel Over Ethernet mapping

  • Fibre Channel over Ethernet maps the Fibre Channel commands and data directly into Ethernet frames
  • Fibre Channel frames are encapsulated in Ethernet frames
  • The mapping is 1:1, meaning there is no segmentation or compression of the Fibre Channel frames

Fibre Channel over Ethernet maps the Fibre Channel commands and data directly into Ethernet frames. The mapping is one-to-one, meaning there is no segmentation or compression of the Fibre Channel frames.

But Ethernet is a lossy medium. It provides a single best-effort pipe that drops packets during network congestion. So in Fibre Channel over Ethernet, Fibre Channel is encapsulated and run over a lossless Ethernet infrastructure.
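
The 1:1 mapping can be sketched in a few lines: a complete Fibre Channel frame is placed, unmodified, inside an Ethernet frame whose EtherType (0x8906) identifies it as FCoE. This is a simplified illustration; a real FCoE frame also carries a version field, SOF/EOF delimiters, and padding, which are omitted here.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE traffic

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a complete Fibre Channel frame in an Ethernet frame, 1:1,
    with no segmentation or compression (simplified sketch)."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame

fc_frame = b"\x00" * 36                      # placeholder FC frame bytes
frame = encapsulate_fc_frame(b"\xaa" * 6, b"\xbb" * 6, fc_frame)
assert frame[12:14] == b"\x89\x06"           # EtherType marks the frame as FCoE
assert frame[14:] == fc_frame                # the FC frame is carried intact
```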

Fibre Channel Over Ethernet lossless Ethernet Infrastructure

  • Fibre Channel over Ethernet has to create a lossless Ethernet environment to ensure the reliability of large-scale data transportation
  • Two standards enable lossless Ethernet
  • Data Center Bridging (DCB)
  • Converged Enhanced Ethernet (CEE)
  • DCB and CEE introduce three enhancements to Ethernet to make it lossless:
  • Priority Flow Control (IEEE 802.1Qbb)
  • Congestion Notification (IEEE 802.1Qau)
  • Enhanced Transmission Selection (IEEE 802.1Qaz)

FCoE has to create a lossless Ethernet environment to ensure the reliability of large-scale storage data transportation. The two standards that enable this are Data Center Bridging and Converged Enhanced Ethernet.

A few of the enhancements that make Ethernet lossless are described below.

Priority Flow Control

Priority Flow Control (IEEE 802.1Qbb)

  • IEEE 802.1Qbb is an enhanced QoS service
  • Traffic is classified into eight lanes, each of which can be assigned a priority level
  • Priority Flow Control issues a “Pause” command to manage and prioritize traffic when there is congestion
  • The administrators can create lossless (virtual) lanes for FCoE traffic and lossy (virtual) lanes for normal IP traffic

The Institute of Electrical and Electronics Engineers defined the means to categorize traffic according to its priority in the Quality-of-Service standard IEEE 802.1p.

The newer standard IEEE 802.1Qbb takes advantage of the earlier standard by first classifying the traffic into eight “lanes,” each of which can be assigned a priority level.

Priority Flow Control issues a Pause command that halts FCoE traffic during congestion so the losses can be minimized. It uses the priority level to distinguish FCoE traffic from other types of traffic. This means that administrators can create lossless virtual lanes for FCoE traffic and lossy virtual lanes for normal IP-based traffic.
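
The lane-and-pause behavior can be sketched as follows. This is a toy model, not an implementation of the 802.1Qbb wire protocol; the mapping of FCoE to priority lane 3 is a common convention assumed for the example.

```python
NUM_LANES = 8  # 802.1p defines eight priority levels

class PfcPort:
    """Toy model of a port with per-priority (per-lane) flow control."""

    def __init__(self, lossless_lanes):
        self.paused = [False] * NUM_LANES
        self.lossless = set(lossless_lanes)

    def receive_pause(self, lane):
        """Honor a PFC Pause only on lanes configured as lossless."""
        if lane in self.lossless:
            self.paused[lane] = True

    def can_transmit(self, lane):
        return not self.paused[lane]

port = PfcPort(lossless_lanes={3})  # assume FCoE is mapped to priority 3
port.receive_pause(3)               # congestion: the FCoE lane is halted
port.receive_pause(0)               # a lossy IP lane ignores the pause
```

Only the lossless FCoE lane stops transmitting; ordinary IP traffic keeps flowing on its lossy lanes.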

Congestion Notification

Congestion Notification (IEEE 802.1Qau)

  • Congestion is measured at the congestion point, but link rate limiting is taken at the point of origin
  • Example: An aggregation switch can ask an edge switch to stop (or limit) its traffic from a particular port, if congestion occurs

Congestion is measured at the congestion point in the network, wherever it is happening, but the action is taken at the reaction point, which is the originating point.

For example, an aggregation switch can ask an edge switch to stop or limit its traffic from a particular port if congestion is encountered.

Enhanced Transmission Selection

Enhanced Transmission Selection (IEEE 802.1Qaz)

  • High-priority traffic such as FCoE is allocated a minimum guaranteed bandwidth
  • If the FCoE traffic does not fully utilize its reserved capacity, the extra bandwidth can be used by other types of traffic, and this can be controlled dynamically

High-priority traffic like FCoE can be allocated a minimum guaranteed bandwidth so that the other traffic on the network does not congest the path with its high volumes.

However, if the FCoE traffic does not fully utilize the path, its “reserved capacity,” then the extra bandwidth can be used by other types of traffic. The protocol can control this dynamically.
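
The sharing of unused reserved bandwidth can be illustrated with a small allocator. The class names, guarantee percentages, and demand figures are invented for the example; real Enhanced Transmission Selection is enforced in switch hardware.

```python
def ets_allocate(link_gbps, guarantees, demands):
    """Toy ETS allocator.
    guarantees: class -> guaranteed fraction of the link
    demands:    class -> offered load in Gb/s
    Each class first gets min(reserved, demand); leftover reserved
    bandwidth is then handed to classes that still have unmet demand."""
    alloc = {}
    spare = 0.0
    for cls, share in guarantees.items():
        reserved = share * link_gbps
        alloc[cls] = min(reserved, demands.get(cls, 0.0))
        spare += reserved - alloc[cls]
    for cls in alloc:
        unmet = demands.get(cls, 0.0) - alloc[cls]
        grant = min(unmet, spare)
        alloc[cls] += grant
        spare -= grant
    return alloc

# 10 Gb/s link: FCoE and IP each guaranteed 50%, but FCoE only needs 2 Gb/s.
alloc = ets_allocate(10.0, {"fcoe": 0.5, "ip": 0.5}, {"fcoe": 2.0, "ip": 8.0})
```

FCoE keeps its guarantee available but uses only 2 Gb/s, so the IP class dynamically borrows the remaining 3 Gb/s and reaches its full 8 Gb/s demand.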

Fibre Channel over Ethernet Components

An FCoE configuration includes several components.

The first key component is the Converged Network Adapter. The CNA is a single adapter in the server that attaches to a PCI Express slot. It virtually provides the functionality of both an Ethernet NIC and a Fibre Channel HBA.

That means the server still sees two interfaces, and it sends the IP traffic to the NIC and the Fibre Channel traffic to the HBA. But the CNA collects the traffic from both of them and transports the data over a single Ethernet cable, after wrapping all the Fibre Channel frames in Ethernet frames.

The second key component is the FCoE link. FCoE uses the same Ethernet infrastructure as the TCP/IP network: UTP copper cables, optical fiber cables, and even the low-cost cables that use the SFP+ interface to carry ten-Gigabit Ethernet over short distances.

The third component is the set of FCoE switches and network switches that support the FCoE protocol. Fibre Channel SANs only understand the Fibre Channel protocol and only recognize Fibre Channel interfaces, so there needs to be an intermediary that separates the FCoE traffic from the regular IP traffic and connects to the Fibre Channel SANs directly.

This intermediate functionality is provided by FCoE switches or network switches with Fibre Channel ports that support the FCoE protocol. The CNAs in the servers connect to the FCoE switch, which in turn connects to the SAN using Fibre Channel ports and to the IP network using IP ports.

FCoE advantages and limitations

What are the advantages of FCoE?

  • FCoE simplifies the network by replacing the two cables and two network adapters on each server (the HBA for storage connectivity and the NIC for computer network connectivity) with a single adapter and cable.
  • FCoE can carry traffic over the Ethernet medium and uses the familiar and easily available copper UTP cables and optical fiber cables.
  • FCoE uses one network adapter instead of two, which results in some power savings for the server.
  • Some I/O virtualization solutions support FCoE, which enables you to reduce the total number of server adapters for a group of servers by consolidating them onto an I/O virtualization appliance and allowing the servers to share the common pool of adapters. The servers themselves connect to the I/O virtualization appliance through interfaces like PCI Express and the appropriate cables from there. You should note that certain proprietary, vendor-based drivers might have to be installed to complete this setup.
  • The performance of an FCoE network is comparable to that of Fibre Channel and IP networks, with FCoE currently supporting the speeds of Ethernet networks up to one, ten, or more gigabits per second. This speed is expanding to forty gigabits per second and one hundred gigabits per second.
  • FCoE can be used in virtualized environments (server virtualization) and is quite advantageous in such circumstances.
  • FCoE, unlike iSCSI, is a reliable storage transportation protocol. It can scale up to thousands of servers.
  • Because FCoE encapsulates the Fibre Channel data onto Ethernet frames for transportation only, all the existing administration tools and workflows for Fibre Channel remain intact.

Hence, the existing investment in Fibre Channel storage is preserved and the reliability of Fibre Channel is also maintained. The support for FCoE from network switch vendors strengthens the case of FCoE. These vendors are offering converged switches with both Ethernet and Fibre Channel ports.

Some disadvantages and limitations of FCoE include:

  • The only Ethernet component that is currently compatible with FCoE is the cables. Everything else has to change to implement FCoE. This means that the actual savings would only be the amount and cost of cables.
  • The cost of a CNA, although it is coming down, might be more than the cost of the HBA and NIC combined.
  • FCoE is currently restricted to access networks only (server-to-switch connections). The distribution and core storage networks are still in Fibre Channel and will continue to be in Fibre Channel until the FCoE technology matures enough that its own FCoE SAN networks can be created.
  • iSCSI proponents might still argue that changing one disparate network into another does not amount to convergence of the storage and network infrastructures.
  • Security on FCoE networks might have to be re-evaluated because the network now runs over Ethernet, which is more easily accessed than Fibre Channel.

What is SAN, NAS and DAS? Difference Explained | technoworldnow

In enterprise storage technology, we mostly have three options to choose from: Storage Area Network (SAN), Network Attached Storage (NAS), and Direct Attached Storage (DAS). All three have their own advantages and disadvantages.


Options for connecting computers to storage have increased dramatically in a short time. This chapter introduces the major storage networking variations: direct attached storage (DAS), network-attached storage (NAS), and the storage area network (SAN).



Businesses can choose among three storage architectures to suit their requirements. Each architecture has certain advantages and disadvantages:

– DAS is a storage device with a dedicated, parallel connection to a server, typically using SCSI.

– NAS storage devices connect directly to the LAN through an Ethernet port. LAN devices use TCP/IP to communicate with their network peers.

– A SAN is a dedicated network that provides storage to enterprise servers. It is typically configured using switches and Fibre Channel connections.

Direct Attached Storage

– The traditional method of locally attaching storage to servers through a dedicated SCSI communication channel between the server and storage

– Storage for each server is managed separately and cannot be shared

– DAS supports disk drives, a RAID subsystem, or another storage device

DAS is the traditional, non-networked method of locally attaching storage to servers through a dedicated communication channel between the server and storage.

The server typically communicates with the storage subsystem using a block-level interface. The file system resides on the server and determines which data blocks are needed from the storage device to complete the file request from an application.

Network-Attached Storage

– NAS provides file-level access to storage systems

– NAS devices are server-independent

– NAS devices are used to off-load storage traffic to a single, dedicated storage device


NAS servers provide a file-level interface to storage subsystems. Because NAS devices are server-independent, they complement and help ease the burden of overworked file servers by off-loading storage to a single-purpose, dedicated storage device. NAS devices have an operating system that is optimized for file sharing and does not run general server applications, eliminating the major cause of downtime.

NAS devices are perfect for storing unstructured data such as files that are manually created by users.

Fibre Channel Storage Area Network

– Dedicated network that provides access to consolidated, block-level data storage

– Special switches are used to connect storage arrays with servers and with each other

– Network communication uses the Fibre Channel protocol, which was specially developed for the transport of storage data

– This protocol is reliable, with speeds up to 16 Gbit/s

– FC SAN components allow for high levels of redundancy and resiliency


The need for a separate network for storage purposes only was evident toward the end of the nineties. The new storage area network infrastructure consisted of its own cabling and further development of the SCSI protocol. SCSI was already being used for the connection of devices such as storage arrays or printers to a server. The new development became known as Fibre Channel.

The Fibre Channel protocol was specially developed for the transport of storage data. It is reliable, and with a transport speed of sixteen gigabits per second it has recently even outperformed Ethernet.

By design, a SAN should provide redundancy and resiliency:

–Redundancy is the duplication of components, up to and including the entire fabric, to prevent a failure of the total SAN solution.

–Resiliency is the ability of a fabric topology to withstand failures.

What should you know before designing a SAN? SAN considerations

When designing SAN solutions, consider the following:

–Scalability (number of FC ports and expansion capability)

–Storage capacity, efficiency, and cost

–Availability of the fabric, systems, and data


–Remote replication of data

When planning and operating a SAN, you need to consider several factors.

–First, a SAN allows for great scalability, but increasing the size of a solution increases its price and complexity. You should consider any future expansion requirements in terms of the number of ports, connected systems, and arrays.

–Second, storage capacity, efficiency, and cost should be balanced to properly match the solution.

–Third, the availability of the fabric, systems, and data should be considered at an early stage of the SAN design. A SAN is often used to achieve no-single-point-of-failure configurations.

Generally, a SAN outperforms NAS and DAS solutions, but the SAN solution should be carefully balanced for optimal performance.

A SAN plays a crucial role in keeping a business running by providing protection from unpredictable events such as natural disasters or complete site failures. SANs provide the tools, methods, and means to replicate data from a primary site to a secondary, remote site.

Comparing SAN and NAS


The major difference between a SAN and NAS is that a SAN is a separate network, away from the company LAN. The SAN is configured to allow servers to communicate with storage arrays, typically using Fibre Channel. NAS requires a dedicated storage device, typically an optimized server with a number of RAID storage drives that are attached directly to the network.

Both options have their strengths and weaknesses, with the primary advantages of a SAN being the major weakness of a NAS solution, and vice versa.

The benefits of SANs include network speed, reliability, centralization, and data protection.

The main strengths of NAS are interoperability, a lower total cost of ownership, and its relative simplicity.

Comparing DAS, NAS and SAN

Note the position of the network in each architecture. In the case of NAS, the file system resides at the level of the storage. Because the data is visible in the form of a file system, NAS is good for sharing files between devices and operating systems. File system features make it easy to assign access permissions to the stored files.

In contrast, a SAN works at the block level. The file system is created and maintained by the operating system. To the operating system, the storage space that is accessible through the SAN looks like a regular block device such as an internal hard drive or a tape device.

How to choose between SAN, NAS and DAS?


When helping your customer to decide if they should use SAN, NAS, or DAS, it is important to focus on their specific storage needs and their long-term business goals. One of the key criteria to consider is capacity. This is the amount and “type” of data, either file level or block level, that needs to be stored and shared.

Other criteria to consider are:

–The I/O and throughput requirements for performance,

–The scalability and long-term estimates for data growth,

–The storage availability and reliability, especially for mission-critical applications,

–The data protection needed as well as the backup and recovery requirements,

–The quantity and skill level of the available IT staff and resources, and

–Any budget concerns of the customer.

Tiered storage is essentially the assignment of different categories of data to different types of storage devices. These categories can be based on the levels of protection needed, the performance requirements, the frequency of use, the cost, and other considerations that are unique to the business.

The data in a tiered-storage configuration can be moved from high-cost to low-cost storage media, from slow to fast storage media, or from archive to near-online or online storage media.
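
A minimal sketch of tiered-storage placement follows. The tier names and thresholds are invented for illustration; real policies also weigh protection levels and cost, as described above.

```python
def choose_tier(accesses_per_day, mission_critical):
    """Assign a category of data to a storage tier based on its
    access frequency and criticality (illustrative thresholds)."""
    if mission_critical or accesses_per_day > 100:
        return "online-ssd"      # fast, high-cost media
    if accesses_per_day > 1:
        return "nearline-hdd"    # mid-cost spinning disks
    return "archive"             # slow, low-cost media

assert choose_tier(500, False) == "online-ssd"
assert choose_tier(10, False) == "nearline-hdd"
assert choose_tier(0, False) == "archive"
```

As access frequency falls, the same policy naturally moves data from fast, high-cost media toward archive media.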

What are Storage Area Network (SAN) Components?

The physical components of a storage area network can be grouped in a single rack or data center, or they can be connected over long distances. Servers do not provide SAN connectivity out-of-the-box. To connect to the SAN, a server needs a host bus adapter.

This chapter discusses the basic SAN components and their boot order.

Identifying SAN Components

–Client systems

–Servers and host bus adapters

–Hubs or switches

–Routers

–SAN software

–Fibre Channel cables

–Storage devices

–Backup devices

Fibre Channel SAN environments enable the development of solutions that provide high performance and high availability, which are the fundamental requirements of a storage network.

Fibre Channel devices effectively combat the bandwidth-related problems that generally occur during bulky operations such as backup and restore operations.

A wide range of hardware and software products comprise a SAN. The hardware components offer different features to provide for a range of SAN sizes, from a small SAN to a high-speed, high-volume data center SAN.

The common SAN components are used in four layers:

–The client layer contains the client systems that are using the storage services.

–The host layer includes the servers with their host bus adapters.

–The fabric layer includes Fibre Channel hubs or switches, routers, SAN software, and Fibre Channel cables.

–And the storage layer includes storage and backup devices.

Host Component (Initiator)

Host components consist of servers and other devices that enable servers to connect to the SAN. Generally, servers do not have Fibre Channel ports. Hardware devices that provide the Fibre Channel port and perform digital-to-optical signal conversion are called host bus adapters. HBAs are available in the form of PCI cards for rack-based servers and mezzanine cards for server blades. HBAs often provide more than one Fibre Channel port for SAN connectivity.

The operating system requires the appropriate drivers to support the HBA. HBA drivers are not universal; each hardware manufacturer provides its own drivers for the operating systems its devices support.

The software component that is used to aggregate throughput, provide load balancing, and enable failover in the case of a communication failure is called multipath software. On a Microsoft Windows platform, that software is Microsoft Multipath I/O, or MPIO for short.
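
The behavior of multipath software can be sketched with a toy model: several paths to one LUN, round-robin load balancing, and failover when a path reports an error. Real MPIO is a kernel framework driven by vendor device-specific modules; the path names here are invented.

```python
class MultipathDevice:
    """Toy model of multipath I/O to a single LUN."""

    def __init__(self, paths):
        self.healthy = list(paths)

    def next_path(self):
        """Pick the next healthy path, rotating for round-robin balancing."""
        if not self.healthy:
            raise IOError("all paths to the LUN have failed")
        path = self.healthy.pop(0)
        self.healthy.append(path)
        return path

    def fail_path(self, path):
        """Drop a path after a communication failure (failover)."""
        if path in self.healthy:
            self.healthy.remove(path)

lun = MultipathDevice(["hba0:port1", "hba1:port1"])
first = lun.next_path()          # I/O goes out on the first path
lun.fail_path("hba0:port1")      # simulate a link failure on that path
survivor = lun.next_path()       # traffic continues on the remaining path
```

After the failure, I/O keeps flowing on the surviving path, which is exactly the continuity multipath software provides.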


Servers typically do not have Fibre Channel connectivity embedded. To connect servers to the SAN, you must use dedicated hardware called a host bus adapter.

Fibre Channel HBAs are similar to the network interface cards used in LANs and other non-SAN networks. They replace the traditional SCSI cards used to connect SAN devices such as servers and storage.

HBAs can come in the form of a PCI card for rack- or tower-based servers or a mezzanine card for high-density server blades.

Disk Arrays (Target)

Disk arrays are considered to be targets in a SAN. To communicate over the SAN, disk arrays are equipped with dedicated connection points called “ports.” To increase availability and enhance performance, disk arrays come with a minimum of four Fibre Channel ports.

Disk arrays are designed and built to run for long periods, which is measured as “uptime.” The most advanced disk arrays can achieve up to five nines of uptime during the year, which translates to a little more than five minutes of downtime.
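
The "five nines" figure can be checked directly: 99.999 percent availability over a year leaves a little more than five minutes of downtime.

```python
# Downtime implied by an availability figure, per year.
MINUTES_PER_YEAR = 365 * 24 * 60          # 525,600 minutes

def downtime_minutes(availability):
    return (1 - availability) * MINUTES_PER_YEAR

five_nines = downtime_minutes(0.99999)    # about 5.26 minutes per year
```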

Disk arrays are usually connected to an uninterruptable power supply to protect the system from power outages. But even if the UPS fails, disk arrays are usually equipped with a dedicated battery that preserves the cache content when a power outage occurs. When electricity becomes available again and the disks start spinning, the controllers flush the cached data to the hard drives, preserving the data integrity.

To ease the management and administration of a large number of drives, storage array virtual drive images can be “frozen in time” as snapshots, or seamless copies of those virtual images can be made through cloning. Modern disk arrays can work with hundreds of these snapshots and clones without a performance penalty.

Although disk arrays provide high levels of data availability within a rack, they cannot protect that data from extreme events such as natural disasters or complete site failures.

Other technologies are available to replicate the data to remote locations under those conditions. Disk arrays are designed to facilitate seamless and reliable replication of data over long distances to provide data integrity and disaster recovery.

Interconnect Devices

A Fibre Channel switch is a network switch that is compatible with the Fibre Channel protocol. These switches can be combined to create a fabric that allows many-to-many communication while maintaining throughput and providing redundancy with minimal latency.

Two types of Fibre Channel switches are available:

–Fabric switches are predominantly used to implement the switched fabric topology.

–Directors are the most expensive types of switches, but they offer the best performance and maximum reliability. The average annual downtime for a director is barely five minutes.  

What is SAN Boot Order?

To properly boot SAN components, apply the following boot order:

–First, power on the SAN fabric and wait for the switches to finish booting. If you do not wait for the boot process to finish, the fabric login attempts might be denied.

–Second, power on the storage array and wait for the disk array ports to log in to the fabric.

–Third, boot the host systems, and verify that your target drives are visible.

To shut down a SAN configuration, complete these steps in the opposite order.
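
The boot order above can be sketched as a simple orchestration script. The device names and the idea of driving them from one function are hypothetical; real equipment is powered and verified through vendor CLIs or management tools.

```python
def boot_san(fabric_switches, storage_array, hosts):
    """Bring up a SAN in the required order: fabric, then array, then hosts."""
    events = []
    # 1. Power on the fabric and wait for the switches to finish booting,
    #    otherwise fabric login attempts might be denied.
    for switch in fabric_switches:
        events.append(f"switch {switch} booted")
    # 2. Power on the storage array; its ports log in to the fabric.
    events.append(f"array {storage_array} ports logged in to fabric")
    # 3. Boot the hosts and verify that the target drives are visible.
    for host in hosts:
        events.append(f"host {host} sees its target LUNs")
    return events

events = boot_san(["sw1", "sw2"], "array1", ["esx1"])
```

Shutting down would simply walk the same steps in reverse order.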


What is a Hard Disk Drive (HDD)? Disk Technology explained

In a Storage Area Network (SAN), multiple components are used to form the network, and the storage array is one of the most important parts of it.


A hard drive, also known as a hard disk drive or HDD, is a fundamental part of modern computers. Functioning as an internal storage device, the HDD enables a computer to house and execute important files and programs.

At HPE, the hard drive family is divided into three categories: Entry, Midline, and Enterprise. These categories meet the needs of different environments for performance, reliability, and a healthy ratio of cost to capacity.


Entry drives have the lowest unit cost and give you a basic level of reliability and performance. They are best suited for non-mission-critical environments where I/O workloads are forty percent or less. They are also used for internal and archival storage or as boot drives for entry-level servers. Entry drives are only available with a Serial ATA, or SATA interface.

Midline drives give you larger capacity and greater reliability than Entry drives. Midline drives are more resistant to rotational and operational vibration, so they are better suited for use in multiple-drive configurations. Midline drives are available with both SATA and Serial Attached SCSI interfaces. Serial Attached SCSI is typically shortened to “SAS”.

Enterprise drives give you maximum reliability, the highest performance, scalability, and error management under the most demanding conditions. They are the only HPE drives designed for use in unconstrained I/O workloads. They are for mission-critical applications such as large databases, email servers, and back-office applications.

There are multiple types of HDD and disk technologies available; the widely used ones are described below:

Characteristics of drives

Form factor

– Small form factor (SFF) —2.5-inch

– Large form factor (LFF)—3.5-inch

Drive capacity

– Depends on number of platters the drive contains, the surface area of each platter, and the areal density
(the number of bits that can be stored per unit area)

– Expressed in gigabytes

Disk drive performance

– Depends on the rotational speed of the platters, the seek performance, the mechanical latency, the read/write bandwidth, the queuing strategies, and the interface technologies


Drive reliability

– Measured in terms of Annual Failure Rates (AFRs)

The basic characteristics of industry-standard drives are form factor, drive capacity, performance, and reliability.

– Regarding the form factor, HPE drives for servers are available in a two-point-five-inch small form factor and a three-point-five-inch large form factor. In general, SFF drives give you greater power and space savings. These drives can require as little as half the power and generate significantly less heat than LFF drives. LFF drives are better suited for implementations that require large, single-drive capacities and a lower cost per gigabyte.

– Drive capacity depends on the number of platters the drive contains, the surface area of each platter, and the areal density (the number of bits that can be stored per unit area).

– Disk drive performance depends on the rotational speed of the platters, the seek performance, the mechanical latency, the read-write bandwidth, the queuing strategies, and the interface technologies.

– Drive reliability is measured in terms of Annual Failure Rates. The AFR is the percentage of drive failures occurring in a large population of drives operating for one year. With an AFR of one-point-five percent, one hundred thousand drives would experience approximately fifteen-hundred failures per year.

–Keep in mind that an AFR calculated from a small number of drives would be subject to large statistical variations that make it less reliable than an AFR from a larger sample.
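
The AFR arithmetic above works out as a quick check: a 1.5 percent annual failure rate across 100,000 drives predicts about 1,500 failures per year.

```python
# Expected yearly drive failures for a given Annual Failure Rate.
def expected_failures(afr_percent, drive_count):
    return drive_count * afr_percent / 100

failures = expected_failures(1.5, 100_000)   # 1500.0 failures per year
```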

Drive interconnect technologies

The technology to connect one or more drives to a computer system has transitioned from parallel bus data interfaces to serial interfaces

Parallel interfaces:

–ATA—Advanced Technology Attachment

–IDE—Integrated Drive Electronics, also called PATA, Parallel Advanced Technology Attachment

–SCSI—Small Computer System Interface

Serial interfaces:

–SATA—Serial ATA

–SAS—Serial Attached SCSI

The technology used to connect one or more drives to a computer system has transitioned from parallel bus data interfaces such as Advanced Technology Attachment, Integrated Drive Electronics, and the original SCSI interface to the SATA and SAS serial interfaces.

Each drive with a SATA or SAS interface has its own high-speed serial communication channel to the controller.

Parallel SCSI

–A SCSI standard established by ANSI in 1986, but still evolving

–The Common Command Set (CCS) was developed in parallel with the ANSI SCSI-1, SCSI-2, SCSI-3, and SCSI-4 standards

–The SCSI-1 standard was too permissive and allowed too many vendor-specific options

–The result was incompatibility between products from different vendors, which made for confusion on:

  –Speed and feed: Fast, Ultra, Ultra2, narrow, and wide

  –Command sets: Common Command Set, Enhanced Command Set

  –Termination: Passive, Active, Forced Perfect Termination

–Ultra320 and Ultra640 (also known as Fast-320) are the last offerings

In addition to a physical interconnection standard, the Small Computer System Interface, or SCSI, defines a logical command set standard that all drive devices must adhere to. The Common Command Set was developed in parallel with ANSI SCSI-1, SCSI-2, SCSI-3, and SCSI-4, which include the revised CCS as part of the standard. The commands depend on the type of device being used.

SCSI-1 initially defined command sets for six device types; however, the standard was too permissive and allowed too many vendor-specific options. The result was incompatibility between products from different vendors.

A CCS was defined to solve the SCSI compatibility issues. It was a subset of the standard and did not allow for exceptions. With the CCS, SCSI-1 began to penetrate the server disk subsystem and tape backup market in the late 1980s.

SCSI-2 targeted the drawbacks of SCSI-1 and introduced support for 10 device types. SCSI-2 also introduced more efficient command sets that improved functionality by including disconnect and command queuing options.

Serial ATA (SATA)

–Hot-plug and Native Command Queuing (NCQ) support

–Transfer rates up to 300 MB/s for SATA2 and 600 MB/s for SATA3, using half-duplex

–SATA3.1 introduced support for Solid State Disks (SSD) and the Zero-Power Optical Disk Drive

–SATA3.2 combines SATA commands with the PCI Express interface to achieve device speeds
up to 16 Gb/s

–Mean Time Before Failure (MTBF) is 1.2 million hours

The Serial ATA, or SATA, standard is a direct replacement for the older Advanced Technology Attachment standard. Compared to ATA, the SATA interface offers a reduced cable size, with only seven conductors rather than the 40 conductors required by the ATA standard, as well as hot-pluggable drives and faster and more efficient data transfer rates through the optional I/O queuing protocol called Native Command Queuing, or NCQ.

The SATA3.1 standard introduced support for Solid State Drives and a Zero-Power Optical Disk Drive. The Zero-Power Optical Disk Drive reduces the power consumption of SATA optical disk drives to zero when the device is idle, preserving the energy.

To further increase the transfer speeds, SATA3.2 combines the SATA commands and the PCI Express interface to boost the maximum theoretical data speed to 16 gigabits per second, compared to the 6 Gb/s that is available on current drives.
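
The relationship between the quoted line rates and throughput figures comes from the 8b/10b encoding these SATA generations use: ten bits on the wire carry eight bits of data, so a six-gigabit link delivers six hundred megabytes per second. A quick check:

```python
# Convert a SATA line rate to usable throughput, accounting for
# 8b/10b encoding (10 wire bits carry 8 data bits).
def throughput_mb_per_s(line_rate_gbps):
    data_bits_per_s = line_rate_gbps * 1e9 * 8 / 10  # strip encoding overhead
    return data_bits_per_s / 8 / 1e6                 # bits -> megabytes

assert throughput_mb_per_s(6.0) == 600.0   # SATA3
assert throughput_mb_per_s(3.0) == 300.0   # SATA2
```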

Serial Attached SCSI

–SAS uses the full-duplex architecture, effectively doubling the transfer speeds

–The current SAS standard provides speed of 12 Gb/s, with a maximum theoretical speed of 16 Gb/s

–The maximum number of attached devices is 128 (compared to 16 for Parallel SCSI)

–A single SAS domain can address up to 65,535 devices using a fanout expander

–The MTBF is increased to 1.6 million hours

Serial ATA uses a half-duplex, serial connection to devices rather than the original parallel connection of ATA. SATA still uses the ATA command set, which is simpler but provides less robust functionality than the SCSI interface used with SAS.

The SATA interface has gone through three major generations:

–The 1.5 Gb/s version was targeted at replacing ATA in the desktop and consumer markets.

–The 1.5 Gb/s version with extensions was targeted at workstations and low-end servers. This generation added Native Command Queuing.

–The 3 Gb/s version was targeted at workstations and low-end servers. This generation increased the data transfer rate.

SATA is the best solution for price-sensitive, low-I/O-workload applications, and it dominates the desktop market because of its low cost and the lighter workloads of desktops.

In contrast, Serial Attached SCSI uses a point-to-point, full-duplex serial connection and the SCSI command set, which has more performance and reliability features than the ATA command set. For example, SAS devices can be dual-port.

This enables the device to access the full bandwidth of a SAS link. These additional features come at a cost, however. SAS devices are more expensive than SATA devices for the equivalent storage capacity.

The first-generation SAS supported a link speed of three gigabits per second. The current generation supports a link speed of up to six gigabits per second, or six hundred megabytes per second, in each direction.

SAS is the best solution for mission-critical, high-I/O-workload applications.

Near-line SAS

A SATA drive using a SAS interface is called near-line SAS. It provides all of the enterprise features that come with SAS, but still has SATA's limitations in disk performance and mean time before failure.

What is Native Command Queuing (NCQ)?

–NCQ is a technology designed to increase the performance of SATA drives.

–Disks are enabled to internally optimize the order in which read/write commands are executed.

–NCQ reduces the amount of unnecessary HDD head movement.

–NCQ is supported on the HPE Smart Array P400, P400i, E500, and P800 disk controllers.

NCQ is a technology designed to increase the performance of SATA hard disk drives by allowing an individual hard disk to internally optimize the order in which received read and write commands are executed. Without NCQ, a drive has to process and complete one command at a time. NCQ increases performance for workloads where multiple simultaneous read and write requests are outstanding, by reducing the amount of unnecessary back-and-forth on the drive heads. This most often occurs in server and storage applications.

For NCQ to be enabled, it must be supported and turned on in the controller, and in the hard drive itself.
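The head-movement savings described above can be sketched with a toy model. This is an illustration only, not drive firmware: the LBAs, the greedy nearest-first ordering, and the simple distance metric are all assumptions chosen to show why reordering queued commands reduces back-and-forth seeking.

```python
# Toy model of Native Command Queuing: the drive reorders queued
# requests to minimize head travel instead of serving them FIFO.

def fifo_travel(head, requests):
    """Total head movement when commands are served in arrival order."""
    total = 0
    for lba in requests:
        total += abs(lba - head)
        head = lba
    return total

def ncq_travel(head, requests):
    """Greedy nearest-first reordering, as an NCQ-style optimizer might do."""
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda lba: abs(lba - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [980, 10, 990, 20, 1000]
print(fifo_travel(0, queue))  # 4880: constant long seeks back and forth
print(ncq_travel(0, queue))   # 1000: nearby requests served first
```

With interleaved near and far requests, the reordered schedule covers the same work with a fraction of the head travel, which is exactly the workload pattern (many simultaneous outstanding requests) where NCQ pays off.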

NCQ performance gains


NCQ provides 8.8 percent faster performance in generic PC HDD read throughput and 9 percent faster performance in generic PC applications over non-NCQ systems.

What are SAS Domains?


Two types of expanders are used in the SAS topology: fanout and edge.

The server-attached storage market will typically use edge expanders, which can address up to 128 SAS addresses or drives in a segment. When a fanout expander is incorporated into the architecture, up to 128 segments can exist within a SAS domain, which allows SAS to address up to 16,384 SAS physical links.

There can be only one fanout expander per SAS domain, but you can have any combination of edge expanders, initiators, or storage devices.
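The addressing arithmetic above can be checked directly, using the figures quoted in this section (128 SAS addresses per edge-expander segment, up to 128 segments behind the single fanout expander):

```python
# SAS domain scaling, per the figures given above.
ADDRESSES_PER_EDGE_EXPANDER = 128   # addresses/drives per segment
SEGMENTS_PER_FANOUT = 128           # segments one fanout expander supports

physical_links = ADDRESSES_PER_EDGE_EXPANDER * SEGMENTS_PER_FANOUT
print(physical_links)  # 16384 SAS physical links in one domain
```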

Solid State Drives


Solid State Drives, or SSDs, are made of NAND Flash memory modules that are connected to the host system through an interface chip that uses regular HDD communication protocols. The two types of Flash memories used today are single-level cell and multi-level cell.

The SLC and MLC Flash memory types are similar in their design. MLC Flash devices cost less and allow for higher storage density. SLC Flash devices provide faster write performance and greater reliability, even at temperatures above the operating range of MLC Flash devices.

Single-level Cell


As the name suggests, SLC Flash stores one bit value per cell, which basically is a voltage level. The bit value is interpreted as a zero or a one.

Because there are only two states, SLC represents only one bit value. Each bit can have a value of “programmed” or “erased.”

Multi-level cell

An MLC cell can represent multiple values. These values can be interpreted as four distinct states: zero-zero, zero-one, one-zero, or one-one.
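The relationship between bits per cell and voltage states can be made concrete with a tiny sketch (the naming and enumeration are illustrative, not NAND internals):

```python
# An n-bit cell must distinguish 2**n voltage states.
def cell_states(bits_per_cell):
    """Number of distinguishable voltage states a cell must hold."""
    return 2 ** bits_per_cell

print(cell_states(1))  # SLC: 2 states (erased / programmed)
print(cell_states(2))  # MLC: 4 states

# Enumerate the four MLC states named in the text.
mlc_states = [format(value, "02b") for value in range(cell_states(2))]
print(mlc_states)  # ['00', '01', '10', '11']
```

Doubling the bits per cell squares the number of states the cell must resolve, which is why MLC has smaller margins between states and higher internal error rates, as the next section explains.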

Comparing SLC and MLC


NAND flash memory using MLC technology has quickly become the predominant Flash technology used in the broader market for consumer products. However, compared to SLC, MLC has some characteristics that make it less desirable for creating the type of higher performance, high-reliability devices that are required for server storage.

For example, it has higher internal error rates because of the smaller margins separating the cell states, necessitating larger ECC memories to correct them.

It has a significantly shorter life-span in terms of the maximum number of program and erase cycles.

It also has slower read performance and significantly slower write (program) performance than SLC.

MLC NAND Flash has comparatively poor read and write performance. More importantly, the SLC Flash program and erase lifecycle, often referred to as "endurance," is 10 to 20 times greater than that of MLC Flash.

The higher storage density of MLC will continue making it the predominant choice for use in lower cost and lower workload consumer devices. The higher performance and better reliability of SLC NANDs are currently preferred to create the Solid State Drives that meet the requirements of server storage.

SSD wear leveling


Wear leveling is one of the basic techniques used to increase the overall endurance of NAND-based Solid State Drives.

Because NAND-based SLC Flash supports only 100,000 lifetime write and erase cycles, it is important that no physical NAND block in the memory array be erased and rewritten more than is necessary. However, certain logical SCSI blocks of a SAS or SATA device might need to be updated, or rewritten, on a frequent basis. Wear leveling resolves this issue by continuously remapping logical SCSI blocks to different physical pages in the NAND array.

Wear leveling ensures that erasures and rewrites remain evenly distributed across the medium, which maximizes the endurance of the SSD. To maximize SSD performance, this logical-to-physical map is maintained as a pointer array in the high-speed DRAM on the SSD controller. It is also maintained algorithmically in the metadata regions in the NAND flash array itself. This ensures that the map can be rebuilt after an unexpected power loss.
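A minimal wear-leveling sketch follows. The data structures and least-worn-block policy are invented for illustration; real SSD controllers are far more sophisticated, but the principle of remapping hot logical blocks so erase cycles spread across the whole array is the same.

```python
# Repeated writes to one "hot" logical block are remapped to the
# least-worn physical block, spreading erase/write cycles evenly.

class WearLeveler:
    def __init__(self, physical_blocks):
        self.erase_counts = [0] * physical_blocks
        self.logical_to_physical = {}   # the pointer map kept in DRAM

    def write(self, logical_block):
        # Pick the physical block with the fewest erase cycles so far.
        target = min(range(len(self.erase_counts)),
                     key=lambda i: self.erase_counts[i])
        self.erase_counts[target] += 1
        self.logical_to_physical[logical_block] = target

ssd = WearLeveler(physical_blocks=8)
for _ in range(80):          # hammer a single hot logical block
    ssd.write(logical_block=0)
print(ssd.erase_counts)      # wear spread evenly: ten erases per block
```

Without the remap, one physical block would absorb all 80 erases; with it, every block takes ten, which is exactly how endurance is multiplied across the array.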

SSD over-provisioning

–On high-end SSDs, it is possible to over-provision by 25% above the stated storage capacity

–Distributes the total number of reads and writes across a larger population of NAND blocks and pages over time

–The SSD controller gets additional buffer space for managing page writes and NAND block erases

The overall endurance and performance of an SSD can also be increased by overprovisioning the amount of NAND capacity on the device. On higher end SSDs, NAND can be over-provisioned by as much as 25 percent above the stated storage capacity. Over-provisioning increases the endurance of an SSD by distributing the total number of writes and erases across a larger population of NAND blocks and pages over time. Over-provisioning can also increase SSD performance by giving the SSD controller additional buffer space for managing page writes and NAND block erases.
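As a back-of-the-envelope illustration of the effect, assuming writes spread evenly thanks to wear leveling (the 400 GB capacity is a made-up example; the 25 percent and 100,000-cycle figures come from this section):

```python
# Simplified model of over-provisioning's effect on total lifetime writes.
STATED_CAPACITY_GB = 400          # hypothetical drive size
OVERPROVISION = 0.25              # 25%, as on higher-end SSDs
CYCLES_PER_BLOCK = 100_000        # SLC NAND lifetime write/erase cycles

raw_capacity_gb = STATED_CAPACITY_GB * (1 + OVERPROVISION)
lifetime_writes_gb = raw_capacity_gb * CYCLES_PER_BLOCK
print(raw_capacity_gb)     # 500.0 GB of NAND behind a 400 GB drive
print(lifetime_writes_gb)  # 25% more total writes than without over-provisioning
```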

Smart SSD Wear Gauge

Although wear leveling can increase the performance levels and prolong the life of NAND, you have to remember that NAND has a limited lifetime of 100,000 write and erase cycles.

HPE has a utility called the SmartSSD Wear Gauge that can be used to collect information and generate reports on the current usage levels and expected remaining life for Solid State Drives. The SmartSSD Wear Gauge is provided as part of the Array Diagnostic Utilities.

What is a Disk Enclosure?

A disk enclosure is basically a chassis designed to hold and power disk drives and to provide a mechanism to enable them to communicate to one or more separate hosts.


–A disk enclosure is a specialized casing designed to hold and power disk drives while providing a mechanism to allow them to communicate to one or more separate computers

–In enterprise terms, “disk enclosure” refers to a larger physical disk chassis

–Disk enclosures do not have RAID controllers

–Disk enclosures can be connected directly to the hosts

Fault-tolerant cabling


–Fault-tolerant cabling allows any drive enclosure to fail or be removed while maintaining access to other enclosures

–P2000 G3 Modular Storage Array (MSA)

–Two D2700 6Gb enclosures

–The I/O module As on the drive enclosures are shaded green

–The I/O module Bs on the drive enclosures are shaded red

The schematic shows a P2000 G3 MSA System connected to two D2700 6 Gb drive enclosures using fault-tolerant cabling.

The I/O module As on the drive enclosures are shaded green. The I/O module Bs on the drive enclosures are shaded red.

Fault-tolerant cabling requires that you connect “P2000 G3 controller A” to “I/O module A” of the first drive enclosure and cascade this connection on to I/O module A of the last drive enclosure (shown in green). Likewise, you must connect “P2000 G3 controller B” to “I/O module B” of the last drive enclosure and cascade this connection on to I/O module B of the first drive enclosure (shown in red).

Straight-through cabling

–While straight-through cabling can sometimes provide increased performance in the array, it also increases the risk of losing access to one or more enclosures in the event of an enclosure failure or removal


–P2000 G3 Modular Storage Array (MSA)

–Two D2700 6Gb enclosures

–The I/O module As on the drive enclosures are shaded green

–The I/O module Bs on the drive enclosures are shaded red

The following figure shows a P2000 G3 MSA System connected to two D2700 6 Gb drive enclosures using straight-through cabling.

Straight-through cabling requires that you connect P2000 G3 controller A to I/O module A of the first drive enclosure, which in turn is connected to I/O module A of the last drive enclosure (shown in green).

P2000 G3 controller B is connected to I/O module B of the first drive enclosure, which in turn is connected to I/O module B of the last drive enclosure (shown in red).
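The availability difference between the two cabling schemes can be illustrated with a toy reachability model. This is a simplification invented for this sketch: an enclosure counts as reachable if an unbroken chain of working enclosures connects it to a controller, and the only difference modeled is whether controller B cascades from the last enclosure backward (fault-tolerant) or from the first enclosure forward again (straight-through).

```python
# Toy model: which enclosures stay reachable when one fails mid-chain?

def reachable(enclosures_ok, fault_tolerant):
    n = len(enclosures_ok)
    reach = [False] * n
    # Controller A path: cascades from the first enclosure forward.
    for i in range(n):
        if not enclosures_ok[i]:
            break
        reach[i] = True
    # Controller B path: from the last enclosure backward (fault-tolerant)
    # or from the first enclosure forward again (straight-through).
    order = range(n - 1, -1, -1) if fault_tolerant else range(n)
    for i in order:
        if not enclosures_ok[i]:
            break
        reach[i] = True
    return reach

status = [True, False, True]                    # middle enclosure of 3 fails
print(reachable(status, fault_tolerant=True))   # [True, False, True]
print(reachable(status, fault_tolerant=False))  # [True, False, False]
```

With fault-tolerant cabling the enclosures on either side of the failure stay reachable (one via controller A, the other via controller B); with straight-through cabling everything downstream of the failure is lost, which is the trade-off the bullet above describes.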

What is LUN Masking?

Logical unit number masking is an authorization process that makes a LUN available to some hosts and unavailable to other hosts. LUN Masking is implemented primarily at the host bus adapter, or HBA, level. LUN masking implemented at this level is vulnerable to any attack that compromises the HBA.

Selective Storage Presentation is a special kind of LUN masking that is available on HPE 3PAR and EVA Storage Arrays. It lets the user designate which hosts have access to which logical drives. SSP has three advantages over standard LUN masking: –First, it is enforced at the level of the storage array. –Second, it is independent of any host vulnerabilities. –And third, it is applied through the dedicated command-line interface or GUI of the storage array.
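Conceptually, array-level LUN masking is an access-list check: the array compares the requesting initiator's WWN against a per-LUN allow list before presenting the LUN. The sketch below is a minimal illustration of that idea; the host names, WWNs, and data structure are hypothetical and do not represent any real array's interface.

```python
# Conceptual sketch of array-enforced LUN masking (as in Selective
# Storage Presentation): a LUN is visible only to allowed initiators.

lun_access = {
    "LUN0": {"50:01:43:80:12:34:56:78"},           # host A only
    "LUN1": {"50:01:43:80:12:34:56:78",
             "50:01:43:80:9a:bc:de:f0"},           # hosts A and B
}

def lun_visible(lun, initiator_wwn):
    """Return True if the array presents this LUN to the initiator."""
    return initiator_wwn in lun_access.get(lun, set())

print(lun_visible("LUN0", "50:01:43:80:9a:bc:de:f0"))  # False: masked
print(lun_visible("LUN1", "50:01:43:80:9a:bc:de:f0"))  # True: presented
```

Because the check lives on the array rather than in the host's HBA driver, a compromised host cannot simply stop applying the mask, which is the advantage the SSP paragraph describes.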


–Enables host visibility of LUNs within the storage array

–LUN granularity

–Independent of zoning

–Can be implemented at the host, fabric, or array level

–Used for data security

–Selective Storage Presentation on HPE 3PAR and EVA Arrays

What is Storage Virtualization?

Virtualization is the pooling of physical disks, or parts of the physical disks, into what appears to be a single storage device that is managed from a central console.

Storage virtualization helps the storage administrator perform backup, archiving, and recovery tasks more easily and in less time by disguising the actual complexity of the SAN. With HPE 3PAR Storage Arrays, virtualization improves the availability, reliability, and performance of the array.


Storage virtualization can be implemented with a software application, with hardware, or with a hybrid software appliance.

What is Fat (thick) or thin provisioning?

Thin provisioning is a technology that optimizes disk capacity and utilization. It also saves the money normally spent to purchase, power, and cool disks for future needs.

Businesses typically over-allocate storage. For example, if an application needs five gigabytes of storage today but will require 20 gigabytes in the future, the business would buy and provision for 20 gigabytes. And for good reason: Reprovisioning storage after a server is up and running can be complex, costly, and time-consuming. It takes time, brings down the application, and introduces the risk of human error.

Fat provisioning means buying and allocating excess storage in anticipation of the growing needs of an application. You end up paying even more to spin those extra disks aimlessly, cool them down, and give them square footage in the data center.

Thin provisioning ends the cycle of overbuying by enabling you to purchase capacity for the short-term while provisioning it as if you have far more. That means you can dramatically increase storage efficiency while saving money.

Thin-provisioned disks will be provisioned for full capacity (for example, 20 gigabytes), but will occupy only the required space on the disk (2 gigabytes in this example). As the amount of data grows, thin disks will grow in size and will eventually reach the size of a fat-provisioned volume.
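The allocate-on-write behavior described above can be sketched as follows. The chunk granularity and class shape are illustrative assumptions, not a real array's allocator, but the sketch mirrors the 20 GB / 2 GB example in the text:

```python
# A thin volume reports its full provisioned size but only consumes
# physical space for chunks that have actually been written.

class ThinVolume:
    def __init__(self, provisioned_gb):
        self.provisioned_gb = provisioned_gb
        self.written_chunks = set()      # 1 GB chunks actually allocated

    def write(self, chunk_index):
        self.written_chunks.add(chunk_index)   # allocate on first write

    @property
    def consumed_gb(self):
        return len(self.written_chunks)

vol = ThinVolume(provisioned_gb=20)      # the host sees a 20 GB drive
for chunk in range(2):                   # the application writes 2 GB
    vol.write(chunk)
print(vol.provisioned_gb)  # 20: capacity reported to the host
print(vol.consumed_gb)     # 2: space actually used on the array
```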

Also Read:

How to install HPE 3par Virtual Service Processor 5.0

How to replace a failed disk in 3par storage?

How to configure B2D storage to Data Protector?

How to install HPE 3par Virtual Service Processor 5.0

The Hewlett Packard Enterprise 3par Service Processor is a monitoring and support device that keeps track of the 3par storage system and detects any hardware or software failure to report to HPE remote support.

The HPE 3par Service Processor is an appliance available in physical and virtual versions for the Service Processor 5.0 release. The physical Service Processor comes with software pre-loaded from the factory, so users do not need to install any application. You just need to add your hostname, IP address, and other network details, and it is ready to monitor your 3par storage system.

The virtual Service Processor software is available as a free download from the HPE software center. It is provided in Open Virtualization Format (OVF) for the VMware vSphere hypervisor and as a self-extracting Virtual Hard Disk (VHD) package for Microsoft Hyper-V.

The Virtual Service Processor is supported on Windows Server 2012/2012 R2/2016 and the VMware vSphere hypervisor (VMware ESXi 5.5/6.0/6.5/6.7). The Virtual Service Processor needs no physical connections; it runs in a customer-defined network environment. For more detailed information, check the link below:

Available Virtual Service Processor versions:

  • 3PAR_VIRTUAL_SP_5.0.2.1
  • 3PARVIRTUAL_SP_5.0.2_MU2
  • 3PARVIRT_SP_5.0.3_MU3
  • 3PAR_VIRT_SP_5.0.4
  • 3PAR_SP_5.0.4.1
  • 3PAR_VIRT_SP_5.0.5
  • 3PAR_SP_5.0.5.1
  • 3PAR_VIRT_SP_5.0.6
  • 3PAR_SP_5.0.6.1
  • 3PAR_VIRT_SP_5.0.7
  • 3PAR_SP_5.0.7.1
  • 3PAR_VIRT_SP_5.0.8
  • 3PAR_SP_5.0.8.1
  • 3PAR_VIRT_SP_5.0.9

The Virtual Service Processor can be downloaded from the link below:

Let's walk through the virtual Service Processor installation and configuration process:

Once you have downloaded the virtual Service Processor ISO image, you can extract it to get the OVF file for installation.

If you are planning to install it in a VMware ESXi environment, you can simply import the OVF to deploy the virtual Service Processor.

You can also install it using VMware Workstation if you want to build a virtual lab for yourself: open VMware Workstation, double-click the OVF file, and import it. It automatically mounts the virtual machine and deploys it as well.

It will import the virtual machine, which will then be ready to power on for further installation:

When you power on the virtual machine for the first time, it goes through various installation processes in a text-based console, since the Service Processor is itself a cut-down Linux operating system based on Debian.

You might see some process failures during the installation of the Virtual Service Processor, but don't worry; in the end it will be installed successfully.

During the installation process, it will ask you to enter a root password for maintenance purposes; however, it is not necessary to provide a password here, and this step can be skipped:

Once the installation process is complete, you are asked to log in to configure and initialize the virtual Service Processor. Here you need to log in with the username admin; a password is not required.

The default usernames and passwords for the 3par Service Processor are:

User – Password

3paradm – 3pardata

admin – 3parInServ

3parcust – 3parInServ

Just enter the username admin and it will take you to the next screen:

The next screen will ask you to configure below:

Once you configure your network, it displays the final screen to initialize the virtual Service Processor; this can be done via the Service Processor web console using the link displayed.

You can type the URL into your internet browser; the page will appear as below, and you can click to proceed further.

The web console will look like the screen below, which is the first screen that appears when browsing the link from the previous step. You need to click Accept to proceed further.

The next screen is to set up the Service Processor. Click Continue and complete the remaining setup and configuration.

The next screen asks you to add the required details as shown:

Once you have entered all the required information, the next window will appear as below and complete the setup.

Now the installation and setup are complete, and it will ask you to reboot the Service Processor:

After the first reboot following setup, the Service Processor login screen is displayed. You can use the admin password that you set while configuring it.

Now comes the main window, where you can actually add the 3par storage system to monitor it. Choose the second option, "Add an initialized StoreServ", since the 3par storage should already be ready to be added to the Service Processor.

Now add the 3par controller (Node) IP address, username and password and click Connect:

Now the 3par is added to the Service Processor, and the window will appear as below:

Now you can manage the 3par storage system and collect many logs to analyze 3par performance as well. No other action is required; the 3par is connected to the Service Processor. You will get the same options and web console on a physical Service Processor as well. Refer to the 3par Service Processor user guide for more information.

Also check:

How to install 3par Virtual Service Processor 4.4

How to replace a failed disk in 3par storage?


How to Convert an Old Air Condition to a Smart AC | technoworldnow

Air conditioners are often used for personal or community purposes. Many people who own a ductless AC and heating unit at the back of their houses are very irritated by the air conditioner setup they have in their living room; it's a constant source of disappointment for them.

These Ac don’t have the basic feature related to time and temperature setting or schedule. It’s very hard to think of an Ac without a thermostat and that only works when you use it manually. It is risible, especially in modern times. Although the ductless unit helps with air conditioner, and heating, it will be so irritating for people that they can’t control it. To get rid of this drawback people are switching to Smart Ac.

Smart AC Controller

A quick Google search will make it clear that there are many smart air conditioner controllers available on the market. These devices add key missing features to ductless and window AC units.

This is a bit of a hit-and-miss method: some controllers work, others don't work 100% of the time, and some reset at least once every few weeks. Make sure the one you choose fits with your smart home setup, since you have to select from a wide range of automation devices. Here are two of the best.

Sensibo Air/Sky – Smart Air Conditioner Controller

According to HuffPost, the Sensibo Air – Smart Air Conditioner Controller can haul your ductless AC or your window-mounted AC into the modern age. You can control everything about your AC from your smartphone. Besides that, it is quick to install and set up, which includes some automation. This smart AC controller is compatible with HomeKit and other smart home platforms. The controller app supports smart rules based on things like temperature, humidity, and even your geopositioning, plus you can configure time-based schedules as well.

But if you want those smart features without using a smart home system, there are two versions of the controller available, including one with its own motion detector.

Cielo Breez/Eco

Smartcave claims that the Plus takes the Eco’s smart AC control to the next level and functions more like a thermostat. It offers physical buttons that turn your cooling system on and off and turn your temperature up and down.

There is a physical display on the controller that offers you a readout of current temperature, set temperature, and humidity levels in the room. These units use a smartphone app that is not famous for its interface design, but it’s easy to use. It is easy to add new devices and control them at the same time.

You can compare your AC usage periodically and track trends, while making sure everything is working correctly. You can also review the log history of every command your controllers send to your cooling system.

Smart Plugs

Plugging your AC and microwave into smart plugs can give you control over these appliances even when you are away from home. You can also connect smaller appliances, such as lights and coffee makers, to a smart plug and control them via an application.

You have to make sure the one you're buying is compatible with your ecosystem and has the functions you need to get the job done. Every smart plug has a power limit: for devices and smaller appliances, you need a plug rated for 10 to 12 amps, but bigger appliances need a higher-rated plug.

As per Popular Mechanics, some connect to Alexa, Google Assistant, or Apple HomeKit. Some of the available smart plugs are made by Amazon, Kasa, Gosund, Wemo, etc.

Smart Thermostats

The best smart thermostat helps you to control your heating and air conditioning even when you’re outside. By connecting your heating and air conditioning system to the internet, you can use your smartphone or a voice assistant to maintain the temperature in your home, putting an end to wasting energy and money by heating an empty home because you’re late. 


What are the Google Drive features that make your life easier?

It is often said that today's world is Google's world. Even if that is an exaggerated statement, you might well be reading this article in Google Chrome on your Android phone, or on another device where the browser is still Google Chrome or you are otherwise using Google services.

How to use Google Drive like a pro |

You might be using Google Maps for finding locations, or YouTube for watching videos, all through your Google account. So it is quite possible that you also use Google Drive for heavy file transfers. We are here to tell you more about Google Drive.

Offline access for Google Docs, Sheets, and Slides

Google Drive lets you access your documents, sheets, and slides when you are offline, and it syncs all the changes once you are back online. This function comes in useful when you want to continue working during your commute or at times when you are not connected to the internet.

For this, you need the Google Chrome browser and the Google Docs Offline extension for Chrome. Sign in to Google Drive, go to Settings, and check the box labeled "Create, open and edit your Google Docs, Sheets, and Slides files on this device while offline."

Send files and folders on Gmail or other email clients via Google Drive

Google Drive allows you to share links to files and folders uploaded to it. This eases the user's work, as it gets around the hectic 25MB attachment limit. First upload the folders or files to Google Drive, right-click on them, and click the Share button. Copy the link, paste it into the email draft, and send it.

Save content to Google Drive without downloading it to your device

In a world of constant connectivity and content synced across devices, this Google Drive feature comes in really handy. It not only lets you access the saved links, but also saves space on your smartphones or laptops. As the name indicates, the feature permits users to directly save an image or link from a website to Google Drive without saving the data locally on any device.

For this, you have to download and install the Chrome extension. Right-click on any image or link and select the option Save to Google Drive. Just make sure that the extension is the one from '' and not some third-party alternative.

Convert documents

Another very important feature of Google Drive is that it allows users to convert Microsoft Word documents into Google Docs format, PDF, or vice versa.

Search filters

Google is a huge search engine, and any Google service is incomplete if it does not offer a broad search feature. Google Drive allows you to add filters to searches. For instance, you can search by a particular word, file type, owner, date modified, and more.

​One-tap phone backups

Google Drive can also take a complete backup of your device data, whether it is a laptop or a phone. Just go to Google Drive, tap the three horizontal bars at the top left, tap Settings, and choose Backup. To take a complete backup, tap the Backup Now button.

Backup for pc and Mac

Just as OneDrive comes as the default backup and sync service on Windows and iCloud on Mac devices, Google Drive also has a client for Windows and Mac that permits users to sync all their important folders and files to Google Drive. All you need to do is set up Google Drive by downloading the client and enabling the sync settings.

Translate a document

Google Translate is built into Google Drive. To use it, open a document in Google Docs, go to Tools, and select Translate document. Select the language you want to translate the document into, and Google Translate will do it.

Drag and Drop to upload files and folders in Google Drive

If you are wondering how to transfer or upload files or folders to Google Drive, you can do this very easily: just drag and drop files into Drive and it will upload them to your storage.

​Google Drive can do research for you on a particular topic

Google Drive is smart enough to analyze the document you are currently working on and recommend relevant content, images, charts, and graphs from the web.

To access this feature, click on the Tools option and choose Explore. A new sidebar will appear with all the relevant recommendations. You can also search for whatever you require from the sidebar.

Capture a screenshot of the entire website

You can use the Save to Google Drive extension for multiple purposes. In addition to saving images and links directly to Google Drive, users can also use it to take a full screenshot of a web page. For this, open the page you want to capture and then click the Save to Google Drive extension.

Color-coded folders for a better arrangement

Color coding for your folders can help you better arrange your Google Drive. Give particular colors to folders as per your preference. For example, important folders can be red, personal folders can be yellow, and so on.

Extract text from any image

Google Docs has a built-in OCR reader, which means you can extract text from an image without any effort. Just find an image in Google Drive, or upload one for that purpose, right-click, and select Open with Google Docs. Google Docs will open the image in a new document and automatically extract all the text from it, writing it just below the image.

Add-ons like charts, diagram tools, math equation formatting, sign a document, and more.

Google Drive also comes with add-ons that permit the user to add features that Google does not natively offer. For example, you can link Google Calendar, Tasks, and Google Keep to Google Drive. There's also a built-in add-on explorer where you can find tons of other add-ons, like DocuSign, which lets you sign a document, and more.

Also read:

Best Free Email Service Providers

How to send and receive money from WhatsApp?

What are the Google Drive features that make your life easier?


15 useful features of Gmail that you may not be aware so far

Gmail is unique and one of the most widely used email services worldwide. As per available information, the Google email service had 1.5 billion active users in 2021. Gmail offers features ranging from Confidential Mode and passcodes to recalling sent emails and sending mail without an internet connection, with more new features added over the past 17 years. Let's look at some of these Gmail features.

Mute emails with long threads to reduce intrusions

A lively email thread can be annoying, especially if it doesn't concern you directly. Gmail has a function, known as Mute, that allows users to opt out of such a thread. Simply open the email thread, tap the three dots at the top right, and choose the Mute option.

This shifts the conversation to the archive, and it keeps working even as new messages arrive. If you want to check or read such messages later on, you can head to the Archive section and unmute them.

Auto-advance for a better and categorized Gmail

Deleting and checking mail can be a mind-numbing task, and Gmail's default behavior of taking the user back to the inbox after deleting each email makes it even more painful. By enabling the Auto-advance feature, you can reduce your effort. The function permits users to move directly to the next email (older to newer) in the list after they have deleted, archived, or muted an email.

To use this, go to Settings > Advanced > Enable Auto-advance > Save changes.

Then go back to Settings > General, scroll down to Auto-advance, choose "Go to the next (newer) conversation", and save changes.

Send heavy attachments Via Google Drive

By default, Gmail lets you send attachments up to 25 MB. For larger files, you can use Google Drive: first upload the file to Drive, then click the Drive icon in the Compose window and attach it.

Extended search option

As a Google product, Gmail would be incomplete without a powerful search option. Its advanced search lets you personalize searches by sender, recipient, keywords, date, and more.

To access advanced search, click the icon on the right side of the search bar.

Increase the Undo Send window from 5 seconds to 30 seconds

Undo Send (recalling a sent email) is a long-standing Gmail feature. By default, it gives you a 5-second window to recall an email, but you can increase this window to up to 30 seconds.

To change it, go to Settings > General > Undo Send and choose 30 from the drop-down menu.

10- and 20-second options are also available.

Use Gmail Nudges to remind you of important emails

The Gmail Nudges feature is designed to remind users to reply to important emails or follow up on them.

To enable it, head to Settings > General > Nudges.

Two options appear here: suggesting emails to reply to and suggesting emails to follow up on. Enable both if you want timely reminders from Gmail about replies and follow-ups.

Schedule an email

Gmail also has a scheduling feature that lets you compose an email and have it sent at whatever date and time you choose.

To schedule an email, compose it, tap the down arrow beside the Send button, and choose Schedule send. Then select a date and time from the preset options, or set your own via the Pick date & time option.

Smart Compose feature to help you write emails faster and more easily

The Smart Compose function in Gmail is designed to help users write faster. It is powered by machine learning and offers suggestions as you type. Smart Compose is a Google Account-level setting, so changes to it apply on every device where you are signed in.

You can enable it by heading to Settings > General > Smart Compose.

Create a Task directly from Gmail

Another fascinating but lesser-known feature is that you can create a Task directly from Gmail. Simply right-click any email and select Add to tasks.

​Set passcode and expiry to Email with Confidential Mode

Confidential Mode is an existing Gmail feature that adds a layer of security to emails and attachments, protecting sensitive information from unknown or unauthorized access. With Confidential Mode, you can set an expiry date or revoke access at any time. You can also set a passcode on emails sent in Confidential Mode.

To use Confidential Mode, click the Confidential Mode icon (represented by a clock and lock), select an expiration date and, if you want, an SMS passcode, and then send the mail. You can set an expiration period ranging from one day up to five years.

Read, reply, and search offline

Gmail also offers an offline mode, which means you can read, reply to, and search Gmail without an internet connection. All you need to do is enable the feature and bookmark the Gmail page. Note that this feature only works in Chrome.

To enable it, open Settings > Offline and tick Enable offline mail.

Save attachments directly to Google Drive and access them from anywhere

Gmail attachments can be saved directly to Google Drive. Just scroll down to the attachment section and, instead of the Download icon (down arrow), click the Drive icon.

Make Google Translate work right inside your Gmail account

The Google Translate function works right inside your Gmail account and lets you translate an entire email into the language you need. Open the email you want to translate, click the three dots on the right side of the page, and choose Translate message. A new bar appears at the top of the mail body, where you can choose the language you want to translate into.

Create labels to better manage your inbox

Labels are better than folders when it comes to organizing an inbox. They work like folders, with one major difference: you can assign multiple labels to an email, and you can later find them by clicking a label in the left panel. Labels can also be used to search for or track emails, and to group messages you want to follow up on or read later.

Enable the Reading Pane to better utilize the screen space

To use Gmail's Reading Pane, click Settings > See all settings and scroll down to Reading pane. You can choose between multiple layouts; the recommended ones are Right of inbox and Below inbox.

Also read:

How to Send Confidential and Secure Email using Gmail

What are the features of Microsoft Outlook 365?

New features coming on WhatsApp | It will be more fun to use the app.

Instant messaging app WhatsApp keeps releasing new features to enhance the user experience. This is what makes it more than just another instant messaging app…

With these, users will get an even better experience in the app. The company is also testing many new features; here are some of the WhatsApp features in the works…

WhatsApp has more than 2 billion active users. It compresses photos and videos so that its systems are not overloaded and messages are delivered without interruption…

With this, users will also be able to send photos in high quality. They will have three options: Best Quality, Data Saver, and Auto…

Can send images as stickers

The use of this feature is going to be fun for many people. According to WABetaInfo, WhatsApp is working on this feature. From this…

Deleted Chats can be recovered on WhatsApp, it is a very easy way

If you have accidentally deleted an important chat, there are some ways to get it back. Let's look at two methods…

WhatsApp is the most used instant messaging app in the world. We often delete a WhatsApp chat by mistake, or sometimes delete it knowingly and regret it later. It may seem the chat is gone for good, but there are a few ways to get a deleted chat back. Here are two ways to access your deleted chats, which can make many of your tasks easier.

How to Recover WhatsApp Chats: Trick 1

Although your chats are only visible in real time, you should know that you always have a backup of your chats on your mobile phone, and you can retrieve it at any time. This method is available only for Android users.

1. Open the file manager on your phone.

2. Open the WhatsApp folder in the file manager and go to the Databases folder. It contains all the local WhatsApp backup files.

3. Long-press the msgstore.db.crypt12 file and choose to rename it.

4. Name it msgstore_backup.db.crypt12. This keeps it from being overwritten by a new file.

5. Now rename the dated backup file you want to restore (for example, msgstore-2021-10-01.1.db.crypt12) to msgstore.db.crypt12.

6. Go to Google Drive and delete your WhatsApp backup there.

7. Uninstall WhatsApp and reinstall it.

8. During setup, when WhatsApp detects the local backup, select Restore. You are all set; now you can read your deleted messages.
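As a toy illustration (not an Android tool), the rename dance in steps 3-5 can be sketched in a few lines of Python. The folder and file names here are stand-ins for the real WhatsApp Databases directory:

```python
# Toy sketch of the local-backup rename trick described above.
# On a real phone this is done in a file manager inside WhatsApp/Databases;
# the temporary directory here only stands in for that folder.
import os, tempfile

db_dir = tempfile.mkdtemp()  # stand-in for .../WhatsApp/Databases
current = os.path.join(db_dir, "msgstore.db.crypt12")
dated = os.path.join(db_dir, "msgstore-2021-10-01.1.db.crypt12")
open(current, "w").write("current-backup")
open(dated, "w").write("older-backup")

# Steps 3-4: set the current backup aside so it is not overwritten
os.rename(current, os.path.join(db_dir, "msgstore_backup.db.crypt12"))
# Step 5: promote the dated backup you want WhatsApp to restore
os.rename(dated, current)

print(sorted(os.listdir(db_dir)))
# ['msgstore.db.crypt12', 'msgstore_backup.db.crypt12']
print(open(current).read())  # older-backup
```

After these renames, WhatsApp's restore step picks up the older backup under the name it expects.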

WhatsApp Chat Recovery: Trick 2

You can also recover your WhatsApp chats another way, and this method works for both Android and iOS users. Uninstall WhatsApp, then reinstall it on your smartphone. During reinstallation it will ask for permission to restore the backup from iCloud or Google Drive. Restore the backup and you can check your deleted messages.

If you use WhatsApp, do not make these mistakes, otherwise you can land in big trouble

Instant messaging platform WhatsApp has become an important part of everyone's life today. The advantages of WhatsApp are many, but the app also has its downsides, and we hear far less about those than about its benefits.

So you need to take care of a few things while using WhatsApp. Here are some WhatsApp-related mistakes; avoiding them can keep your details from leaking and keep you out of legal trouble. Let's look at these mistakes in detail:

Never save unknown people's numbers

We often save the number of a cab driver, delivery person, or other service worker and forget to delete it later. On WhatsApp, that person can then see our status updates and profile picture, so our personal information reaches people we barely know. That is why you should never keep unknown people's numbers saved.

Never send porn videos, you can be jailed

Sharing pornographic content on WhatsApp can land you in trouble. If someone reports your account, WhatsApp can ban it and, as per its policy, also file a police complaint. A single porn clip can therefore land you in jail.

Never give too much information in the profile photo

Whether or not someone is in your contacts, your profile photo can be seen by everyone, so be careful about which photo you put on WhatsApp. Avoid photos in which your housing society's name is visible, and do not use photos taken next to your car or bike where the number plate can be read.

Activate two-step verification

This is a very important WhatsApp feature. With two-step verification, you set a 6-digit PIN that is required to log in to WhatsApp with your number on any new device. WhatsApp may also ask for this PIN from time to time. In this era of cyber fraud, keep two-step verification active.

Do not forward such messages

We receive many kinds of messages on WhatsApp. Before forwarding any information or news, make sure it is not fake. Many fake links are also circulated in the name of free offers and government schemes; avoid forwarding them. Also, do not send messages promoting hate speech against any religion or community.

Check Your WhatsApp Status Privacy Settings

Whenever you post a status, do not share it with everyone; share it only with your friends and family members. Our phones often have saved numbers of people with whom there is no need to share a status.

Turn off auto-backup

WhatsApp has an auto-backup feature that backs up your messages to Google Drive or iCloud. However, once your messages are stored there, anyone who hacks your Google or Apple account can read your chats. It may be safer to turn auto-backup off and instead export chats manually and keep them in a safe place.


What are the Fraud Android apps which are stealing banking information from mobile phones?

In today's fast-moving world, people rely on online banking or net banking every day. The Indian Computer Emergency Response Team (CERT-In) has issued a warning to Android phone users about new malware called Drinik that tries to steal online banking login credentials.

The Trojan campaign is said to be attacking more than 27 Indian banks, including major public- and private-sector banks. Here is everything you need to know.

What is the new Drinik Android malware attacking online banking users?

CERT-In is warning people that the Drinik Android malware is attacking Indian banking users and spreading in the guise of an income-tax refund. It is a banking Trojan capable of capturing screens and luring users into entering sensitive banking details.

How does the new Trojan get installed on an Android phone?

As per CERT-In's analysis, the victim receives an SMS containing a link to a phishing website (made to look like that of the Income Tax Department, Government of India), where the user is asked to enter personal information and to download and install a malicious APK file to complete verification. This malicious Android app disguises itself as the Income Tax Department app.

What personal data does Drinik steal?

The data include full name, PAN, Aadhaar number, address, date of birth, mobile number, email address, and financial details like account number, IFSC code, CIF number, debit card number, expiry date, CVV, and PIN.

How does the Trojan steal these details?

After the user enters their personal information, the app claims that there is an income-tax refund that can be transferred to the user's bank account. When the victim enters the amount and clicks "Transfer", the application displays an error and shows a fake update screen. While the update screen is displayed, the malware in the background sends the victim's details, including SMS messages and call logs, to the attacker's machine.

How to stay safe from the hackers: disable app downloads from unknown sources in your Android phone's settings.

CERT-In suggests limiting your downloads to official app stores only, such as the device manufacturer's store or Google Play, to reduce the risk.

Verify app permissions before installing an app

Verify app permissions and grant only those that are relevant to the app's purpose. Do not enable the "Untrusted Sources" option to install side-loaded apps.

Things to avoid to stay safe from hackers

Do not visit untrusted websites or follow untrusted links, and exercise caution when clicking links in unsolicited emails and SMS messages.

Watch out for suspicious numbers that don't look like real mobile phone numbers. Fraudsters often hide their identity by using email-to-text services to mask their actual phone number.

Be careful with shortened URLs, such as those created with bit.ly and similar services.

Users are advised to hover the cursor over a shortened URL (where possible) to see the full domain they are about to visit, or to use a URL checker or preview service that shows the full URL before opening it, so they are not tricked into visiting a malicious destination.
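One simple way to act on this advice programmatically is to check a link's domain against a list of known shortening services before clicking it. The sketch below is illustrative only, and the shortener list is an assumption, not exhaustive:

```python
# Minimal sketch: flag links that use common URL shorteners.
# The domain list is illustrative; a real checker would use a fuller list
# and could also follow redirects to reveal the final destination.
from urllib.parse import urlparse

SHORTENER_DOMAINS = {"bit.ly", "tinyurl.com", "goo.gl", "t.co", "ow.ly"}

def looks_shortened(url: str) -> bool:
    """Return True if the URL's host is a known link-shortening service."""
    host = urlparse(url).netloc.lower()
    host = host.split(":")[0]        # drop an optional port
    if host.startswith("www."):      # drop a leading "www."
        host = host[4:]
    return host in SHORTENER_DOMAINS

print(looks_shortened("https://bit.ly/3xyzABC"))    # True
print(looks_shortened("https://example.com/page"))  # False
```

A link that is flagged can then be run through a preview service to see the full URL before visiting it.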


How to install 3par Virtual Service Processor 4.4

The Service Processor is a very useful tool for monitoring, reporting on, and responding to 3PAR storage activity and incidents around the clock.

The 3PAR Service Processor has many advantages for managing and monitoring the 3PAR storage system in day-to-day operations.

Here we'll discuss the installation of the 3PAR Virtual Service Processor step by step.

The 3PAR Virtual Service Processor needs to be downloaded from the HPE website:

You need an HPE Passport account to log in to the HPE software site. Once you log in, you will get options to download all the software related to HPE 3PAR.

Now download either the Hyper-V or the VMware ESXi Virtual Service Processor ISO and make sure the prerequisites are in place. It installs as a virtual appliance. The downloaded file will appear as below:

HPE 3PAR Service Processor is a support device implemented as an appliance in either a physical or a virtual environment.

Virtual Service Processor versions:

HPE 3PAR Virtual Service Processor : Virtual SP 4.1

HPE 3PAR Virtual Service Processor : Virtual SP 4.2

HPE 3PAR Virtual Service Processor : Virtual SP 4.3

HPE 3PAR Virtual Service Processor : Virtual SP 4.4

HPE 3PAR Virtual Service Processor : Virtual SP 5.0

After downloading the Virtual SP ISO image, double-click or extract it to find the OVF file.

Now double-click the OVF file and click Import. This builds the virtual appliance, ready to power on. In ESXi the appliance appears as rhel_Vsp-4.4.0.GA-22.

After importing, the virtual appliance is ready to power on. During first boot it loads and goes through several installation phases:

Finally, it reaches the login screen:

Here we need to use the default login account 3parcust with password 3parInServ.

Note: Default user: 3parcust, password: 3parInServ / user: admin, password : 3parInServ.

When you log in as 3parcust, it may say: "The SP Moment of Birth has not been completed!! Please login as root and answer the questions!!!!" Press Enter to continue.

Now press Enter and log in with the username root:

Then follow the on-screen instructions:

Now enter the serial number of the 3PAR array you want to connect to the Service Processor:

Enter your network details and follow the further instructions.

You can choose a country of your choice:

Site-specific information is also needed:

Now you can complete the MOB (Moment of Birth) of the Virtual Service Processor. Note that some configuration steps may not pass; they can all be completed later, so go ahead and finish the MOB.

It will then show a configuration summary. Don't worry if something shows as failed there, as it can be configured later.

It then reboots and prompts for login again. Log in with user 3parcust and password 3parInServ.

Now you can perform additional Service Processor configuration:

If you choose the 1st option, the inbound/outbound network settings can be configured:

If you want to disable the Service Processor firewall or modify its settings, choose the 2nd option:

Now you can try to access SPOCC (Service Processor Onsite Customer Care), which is the web interface of the virtual and physical Service Processors for 3PAR storage. For example, once you have assigned an IP address to the Virtual Service Processor, browse to it and log in with username 3parcust and password 3parInServ:

After logging in successfully, you will see the screen below:

Now you can perform several 3PAR-related operations, such as:

  • Monitoring
  • Log collection
  • Remote access to the 3PAR for HPE support engineers
  • Managing the 3PAR through the Service Processor without logging in to the arrays directly
  • Service Processor OS upgrades
  • 3PAR OS upgrades
  • The call-home feature, which provides continuous monitoring of 3PAR storage systems by IRS servers

Also check –

How to replace a failed disk in 3par storage?

How to install HPE 3par Virtual Service Processor 5.0


What is Veritas NetBackup and How does NetBackup work?

In NetBackup concepts we'll discuss the common terminology and how backup and restore work.

One of the most important parts of NetBackup is the backup policy, which has several sections used to define a backup job: Attributes, Schedules, Clients, and Backup Selections. Let's look at these options below:

NetBackup policies: the heart and soul of NetBackup

The first option, Attributes, defines how the backup is performed and where the backup images are stored. Schedules defines which types of backups run (Full, Differential Incremental, or Cumulative Incremental), when the backups should run, and how long the backup data is retained.

The Clients option lists which clients should be backed up by the policy, and Backup Selections specifies which data on those clients should be backed up. All of this information is stored centrally on the NetBackup Master server.

Note: newer NetBackup versions may have more backup policy options, as well as additional tabs and settings for specific products such as VMware or Oracle, beyond the four options above. For more information please visit:

Backup storage types

When a backup job runs, NetBackup gathers the client's data into what is called a backup image, then sends that image to a backup storage destination such as disk, tape, or OST storage. The destination can also be SAN storage, a NetBackup appliance, or a third-party appliance.

Another backup destination is cloud storage, and NetBackup supports a variety of cloud vendors as well.

The NetBackup Catalog

The catalog is the most critical data of the backup application: it contains information about the NetBackup environment and an index of all the data that has been backed up and can be restored. The catalog is kept on the Master server.

The NetBackup catalog is broken up into two sections:

The NetBackup database (NBDB) – The NBDB is a proper relational database. It stores information about media and device data, which is also known as EMM (Enterprise Media Manager) data, along with a lot of other information.

NetBackup configuration files – These store information about the NetBackup environment, but not in the relational database. They hold ad hoc file-based data such as policies, schedules, and attributes.

One of the largest parts of the catalog is the image database, which is stored partly in the NetBackup database and partly in the configuration files; the configuration files take up the largest share of the catalog space. The catalog keeps growing as the NetBackup environment ages.

The data backup process

Let's discuss how a data backup works in NetBackup. The backup policy is stored centrally on the Master server. When the NetBackup scheduler determines it is time to run a backup, a media server is chosen to perform it; the media server reads the data from the client machine and sends it as an image to the storage destination, whether disk, tape, or other storage. The media server can be the same system as the Master server.

During the backup, information about the backed-up data is sent to the Master server and stored there as the catalog. When the backup completes, we can check the job logs to see whether it succeeded or failed, and if it failed, why.

The data restore process

During a restore, the job is initiated by a backup administrator from the Master server or by a client. The request goes to the Master server, which assigns a media server to perform the restore; the media server reads the backup image from the storage device and sends the actual data to the destination client. As the job progresses, its status can be tracked in the NetBackup logs and reports.
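As a toy model (not NetBackup code; every name here is invented for illustration), the backup and restore flows described above can be sketched like this:

```python
# Toy sketch of the NetBackup backup/restore flow described above.
# This is an illustration only, not NetBackup's API.

catalog = {}  # master server's catalog: image id -> metadata
storage = {}  # storage destination: image id -> backed-up data

def backup(client_name, client_data, image_id):
    """Media server reads client data, writes an image, and catalogs it."""
    storage[image_id] = dict(client_data)        # image sent to storage
    catalog[image_id] = {"client": client_name,  # metadata kept on master
                         "files": sorted(client_data)}
    return "successful"

def restore(image_id, target):
    """Master assigns a media server, which reads the image back."""
    if image_id not in catalog:
        return "failed: image not in catalog"
    target.update(storage[image_id])             # data sent back to client
    return "successful"

files = {"/etc/hosts": "127.0.0.1 localhost"}
print(backup("clientA", files, "img-001"))  # successful
recovered = {}
print(restore("img-001", recovered))        # successful
print(recovered == files)                   # True
```

The point of the sketch is the division of labor: data moves between client and storage, while only metadata about the images lives in the central catalog, which is why the catalog is small relative to the data yet indispensable for any restore.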

For more information please visit : > Backup and Recovery > NetBackup

NetBackup clients and agents

The list above shows the supported clients and agents. The first three categories are the Standard clients, the Enterprise clients, and the Application and Database Pack. Some other clients, such as Enterprise Vault, do not require an additional license. It is advisable to check the Veritas NetBackup website for the current list of supported agents.

NetBackup Options

NetBackup also provides a comprehensive selection of performance, security, and storage-management options to match the backup and recovery needs of your environment.

Some of these options are listed here, including the data protection and optimization option, which provides deduplication and acceleration functionality. The NetBackup Shared Storage Option allows tape drives to be shared across media servers.

Related post – How to install NetBackup Administration console?