A Storage Area Network (SAN) is built from multiple components, and the storage array is one of the most important of them.

A hard drive, also known as a hard disk drive or HDD, is a fundamental part of modern computers. Functioning as an internal storage device, the HDD enables a computer to store and run important files and programs.
At HPE, the hard drive family is divided into three categories: Entry, Midline, and Enterprise. These categories meet the needs of different environments for performance, reliability, and a balanced cost-to-capacity ratio.

Entry drives have the lowest unit cost and give you a basic level of reliability and performance. They are best suited for non-mission-critical environments where I/O workloads are forty percent or less. They are also used for internal and archival storage or as boot drives for entry-level servers. Entry drives are only available with a Serial ATA, or SATA interface.
Midline drives give you larger capacity and greater reliability than Entry drives. Midline drives are more resistant to rotational and operational vibration, so they are better suited for use in multiple-drive configurations. Midline drives are available with both SATA and Serial Attached SCSI interfaces. Serial Attached SCSI is typically shortened to “SAS”.
Enterprise drives give you maximum reliability, the highest performance, scalability, and error management under the most demanding conditions. They are the only HPE drives designed for use in unconstrained I/O workloads. They are intended for mission-critical applications such as large databases, email servers, and back-office applications.
Multiple HDD and disk technologies are available; the most widely used are described below.
Characteristics of drives
Form factor
– Small form factor (SFF) —2.5-inch
– Large form factor (LFF)—3.5-inch
Drive capacity
– Depends on number of platters the drive contains, the surface area of each platter, and the areal density (the number of bits that can be stored per unit area)
– Expressed in gigabytes
Disk drive performance
– Depends on the rotational speed of the platters, the seek performance, the mechanical latency, the read/write bandwidth, the queuing strategies, and the interface technologies
Reliability
– Measured in terms of Annual Failure Rates (AFRs)
The basic characteristics of industry-standard drives are form factor, drive capacity, performance, and reliability.
– Regarding the form factor, HPE drives for servers are available in a 2.5-inch small form factor and a 3.5-inch large form factor. In general, SFF drives give you greater power and space savings. These drives can require as little as half the power and generate significantly less heat than LFF drives. LFF drives are better suited for implementations that require large, single-drive capacities and a lower cost per gigabyte.
– Drive capacity depends on the number of platters the drive contains, the surface area of each platter, and the areal density (the number of bits that can be stored per unit area).
– Disk drive performance depends on the rotational speed of the platters, the seek performance, the mechanical latency, the read-write bandwidth, the queuing strategies, and the interface technologies.
– Drive reliability is measured in terms of Annual Failure Rates. The AFR is the percentage of drive failures occurring in a large population of drives operating for one year. With an AFR of 1.5 percent, 100,000 drives would experience approximately 1,500 failures per year.
– Keep in mind that an AFR calculated from a small number of drives is subject to large statistical variations that make it less reliable than an AFR from a larger sample.
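As a quick check of that arithmetic, here is a minimal Python sketch; the fleet size and AFR are the figures quoted above.

```python
# Expected annual failures for a drive fleet, using the AFR example above.
fleet_size = 100_000     # drives in the population
afr = 0.015              # Annual Failure Rate of 1.5 percent

expected_failures = fleet_size * afr
print(f"Expected failures per year: {expected_failures:.0f}")  # ~1500
```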
Drive interconnect technologies
The technology used to connect one or more drives to a computer system has transitioned from parallel bus data interfaces to serial interfaces.
– Parallel interfaces:
–ATA—Advanced Technology Attachment
–IDE—Integrated Drive Electronics, also called PATA (Parallel Advanced Technology Attachment)
–SCSI—Small Computer System Interface
– Serial interfaces:
–SATA—Serial ATA
–SAS—Serial Attached SCSI
The technology used to connect one or more drives to a computer system has transitioned from parallel bus data interfaces such as Advanced Technology Attachment, Integrated Drive Electronics, and the original SCSI interface to the SATA and SAS serial interfaces.
Each drive with a SATA or SAS interface has its own high-speed serial communication channel to the controller.
Parallel SCSI
–A SCSI standard established by ANSI in 1986, but still evolving
–The Common Command Set (CCS) was developed in parallel with the ANSI SCSI-1, SCSI-2, SCSI-3, and SCSI-4 standards
–The SCSI-1 standard was too permissive and allowed too many vendor-specific options
–The result was incompatibility between products from different vendors, which made for confusion on:
–Speed and feed: Fast, Ultra, Ultra2, narrow, and wide
–Command sets: Common Command Set, Enhanced Command Set
–Termination: Passive, Active, Forced Perfect Termination
–Ultra320 and Ultra640 (also known as Fast-320) are the last offerings
In addition to a physical interconnection standard, the Small Computer System Interface, or SCSI, defines a logical command set standard that all drive devices must adhere to. The Common Command Set was developed in parallel with ANSI SCSI-1, SCSI-2, SCSI-3, and SCSI-4, which include the revised CCS as part of the standard. The commands depend on the type of device being used.
SCSI-1 initially defined command sets for six device types; however, the standard was too permissive and allowed too many vendor-specific options. The result was incompatibility between products from different vendors.
A CCS was defined to solve the SCSI compatibility issues. It was a subset of the standard and did not allow for exceptions. With the CCS, SCSI-1 began to penetrate the server disk subsystem and tape backup market in the late 1980s.
SCSI-2 targeted the drawbacks of SCSI-1 and introduced support for 10 device types. SCSI-2 also introduced more efficient command sets that improved functionality by including disconnect and command queuing options.
Serial ATA (SATA)
–Hot-plug and Native Command Queuing (NCQ) support
–Transfer rates up to 300 MB/s for SATA2 and 600 MB/s for SATA3, using half-duplex
–SATA3.1 introduced support for Solid State Disks (SSD) and the Zero-Power Optical Disk Drive
–SATA3.2 combines SATA commands with the PCI Express interface to achieve device speeds up to 16 Gb/s
–Mean Time Between Failures (MTBF) is 1.2 million hours
The Serial ATA, or SATA, standard is a direct replacement for the older Advanced Technology Attachment standard. Compared to ATA, the SATA interface offers a reduced cable size, with only seven conductors rather than the 40 conductors required by the ATA standard, as well as hot-pluggable drives and faster and more efficient data transfer rates through the optional I/O queuing protocol called Native Command Queuing, or NCQ.
The SATA3.1 standard introduced support for Solid State Drives and a Zero-Power Optical Disk Drive. The Zero-Power Optical Disk Drive reduces the power consumption of SATA optical disk drives to zero when the device is idle, conserving energy.
To further increase the transfer speeds, SATA3.2 combines the SATA commands and the PCI Express interface to boost the maximum theoretical data speed to 16 gigabits per second, compared to the 6 Gb/s that is available on current drives.
Serial Attached SCSI

–SAS uses a full-duplex architecture, effectively doubling the transfer speeds
–The current SAS standard provides speeds of 12 Gb/s, with a maximum theoretical speed of 16 Gb/s
–The maximum number of attached devices is 128 (compared to 16 for Parallel SCSI)
–A single SAS domain can address up to 65,535 devices using a fanout expander
–The MTBF is increased to 1.6 million hours
Serial ATA uses a half-duplex, serial connection to devices rather than the original parallel connection of ATA. SATA still uses the ATA command set, which is simpler but provides less robust functionality than the SCSI interface used with SAS.
The SATA interface has gone through three major generations.
–The 1.5 Gb/s version was targeted at replacing ATA in the desktop and consumer markets.
–The 1.5 Gb/s version with extensions was targeted for workstations and low-end servers. This generation added native command queuing.
–The 3 Gb/s version was targeted for workstations and low-end servers. This generation increased the data transfer rate.
SATA is the best solution for price-sensitive, low-I/O-workload applications, and it dominates the desktop market because of its low cost and the lighter workloads of desktops.
In contrast, Serial Attached SCSI uses a point-to-point, full-duplex serial connection and the SCSI command set, which has more performance and reliability features than the ATA command set. For example, SAS devices can be dual-port.
This enables the device to access the full bandwidth of a SAS link. These additional features come at a cost, however. SAS devices are more expensive than SATA devices for the equivalent storage capacity.
The first-generation SAS supported a link speed of 3 Gb/s. The current generation supports a link speed of up to 6 Gb/s, or 600 MB/s, in each direction.
SAS is the best solution for mission-critical, high-I/O-workload applications.
Near-line SAS
A SATA drive using a SAS interface is called near-line SAS. It provides all of the enterprise features that come with SAS, but still has the limitations of SATA for disk performance and mean time between failures.
What is Native Command Queuing (NCQ)?
–NCQ is a technology designed to increase the performance of SATA drives.
–Disks are enabled to internally optimize the order in which read/write commands are executed.
–NCQ reduces the amount of unnecessary HDD head movement.
–NCQ is supported on the HPE Smart Array P400, P400i, E500, and P800 disk controllers.
NCQ is a technology designed to increase the performance of SATA hard disk drives by allowing an individual hard disk to internally optimize the order in which received read and write commands are executed. Without NCQ, a drive has to process and complete one command at a time. NCQ increases performance for workloads where multiple simultaneous read and write requests are outstanding, by reducing the amount of unnecessary back-and-forth on the drive heads. This most often occurs in server and storage applications.
For NCQ to be enabled, it must be supported and turned on in the controller, and in the hard drive itself.
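To make the reordering idea concrete, here is a minimal Python sketch, under the simplifying assumption that only seek distance matters; real drive firmware also accounts for rotational position, and the LBA values and head position below are made up.

```python
# Illustrative sketch of command reordering in the spirit of NCQ: service the
# queued request closest to the current head position first, instead of strictly
# in arrival order. This only shows why reordering cuts head movement.

def reorder_shortest_seek_first(queue, head):
    """Return the queued LBAs in the order a shortest-seek-first policy would service them."""
    pending = list(queue)
    order = []
    while pending:
        nxt = min(pending, key=lambda lba: abs(lba - head))
        order.append(nxt)
        pending.remove(nxt)
        head = nxt
    return order

def total_seek_distance(order, head):
    """Sum of head movements needed to service the requests in the given order."""
    distance = 0
    for lba in order:
        distance += abs(lba - head)
        head = lba
    return distance

requests = [98, 183, 37, 122, 14, 124, 65, 67]   # hypothetical queued LBAs
start = 53                                        # hypothetical head position

print("FIFO distance:     ", total_seek_distance(requests, start))
print("Reordered distance:", total_seek_distance(reorder_shortest_seek_first(requests, start), start))
```

Running the sketch shows the reordered queue travelling a fraction of the distance of strict first-in, first-out servicing, which is the effect NCQ exploits under multiple outstanding requests.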
NCQ performance gains


NCQ provides 8.8 percent faster performance in generic PC HDD read throughput and 9 percent faster performance in generic PC applications over non-NCQ systems.
What are SAS Domains?

Two types of expanders are used in the SAS topology: fanout and edge.
The server-attached storage market typically uses edge expanders, which can address up to 128 SAS addresses or drives in a segment. When a fanout expander is incorporated into the architecture, up to 128 segments can exist within a SAS domain, which allows SAS to address up to 16,384 SAS physical links.
There can be only one fanout expander per SAS domain, but you can have any combination of edge expanders, initiators, or storage devices.
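As a quick sanity check of those figures, the addressing arithmetic is simply the product of the two limits (the variable names are illustrative only):

```python
# Addressing arithmetic for the SAS topology figures quoted above.
addresses_per_edge_expander = 128   # SAS addresses/drives per edge-expander segment
segments_per_fanout = 128           # edge-expander segments behind one fanout expander

physical_links = addresses_per_edge_expander * segments_per_fanout
print(physical_links)               # 16384 SAS physical links in a single domain
```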
Solid State Drives

Solid State Drives, or SSDs, are made of NAND Flash memory modules that are connected to the host system through an interface chip that uses regular HDD communication protocols. The two types of Flash memories used today are single-level cell and multi-level cell.
The SLC and MLC Flash memory types are similar in their design. MLC Flash devices cost less and allow for higher storage density. SLC Flash devices provide faster write performance and greater reliability, even at temperatures above the operating range of MLC Flash devices.
Single-level Cell

As the name suggests, SLC Flash stores one bit value per cell, which basically is a voltage level. The bit value is interpreted as a zero or a one.
Because there are only two states, SLC represents only one bit value. Each bit can have a value of “programmed” or “erased.”
Multi-level cell

An MLC cell can represent multiple values. These values can be interpreted as four distinct states: 00, 01, 10, or 11.
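A tiny, purely illustrative sketch of the relationship between cell states and bits per cell:

```python
import math

# Bits stored per cell is log2 of the number of distinguishable voltage states.
slc_states = 2   # "erased" or "programmed"
mlc_states = 4   # interpreted as 00, 01, 10, 11

print("SLC bits per cell:", int(math.log2(slc_states)))  # 1
print("MLC bits per cell:", int(math.log2(mlc_states)))  # 2
```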
Comparing SLC and MLC

NAND Flash memory using MLC technology has quickly become the predominant Flash technology in the broader consumer market. However, compared to SLC, MLC has some characteristics that make it less desirable for building the higher-performance, high-reliability devices required for server storage.
For example, it has higher internal error rates because of the smaller margins separating the cell states, necessitating larger ECC memories to correct them.
It has a significantly shorter life-span in terms of the maximum number of program and erase cycles.
It also has slower read performance and significantly slower write (program) performance than SLC.
More important than MLC NAND Flash's comparatively poor read and write performance is endurance: the SLC Flash program and erase lifecycle, often referred to as "endurance," is 10 to 20 times greater than that of MLC Flash.
The higher storage density of MLC will continue making it the predominant choice for use in lower cost and lower workload consumer devices. The higher performance and better reliability of SLC NANDs are currently preferred to create the Solid State Drives that meet the requirements of server storage.
SSD wear leveling

Wear leveling is one of the basic techniques used to increase the overall endurance of NAND-based Solid State Drives.
Because NAND-based SLC Flash supports only 100,000 lifetime write and erase cycles, it is important that no physical NAND block in the memory array be erased and rewritten more than is necessary. However, certain logical SCSI blocks of a SAS or SATA device might need to be updated, or rewritten, on a frequent basis. Wear leveling resolves this issue by continuously remapping logical SCSI blocks to different physical pages in the NAND array.
Wear leveling ensures that erasures and rewrites remain evenly distributed across the medium, which maximizes the endurance of the SSD. To maximize SSD performance, this logical-to-physical map is maintained as a pointer array in the high-speed DRAM on the SSD controller. It is also maintained algorithmically in the metadata regions in the NAND flash array itself. This ensures that the map can be rebuilt after an unexpected power loss.
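The following is a minimal, illustrative Python sketch of the remapping idea, assuming a simplified model in which whole blocks are remapped on every write; real controllers work at page granularity and keep the map in DRAM and NAND metadata as described above.

```python
# Minimal sketch of dynamic wear leveling: every rewrite of a logical block is
# redirected to the free physical block with the fewest erases, so wear spreads
# evenly across the medium.

class WearLevelingMap:
    def __init__(self, physical_blocks):
        self.erase_counts = [0] * physical_blocks      # erases per physical block
        self.logical_to_physical = {}                  # logical block -> physical block
        self.free = set(range(physical_blocks))        # physical blocks not mapped

    def write(self, logical_block):
        # Pick the least-worn free physical block for this (re)write.
        target = min(self.free, key=lambda b: self.erase_counts[b])
        old = self.logical_to_physical.get(logical_block)
        if old is not None:
            self.erase_counts[old] += 1                # old copy is erased and freed
            self.free.add(old)
        self.free.remove(target)
        self.logical_to_physical[logical_block] = target
        return target

ssd = WearLevelingMap(physical_blocks=8)
for _ in range(100):        # hammer the same logical block repeatedly
    ssd.write(logical_block=0)
print(ssd.erase_counts)     # erases are spread across the blocks, not piled on one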
SSD over-provisioning
–On high-end SSDs, it is possible to over-provision by 25% above the stated storage capacity
–Distributes the total number of reads and writes across a larger population of NAND blocks and pages over time
–The SSD controller gets additional buffer space for managing page writes and NAND block erases
The overall endurance and performance of an SSD can also be increased by overprovisioning the amount of NAND capacity on the device. On higher end SSDs, NAND can be over-provisioned by as much as 25 percent above the stated storage capacity. Over-provisioning increases the endurance of an SSD by distributing the total number of writes and erases across a larger population of NAND blocks and pages over time. Over-provisioning can also increase SSD performance by giving the SSD controller additional buffer space for managing page writes and NAND block erases.
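A quick sketch of the over-provisioning arithmetic, assuming a hypothetical 400 GB drive and the 25 percent figure quoted above:

```python
# Over-provisioning arithmetic; the 400 GB stated capacity is a made-up example.
stated_capacity_gb = 400
over_provisioning = 0.25

raw_nand_gb = stated_capacity_gb * (1 + over_provisioning)
spare_gb = raw_nand_gb - stated_capacity_gb
print(f"Raw NAND: {raw_nand_gb:.0f} GB, spare area for writes and erases: {spare_gb:.0f} GB")
```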
SmartSSD Wear Gauge
Although wear leveling can increase performance and prolong the life of NAND, remember that NAND has a limited lifetime of 100,000 write and erase cycles.
HPE has a utility called the SmartSSD Wear Gauge that can be used to collect information and generate reports on the current usage levels and expected remaining life for Solid State Drives. The SmartSSD Wear Gauge is provided as part of the Array Diagnostic Utilities.
What is a Disk Enclosure?
A disk enclosure is basically a chassis designed to hold and power disk drives and to provide a mechanism to enable them to communicate to one or more separate hosts.

–A disk enclosure is a specialized casing designed to hold and power disk drives while providing a mechanism to allow them to communicate to one or more separate computers
–In enterprise terms, “disk enclosure” refers to a larger physical disk chassis
–Disk enclosures do not have RAID controllers
–Disk enclosures can be connected directly to the hosts
Fault-tolerant cabling

–Fault-tolerant cabling allows any drive enclosure to fail or be removed while maintaining access to other enclosures
–P2000 G3 Modular Storage Array (MSA)
–Two D2700 6Gb enclosures
–The I/O module As on the drive enclosures are shaded green
–The I/O module Bs on the drive enclosures are shaded red
The schematic shows a P2000 G3 MSA System connected to two D2700 6 Gb drive enclosures using fault-tolerant cabling.
The I/O module As on the drive enclosures are shaded green. The I/O module Bs on the drive enclosures are shaded red.
Fault-tolerant cabling requires that you connect “P2000 G3 controller A” to “I/O module A” of the first drive enclosure and cascade this connection on to I/O module A of the last drive enclosure (shown in green). Likewise, you must connect “P2000 G3 controller B” to “I/O module B” of the last drive enclosure and cascade this connection on to I/O module B of the first drive enclosure (shown in red).
Straight-through cabling
–While straight-through cabling can sometimes provide increased performance in the array, it also increases the risk of losing access to one or more enclosures in the event of an enclosure failure or removal

–P2000 G3 Modular Storage Array (MSA)
–Two D2700 6Gb enclosures
–The I/O module As on the drive enclosures are shaded green
–The I/O module Bs on the drive enclosures are shaded red
The following figure shows a P2000 G3 MSA System connected to two D2700 6 Gb drive enclosures using straight-through cabling.
Straight-through cabling requires that you connect P2000 G3 controller A to I/O module A of the first drive enclosure, which in turn is connected to I/O module A of the last drive enclosure (shown in green).
P2000 G3 controller B is connected to I/O module B of the first drive enclosure, which in turn is connected to I/O module B of the last drive enclosure (shown in red).
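To make the difference between the two schemes concrete, here is a small, illustrative Python sketch that models each cabling scheme as an ordered chain of enclosures per controller and checks what remains reachable when an enclosure is removed. The names are made up, this is not HPE configuration syntax, and reachability is simplified to "the chain is cut at the failed enclosure".

```python
# Illustrative model of the two cabling schemes described above (two drive
# enclosures, as in the figures). Each path lists the enclosures a controller
# reaches, in cable order.

FAULT_TOLERANT = {
    "controller_A": ["enclosure_1", "enclosure_2"],   # A cascades top-down
    "controller_B": ["enclosure_2", "enclosure_1"],   # B cascades bottom-up
}

STRAIGHT_THROUGH = {
    "controller_A": ["enclosure_1", "enclosure_2"],   # both controllers cascade
    "controller_B": ["enclosure_1", "enclosure_2"],   # in the same direction
}

def reachable(cabling, failed):
    """Enclosures still reachable from any controller after `failed` is removed."""
    alive = set()
    for chain in cabling.values():
        for enclosure in chain:
            if enclosure == failed:
                break                 # the cascade is cut at the failed enclosure
            alive.add(enclosure)
    return alive

print(reachable(FAULT_TOLERANT, failed="enclosure_1"))   # {'enclosure_2'}
print(reachable(STRAIGHT_THROUGH, failed="enclosure_1")) # set()
```

With fault-tolerant cabling the second enclosure stays reachable through controller B; with straight-through cabling the failure of the first enclosure cuts off everything behind it.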
What is LUN Masking?
Logical unit number masking is an authorization process that makes a LUN available to some hosts and unavailable to other hosts. LUN Masking is implemented primarily at the host bus adapter, or HBA, level. LUN masking implemented at this level is vulnerable to any attack that compromises the HBA.
Selective Storage Presentation (SSP) is a special kind of LUN masking that is available on HPE 3PAR and EVA Storage Arrays. It lets the user designate which hosts have access to which logical drives. SSP has three advantages over standard LUN masking:
–First, it is enforced at the level of the storage array.
–Second, it is independent of any host vulnerabilities.
–And third, it is applied through the dedicated command-line interface or GUI of the storage array.

–Enables host visibility of LUNs within the storage array
–LUN granularity
–Independent of zoning
–Can be implemented at the host, fabric, or array level
–Used for data security
–Selective Storage Presentation on HPE 3PAR and EVA Arrays
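As a conceptual illustration only (the WWPNs and LUN numbers below are made up, and this is not how SSP is actually configured), LUN masking can be thought of as an authorization table consulted before a LUN is presented to a host:

```python
# Minimal sketch of LUN masking as an authorization table: each host initiator
# (identified here by a made-up WWPN) sees only the LUNs it has been granted.

MASKING_TABLE = {
    "50:01:43:80:12:34:56:78": {0, 1},     # database server sees LUN 0 and 1
    "50:01:43:80:9a:bc:de:f0": {2},        # backup server sees LUN 2 only
}

def present_lun(host_wwpn, lun):
    """Return True if the LUN should be made visible to this host."""
    return lun in MASKING_TABLE.get(host_wwpn, set())

print(present_lun("50:01:43:80:12:34:56:78", 1))   # True
print(present_lun("50:01:43:80:9a:bc:de:f0", 1))   # False
```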
What is Storage Virtualization?
Virtualization is the pooling of physical disks, or parts of the physical disks, into what appears to be a single storage device that is managed from a central console.
Storage virtualization helps the storage administrator perform backup, archiving, and recovery tasks more easily and in less time by disguising the actual complexity of the SAN. With HPE 3PAR Storage Arrays, virtualization improves the availability, reliability, and performance of the array.

Storage virtualization can be implemented with a software application, with hardware, or with a hybrid software and hardware appliance.
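Here is a minimal sketch of the pooling idea, assuming a simple concatenation layout; real virtualization layers add striping, RAID, and metadata services on top of this.

```python
# Minimal sketch of storage virtualization as disk pooling: several physical
# disks are presented as one logical capacity, and logical block addresses are
# translated to (disk, offset) pairs.

PHYSICAL_DISKS = [("disk0", 1000), ("disk1", 2000), ("disk2", 1500)]   # name, blocks

def pool_capacity(disks):
    """Total capacity presented to the host as a single device."""
    return sum(size for _, size in disks)

def locate(logical_block, disks):
    """Map a logical block in the pool to the physical disk and offset holding it."""
    offset = logical_block
    for name, size in disks:
        if offset < size:
            return name, offset
        offset -= size
    raise ValueError("logical block beyond pool capacity")

print(pool_capacity(PHYSICAL_DISKS))      # 4500 blocks presented as one device
print(locate(2500, PHYSICAL_DISKS))       # ('disk1', 1500)
```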
What is Fat (thick) or thin provisioning?
Thin provisioning is a technology that optimizes disk capacity and utilization. It also saves the money normally spent to purchase, power, and cool disks for future needs.
Businesses typically over-allocate storage. For example, if an application needs 5 gigabytes of storage today but will require 20 gigabytes in the future, the business would buy and provision 20 gigabytes. And for good reason: reprovisioning storage after a server is up and running can be complex, costly, and time-consuming. It takes time, brings down the application, and introduces the risk of human error.

Fat provisioning means buying and allocating excess storage in anticipation of the growing needs of an application. You end up paying even more to spin those extra disks aimlessly, cool them down, and give them square footage in the data center.
Thin provisioning ends the cycle of overbuying by enabling you to purchase capacity for the short-term while provisioning it as if you have far more. That means you can dramatically increase storage efficiency while saving money.
Thin-provisioned disks are provisioned for full capacity (for example, 20 gigabytes) but occupy only the required space on disk (2 gigabytes in this example). As the amount of data grows, thin disks grow in size and eventually reach the size of a fat-provisioned volume.
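A minimal sketch of the thin-provisioning behavior described above, using 1 GB blocks and the 20 GB / 2 GB figures from the example; the class and names are illustrative only.

```python
# Minimal sketch of thin provisioning: the volume reports its full provisioned
# size, but backing blocks are allocated only when a block is first written.

class ThinVolume:
    def __init__(self, provisioned_gb):
        self.provisioned_gb = provisioned_gb
        self.allocated = set()            # blocks that actually consume disk space

    def write(self, block):
        self.allocated.add(block)         # allocate on first write only

    @property
    def used_gb(self):
        return len(self.allocated)

vol = ThinVolume(provisioned_gb=20)       # host sees a 20 GB volume
for block in range(2):                    # application has written 2 GB so far
    vol.write(block)

print(vol.provisioned_gb, "GB provisioned,", vol.used_gb, "GB actually consumed")
```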