In enterprise storage, there are three main options to choose from: Storage Area Network (SAN), Network Attached Storage (NAS), and Direct Attached Storage (DAS). Each of the three has its own advantages and disadvantages.

Options for connecting computers to storage have increased dramatically in a short time. This chapter introduces the major storage networking variations:
– Direct attached storage (DAS)
– Network-attached storage (NAS)
– The storage area network (SAN)
DAS, NAS and SAN

Businesses can choose among three storage architectures to suit their requirements. Each architecture has certain advantages and disadvantages:
– DAS is a storage device with a dedicated, parallel connection to a server, typically using SCSI.
– NAS storage devices connect directly to the LAN through an Ethernet port. LAN devices use TCP/IP to communicate with their network peers.
– A SAN is a dedicated network that provides storage to enterprise servers. It is typically configured using switches and Fibre Channel connections.
Direct Attached Storage
– The traditional method of locally attaching storage to servers through a dedicated SCSI communication channel between the server and storage
– Storage for each server is managed separately and cannot be shared
– DAS supports disk drives, a RAID subsystem, or another storage device

DAS is the traditional, non-networked method of locally attaching storage to servers through a dedicated communication channel between the server and storage.
The server typically communicates with the storage subsystem using a block-level interface. The file system resides on the server and determines which data blocks are needed from the storage device to complete the file request from an application.
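To make the block-level interface more concrete, the following minimal Python sketch reads a few raw blocks directly from a locally attached disk. The device path /dev/sdb is a hypothetical example; the sketch assumes a Linux host and the privileges needed to open the raw device.

```python
import os

DEVICE = "/dev/sdb"   # hypothetical locally attached (DAS) disk
BLOCK_SIZE = 512      # classic logical block size in bytes

def read_blocks(device, start_block, count):
    """Read `count` raw blocks starting at `start_block` from a block device."""
    fd = os.open(device, os.O_RDONLY)
    try:
        os.lseek(fd, start_block * BLOCK_SIZE, os.SEEK_SET)
        return os.read(fd, count * BLOCK_SIZE)
    finally:
        os.close(fd)

# Normally the server-side file system decides which blocks to request;
# here we simply fetch the first eight blocks to show block-level access.
data = read_blocks(DEVICE, start_block=0, count=8)
print(f"Read {len(data)} bytes from {DEVICE}")
```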
Network-Attached Storage
– NAS provides a file-level access to storage systems
– NAS devices are:
  – Server-independent
  – Used to off-load storage traffic to a single, dedicated storage device

NAS servers provide a file-level interface to storage subsystems. Because NAS devices are server-independent, they complement and ease the burden of overworked file servers by off-loading storage to a single-purpose, dedicated storage device. NAS devices run an operating system that is optimized for file sharing and does not run general server applications, which eliminates a major cause of downtime.
NAS devices are well suited for storing unstructured data, such as files that users create manually.
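For comparison with the DAS example, the sketch below shows file-level access, assuming a hypothetical NFS or SMB share exported by a NAS device and already mounted at /mnt/nas. The client works only with files and paths; the NAS operating system manages the underlying blocks.

```python
from pathlib import Path

NAS_MOUNT = Path("/mnt/nas")   # hypothetical mount point of a NAS share

# File-level access: create a directory and a file through the normal
# file system interface; the NAS device handles the block layout itself.
report = NAS_MOUNT / "reports" / "quarterly.txt"
report.parent.mkdir(parents=True, exist_ok=True)
report.write_text("Q1 revenue summary\n")

print(report.read_text())
```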
Fibre Channel Storage Area Network
Dedicated network that provides access to consolidated, block-level data storage
– Special switches are used to connect storage arrays with servers and with each other
– Network communication uses the Fibre Channel protocol, which was developed specifically for the reliable transport of block-level storage data
– This protocol is reliable, with speeds up to 16 Gbit/s
– FC SAN components allow for high levels of redundancy and resiliency

The need for a separate network dedicated to storage became evident toward the end of the 1990s. The new storage area network infrastructure consisted of its own cabling and a further development of the SCSI protocol, which was already being used to connect devices such as storage arrays or printers to a server. The new development became known as Fibre Channel.
The Fibre Channel protocol was developed specifically for the reliable transport of block-level storage data. With link speeds of up to 16 Gbit/s, it could even outperform the Ethernet speeds commonly deployed at the time.
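As a rough back-of-the-envelope conversion, and ignoring encoding and protocol overhead, a 16 Gbit/s link corresponds to about 2 GB/s of raw line rate; effective application throughput is lower in practice.

```python
# Naive conversion of the 16 Gbit/s link speed to bytes per second.
# Encoding and protocol overhead reduce the usable throughput in practice.
link_speed_gbit = 16
raw_throughput_gbyte = link_speed_gbit / 8   # 8 bits per byte
print(f"{link_speed_gbit} Gbit/s ~= {raw_throughput_gbyte:.1f} GB/s raw line rate")
```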
By design, a SAN should provide redundancy and resiliency:
– Redundancy is the duplication of components, up to and including the entire fabric, to prevent a failure of the total SAN solution.
– Resiliency is the ability of a fabric topology to withstand failures.
What should you know before deploying a SAN? – SAN Considerations
When designing SAN solutions, consider the following:
–Scalability (number of FC ports and expansion capability)
–Storage capacity, efficiency, and cost
–Availability of the fabric, systems, and data
–Performance
–Remote replication of data
When planning and operating a SAN, you need to consider several factors.
–First, a SAN allows for great scalability, but increasing the size of a solution increases its price and complexity. You should consider any future expansion requirements in terms of the number of ports, connected systems, and arrays.
–Second, storage capacity, efficiency, and cost should be balanced to properly match the solution.
–Third, the availability of the fabric, systems, and data should be considered at an early stage of the SAN design. A SAN is often used to achieve no-single-point-of-failure configurations.
Generally, a SAN outperforms NAS and DAS solutions, but the SAN solution should be carefully balanced to achieve optimal performance.
A SAN plays a crucial role in keeping a business running by providing protection from unpredictable events such as natural disasters or complete site failures. SANs provide the tools, methods, and means to replicate data from a primary site to a secondary, remote site.
Comparing SAN and NAS

The major difference between a SAN and NAS is that a SAN is a separate network, apart from the company LAN. The SAN is configured to allow servers to communicate with storage arrays, typically using Fibre Channel. NAS requires a dedicated storage device, typically an optimized server with a number of RAID-protected drives, that attaches directly to the network.
Both options have their strengths and weaknesses, with the primary advantages of a SAN being the major weakness of a NAS solution, and vice versa.
The benefits of SANs include network speed, reliability, centralization, and data protection.
The main strengths of NAS are interoperability, a lower total cost of ownership, and its relative simplicity.
Comparing DAS, NAS and SAN

Note where the network sits in each architecture. In the case of NAS, the file system resides on the storage device itself. Because data is exposed in the form of a file system, NAS is well suited for sharing files between devices and operating systems. File system features make it easy to assign access permissions to the stored files.
In contrast to NAS, both DAS and a SAN work at the block level. The file system is created and maintained by the server's operating system. To the operating system, the storage space that is accessible through the SAN looks like a regular block device, such as an internal hard drive or a tape device.
How to choose between SAN, NAS and DAS?

When helping your customer to decide if they should use SAN, NAS, or DAS, it is important to focus on their specific storage needs and their long-term business goals. One of the key criteria to consider is capacity. This is the amount and “type” of data, either file level or block level, that needs to be stored and shared.
Other criteria to consider are:
–The I/O and throughput requirements for performance,
–The scalability and long-term estimates for data growth,
–The storage availability and reliability, especially for mission-critical applications,
–The data protection needed as well as the backup and recovery requirements,
–The quantity and skill level of the available IT staff and resources, and
–Any budget concerns of the customer.
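These criteria can be turned into a simple checklist. The Python sketch below is only an illustrative decision helper built on assumed rules of thumb; it is not a sizing tool, and the function name and rules are invented for the example.

```python
def suggest_storage(access_type, shared, high_iops, budget_low):
    """Illustrative mapping of basic requirements to DAS, NAS, or SAN.

    access_type is "file" or "block"; the other arguments are booleans.
    The rules below are simplified assumptions, not vendor guidance.
    """
    if access_type == "file" and shared:
        return "NAS"   # file-level sharing across many clients
    if access_type == "block" and (shared or high_iops):
        return "SAN"   # consolidated block storage with high performance
    if budget_low and not shared:
        return "DAS"   # simple, dedicated, lowest-cost option
    return "SAN"       # default for scalable block storage

print(suggest_storage("file", shared=True, high_iops=False, budget_low=False))   # NAS
print(suggest_storage("block", shared=True, high_iops=True, budget_low=False))   # SAN
print(suggest_storage("block", shared=False, high_iops=False, budget_low=True))  # DAS
```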

Tiered storage is essentially the assignment of different categories of data to different types of storage devices. These categories can be based on the levels of protection needed, the performance requirements, the frequency of use, the cost, and other considerations that are unique to the business.
The data in a tiered-storage configuration can be moved from high-cost to low-cost storage media, from slow to fast storage media, or from archive to near-online or online storage media.
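A very small illustration of such a policy is sketched below: files are assigned to hypothetical tiers based only on how recently they were accessed. The tier names, the age thresholds, and the /data directory are assumptions made for the example.

```python
import time
from pathlib import Path

# Hypothetical tiers with maximum age (in days) since last access.
TIERS = [
    ("tier1-fast-ssd", 30),     # hot data, accessed within the last month
    ("tier2-nearline", 365),    # warm data, accessed within the last year
    ("tier3-archive", None),    # cold data, everything older
]

def choose_tier(path: Path) -> str:
    """Pick a tier for a file based on its last access time."""
    age_days = (time.time() - path.stat().st_atime) / 86400
    for tier, max_age in TIERS:
        if max_age is None or age_days <= max_age:
            return tier
    return TIERS[-1][0]

# /data is a placeholder directory for the files to classify.
for f in Path("/data").rglob("*"):
    if f.is_file():
        print(f"{f} -> {choose_tier(f)}")
```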
What are Storage Area Network (SAN) Components?
The physical components of a storage area network can be grouped in a single rack or data center, or they can be connected over long distances. Servers do not provide SAN connectivity out-of-the-box. To connect to the SAN, a server needs a host bus adapter.
This chapter discusses the basic SAN components and their boot order.
Identifying SAN Components
–Host
–Servers
–HBAs
–Fabric
–Hubs or switches
–Routers
–SAN software
–Fibre Channel cables
–Storage
–Storage devices
–Backup devices

Fibre Channel SAN environments enable the development of solutions that provide high performance and high availability, which are the fundamental requirements of a storage network.
Fibre Channel devices help to avoid the bandwidth problems that commonly occur during data-intensive operations such as backup and restore.
A SAN is made up of a wide range of hardware and software products. The hardware components offer different features to provide for a range of SAN sizes, from a small SAN to a high-speed, high-volume data center SAN.
The common SAN components are used in four layers:
–The client layer contains the client systems that are using the storage services.
–The host layer includes the servers with their host bus adapters.
–The fabric layer includes Fibre Channel hubs or switches, routers, SAN software, and Fibre Channel cables.
–And the storage layer includes storage and backup devices.
Host Component (Initiator)

Host components consist of servers and other devices that enable servers to connect to the SAN. Generally, servers do not have Fibre Channel ports. Hardware devices that provide the Fibre Channel port and perform digital-to-optical signal conversion are called host bus adapters. HBAs are available in the form of PCI cards for rack-based servers and mezzanine cards for server blades. HBAs often provide more than one Fibre Channel port for SAN connectivity.
The operating system requires the appropriate drivers to support the HBA. HBA drivers are not universal; each hardware manufacturer provides its own drivers for the operating systems its devices support.
The software component that is used to aggregate throughput, provide load balancing, and enable failover in the case of a communication failure is called multipath software. On a Microsoft Windows platform, that software is Microsoft Multipath I/O, or MPIO for short.
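The idea behind multipath software can be sketched conceptually: spread I/O across the available paths and fail over when one path stops responding. The Python below is only a toy model with invented path objects; it does not interact with MPIO or any real driver.

```python
import itertools

class StoragePath:
    """Toy model of one physical path (HBA port -> fabric -> array port)."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def send(self, io):
        if not self.healthy:
            raise ConnectionError(f"path {self.name} is down")
        return f"{io} sent via {self.name}"

class MultipathDevice:
    """Round-robin load balancing with failover across the surviving paths."""
    def __init__(self, paths):
        self.paths = paths
        self._rr = itertools.cycle(paths)

    def submit(self, io):
        for _ in range(len(self.paths)):
            path = next(self._rr)
            try:
                return path.send(io)
            except ConnectionError:
                continue                      # failover: try the next path
        raise RuntimeError("all paths to the storage device have failed")

dev = MultipathDevice([StoragePath("hba0-fabricA"), StoragePath("hba1-fabricB")])
print(dev.submit("WRITE block 42"))
dev.paths[0].healthy = False                  # simulate a path failure
print(dev.submit("READ block 42"))            # the surviving path is used transparently
```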
HBAs

Servers typically do not have Fibre Channel connectivity embedded. To connect servers to the SAN, you must use dedicated hardware called the host bus adapter.
Fibre Channel HBAs are similar to the network interface cards used in LANs and other non-SAN networks. They replace the traditional SCSI cards that were used to connect servers directly to storage.
HBAs can come in the form of a PCI card for rack- or tower-based servers or a mezzanine card for high-density server blades.
Disk Arrays (Target)

Disk arrays are considered to be targets in a SAN. To communicate over the SAN, disk arrays are equipped with dedicated connection points called “ports.” To increase availability and enhance performance, disk arrays come with a minimum of four Fibre Channel ports.
Disk arrays are designed and built to run for long periods; this is measured as "uptime." The most advanced disk arrays can achieve up to five-nines (99.999%) availability, which translates to a little more than five minutes of downtime per year.
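The five-minute figure follows directly from the availability percentage, as this quick calculation shows:

```python
# Downtime per year implied by a given availability percentage.
minutes_per_year = 365.25 * 24 * 60

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    downtime = minutes_per_year * (1 - availability)
    print(f"{label}: ~{downtime:.1f} minutes of downtime per year")
```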
Disk arrays are usually connected to an uninterruptible power supply (UPS) to protect the system from power outages. But even if the UPS fails, disk arrays are usually equipped with a dedicated battery that preserves the cache content when a power outage occurs. When electricity becomes available again and the disks start spinning, the controllers flush the cached data to the hard drives, preserving data integrity.
To ease the management and administration of a large number of drives, storage array virtual drive images can be “frozen in time” as snapshots, or seamless copies of those virtual images can be made through cloning. Modern disk arrays can work with hundreds of these snapshots and clones without a performance penalty.
Although disk arrays provide high levels of data availability within a rack, they cannot protect that data from extreme events such as natural disasters or complete site failures.
Other technologies are available to replicate the data to remote locations under those conditions. Disk arrays are designed to facilitate seamless and reliable replication of data over long distances to provide data integrity and disaster recovery.
Interconnect Devices

A Fibre Channel switch is a network switch that is compatible with the Fibre Channel protocol. These switches can be combined to create a fabric that allows many-to-many communication while maintaining throughput and providing redundancy with minimal latency.
Two types of Fibre Channel switches are available:
–Fabric switches are predominantly used to implement the switched fabric topology.
–Directors are the most expensive type of switch, but they offer the best performance and maximum reliability. The average annual downtime for a director is barely five minutes.
What is SAN Boot Order?

To properly boot SAN components, apply the following boot order:
–First, power on the SAN fabric and wait for the switches to finish booting. If you do not wait for the boot process to finish, the fabric login attempts might be denied.
–Second, power on the storage array and wait for the disk array ports to log in to the fabric.
–Third, boot the host systems, and verify that your target drives are visible.
To shut down a SAN configuration, complete these steps in the opposite order.
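A scripted power-on sequence would mirror these steps. The sketch below uses stub functions (power_on, switches_booted, and so on) as placeholders for whatever switch, array, or server management interface is actually available; the names and the wait logic are assumptions made for illustration.

```python
import time

def wait_until(check, description, timeout=600, interval=10):
    """Poll a readiness check until it passes or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check():
            print(f"{description}: ready")
            return
        time.sleep(interval)
    raise TimeoutError(f"{description}: not ready after {timeout} seconds")

# Stubs standing in for real switch/array/server management calls.
def power_on(device): print(f"powering on {device}")
def switches_booted(): return True          # replace with a real fabric check
def array_ports_logged_in(): return True    # replace with a real array check
def target_luns_visible(): return True      # replace with a real host-side check

def boot_san():
    power_on("SAN fabric switches")
    wait_until(switches_booted, "SAN fabric")                       # step 1: fabric first

    power_on("storage array")
    wait_until(array_ports_logged_in, "array fabric logins")        # step 2: storage

    power_on("host servers")
    wait_until(target_luns_visible, "target drives on the hosts")   # step 3: hosts

boot_san()
```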