A storage area network (SAN) or storage network is a computer network which provides access to consolidated, block-level data storage. SANs are primarily used to enhance accessibility of storage devices, such as disk arrays and tape libraries, to servers so that the devices appear to the operating system as locally-attached devices. A SAN typically is a dedicated network of storage devices not accessible through the local area network (LAN) by other devices, thereby preventing interference of LAN traffic in data transfer.
The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments.
A SAN does not provide file abstraction, only block-level operations. However, file systems built on top of SANs do provide file-level access, and are known as shared-disk file systems.
Storage area networks (SANs) are sometimes referred to as the network behind the servers; they developed historically out of the centralised data storage model, but with their own data network. A SAN is, at its simplest, a dedicated network for data storage. In addition to storing data, SANs allow for automatic backup of data and monitoring of both the storage and the backup process. A SAN is a combination of hardware and software. It grew out of data-centric mainframe architectures, in which clients on a network can connect to several servers that store different types of data. To scale storage capacity as data volumes grew, direct-attached storage (DAS) was developed, in which disk arrays or just a bunch of disks (JBODs) were attached to servers. In this architecture, storage devices can be added to increase capacity. However, the server through which the storage devices are accessed is a single point of failure, and a large part of the LAN bandwidth is used for accessing, storing and backing up data. To remove this single point of failure, direct-attached shared storage architectures were introduced, in which several servers can access the same storage device.
DAS was the first network storage system and is still widely implemented where data storage requirements are not very high. Out of it developed the network-attached storage (NAS) architecture, in which one or more dedicated file servers or storage devices are made available on a LAN. The transfer of data, particularly for backup, therefore still takes place over the existing LAN. When more than a terabyte of data was stored at any one time, LAN bandwidth became a bottleneck. SANs were therefore developed, in which a dedicated storage network is attached to the LAN and terabytes of data are transferred over a dedicated high-speed, high-bandwidth network. Within the storage network, storage devices are interconnected. Transfer of data between storage devices, such as for backup, happens behind the servers and is meant to be transparent. While in a NAS architecture data is transferred over Ethernet using the TCP and IP protocols, distinct protocols were developed for SANs, such as Fibre Channel, iSCSI and InfiniBand. SANs therefore often have their own network and storage devices, which have to be bought, installed and configured. This makes SANs inherently more expensive than NAS architectures.
FAQ About Storage Networking
What is a storage area network?
A storage area network (SAN) is a dedicated high-speed network or subnetwork that interconnects and presents shared pools of storage devices to multiple servers.
A SAN moves storage resources off the common user network and reorganizes them into an independent, high-performance network. This enables each server to access shared storage as if it were a drive directly attached to the server. When a host wants to access a storage device on the SAN, it sends out a block-based access request for the storage device.
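The block-level access model can be made concrete with a short sketch. The Python snippet below is a minimal illustration, not SAN code: a regular file stands in for a SAN-attached block device (a real host would open something like /dev/sdb as presented by the SAN), and all I/O is addressed by logical block address (LBA) rather than by file name or offset within a file.

```python
import os

BLOCK_SIZE = 512  # classic logical block size; modern devices often use 4096

def read_block(fd: int, lba: int, count: int = 1) -> bytes:
    """Read `count` blocks starting at logical block address `lba`."""
    return os.pread(fd, count * BLOCK_SIZE, lba * BLOCK_SIZE)

def write_block(fd: int, lba: int, payload: bytes) -> int:
    """Write whole blocks at a block-aligned offset."""
    assert len(payload) % BLOCK_SIZE == 0, "block I/O moves whole blocks"
    return os.pwrite(fd, payload, lba * BLOCK_SIZE)

# A regular file stands in for a SAN LUN purely for demonstration.
fd = os.open("demo.img", os.O_RDWR | os.O_CREAT, 0o600)
write_block(fd, 2, b"A" * BLOCK_SIZE)   # write block 2
data = read_block(fd, 2)                # read the same block back
os.close(fd)
os.unlink("demo.img")
```

Note that nothing in this interface knows about files or directories; interpreting the blocks as a file system is entirely the host's job, which is why a shared-disk file system is needed when several hosts mount the same LUN.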
A storage area network is typically assembled using three principal components: cabling, host bus adapters (HBAs), and switches attached to storage arrays and servers. Each switch and storage system on the SAN must be interconnected, and the physical interconnections must support bandwidth levels that can adequately handle peak data activities. IT administrators manage storage area networks centrally.
Storage arrays were initially all hard disk drive systems, but are increasingly populated with flash solid-state drives (SSDs).
What are storage area networks used for?
Fibre Channel (FC) SANs have the reputation of being expensive, complex and difficult to manage. Ethernet-based iSCSI has reduced these challenges by encapsulating SCSI commands into IP packets that don't require an FC connection.
The emergence of iSCSI means that instead of learning, building and managing two networks -- an Ethernet local area network (LAN) for user communication and an FC SAN for storage -- an organization can use its existing knowledge and infrastructure for both LANs and SANs. This is an especially useful approach in small and midsize businesses that may not have the funds or expertise to support a Fibre Channel SAN.
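To make the encapsulation idea concrete, the sketch below builds and parses a SCSI READ(10) command descriptor block (CDB), the kind of command an iSCSI initiator carries to a target. It is a simplified illustration: a real initiator would embed this CDB in an iSCSI PDU and send it over a TCP connection, with login, sequencing and error handling all omitted here.

```python
import struct

READ_10 = 0x28  # SCSI READ(10) opcode

def build_read10_cdb(lba: int, blocks: int) -> bytes:
    """Pack a 10-byte SCSI READ(10) CDB: opcode, flags, 32-bit LBA,
    group number, 16-bit transfer length (in blocks), control byte."""
    return struct.pack(">BBIBHB", READ_10, 0, lba, 0, blocks, 0)

def parse_read10_cdb(cdb: bytes) -> tuple[int, int]:
    """Unpack the CDB and return (lba, transfer_length)."""
    opcode, _flags, lba, _group, blocks, _control = struct.unpack(">BBIBHB", cdb)
    assert opcode == READ_10, "not a READ(10) command"
    return lba, blocks

cdb = build_read10_cdb(lba=2048, blocks=8)
assert len(cdb) == 10                      # READ(10) CDBs are 10 bytes
assert parse_read10_cdb(cdb) == (2048, 8)  # round-trips cleanly
```

The point of the exercise is that the command itself is transport-agnostic: the same CDB bytes travel over Fibre Channel in an FC SAN and inside TCP/IP packets in an iSCSI SAN, which is what lets iSCSI reuse ordinary Ethernet infrastructure.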
Organizations use SANs for distributed applications that need fast local network performance. SANs improve the availability of applications through multiple data paths. They can also improve application performance because they enable IT administrators to offload storage functions and segregate networks.
Additionally, SANs help increase the effectiveness and use of storage because they enable administrators to consolidate resources and deliver tiered storage. SANs also improve data protection and security. Finally, SANs can span multiple sites, which helps companies with their business continuity strategies.
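The multiple-data-path point above can be sketched in a few lines. This is a toy illustration of the failover idea behind multipath I/O, with entirely hypothetical path functions; in practice multipathing is handled by the operating system (for example Linux device-mapper multipath), not by application code.

```python
def read_with_failover(paths, lba: int) -> bytes:
    """Try each path to the same LUN in order; return the first
    successful read, or raise if every path has failed."""
    errors = []
    for path in paths:
        try:
            return path(lba)
        except OSError as exc:
            errors.append(exc)  # path down: fall through to the next one
    raise OSError(f"all {len(paths)} paths failed: {errors}")

# Two hypothetical paths to the same LUN: one down, one healthy.
def path_a(lba: int) -> bytes:
    raise OSError("HBA link down")

def path_b(lba: int) -> bytes:
    return b"\x00" * 512  # stand-in block contents

data = read_with_failover([path_a, path_b], lba=7)
assert data == b"\x00" * 512  # the read succeeds via the surviving path
```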
Types of network protocols
Most storage networks use the SCSI protocol for communication between servers and disk drive devices. A mapping layer to other protocols is used to form a network:
- ATA over Ethernet (AoE), mapping of ATA over Ethernet
- Fibre Channel Protocol (FCP), the most prominent one, is a mapping of SCSI over Fibre Channel
- Fibre Channel over Ethernet (FCoE)
- ESCON over Fibre Channel (FICON), used by mainframe computers
- HyperSCSI, mapping of SCSI over Ethernet
- iFCP or SANoIP, mapping of FCP over IP
- iSCSI, mapping of SCSI over TCP/IP
- iSCSI Extensions for RDMA (iSER), mapping of iSCSI over InfiniBand
- Network block device, mapping device node requests on UNIX-like systems over stream sockets like TCP/IP
- SCSI RDMA Protocol (SRP), another SCSI implementation for RDMA transports
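As a toy illustration of the network-block-device idea in the list above, the sketch below serves blocks from an in-memory "device" over a TCP socket and reads one block back from a client. The framing (a fixed 8-byte request carrying LBA and block count) is invented for this sketch and is not the real NBD wire protocol, which also has a handshake and reply headers.

```python
import socket
import struct
import threading

BLOCK = 512
# The "device" is just a bytearray; block 3 is seeded with known data.
device = bytearray(64 * BLOCK)
device[3 * BLOCK:4 * BLOCK] = b"B" * BLOCK

def serve_one(conn: socket.socket) -> None:
    """Answer a single (lba, count) request with raw block data."""
    with conn:
        lba, count = struct.unpack(">II", conn.recv(8))
        conn.sendall(bytes(device[lba * BLOCK:(lba + count) * BLOCK]))

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # ephemeral port on loopback
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=lambda: serve_one(srv.accept()[0]), daemon=True).start()

# Client side: request one block at LBA 3 over TCP.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(struct.pack(">II", 3, 1))
buf = b""
while len(buf) < BLOCK:
    buf += cli.recv(BLOCK - len(buf))
cli.close()
srv.close()
assert buf == b"B" * BLOCK  # the client sees the device's block 3
```

The same pattern, with richer framing and error handling, underlies every protocol in the list: a client issues block-addressed requests, and a transport carries them to wherever the blocks actually live.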
Storage networks may also be built using SAS and SATA technologies. SAS evolved from SCSI direct-attached storage, and SATA from IDE direct-attached storage. SAS and SATA devices can be networked using SAS expanders.