IBM SAN Volume Controller


The IBM SAN Volume Controller (SVC) is a block storage virtualization appliance that belongs to the IBM System Storage product family. SVC implements an indirection, or "virtualization", layer in a Fibre Channel Storage Area Network (SAN).

Architecture

The IBM 2145 SAN Volume Controller (SVC) is an inline virtualization or "gateway" device. It logically sits between hosts and storage arrays, presenting itself to hosts as the storage provider (target) and presenting itself to storage arrays as one big host (initiator). SVC is physically attached to any available port in one or several SAN fabrics.

The virtualization approach allows for non-disruptive replacement of any part of the storage infrastructure, including the SVC devices themselves. It also aims to simplify compatibility requirements in strongly heterogeneous server and storage landscapes. All advanced functions are therefore implemented in the virtualization layer, which allows storage array vendors to be switched without impact on hosts. Finally, spreading an SVC installation across two or more sites (stretched clustering) enables basic disaster protection paired with continuous availability.

SVC nodes are always clustered, with a minimum of 2 and a maximum of 8 nodes, and scale linearly. Each I/O group consists of 2 nodes. Each node is a 1U high rack-mounted appliance leveraging IBM System x server hardware, protected by redundant power supplies and an integrated 1U high uninterruptible power supply; the DH8 model is a 2U high unit with integrated battery backup. An integrated two-row display and five-button keypad offer stand-alone configuration and monitoring options. Each node has four Fibre Channel ports and two or four 1 or 10 Gbit/s Ethernet ports used for FCoE, iSCSI and management. All Fibre Channel and FCoE ports on the SVC are both targets and initiators, and are also used for inter-node communication within the cluster. This includes maintaining read/write cache integrity, sharing status information, and forwarding reads and writes.[note 1]

Write cache is protected by mirroring within a pair of SVC nodes, called an I/O group. Virtualized resources (storage volumes presented to hosts) are distributed across I/O groups to improve performance. Volumes can also be moved non-disruptively between I/O groups, e.g., when new node pairs are added or older technology is removed. Node pairs are always active, meaning both members accept simultaneous writes for each volume. In addition, all other cluster nodes accept and forward read and write requests, which are internally handled by the appropriate I/O group. Path or board failures are compensated by non-disruptive failover within each I/O group. This requires multipath drivers such as the IBM Subsystem Device Driver (SDD)[1] or standard MPIO drivers.

SVC is based on COMmodity PArts Storage System (Compass) architecture, developed at the IBM Almaden Research Center.[1] The majority of the software has been developed at the IBM Hursley Labs in the UK.

Terminology

  • Node - a single 1U or 2U machine.
SVC node models
  Type-model   Cache [GB]   FC speed [Gbit/s]   iSCSI speed [Gbit/s]     Based upon   Announced
  2145-4F2      4           2                   n/a                      x335         2 June 2003
  2145-8F2      8           2                   1                        x336         25 October 2005
  2145-8F4      8           4                   1                        x336         23 May 2006
  2145-8G4      8           4                   1                        x3550        22 May 2007
  2145-8A4      8           4                   1                        x3250M2      28 October 2008
  2145-CF8     24           8                   1                        x3550M2      20 October 2009
  2145-CG8     24           8                   1 (10 Gbit/s optional)   x3550M3      9 May 2011
  2145-DH8     32           8 & 16              1 (10 Gbit/s optional)   x3650M4      6 May 2014
  • I/O group - a pair of nodes that duplicate each other's write commands
  • Cluster - a group of 1 to 4 I/O groups managed as a single entity.
    • Stretched cluster - a site protection configuration with 1 to 4 I/O groups, each stretched across two sites, plus a witness site
    • Cluster IP address - a single IP address of a cluster that provides administrative interfaces via SSH and HTTPS
    • Service IP address - an IP address used to service an individual node. Each node can have a service IP configured.
    • Configuration node - a single node that holds the cluster's configuration and has the assigned cluster IP address.
  • Master Console (or SSPC) - a management GUI for SVC until release 5.1, based on WebSphere Application Server; not installed on any SVC node, but on a separate machine[1]
    • As of SVC release 6.1, the Master Console (SSPC) is no longer used. Web-based administration is done directly on the configuration node, using an HTML5 GUI.
  • Virtual Disk (VDisk) - a unit of storage presented to the host. The release 6 GUI refers to a VDisk as a Volume.
  • Managed Disk (MDisk) - a unit of storage (a LUN) from a real, external disk array, virtualized by the SVC. An MDisk is the basis for creating an image mode VDisk.
  • Managed Disk Group (MDisk Group) - a group of one or more MDisks. The extents of the MDisks in an MDisk Group are the basis for creating a striped or sequential mode VDisk. The release 6 GUI refers to a Managed Disk Group as a Pool.
  • Extent - a discrete unit of storage; an MDisk is divided into extents, and a VDisk is formed from a set of extents (see the sketch after this list).
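
The following minimal Python sketch (hypothetical names, not SVC code) illustrates how these terms fit together: a striped VDisk draws its extents round-robin from the MDisks of a pool.

    # A minimal sketch of SVC-style extent mapping (illustrative, not SVC code).
    # An MDisk is divided into fixed-size extents; a striped VDisk takes its
    # extents round-robin from all MDisks in the pool (MDisk Group).

    EXTENT_MB = 1024  # extent size is configured per pool

    def striped_vdisk_map(mdisk_names, vdisk_size_mb):
        """Map each virtual extent to (mdisk, extent_index), round-robin."""
        num_extents = -(-vdisk_size_mb // EXTENT_MB)  # ceiling division
        cursors = {m: 0 for m in mdisk_names}
        mapping = []
        for i in range(num_extents):
            m = mdisk_names[i % len(mdisk_names)]  # next MDisk in the stripe
            mapping.append((m, cursors[m]))
            cursors[m] += 1
        return mapping

    print(striped_vdisk_map(["mdisk0", "mdisk1"], vdisk_size_mb=4096))
    # [('mdisk0', 0), ('mdisk1', 0), ('mdisk0', 1), ('mdisk1', 1)]

A sequential mode VDisk would instead take consecutive extents from a single MDisk, and an image mode VDisk maps one MDisk one-to-one.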

Performance

Release 4.3 of the SVC held the Storage Performance Council (SPC) world record for the SPC-1 performance benchmark, returning nearly 275K (274,997.58) IOPS. No faster storage subsystem had been benchmarked by the SPC at that time (October 2008).[2] The SPC-2 benchmark also returned a world-leading measurement of over 7 GB/s throughput.

Release 5.1 achieved new records with 4-node and 6-node cluster benchmarks using the DS8700 as backend storage. SVC broke its own record of 274,997.58 SPC-1 IOPS in March 2010, with 315,043.59 IOPS for the 4-node cluster and 380,489.30 IOPS for the 6-node cluster, records that stood until October 2011.

Release 6.2 of the SVC held the Storage Performance Council (SPC) world record for the SPC-1 performance benchmark, returning over 500K (520,043.99) IOPS using 8 SVC nodes and Storwize V7000 as the backend disk. No faster storage subsystem had been benchmarked by the SPC at that time (January 2012).[3] The full results and executive summaries can be reviewed at the SPC website referenced above.[note 2]

Release 7.x provides multiple enhancements, including support for additional CPUs, cache and adapters. The streamlined cache operates at 100 µs fall-through latency[4] and 60 µs cache-hit latency, enabling SVC as a front-end to IBM FlashSystem solid-state storage without significant performance penalty. See also: FlashSystem V840

Included Features (7.x)

Indirection or mapping from virtual LUN to physical LUN
Servers access SVC as if it were a storage controller. The SCSI LUNs they see represent virtual disks (volumes) allocated in SVC from a pool of storage made up of one or more managed disks (MDisks). A managed disk is simply a storage LUN provided by one of the storage controllers that SVC is virtualizing. The virtual capacity can be larger than the managed physical capacity, with a current maximum of 32 PB, depending on management granularity (extent size).
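The 32 PB figure follows from the extent granularity: manageable capacity is the product of the extent size and the per-cluster extent count. The short calculation below assumes a fixed limit of 2^22 extents per cluster, an assumption used here only for illustration:

    # Manageable capacity scales linearly with the configured extent size.
    # MAX_EXTENTS below is an assumed per-cluster limit, for illustration.
    MAX_EXTENTS = 2**22

    for extent_mb in (16, 1024, 8192):
        capacity_pb = MAX_EXTENTS * extent_mb / 2**30  # MiB -> PiB
        print(f"{extent_mb:5d} MB extents -> {capacity_pb:7.3f} PB manageable")
    #    16 MB extents ->   0.062 PB manageable
    #  1024 MB extents ->   4.000 PB manageable
    #  8192 MB extents ->  32.000 PB manageable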
Data migration and pooling
SVC can move volumes from one capacity pool (MDisk group) to another whilst maintaining I/O access to the data, with write and read caching remaining active. Pools can be shrunk or expanded by removing or adding hardware capacity, again while maintaining I/O access. Both features can be used for seamless hardware migration, as the sketch below illustrates. Migration from an old SVC model to the most recent model is also seamless and requires no copying of data.
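A minimal sketch of how such a migration can remain non-disruptive (illustrative only, not SVC internals): the volume is moved one extent at a time, and the mapping table consulted by host I/O stays valid after every step.

    # Illustrative extent-by-extent migration (not SVC internals).
    class Pool:
        def __init__(self, name):
            self.name, self.next_free = name, 0
        def allocate_extent(self):
            self.next_free += 1
            return (self.name, self.next_free - 1)

    def migrate_vdisk(mapping, target_pool):
        """Move a volume one extent at a time; host I/O keeps following
        the mapping table, which is valid after every single step."""
        for i, src_extent in enumerate(mapping):
            dst_extent = target_pool.allocate_extent()
            # ... background copy of extent contents src -> dst here ...
            mapping[i] = dst_extent  # atomic remap; I/O now goes to dst

    vmap = [("old_pool", 0), ("old_pool", 1)]
    migrate_vdisk(vmap, Pool("new_pool"))
    print(vmap)  # [('new_pool', 0), ('new_pool', 1)]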
Importing and exporting existing LUNs via Image Mode
"Image mode" is a non-virtualized pass-through representation of an MDisk (managed LUN) that contains existing client data; such an MDisk can be seamlessly imported into or removed from an SVC cluster.
Fast-write cache
Writes from hosts are acknowledged once they have been committed into the SVC mirrored cache, but prior to being destaged to the underlying storage controllers. Data is protected by replication to the peer node in an I/O group (cluster node pair). Cache size is dependent on the SVC hardware model and installed options. Fast-write cache is especially useful to increase performance in midrange storage configurations.
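The write path can be pictured with the following simplified Python sketch (illustrative semantics, not SVC code): the host acknowledgement depends only on the mirrored cache insert, not on the backend controller.

    # Simplified fast-write cache semantics (illustrative, not SVC code).
    class Node:
        def __init__(self, name):
            self.name, self.cache = name, {}

    def host_write(node, partner, lba, data):
        node.cache[lba] = data     # local cache copy
        partner.cache[lba] = data  # mirrored copy on the peer node
        return "ack"               # host sees completion here, before destage

    def destage(node, backend):
        backend.update(node.cache) # later, asynchronously, to the controller
        node.cache.clear()

    node1, node2, backend = Node("node1"), Node("node2"), {}
    print(host_write(node1, node2, lba=42, data=b"payload"))  # 'ack'
    destage(node1, backend)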
Auto tiering (Easy Tier)
SVC automatically selects the best storage hardware for each chunk of data according to its access patterns, as sketched below. Cache-unfriendly "hot" data is dynamically moved to solid-state drives (SSDs), whereas cache-friendly "hot" data and any "cold" data is moved to economical spinning disks. Easy Tier also monitors spindle-only workloads.
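A toy version of heat-based placement (a deliberate simplification; IBM's actual Easy Tier algorithm also weighs cache friendliness and other factors) could look like this:

    # Toy heat-based tiering plan (simplified; not IBM's Easy Tier algorithm).
    def plan_tiering(io_counts, ssd_slots):
        """io_counts: {extent_id: recent I/O count}; returns (hot, cold)."""
        ranked = sorted(io_counts, key=io_counts.get, reverse=True)
        hot = set(ranked[:ssd_slots])   # promote the hottest extents to SSD
        cold = set(ranked[ssd_slots:])  # keep or demote the rest to HDD
        return hot, cold

    hot, cold = plan_tiering({"e1": 900, "e2": 15, "e3": 480}, ssd_slots=2)
    print(hot, cold)  # {'e1', 'e3'} {'e2'}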
Solid state drive (SSD) capability
SVC can use any supported external SSD storage device or provide its own internal SSD slots, up to 32 per cluster. Easy Tier is automatically active when mixing different media in hybrid capacity pools (managed disk groups).
Thin Provisioning
LUN capacity is consumed only when new data is written to a LUN. All-zero data blocks are not physically allocated unless non-zero data previously existed at that location. During import or during internal migrations, all-zero data blocks are discarded (thick-to-thin migration); a minimal sketch follows below.
Thin provisioning is also integrated into the FlashCopy features detailed below to provide space-efficient snapshots.
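The allocate-on-write behaviour with zero detection can be sketched as follows (a conceptual model, not SVC's on-disk format):

    # Conceptual thin-provisioned volume with zero detection (illustrative).
    BLOCK = 4096

    class ThinVolume:
        def __init__(self):
            self.store = {}  # block number -> data, only for allocated blocks

        def write(self, block_no, data):
            if data.strip(b"\x00"):           # non-zero data: allocate space
                self.store[block_no] = data
            elif block_no in self.store:      # zeros over existing data: keep
                self.store[block_no] = data
            # an all-zero write to an unallocated block consumes no space

        def read(self, block_no):
            return self.store.get(block_no, b"\x00" * BLOCK)

    vol = ThinVolume()
    vol.write(0, b"\x00" * BLOCK)              # discarded, nothing allocated
    vol.write(1, b"hello".ljust(BLOCK, b"\x00"))
    print(len(vol.store))                      # 1 block physically allocated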
Virtual Disk Mirroring
Provides the ability to maintain two redundant copies of a LUN, normally placed on different storage controllers.
Site protection with Stretched Cluster
A geographically distributed, highly available clustered storage setup leveraging the virtual disk mirroring feature across datacenters up to 300 km apart. Stretched Clusters can span 2, 3 or 4 datacenters (chain or ring topology, with a 4-site cluster requiring 8 cluster nodes). Cluster consistency is ensured by a majority voting set, sketched below.
From two storage devices in two datacenters, SVC presents one common logical instance. All application-oriented operations, such as snapshots or resizing, are applied to the logical instance. Hardware-oriented operations, such as real-time compression or live migration, are applied at the physical instance level.
Unlike in classical mirroring, logical LUNs are readable and writable on both sides at the same time, removing the need for "failover", "role switch" or "site switch". The feature can be combined with Live Partition Mobility or vMotion to avoid any data transport during a metro-distance virtual server move.
All SVC cluster nodes also have read/write access to the storage hardware in the mirror location, removing the need for site resynchronization after a simple node failure.
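The majority voting principle can be reduced to a few lines (a deliberately simplified model, not SVC's actual quorum protocol): each site and the witness hold one vote, and only a partition that still reaches a majority continues to serve I/O.

    # Simplified majority voting for split-brain avoidance (illustrative).
    VOTERS = {"site_a", "site_b", "witness"}

    def can_continue(reachable_voters):
        """A partition keeps serving I/O only if it sees a strict majority."""
        return len(reachable_voters & VOTERS) > len(VOTERS) // 2

    print(can_continue({"site_a", "witness"}))  # True: majority, keeps running
    print(can_continue({"site_b"}))             # False: halts, avoids split-brain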
Enhanced Stretched Cluster
A functionality optimizing data paths within a metro- or geo-distance Stretched Cluster (see above), helpful when bandwidth between sites is scarce and cross-site traffic must be minimized. SVC will attempt to use the shortest path for reads and writes. For instance, cache write destaging to storage devices is always performed by the most nearby cache copy, unless its peer cache copy is down.
Stretched Cluster with Golden Copy (3-site DR)
A Stretched Cluster that maintains an additional synchronous or asynchronous data copy on an independent Stretched Cluster or SVC or Storwize device at geo distances. The Golden Copy is a disaster protection against metro-scale outages impacting the Stretched Cluster as a whole. It leverages the optional Metro or Global Mirror functionality.

Optional features

There are several optional features, licensed separately, e.g., per TB:[1]

Real-Time Compression
This technology, invented by the acquired startup Storwize,[5] has been integrated into the SVC and other IBM storage systems. Originally implemented as real-time file compression, it has since been enhanced to also provide in-flight block compression. Its efficiency is comparable to "zip" LZW (Lempel–Ziv–Welch) compression with a very large dictionary. The temporal locality of the algorithm may also increase read/write performance on suitable data patterns, such as uncompressed databases stored on spinning disks.
Real-time compression can be combined with Easy Tier, Thin Provisioning and Virtual Disk Mirroring.
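The principle of transparent in-flight block compression can be illustrated with Python's zlib module standing in for IBM's algorithm (zlib uses DEFLATE, not the Lempel–Ziv variant in SVC, so ratios and performance differ):

    # In-flight block compression, with zlib as a stand-in (illustrative only;
    # SVC's Real-Time Compression uses a different Lempel-Ziv-based algorithm).
    import zlib

    def write_block(backend, lba, data):
        backend[lba] = zlib.compress(data)    # compressed before hitting disk

    def read_block(backend, lba):
        return zlib.decompress(backend[lba])  # transparent to the host

    backend = {}
    block = b"database page " * 256           # repetitive data compresses well
    write_block(backend, 0, block)
    print(len(block), "->", len(backend[0]))  # e.g. 3584 -> a few dozen bytes
    assert read_block(backend, 0) == block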
FlashCopy (Snapshot)
This is used to create a snapshot of a single volume, e.g., for backup or application testing. Snapshots require only the "delta" capacity unless created with fully provisioned target volumes. FlashCopy comes in three flavours: Snapshot, Clone and Backup volume. All are based on optimized copy-on-write technology, and may or may not remain linked to their source volume.
One source volume can have up to 256 simultaneous targets. Targets can be made incremental, and cascaded, tree-like dependency structures can be constructed. Targets can be re-applied to their source or to any other appropriate volume, even of a different size (e.g., resetting any changes from a resize command).
Copy-on-write is based on a bitmap with a configurable grain size, as opposed to a journal.[1]
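A minimal model of bitmap-based copy-on-write (illustrative; SVC's FlashCopy internals are more involved) shows why only the "delta" capacity is needed: a grain is copied to the target only when the corresponding source grain is first overwritten, and the configurable grain size sets the granularity of the bitmap.

    # Minimal bitmap-based copy-on-write snapshot (illustrative, not SVC code).
    class FlashCopySketch:
        def __init__(self, source, num_grains):
            self.source = source                # source volume: grain -> data
            self.target = {}                    # target volume, filled lazily
            self.copied = [False] * num_grains  # the bitmap, one flag per grain

        def write_source(self, grain, data):
            if not self.copied[grain]:          # first overwrite of this grain:
                self.target[grain] = self.source.get(grain)  # preserve old data
                self.copied[grain] = True
            self.source[grain] = data

        def read_target(self, grain):
            if self.copied[grain]:
                return self.target[grain]
            return self.source.get(grain)       # still linked to the source

    src = {0: "v1"}
    snap = FlashCopySketch(src, num_grains=4)
    snap.write_source(0, "v2")                  # triggers copy-on-write of grain 0
    print(snap.read_target(0))                  # 'v1' - the point-in-time image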
Metro Mirror - synchronous remote replication
This allows a remote disaster recovery site at a distance of up to about 300 km.[6]
Global Mirror - asynchronous remote replication
This allows a remote disaster recovery site at a distance of thousands of kilometres. Each Global Mirror relationship can be configured for high-latency/low-bandwidth or for high-latency/high-bandwidth connectivity, the latter allowing a consistent recovery point objective (RPO) below 1 second.
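The semantic difference between the two mirroring modes can be sketched as follows (illustrative; the real protocols add write ordering, consistency groups and link management):

    # Synchronous vs. asynchronous replication semantics (illustrative).
    def metro_write(local, remote, lba, data):
        local[lba] = data
        remote[lba] = data       # remote commit completes before ...
        return "ack"             # ... the host ack: latency grows with distance

    def global_write(local, remote_queue, lba, data):
        local[lba] = data
        remote_queue.append((lba, data))  # shipped to the remote site later
        return "ack"             # immediate ack: RPO is greater than zero

    local, remote, queue = {}, {}, []
    print(metro_write(local, remote, 0, b"x"))   # distance-bound latency
    print(global_write(local, queue, 1, b"y"))   # distance-independent latency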
Global Mirror over IP - remote replication over the Internet
Uses SANslide technology integrated into the SVC firmware to send mirroring data traffic across a TCP/IP link while maximizing the bandwidth efficiency of that link. This may result in a 100x data-transfer acceleration over long distances.[7]

Other products running SVC code

On 7 October 2010, IBM announced the IBM Storwize V7000.[8] This uses the SAN Volume Controller code base with internal storage to provide a mid-level storage subsystem.[9] Since then, IBM has released the V5000, V3700 and V3500 (China only). The V7000 was significantly enhanced on 6 May 2014 by the V7000 Gen2. Together, these make up the IBM Storwize family.

The Actifio Protection and Availability Storage (PAS) appliance includes some elements of SVC code to achieve wide interoperability.[10] The PAS platform spans backup, disaster recovery, and business continuity among other functions.

Footnotes

  1. These ports must be zoned together.
  2. "Cache hit" or "bandwidth" performance numbers are usually much higher, e.g. "20 GBPS", but are relatively meaningless as they cannot be achieved in real-word scenarios.

References

  1. "IBM System Storage SAN Volume Controller", IBM Redbook SG24-6423-05, p. 12.
  2. SVC Rel 4.3 SPC results
  3. SVC Rel 6.2 SPC results
  4. http://www.redbooks.ibm.com/abstracts/tips1137.html?Open
  5. (citation unavailable)
  6. (citation unavailable)
  7. http://www.4bridgeworks.com/products/sanslide/
  8. (citation unavailable)
  9. (citation unavailable)
  10. (citation unavailable)
