HDS definitions of notations

Before exploring Hitachi storage arrays in detail, let us first go over some Hitachi Data Systems (HDS) definitions of terms.

  • Array Group (Disk feature) – The term used to describe a set of four physical disks installed as a set in four HDU containers (a horizontal slice) within a single DKU box. When a set of one or two Array Groups (four or eight disks) is formatted using a RAID level, the resulting formatted entity is called a Parity Group. Although technically the term Array Group refers to a group of bare physical disks, and the term Parity Group refers to something that has been formatted with a RAID level and therefore actually has parity data (here we consider a RAID‐10 mirror copy as parity data), be aware that this technical distinction is often lost and you will see the terms Parity Group and Array Group used interchangeably in the field.
  • BED (Back‐end Director) ‐ the disk controller board that provides eight back‐end 6Gbps SAS links via four SAS 2W ports. A pair of boards (16 SAS links) is always installed as a feature. Hitachi also calls this the DKA board.
  • Cache Directory – the 500MB region reserved on each DCA cache board for use in managing that DCA’s cache slot allocations to VSDs.
  • CHA (Channel Adapter) ‐ Hitachi’s name for the FED board.
  • Chassis (VSP CBXA or CBXB) – a set of one to three racks with a logic box and 1‐8 DKUs.
  • CMA (Cache Memory Adapter) ‐ Hitachi’s name for the DCA board.
  • Concatenated Parity Group ‐ A configuration where the VDEVs corresponding to a pair of RAID‐10 (2D+2D) or RAID‐5 (7D+1P) Parity Groups, or four RAID‐5 (7D+1P) Parity Groups, are interleaved at the RAID stripe level on a round‐robin basis. A logical RAID stripe row is created as a concatenation of the individual RAID stripe rows. This has the effect of dispersing I/O activity over twice or four times the number of disks, but it doesn’t change the names, number, or size of the VDEVs, and hence it doesn’t make it possible to assign larger LDEVs to them. Note that we often refer to RAID‐10 (4D+4D), but this is actually two RAID‐10 (2D+2D) Parity Groups interleaved together. For a more comprehensive explanation see the Concatenated Parity Group section in the appendix.
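The round‐robin interleave described above can be sketched as a simple mapping from logical stripe row to underlying Parity Group (a minimal illustration; the function name is ours, not HDS terminology):

```python
# Minimal sketch of the round-robin interleave: logical RAID stripe row n
# of a concatenated Parity Group is served by underlying group n mod k,
# where k is 2 (a pair of 2D+2D or 7D+1P groups) or 4 (four 7D+1P groups).
def parity_group_for_row(row: int, groups: int) -> int:
    if groups not in (2, 4):
        raise ValueError("the VSP concatenates 2 or 4 Parity Groups")
    return row % groups

# Rows 0..7 across four concatenated groups cycle 0, 1, 2, 3, 0, 1, 2, 3
```

This also shows why concatenation doesn't change VDEV sizes: each logical row is still made of the same individual rows, just taken from the groups in turn.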
  • Control Memory (CM) ‐ the region in each VSD’s local RAM and in the global cache that is used to manage LDEV metadata and system states. It replaces the use of a separate Shared Memory system in the USP V. In general, Control Memory contains all metadata in a storage system that is used to describe the physical configuration, track the state of all LUN data, track the status of all components, and manage all control tables that are used for I/O operations (including those of Copy Products). The size of Control Memory in cache can grow up to 48GB.
  • Control Rack – the rack that contains the logic box and logic boards for a chassis. A dual‐chassis array has two of these.
  • DCA (Data Cache Adapter) – the cache board, with 4 PCIe 4‐lane ports, 8 DIMM slots in 4 banks of RAM using DDR3‐800 DIMMs, and one or two SSDs for local backup. Hitachi also calls this the CMA (Cache Memory Adapter) board.
  • DKC – Hitachi’s name for the Control Rack or the logic box (i.e. DKC‐0, DKC‐1) within that control rack.
  • DKA – (Disk Adapter) Hitachi’s name for the back end director(BED).
  • DKU (Disk Unit) – the 128 (2.5” disks) or 80 (3.5” disks) disk container that mounts in a rack. There can be two DKUs in a control frame (a DKC) and three in each expansion (DKU) rack. Each DKU contains eight HDUs. Each DKU is split vertically across two power domains, with four HDUs in each domain. The SFF DKU is called an SBX, while the LFF DKU is called a UBX.
  • DP‐VOL (Dynamic Provisioning Volume) ‐ the Virtual Volume assigned to an HDP Pool. Some documents refer to this as a V‐VOL. It is a member of a V‐VOL Group, which is a kind of VDEV. Each DP‐VOL has a user‐specified size between 8GB and 60TB in increments of one block (512 byte sector) and is built upon 42MB pages of physical storage.
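As a rough illustration of the page math above, the following sketch computes how many 42MB pages a fully written DP‐VOL would consume (assuming the 42MB page means 42 MiB; the names and example sizes are ours, not from HDS documentation):

```python
# Sketch: pages backing a DP-VOL of a given size, if it were fully written.
# Thin provisioning allocates pages only on first write, so real usage
# is usually much lower. Assumes "42MB" means 42 MiB.
import math

PAGE_BYTES = 42 * 1024 * 1024    # one HDP page (assumed 42 MiB)
SECTOR_BYTES = 512               # DP-VOL size granularity (one block)

def dp_vol_pages(size_bytes: int) -> int:
    if size_bytes % SECTOR_BYTES:
        raise ValueError("DP-VOL size must be a whole number of 512-byte blocks")
    return math.ceil(size_bytes / PAGE_BYTES)

# e.g. a 100 GiB DP-VOL fully written: 2439 pages
pages = dp_vol_pages(100 * 1024**3)
```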
  • Drive (disk) – an SSD device, a SAS disk, or a SATA disk.
  • eLUN (external LUN) – an external LUN is one that is located in another storage array, attached via two or more ePorts on a VSP, and accessed by the host through other VSP FED ports.
  • ePort (External Port) – A FED port operated as an external port rather than a target. An external array connects via two or more Fibre Channel FED ports on the VSP instead of to a host. The FED ports used in this manner are changed from a host target into an initiator port (or external port) by use of the Universal Volume Manager (UVM) software product. The VSP will ‘discover’ any exported LUNs on each ePort. The eLUN is used within the VSP as a VDEV, a logical container from which LDEVs can be carved. Individual external LDEVs may be mapped to a portion of or to the entirety of the eLUN. Usually a single external LDEV is mapped to the exact LBA range of the eLUN, and thus the eLUN can be “passed through” the VSP to the host.
  • Feature (package) – an installable hardware option (such as FED, BED, DCA, VSD, GSW, or disks) that includes two PCBs (one located on each VSP power domain or “cluster”) or four disks.
  • FED – (Front‐end Director) ‐ the Open Fibre Channel or FICON interface board used to attach hosts to the array. The Open FC boards may also be used to attach local external storage, or to remote arrays using the Hitachi Universal Replicator product.
  • GSW (Grid Switch) – the HiStar‐E switch board (24 ports, PCIe 4‐lane, full duplex with 1GB/s send and 1GB/s receive). It interconnects the FEDs, BEDs, DCAs, and VSDs. Also known as the ESW board.
  • HDU (Hard Disk Unit) ‐ the disk container within the DKU box. There are eight HDUs per DKU, with half of the HDUs (10 or 16 disks each) presented on the front of the DKU and half on the rear side. Each SFF (2.5”) HDU holds 16 disks, while each LFF (3.5”) HDU contains 10 disks. An entire DKU will either be SFF or LFF, but the two types of DKUs may be intermixed in a rack.
  • HiStar‐E – the name of the new PCI Express 4‐lane grid switch (GSW) interconnection among the FED, BED, DCA, and VSD boards.
  • LDEV (Logical Device) – A logical volume internal to the array that can be used to contain customer data. LDEVs are uniquely identified within the array using a six hex digit identifier in the form LDKC:CU:LDEV. LDEVs are carved from a VDEV (see VDEV), and thus there are four types of LDEVs ‐ internal LDEVs, external LDEVs, COW V‐VOLs, and DP‐VOLs. LDEVs may be mapped to a host as a LUN, either as a single LDEV, or as a set of up to 36 LDEVs combined in the form of a LUSE. Note: what is called an LDEV in HDS enterprise arrays like the VSP is called an LU or LUN in HDS modular arrays like the Adaptable Modular Storage (AMS) family.
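A small sketch of the LDKC:CU:LDEV identifier described above, treating it as three two‐hex‐digit fields (the colon‐separated text form and the function names are our illustration, not an HDS API):

```python
# Sketch: pack/unpack the six-hex-digit LDEV identifier (LDKC:CU:LDEV),
# taking each field as two hex digits (0x00-0xFF).
def parse_ldev_id(text: str) -> tuple[int, int, int]:
    ldkc, cu, ldev = (int(part, 16) for part in text.split(":"))
    for value in (ldkc, cu, ldev):
        if not 0 <= value <= 0xFF:
            raise ValueError(f"field out of range in {text!r}")
    return ldkc, cu, ldev

def format_ldev_id(ldkc: int, cu: int, ldev: int) -> str:
    return f"{ldkc:02X}:{cu:02X}:{ldev:02X}"

# e.g. "00:1A:2F" -> LDKC 0, CU 26, LDEV 47
```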
  • LUN (Logical Unit Number) – the host‐visible identifier assigned by the user to an LDEV to make it usable on a host port. An internal LUN has no actual queue depth limit (but 32 is a good rule of thumb) while an external (virtualized) eLUN has a Queue Depth limit of 2‐128 (adjustable) per eLUN per external port path to that eLUN. In Fibre Channel, the host (possibly virtual) HBA Fibre Channel port is the initiator, and the array’s virtual Fibre Channel port or Host Storage Domain is the target. Thus the Logical Unit Number is the number of the logical volume within the target.
  • LUSE (Logical Unit Size Expansion) – A concatenation (“spill‐and‐fill”) of 2 to 36 LDEVs (up to a 60TB limit) that is then presented to a host as a single LUN. Because the LDEVs are concatenated rather than striped, a LUSE will normally perform at the level of just one of its member LDEVs (one Parity Group) at a time. You may see the term “HDEV” or Head LDEV. A LUSE is identified by the LDEV name of the first or “head” LDEV in the LUSE. The HDEV is the entity that can be mapped to the host as a LUN, and it is either a single LDEV or 2 to 36 LDEVs comprising a LUSE. LDEVs that are interior to a LUSE (not the head LDEV) may not be mapped to the host as a LUN. HDEVs may have multiple host paths, meaning the HDEV may be mapped as a LUN within multiple different Host Storage Domains on the same or different ports.
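The numeric limits above (2 to 36 member LDEVs, 60TB total) can be checked with a short sketch (decimal terabytes assumed; the function is illustrative, not an HDS API):

```python
# Sketch: is a proposed LUSE within the stated limits?
# Assumes the 60TB cap is decimal terabytes (10^12 bytes).
MIN_MEMBERS = 2
MAX_MEMBERS = 36
MAX_TOTAL_BYTES = 60 * 10**12

def validate_luse(member_sizes_bytes: list[int]) -> bool:
    return (MIN_MEMBERS <= len(member_sizes_bytes) <= MAX_MEMBERS
            and sum(member_sizes_bytes) <= MAX_TOTAL_BYTES)

# Three 1 TB LDEVs: fine. A single LDEV is not a LUSE; 36 x 2 TB exceeds 60TB.
```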
  • MP (microprocessor) – the Intel Xeon quad‐core 2.33GHz CPU used on the VSD board, or the NEC Y‐wing 800MHz CPU on a USP V FED or BED board.
  • Parity Group (RAID Group) – a set of one or two Array Groups (a set of 4 or 8 disks) formatted as a single RAID level, either as RAID‐10 (often referred to as RAID‐1 in HDS documentation), RAID‐5, or RAID 6. The VSP’s Parity Group types are RAID‐10 (2D+2D), RAID‐5 (3D+1P), RAID‐10 (4D+4D), RAID‐5 (7D+1P), and RAID‐6 (6D+2P). Internal LDEVs are carved from the VDEV(s) corresponding to the formatted space in a Parity Group, and thus the maximum size of an internal LDEV is determined by the size of the VDEV that it is carved from. The maximum size of an internal VDEV is approximately 2.99 binary TB. If the formatted space in a Parity Group is bigger than 2.99 TB, then multiple VDEVs must be created on that Parity Group. Note that there actually is no discrete 4+4 Parity Group type ‐ see Concatenated Parity Group.
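To make the VDEV arithmetic concrete, this sketch computes how many ~2.99 TiB VDEVs the formatted space of a Parity Group requires (the disk capacity used in the example is an illustrative assumption):

```python
# Sketch: data-disk count per Parity Group layout, and how many ~2.99 TiB
# VDEVs a formatted Parity Group needs when its space exceeds one VDEV.
import math

LAYOUTS = {                      # (data disks, parity/mirror disks)
    "RAID-10 (2D+2D)": (2, 2),
    "RAID-5 (3D+1P)":  (3, 1),
    "RAID-5 (7D+1P)":  (7, 1),
    "RAID-6 (6D+2P)":  (6, 2),
}
MAX_VDEV_TIB = 2.99              # approximate internal VDEV limit

def vdevs_needed(layout: str, disk_tib: float) -> int:
    data_disks, _parity = LAYOUTS[layout]
    formatted_tib = data_disks * disk_tib
    return math.ceil(formatted_tib / MAX_VDEV_TIB)

# A 7D+1P group of ~0.55 TiB (600 GB-class) disks formats ~3.85 TiB,
# which is more than 2.99 TiB, so it needs two VDEVs.
```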
  • PCIe (PCI Express) – a bus connection technology that supports 1‐lane, 4‐lane, and 8‐lane configurations. The VSP uses the 4‐lane and 8‐lane versions. The 4‐lane is capable of 1GB/s send plus 1GB/s receive in full duplex mode (i.e. concurrently driven in each direction). The 8‐lane is double that. On PCIe 1.0, each “lane” is capable of up to 2.5 Gbps simultaneously in each direction. This is 250 MB/s of data and commands in each direction (full duplex mode). Note that 10 bits are used to encode an 8‐bit byte as it is transferred over the link.
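The PCIe figures above follow directly from the 8b/10b encoding: 2.5 Gbps per lane on the wire carries 250 MB/s of payload in each direction, so a 4‐lane link delivers 1GB/s each way. A quick sketch of that arithmetic:

```python
# Sketch: PCIe 1.0 per-lane and per-link throughput, accounting for the
# 8b/10b encoding (10 wire bits per 8-bit data byte).
GBPS_PER_LANE = 2.5              # PCIe 1.0 raw line rate, each direction

def lane_mbytes_per_sec(raw_gbps: float = GBPS_PER_LANE) -> float:
    # 10 wire bits carry one data byte, so divide the line rate by 10
    return raw_gbps * 1e9 / 10 / 1e6   # MB/s, one direction

def link_gbytes_per_sec(lanes: int) -> float:
    return lane_mbytes_per_sec() * lanes / 1000   # GB/s, one direction

# 4-lane: 250 MB/s x 4 = 1 GB/s each direction, matching the GSW figure;
# 8-lane is double that.
```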
  • PDEV (Physical DEVice) – a physical internal disk.
  • RAID‐1 – used to describe what is usually called “RAID‐10”, a stripe of mirror pairs. Thus when we say “RAID‐1” in the context of a Hitachi enterprise VSP‐family array, we mean the same thing as when we say “RAID‐10” in the context of an AMS modular array.
  • Shared Memory ‐ the two or four memory boards in the USP V design that contain the Control Memory regions, accessible by each FED or BED using dedicated serial links to these boards. All four boards needed to be installed in order to enable all of the Shared Memory paths on each FED or BED board.
  • SVP (Service Processor) ‐ a blade PC running Windows XP that is installed in the control rack and used for system configuration and reporting.
  • VDEV – The logical storage container from which LDEVs are carved. There are four types of VDEVs:
    • Internal VDEV (2.99 binary TB max): maps to the formatted space within a parity group that is available to store user data. LDEVs carved from a parity group VDEV are called internal LDEVs.
    • External storage VDEV (2.99 TB max): maps to a LUN on an external (virtualized) array. LDEVs carved from external VDEVs are called external LDEVs.
    • Copy on Write (CoW) VDEV (2.99 TB max): called a “V‐VOL group”; LDEVs carved from a CoW V‐VOL group are called CoW V‐VOLs.
    • Dynamic Provisioning (DP) VDEV (60TB max): called a “V‐VOL group”; LDEVs carved from a DP V‐VOL group are called DP‐VOLs (Dynamic Provisioning Volumes).
  • VSD (Virtual Storage Director) ‐ the processor board (2‐8 per array) that contains an Intel quadcore Xeon and local RAM for its core workspace and partial copy of Control Memory.
  • V‐VOL Group – either a Dynamic Provisioning VDEV or a Copy on Write VDEV. With Dynamic Provisioning, each V‐VOL Group holds one DP‐VOL. V‐VOL Groups are used to parallel the internal architectural structures of internal storage.
