Hitachi Virtual Storage Platform (VSP) Hardware Components
The Hitachi VSP has a modular architecture. A Hitachi Virtual Storage Platform (VSP) can have a maximum of 6 racks, with 2 DKC and 16 DKU units.
The first 3 racks form one module and the other 3 form the second; each module is handled by its own DKC. The two modules are connected by the Grid Switches.
What is DKC (Disk Controller)?
The DKC is the brain of the VSP: all operations of the VSP are handled by the DKC (disk controller). It provides system logic, control, memory, and monitoring, as well as the interfaces and connections to the disk drives and the host servers.
The main components of the DKC are:
- Front-end directors (FED)
- Back-end directors (BED)
- Virtual storage directors (VSD)
- Cache memory adapters
- Grid Switch
- Service processor (SVP)
- Power supplies, cooling fans, etc.
On the front side of the DKC are the VSDs and cache adapters; on the back side are the FEDs, BEDs, service processor, and Grid Switches. The components in the DKC are arranged in a two-cluster structure.
Let's discuss each component in detail.
Front-end directors (FED)
Front-end directors/channel adapters (CHAs) provide the connections to hosts/servers and handle data transfer between the hosts and the cache adapters. The VSP can have a maximum of 8 front-end directors (channel adapters) per chassis if 4 back-end directors are installed, or 12 front-end directors if no back-end directors are installed. It supports both open-systems and mainframe connectivity.
Both FC and FICON options are available.
Each FED has 16 front-end ports (FC or FICON) with a speed of 8 Gb/s.
A VSP front-end director (CHA) accepts and responds to host requests by directing each host I/O request to the VSD that owns the respective LDEV. The virtual storage director processes the commands, manages the metadata in Control Memory, and creates jobs for the Data Accelerator Processors (DAPs) in the FEDs and BEDs. The DAPs then process the data. The VSD that owns an LDEV tells the FED where in cache to read or write the data.
The Hitachi VSP supports a maximum of 192 front-end ports.
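To make the port arithmetic concrete, here is a quick sketch using only the figures quoted above (the function name and chassis counts are illustrative):

```python
# Front-end port arithmetic for a VSP controller chassis,
# using the figures quoted above.
PORTS_PER_FED = 16          # FC or FICON ports per front-end director


def fed_ports(num_feds: int) -> int:
    """Total host ports provided by a given number of FED boards."""
    return num_feds * PORTS_PER_FED


# With 12 FEDs installed (no BEDs), a chassis offers 12 * 16 = 192 ports,
# matching the stated system maximum of 192 front-end ports.
print(fed_ports(12))  # 192
# With 4 BEDs installed, only 8 FED slots remain: 8 * 16 = 128 ports.
print(fed_ports(8))   # 128
```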
Back-end directors (BED)
The back-end directors/disk adapters provide the physical connections to the disk drives and SSDs housed in the DKUs.
The back-end director (BED) boards execute all I/O jobs received from the processor boards and control all reading from and writing to the disks. The BEDs are responsible for transferring data between cache and the disks.
The VSP supports up to 4 back-end directors, each containing 16 SAS ports with a speed of 6 Gb/s.
The Hitachi VSP supports a maximum of 32 SAS (back-end) ports.
BED functions include the following:
- Execute jobs received from a VSD board
- Use DMA to move data into or out of the data cache
- Create RAID-5 and RAID-6 parity with an embedded XOR processor
- Encrypt data on disk (if desired)
- Manage all reads and writes to the attached disks
The BED contains the DRR (Data Recovery and Reconstruction) circuit, a parity generator. It handles RAID parity creation and rebuilds data in the event of a hard drive failure.
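The DRR's parity role can be pictured with XOR, the operation RAID-5 parity is built on: the parity strip is the XOR of the data strips, so any single lost strip can be rebuilt by XOR-ing the survivors with the parity. A minimal sketch (the byte values are arbitrary examples, and this toy code says nothing about the actual DRR hardware):

```python
from functools import reduce


def xor_blocks(blocks):
    """XOR equal-length byte strings together, as a RAID-5 parity engine does."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))


data = [b"\x0f\xf0", b"\x33\xcc", b"\x55\xaa"]  # three data strips
parity = xor_blocks(data)                        # parity strip

# Simulate losing strip 1 and rebuilding it from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

RAID-6 adds a second, independently computed parity strip so that any two simultaneous strip failures can be recovered.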
Virtual storage directors (VSD)
The Virtual Storage Director (VSD) is the I/O processing board. Either two or four VSDs are installed per DKC. Adding VSD boards increases the internal I/O power of the array, independently of the number of host ports or disk paths. Each board contains a quad-core Intel Xeon CPU and 4 GB of local RAM. These boards execute all of the internal I/O jobs dispatched from the FEDs and BEDs. It is important to note that user data never passes into or out of the VSD board: the VSDs only execute I/O requests ("jobs") and manage the Control Memory tables.
The VSD communicates over the grid with the Data Accelerator processors on each FED or BED board; it does not directly communicate with hosts or disks.
Each VSD board is allocated certain LDEVs to manage; the assigned VSD handles all I/O for its LDEVs. When LDEVs are created, VSDs are chosen in a round-robin manner.
If a VSD board fails, all of its assigned LDEVs are temporarily reassigned to another VSD board. The assisting VSD reads all of the necessary Control Memory information from the master copy maintained in the cache system. Once the failed VSD board is replaced, the reverse occurs and the original LDEV assignments revert to the new board.
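The round-robin assignment and failover behaviour described above can be sketched as a toy model (the class, method names, and data structures are illustrative inventions, not Hitachi's internals):

```python
from itertools import cycle


class VsdOwnership:
    """Toy model of LDEV-to-VSD assignment: round-robin at creation,
    temporary reassignment on failure, reversion after replacement."""

    def __init__(self, vsds):
        self.vsds = list(vsds)
        self._rr = cycle(self.vsds)  # round-robin chooser
        self.home = {}               # ldev -> originally assigned VSD
        self.owner = {}              # ldev -> currently managing VSD

    def create_ldev(self, ldev):
        vsd = next(self._rr)         # VSDs are chosen round-robin
        self.home[ldev] = vsd
        self.owner[ldev] = vsd

    def fail(self, bad_vsd):
        # Temporarily reassign every LDEV on the failed board to a survivor.
        survivors = [v for v in self.vsds if v != bad_vsd]
        for ldev, vsd in self.owner.items():
            if vsd == bad_vsd:
                self.owner[ldev] = survivors[0]

    def replace(self, new_vsd):
        # After replacement, ownership reverts to the original assignment.
        for ldev, vsd in self.home.items():
            if vsd == new_vsd:
                self.owner[ldev] = new_vsd


grid = VsdOwnership(["VSD-0", "VSD-1"])
for i in range(4):
    grid.create_ldev(f"LDEV-{i}")
grid.fail("VSD-1")     # VSD-1's LDEVs move to VSD-0
grid.replace("VSD-1")  # ...and move back once the board is replaced
```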
The VSD's responsibilities are:
- Manages all host requests from the FEDs
- Manages requests from FEDs used in virtualization mode (a FED is in virtualization mode when it is connected to external storage)
- Manages the movement of data between cache and disks (staging and de-staging)
- Manages the remote replication process
Cache memory adapters/Data Cache Adapters
The Data Cache Adapter (DCA) boards are the memory boards that hold all cached user data and the master copy of Control Memory (metadata). Up to 8 DCAs are installed per chassis, with 8 GB to 32 GB of cache per board (32 GB to 256 GB per chassis). The first two DCA boards in the base chassis (but not in the expansion chassis) have a region of up to 48 GB (24 GB per board) used for the master copy of Control Memory. Each DCA board also has a 500 MB region reserved for a Cache Directory: a mapping table that manages pointers from LDEVs to the cache slots allocated to those LDEVs on that board.
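The Cache Directory's job of mapping LDEV blocks to cache slots can be pictured as a simple lookup table. A hypothetical sketch, not Hitachi's actual on-board structure:

```python
class CacheDirectory:
    """Toy model of a DCA cache directory: (ldev, block) -> cache slot id."""

    def __init__(self):
        self.slots = {}

    def lookup(self, ldev, block):
        # Cache hit: the directory already holds a pointer for this block.
        return self.slots.get((ldev, block))

    def allocate(self, ldev, block, slot_id):
        # Cache miss: stage the block and record a pointer to its slot.
        self.slots[(ldev, block)] = slot_id
        return slot_id


directory = CacheDirectory()
assert directory.lookup("LDEV-7", 42) is None   # miss: block not yet staged
directory.allocate("LDEV-7", 42, slot_id=9001)  # stage into a cache slot
assert directory.lookup("LDEV-7", 42) == 9001   # subsequent access hits
```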
Each DCA board also has one or two on-board SSDs (31.5 GB each) used to back up the entire memory space in the event of an array shutdown. If the full 32 GB of RAM is installed on a DCA, it must have two 31.5 GB SSDs installed. On-board batteries power each DCA board long enough to complete several such shutdown operations back-to-back in the event of repeated power failures, even before the batteries have had a chance to recharge.
The VSP supports a maximum of 1 TB of cache.
The Control Memory region contains the following types of data:
- array configuration details
- details of the Parity Groups, VDEVs, and LDEVs
- tables used to manage external I/O and HUR copy operations
- Dynamic Provisioning and Dynamic Tiering control information
Grid Switches (GSW)
The Grid Switches (GSWs) provide the high-speed cross-connections among the FED, BED, cache adapter, and VSD boards. There can be two or four GSWs installed per DKC. Each GSW has 24 high-speed ports, where each port supports a full-duplex rate (send plus receive) of 2048 MB/s.
A controller (DKC) can have a maximum of 4 grid switches with 96 ports in total, providing direct, highly redundant, high-performance physical connections to every FED, BED, cache adapter, and VSD board; together these form the HiStar-E network.
The grid switches (also called express switches) are the core of the HiStar-E network architecture.
For every I/O request, user data blocks travel directly between the FED or BED boards and the cache memory adapter boards, while all FED or BED metadata and job-control traffic goes to the virtual storage director boards.
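This data/control split across the grid can be summarised as a tiny routing function (the frame-type and destination names are illustrative labels, not Hitachi protocol terms):

```python
def route(frame_type: str) -> str:
    """Toy router for HiStar-E grid traffic, per the split described above."""
    if frame_type == "user_data":
        # Data blocks go straight to the cache memory adapter boards.
        return "cache_memory_adapter"
    if frame_type in ("metadata", "job_control"):
        # Control traffic goes to the virtual storage director boards.
        return "virtual_storage_director"
    raise ValueError(f"unknown frame type: {frame_type}")


print(route("user_data"))    # cache_memory_adapter
print(route("job_control"))  # virtual_storage_director
```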
Service processor (SVP)
The Service Processor (SVP) is a management PC, implemented as a blade server, that applies array configuration settings, monitors the array's operational status, and runs the applications used for hardware and software maintenance on the VSP.
The functions of the service processor are:
- Provides the human interface to the storage system
- Performs health checks on the storage system
- Collects reports
- Monitors performance
The SVP allows engineers to set and modify the system configuration information and to check the system status. The SVP can also be configured to report system status and errors to the service center, enabling remote maintenance of the storage system.
Power supplies, cooling fans
Power supplies provide power to the controller chassis in a redundant configuration to prevent system failure. Up to 4 power supply modules can be installed as needed to power additional components.
Each fan unit contains 2 fans to ensure adequate cooling if one of the fans fails.
What is DKU (Disk unit)?
In simple words, the DKU (disk unit) contains the disks. The disks may be FC drives, SSDs, or flash drives.
The other components in a DKU are:
- SAS switches – for communicating with the disks
- Cooling fans – to remove the heat generated in the DKU
- Power supplies – to provide power to the DKU
All components are configured in redundant pairs to prevent system failure, and all of them can be added, removed, or replaced while the storage system is in operation.
The VSP supports two types of DKUs (disk units):
- A DKU that contains only LFF (3.5") drives. A maximum of 80 drives can be installed in this DKU.
- A DKU that contains only SFF (2.5") drives. A maximum of 128 drives can be installed in this DKU.
A DKU can hold either LFF or SFF drives, but not a mix. A single VSP, however, can contain both DKU types. A single rack can hold 3 DKUs.
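Combining these per-DKU figures with the system maximum of 16 DKUs quoted at the top of the article gives the drive-count arithmetic (a sketch using only numbers from this article; actual supported maximums depend on the installed configuration):

```python
# Drive-capacity arithmetic from the DKU figures above.
SFF_PER_DKU = 128    # 2.5" drives per small-form-factor DKU
LFF_PER_DKU = 80     # 3.5" drives per large-form-factor DKU
DKUS_PER_RACK = 3
MAX_DKUS = 16        # system maximum quoted at the top of the article

print(DKUS_PER_RACK * SFF_PER_DKU)  # 384 SFF drives in a fully loaded rack
print(MAX_DKUS * SFF_PER_DKU)       # 2048 drives if every DKU is SFF
print(MAX_DKUS * LFF_PER_DKU)       # 1280 drives if every DKU is LFF
```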