VHACS

The Linux SCSI Target Wiki

VHACS
Storage Cloud Controller
Original author(s): Jerome Martin
Developer(s): Datera, Inc.
Initial release: June 26, 2008
Preview release: 0.8.15 / August 25, 2008
Development status: Deprecated
Written in: Python
Operating system: Linux
Type: Cloud controller
License: GNU GPLv2
Website: datera.io
VHACS-VM x86_64 across two physical nodes on two open platforms (Linux x86_64 and Linux i386) with eight active 5 GB and 2.5 GB storage clouds.
Four physical-node "bare-metal" x86_64 VHACS with four 5 GB clouds running with server/client enabled.

VHACS (Virtualization, High Availability and Cluster Storage, pronounced vee-hacks) is a highly available cloud storage implementation running on Linux v2.6. VHACS combines at least eight long-term OSS/Linux-based projects with a CLI management interface for controlling VHACS nodes, clouds, and vservers within the VHACS cluster.

VHACS implements an M+N (Active+Spares) high availability model.

The easiest way to try out VHACS and get an idea of how the admin-level interface works is to use one of the available VHACS-VM Alpha images. Two VM images are enough to bring up a test VHACS cloud initially.


Technologies

VHACS roles are assigned to nodes in the VHACS cluster. Each node can hold either role, both, or none (see the vhacs node command reference below). The roles are defined as:

vhost: the node can mount remote storage and run resources, such as virtual machines, off it.
storage: the node can host the physical disk partitions that back user-created storage.

The underlying technologies are as follows.

Operating system

Prototype platform: Debian Etch v4.0 on x86_64 with v2.6.22.16 kdb or 2.6.22-4-vserver kernels.

Cluster

OpenAIS with Pacemaker for cluster resource management (earlier releases used Heartbeat) and DRBD for block-level replication between nodes.

Server

Linux-IO Target (Target/IBLOCK), exporting the replicated DRBD devices over iSCSI.

Client

An iSCSI initiator on the vhost nodes for mounting the exported storage.

Fabric support

VHACS uses iSCSI on the server side of the cloud, so any client with an iSCSI initiator can take advantage of the VHACS server-side cloud. As work continues on the Linux-IO Target, other fabrics, such as Fibre Channel and FCoE, will become available for the VHACS cloud as well.
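
Underneath the cluster tooling, the client side is plain iSCSI. As a minimal sketch, assuming the Open-iSCSI initiator (iscsiadm) on the client, with an example portal address and a hypothetical IQN not taken from a real VHACS node, discovery and login look roughly like:

# Discover the targets exported by a VHACS storage node (portal address is an example):
iscsiadm -m discovery -t sendtargets -p 192.168.0.15
# Log into one of the discovered targets (the IQN below is hypothetical):
iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.lio-drbd-ruler:liocloud0 -p 192.168.0.15 --login

In a running VHACS cluster the vhost role takes care of mounting storage, so the commands above only illustrate the underlying fabric.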

Test and validation

The current test bed is a two-node cluster configuration on multi-socket, single-core x86_64 hardware, running 32 active VHACS clouds (both client and server) of 1 GB and 100 MB sizes. The smaller clouds are used for multi-cloud operations, e.g. vhacs storage -S yourVHACScloud01-4 puts those four clouds into STANDBY.
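
For reference, a minimal sketch of such multi-cloud operations, using only the vhacs storage options documented in the CLI section below and hypothetical cloud names:

# Put four clouds into STANDBY (the range expands per the STORAGES syntax below):
vhacs storage -S yourVHACScloud01-4
# Bring them back to ACTIVE and take a one-shot look at their state:
vhacs storage -A yourVHACScloud01-4
vhacs storage -M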

Limitations

In order to scale to the number of cluster RAs required to monitor 32 cloud clusters, VHACS v0.6.0 was converted from Heartbeat to OpenAIS. As of June 26, 2008, almost all major functionality is up and running with OpenAIS+Pacemaker.
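
Since resource management now sits on OpenAIS+Pacemaker, cluster state can be inspected with the standard Pacemaker tooling as well as with the VHACS CLI; a small sketch, assuming crm_mon is installed alongside Pacemaker:

# One-shot snapshot of cluster membership and resource state:
crm_mon -1
# The equivalent VHACS-level view of nodes and clouds:
vhacs cluster -M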

DRBD's struct block_device is also exported directly via Target/IBLOCK, which means each DRBD device maps 1:1 to an iSCSI TargetName+TargetPortalGroupTag tuple. Creating volumes on top of the DRBD block device and exporting those from Target/IBLOCK instead is one option for increasing cloud density and reducing the total number of required kernel threads.
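
A rough sketch of that second option, assuming LVM is used for the volumes and /dev/drbd0 stands in for one replicated DRBD device (this is not what the current release does):

# Turn one replicated DRBD device into an LVM volume group:
pvcreate /dev/drbd0
vgcreate vhacs_vg0 /dev/drbd0
# Carve several logical volumes out of it; each could then be exported via Target/IBLOCK:
lvcreate -L 1G -n cloud_lv0 vhacs_vg0
lvcreate -L 1G -n cloud_lv1 vhacs_vg0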

There are ~256 kernel threads for a 32-cloud cluster on a fully loaded node running both roles (see below), i.e. about eight kernel threads per cloud. There are also 128 cluster RAs, or four per cloud, on the same multi-role, fully loaded VHACS cluster node.

NIC allocation

The NICs in the VHACS v0.8.15 release are allocated as follows:

lio-drbd-ruler:~# cat /etc/vhacs.conf | grep IFNAME
# STORAGE_IFNAME network interface to use for accessing the storage network
STORAGE_IFNAME = "eth2"
# HEARTBEAT_IFNAME network interface to be used for cluster communications
HEARTBEAT_IFNAME = "eth2"

The OpenAIS Totem multicast address information is also defined at the top of /etc/ais/openais.conf:

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.0.0
                mcastaddr: 224.0.0.1
                mcastport: 5405
        }
}
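
Note that bindnetaddr must match the network of the interface carrying Totem traffic. If HEARTBEAT_IFNAME sits on a separate network, as in the two-bridge example further down, the interface section is adjusted accordingly; a sketch, assuming the 10.10.0.x static range used below:

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                # bind to the network of HEARTBEAT_IFNAME (assumed 10.10.0.x here)
                bindnetaddr: 10.10.0.0
                mcastaddr: 224.0.0.1
                mcastport: 5405
        }
}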

VHACS v1.0 will define an additional IFNAME, REPLICATION_IFNAME, for DRBD replication traffic between VHACS nodes.

Different STORAGE_IFNAME and HEARTBEAT_IFNAME interfaces are supported in the current version of VHACS. In a basic example, each node in the VHACS cluster has two NICs, each on a different local subnet or network range. In the current release the STORAGE_IFNAME and HEARTBEAT_IFNAME values must also be the same on both machines, e.g. eth0 for STORAGE_IFNAME and eth1 for HEARTBEAT_IFNAME on both machines (see the configuration sketch after the address example below).

The current setup using two network bridges looks something like this:

vhacs-node0: 192.168.0.*/eth0 via DHCP
vhacs-node1: 192.168.0.*/eth0 via DHCP
vhacs-node0: 10.10.0.15/eth1 via static IP
vhacs-node1: 10.10.0.20/eth1 via static IP

See also VHACS two-port example.
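
A minimal /etc/vhacs.conf sketch for the two-NIC layout above, assuming it follows the same format as the v0.8.15 excerpt shown earlier and that eth0/eth1 match the addresses just listed (the last comment only anticipates the v1.0 feature mentioned above):

# /etc/vhacs.conf (identical on both nodes)
# STORAGE_IFNAME network interface to use for accessing the storage network
STORAGE_IFNAME = "eth0"
# HEARTBEAT_IFNAME network interface to be used for cluster communications
HEARTBEAT_IFNAME = "eth1"
# REPLICATION_IFNAME (planned for v1.0) would carry DRBD replication traffic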

Prototype

The current running prototype looks as follows:

lio-drbd-ruler:~# vhacs cluster -M
__________________________________________________________________________________________________________________
|                      |                      |                      |                      |                      |
| NODE                 | HA STATUS            | FREE STORAGE         | STORAGE ROLE         | VHOST ROLE           |
|______________________|______________________|______________________|______________________|______________________|
|                      |                      |                      |                      |                      |
| (A)lio-drbd-viking   | online               | 44.36G/68.36G        | 0 exported           | 2 mounted            |
| (A)lio-drbd-sabbath  | online               | 68.36G/68.36G        | N/A                  | N/A                  |
| (A)lio-drbd-ruler    | online               | 50.53G/74.53G        | 16 exported          | 14 mounted           |
|______________________|______________________|______________________|______________________|______________________|
_________________________________________________________________________________________________________________
|                  |                  |                  |                  |                  |                  |
| STORAGE          | DRBD:0           | DRBD:1           | DRBD TARGET      | ISCSI MOUNT      | FREE SPACE       |
|__________________|__________________|__________________|__________________|__________________|__________________|
|                  |                  |                  |                  |                  |                  |
| (A)liocloud0     |(S)lio-drbd-viking| (P)lio-drbd-ruler| (S)lio-drbd-ruler|(S)lio-drbd-viking| 940M/1008M (98%) |
| (A)liocloud1     |(S)lio-drbd-viking| (P)lio-drbd-ruler| (S)lio-drbd-ruler|(S)lio-drbd-viking| 940M/1008M (98%) |
| (A)liocloud7     |(S)lio-drbd-viking| (P)lio-drbd-ruler| (S)lio-drbd-ruler| (S)lio-drbd-ruler| 940M/1008M (98%) |
| (A)liocloud8     |(S)lio-drbd-viking| (P)lio-drbd-ruler| (S)lio-drbd-ruler| (S)lio-drbd-ruler| 940M/1008M (98%) |
| (A)morecloud0    |(S)lio-drbd-viking| (P)lio-drbd-ruler| (S)lio-drbd-ruler| (S)lio-drbd-ruler| 940M/1008M (98%) |
| (A)morecloud1    |(S)lio-drbd-viking| (P)lio-drbd-ruler| (S)lio-drbd-ruler| (S)lio-drbd-ruler| 940M/1008M (98%) |
| (A)morecloud2    | (P)lio-drbd-ruler|(S)lio-drbd-viking| (S)lio-drbd-ruler| (S)lio-drbd-ruler| 940M/1008M (98%) |
| (A)morecloud3    |(S)lio-drbd-viking| (P)lio-drbd-ruler| (S)lio-drbd-ruler| (S)lio-drbd-ruler| 940M/1008M (98%) |
| (A)westcloud0    |(S)lio-drbd-viking| (P)lio-drbd-ruler| (S)lio-drbd-ruler| (S)lio-drbd-ruler| 1.9G/2.0G (98%)  |
| (A)westcloud1    | (P)lio-drbd-ruler|(S)lio-drbd-viking| (S)lio-drbd-ruler| (S)lio-drbd-ruler| 1.9G/2.0G (98%)  |
| (A)westcloud2    |(S)lio-drbd-viking| (P)lio-drbd-ruler| (S)lio-drbd-ruler| (S)lio-drbd-ruler| 1.9G/2.0G (98%)  |
| (A)westcloud3    | (P)lio-drbd-ruler|(S)lio-drbd-viking| (S)lio-drbd-ruler| (S)lio-drbd-ruler| 1.9G/2.0G (98%)  |
| (A)eastcloud0    |(S)lio-drbd-viking| (P)lio-drbd-ruler| (S)lio-drbd-ruler| (S)lio-drbd-ruler| 1.9G/2.0G (98%)  |
| (A)eastcloud1    | (P)lio-drbd-ruler|(S)lio-drbd-viking| (S)lio-drbd-ruler| (S)lio-drbd-ruler| 1.9G/2.0G (98%)  |
| (A)eastcloud2    |(S)lio-drbd-viking| (P)lio-drbd-ruler| (S)lio-drbd-ruler| (S)lio-drbd-ruler| 1.9G/2.0G (98%)  |
| (A)eastcloud3    |(S)lio-drbd-viking| (P)lio-drbd-ruler| (S)lio-drbd-ruler| (S)lio-drbd-ruler| 1.9G/2.0G (98%)  |
|__________________|__________________|__________________|__________________|__________________|__________________|

Command line interface

The VHACS configuration command line interface (CLI) has four basic subcommands (cluster, node, storage, and vserver):

vhacs:

halfdome:~# vhacs 
usage: /usr/sbin/vhacs cluster|node|storage|vserver [options]
cluster: cluster-level admin functions in a vhacs cluster.
         Run /usr/sbin/vhacs cluster to get specific usage information.
node:    node-level admin functions in a vhacs cluster.
         Run /usr/sbin/vhacs node to get specific usage information.
storage: storage-level admin functions in a vhacs cluster.
         Run /usr/sbin/vhacs storage to get specific usage information.
vserver: vserver-level admin functions in a vhacs cluster.
         Run /usr/sbin/vhacs vserver to get specific usage information.

vhacs cluster:

halfdome:~# vhacs cluster
usage: 
  For full description, try :
  vhacs cluster -h|--help
  With all options, use -V LEVEL to increase verbosity
  vhacs cluster -c|--check
  vhacs cluster -I|--init [NODES]
  vhacs cluster -m|--monitor
  vhacs cluster -M|--monitor1
  vhacs cluster [NODES] -E|--exec COMMAND|-
  vhacs cluster [NODES] -P|--exec COMMAND|-
syntax for NODES argument:
  foobar        just the node named foobar
  foobar1,foobar2,foobar3
                run the subcommand recursively for all listed nodes
  foobar1-3     equivalent to foobar1,foobar2,foobar3
  foobar1-3,foobar5
                equivalent to foobar1,foobar2,foobar3,foobar5
  ALL           special node name that converts to the list of 
                all nodes in the heartbeat cluster the local node is 
                in, if any
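
As an illustration of the NODES syntax and the -E option, with hypothetical node names:

# Run a command on three nodes (foobar1-3 expands to foobar1,foobar2,foobar3):
vhacs cluster foobar1-3 -E "uname -r"
# Run it on every node in the local cluster:
vhacs cluster ALL -E "uptime"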

vhacs node:

halfdome:~# vhacs node
usage: 
  For full description, try :
  vhacs node -h|--help
  vhacs node -s|--setrole ROLES NODES
  vhacs node -d|--delrole ROLES NODES
  vhacs node -l|--list
  vhacs node -i|--info NODES
  vhacs node -S|--standby NODES
  vhacs node -A|--active NODES
syntax for NODES argument:
  foobar        just the node named foobar
  foobar1,foobar2,foobar3
                run the subcommand recursively for all listed nodes
  foobar1-3     equivalent to foobar1,foobar2,foobar3
  foobar1-3,foobar5
                equivalent to foobar1,foobar2,foobar3,foobar5
  ALL           special name that converts to the list of all nodes
syntax for ROLES argument:
  vhost        the node can mount remote storage and will run resources
               like virtual machines off it
  storage      the node can host physical disk partitions part
               of user-created storage
  vhost,storage
               the node can do both
  ALL          equivalent to vhost,storage
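
A short sketch of assigning and removing roles, again with hypothetical node names:

# Give two nodes both roles:
vhacs node -s vhost,storage foobar1,foobar2
# Make a third node a pure storage node:
vhacs node -s storage foobar3
# Drop the vhost role from the first node again and list the result:
vhacs node -d vhost foobar1
vhacs node -l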

vhacs storage:

halfdome:~# vhacs storage
usage: 
  For full description, try :
  vhacs storage -h|--help
  With all options, use -V LEVEL to increase verbosity
  vhacs storage -c|--create STORAGES -s|--size SIZE [-n|--nodes DRBD_NODES]
  vhacs storage -D|--destroy STORAGES
  vhacs storage -u|--unfail STORAGES
  vhacs storage -r|--restart STORAGES
  vhacs storage -l|--list
  vhacs storage -L|--listbig
  vhacs storage -m|--monitor
  vhacs storage -M|--monitor1
  vhacs storage -i|--info STORAGES
  vhacs storage -S|--standby STORAGES
  vhacs storage -A|--active STORAGES
  vhacs storage -p|--prefers NODES STORAGES
syntax for STORAGES argument:
  foobar        just the storage named foobar
  foobar1,foobar2,foobar3
                run the subcommand recursively for all listed storages
  foobar1-3     equivalent to foobar1,foobar2,foobar3
  foobar1-3,foobar5
                equivalent to foobar1,foobar2,foobar3,foobar5
  ALL           special name that converts to the list of all storages
 
syntax for NODES argument:
  nodefoobar    migrate storage mount on node nodefoobar if possible and
                assign scores so that this is the preferred node in the 
                future for mounting storages.
  node1-2,node4 try to migrate storage on node1, then node2, then node4
                and assign scores so that the nodes will be preferred
                in that order for future migrations
syntax for DRBD_NODES argument:
  Same as above, but this is used when creating a storage to set your
  preferred nodes to be used for hosting the disk backend.
  The ALL keyword here has no special meaning.
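
Putting the storage options together, a hypothetical end-to-end sketch (the cloud and node names as well as the size argument are examples, not taken from a real setup):

# Create four 1 GB clouds, preferring two nodes for the DRBD disk backend:
vhacs storage -c mycloud1-4 -s 1G -n foobar1,foobar2
# Take a one-shot look at their state, then pin the mounts to a preferred node:
vhacs storage -M
vhacs storage -p foobar1 mycloud1-4
# Retire the clouds when they are no longer needed:
vhacs storage -D mycloud1-4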

vhacs vserver:

halfdome:~# vhacs vserver
Not implemented yet.

