Distributed deployment

Cluster installation or distributed deployment of Hypertable for Windows requires manual configuration. The following sections describe the different setups.


Using Windows Server DFS

In a DFS setup, the master node runs a complete Hypertable instance (DFS broker, Hyperspace, Master, Range Server and optionally the Thrift broker), while all participating nodes run a range server, a DFS broker and optionally a standby Master; additional Thrift brokers can run on any node. Each DFS broker must be configured so that its root folder points to the same location: the config property DfsBroker.Local.Root must point to the replicated folder. Assuming that the master node runs a complete Hypertable instance and uses the default ports, a sample configuration for the participating nodes is given below:

# Hypertable Service
Hypertable.Service.DfsBroker=yes
Hypertable.Service.HyperspaceMaster=no
Hypertable.Service.HypertableMaster=no
Hypertable.Service.RangeServer=yes
Hypertable.Service.ThriftBroker=no

# Local Broker
DfsBroker.Local.Port=38030
DfsBroker.Local.Root=<path to replicated fs data dir>

# DFS Broker
DfsBroker.Host=localhost
DfsBroker.Port=38030

# Hyperspace
Hyperspace.Replica.Host=<master node hostname>
Hyperspace.Replica.Port=38040

# Hypertable RangeServer
Hypertable.RangeServer.Port=38060

Note that the location configured by DfsBroker.Local.Root must be accessible to the user account under which the Hypertable service runs. Depending on the DFS configuration, the Hypertable service will most likely need to run under a domain user account.
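As a sketch, the service account can be changed with the built-in sc.exe tool; the service name Hypertable and the account below are assumptions and may differ in your installation:

rem Assumption: the service is registered as "Hypertable"; verify with "sc query"
sc config Hypertable obj= "DOMAIN\hypertable-svc" password= "<password>"

Alternatively, the logon account can be changed on the Log On tab of the service in the Services management console.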

It is recommended to place the Hyperspace data directory on a replicated folder instead of running Hyperspace replication. The default Hyperspace data directory is located at %ProgramData%\Hypertable\hyperspace. The location can be changed with the following configuration parameter:

Hyperspace.Replica.Dir=<path to replicated hyperspace data dir>
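For illustration only, with a hypothetical DFS namespace \\corp\dfs, both the broker root and the Hyperspace directory could then point into the replicated tree:

# Hypothetical paths; substitute your own replicated DFS folders
DfsBroker.Local.Root=\\corp\dfs\hypertable\fs
Hyperspace.Replica.Dir=\\corp\dfs\hypertable\hyperspace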


When running on top of a Windows Server DFS, the DFS broker overhead can be eliminated by using the embedded file system, which results in better performance. The embedded file system is enabled with the following configuration parameters:

DfsBroker.Local.Embedded=yes
DfsBroker.Local.Embedded.AsyncIO=[yes|no]


The DFS broker can be completely disabled by using the following configuration parameter:

Hypertable.Service.DfsBroker=no
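Combining the two, a minimal sketch of a node that runs entirely on the embedded file system, without a separate DFS broker process, might look like this:

# Embedded file system, no DFS broker process
Hypertable.Service.DfsBroker=no
DfsBroker.Local.Embedded=yes
DfsBroker.Local.Embedded.AsyncIO=yes
DfsBroker.Local.Root=<path to replicated fs data dir>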


Not using a shared or distributed file system

Usually a single node runs a complete Hypertable instance (DFS broker, Hyperspace, Master, Range Server and optionally the Thrift broker), while all participating nodes run only range servers. Instead of sharing the file system, a single, system-wide DFS broker is shared by all participating nodes running the range servers. If possible, the configuration of all participating nodes can itself be shared through a simple network share (a sketch of this follows the sample configuration below); note that the network share must be accessible to the user account under which the Hypertable service runs. Assuming that the master node runs a complete Hypertable instance and uses the default ports, a sample configuration for the participating nodes is given below:

# Hypertable Service
Hypertable.Service.DfsBroker=no
Hypertable.Service.HyperspaceMaster=no
Hypertable.Service.HypertableMaster=no
Hypertable.Service.RangeServer=yes
Hypertable.Service.ThriftBroker=no

# DFS Broker
DfsBroker.Host=<master node hostname>
DfsBroker.Port=38030

# Hyperspace
Hyperspace.Replica.Host=<master node hostname>
Hyperspace.Replica.Port=38040

# Hypertable RangeServer
Hypertable.RangeServer.Port=38060
Hypertable.RangeServer.ProxyName=*
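As a sketch of the shared-configuration idea mentioned above, every node could load the same file from the network share. This assumes the Windows service accepts the standard Hypertable --config option; the executable name and the UNC path below are assumptions and may differ in your installation:

rem Hypothetical invocation; executable name, option support and path may differ
Hypertable.Service.exe --config=\\fileserver\hypertable\conf\hypertable.cfg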


Configuring the Range Server proxy name

Each participating range server requires a unique location ID, which must be configured using either

Hypertable.RangeServer.ProxyName=*

which creates a unique location ID based on host name and port, or an explicit, cluster-wide unique name:

Hypertable.RangeServer.ProxyName=<unique rs proxy name>
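For example, with explicit names (rs1 and rs2 are hypothetical values), each node's configuration carries its own unique entry:

# node 1
Hypertable.RangeServer.ProxyName=rs1

# node 2
Hypertable.RangeServer.ProxyName=rs2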


Collaborating with a heterogeneous cluster

Hypertable for Windows interoperates with Hypertable servers built for other platforms in a heterogeneous cluster.