Master/slave synchronization and clusters

Bitvise SSH Server can be run in a master/slave mode, which facilitates its use in a cluster or a large-scale deployment.

The scope of the master/slave feature is to automate synchronization of SSH server settings between SSH servers. It is intended for use in environments where administrators would like to apply settings changes on one server (the master), and have the changes automatically propagate to others (slaves). The master/slave feature does not interact with solutions for server monitoring or load balancing. If your deployment requires e.g. load balancing, you will need an external solution for that.

To cause some or all aspects of the SSH server's configuration to be automatically reproduced from a primary installation to one or more secondary installations, use the Instance type feature in Bitvise SSH Server Control Panel to configure the primary installation as the master. Then, configure secondary installations to run as slaves, and retrieve configuration changes from the master.

In a typical cluster installation, secondary servers should appear identical to the primary server from the users' perspective. To achieve this, a slave would reproduce all aspects of the SSH server's configuration: settings, host keys, and password cache. Which aspects of the configuration are copied from the master is configured in Instance type settings for each slave installation.

Configuring master/slave synchronization

Master/slave synchronization is configured through Instance type settings in the Bitvise SSH Server Control Panel (top right corner of the Server tab). The following steps are required:
  1. On the master server:

    1. Set instance type to Master, and configure a password which slave SSH servers will be required to present in order to synchronize settings from the master. We highly recommend configuring a long, secure, randomly generated password.

    2. Use the Manage host keys interface to export the public keys of all host keys used by the SSH server. Alternatively, write down the fingerprints of the host keys the master employs, so that you can enter them manually into the slave configuration.

  2. On slave servers:

    1. Set instance type to Slave.

    2. Import the master's host keys through the Host keys and fingerprints setting. Alternatively, use Add Fp to add a master host key fingerprint without importing the key.

    3. Enter the master's network address and port, and set the synchronization password to match the one configured on the master.

    4. In the remaining slave settings, configure which aspects of SSH server settings to synchronize from the master. Host keys can be synchronized from the master only if this is permitted in master settings.

    5. If you enable Auto-manage trusted host keys, the slave will automatically add to its "Host keys and fingerprints" setting any new host keys generated on the master, as long as the slave sees them before they are employed. If a host key is already employed when the slave first sees it, the slave will not be able to connect regardless of this setting, because it has no previous knowledge of the key.
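The long, randomly generated synchronization password recommended above can be produced with standard tools. A minimal sketch in POSIX-style shell (illustrative only; on Windows, an equivalent PowerShell one-liner or any cryptographically strong generator works just as well):

```shell
# Generate a 32-character random password from the system's CSPRNG.
password=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)
echo "$password"
```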

If a cluster node fails...

If a slave goes down in a cluster, the master and any other slaves will remain up. There will still be nodes to handle connections, and it will remain possible to administer SSH Server settings for the cluster through the master. When the failed slave is brought back online, it will re-synchronize.

If the master goes down, a slave will not automatically become a master. The master needs to be brought back online; otherwise, an administrator needs to reconfigure the nodes in the cluster so that a different server serves as master. While the master is down, it is not possible to change SSH Server settings for the cluster through the master, but slaves will continue to operate according to the last settings they received from the master. When the master is brought back online, slaves will re-synchronize.

Upgrading servers in a master/slave configuration

In versions 8.xx and later, slave instances are able to automatically upgrade to the master's version. If the master downgrades, however, slaves will not downgrade.

In versions 7.xx and earlier, automatic upgrades were not supported. In these versions, slave instances must run a version equal to or newer than the master's in order to synchronize successfully.

Unattended slave installation

If you would like to script several SSH Server slave installations so that they can be performed unattended, the first preparatory step is to use the graphical SSH Server Control Panel on an example slave installation to configure settings for a typical slave. This includes importing the master's host keys. Once the settings are configured and saved, use the same interface to export the instance type settings into a file. In the following, we assume the file is named BvSshServerSlave.wit.

On the slaves you want to script, the next step is to perform a normal unattended SSH Server installation, which can be done independently of instance type. This is described on the page Installing Bitvise SSH Server.

Once the SSH Server is installed, you can use the utility BssCfg, which can be found in the SSH Server installation directory, to import slave settings from the command line, as follows:

BssCfg instanceType importBin C:\Path\BvSshServerSlave.wit

This command needs to be run in an elevated, administrative Command Prompt or PowerShell session.

Once this completes, the SSH Server is configured as a slave, and can be started.
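When scripting many slaves, the import step can be wrapped with basic error checking. A hypothetical sketch in POSIX-style shell (on Windows this would typically be a batch or PowerShell script instead; the BssCfg path shown is illustrative):

```shell
# Hypothetical wrapper around the documented import command.
# Verifies the exported settings file exists before invoking BssCfg.
import_slave_settings() {
  local bsscfg="$1" settings="$2"
  if [ ! -f "$settings" ]; then
    echo "settings file not found: $settings" >&2
    return 1
  fi
  "$bsscfg" instanceType importBin "$settings"
}

# Example invocation (assumed install path and settings file location):
if import_slave_settings "C:/Program Files/Bitvise SSH Server/BssCfg.exe" \
                         "C:/Path/BvSshServerSlave.wit"; then
  echo "slave settings imported"
else
  echo "import failed" >&2
fi
```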

Connections through a front-end or load balancer

In normal use, the SSH Server receives connections from the internet directly. In a cluster, connections may be forwarded to the SSH Server by a front-end or load balancer.

The SSH Server is best used either with a transparent front-end (which preserves IP addresses from clients) or with a non-transparent front-end that supports the PROXY protocol (to convey the client's IP at the start of the connection). If the PROXY protocol is used, it must be enabled in the SSH Server's Advanced settings, under Server > Bindings and UPnP, in the entry for the individual binding.
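For reference, a front-end supporting PROXY protocol version 1 conveys the original client address by prepending a single text line to the forwarded connection. A sketch of that header (addresses and ports are illustrative):

```shell
# PROXY protocol v1 header, as defined by the HAProxy PROXY protocol spec:
# PROXY TCP4 <client ip> <server ip> <client port> <server port>\r\n
client_ip=203.0.113.7  client_port=50222
server_ip=192.0.2.10   server_port=22
printf 'PROXY TCP4 %s %s %s %s\r\n' \
  "$client_ip" "$server_ip" "$client_port" "$server_port"
```

The SSH Server reads this line at the start of the connection and logs the conveyed client IP instead of the front end's.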

The SSH Server can also be used with a non-transparent front-end that does not support the PROXY protocol, but then all incoming connections will appear to arrive from the front end's IP instead of the actual clients'. This reduces the ability to audit incoming connections in log files, and may require automatic IP blocking in the SSH Server to be disabled.

Logging of connections from health monitors

A health monitor may be set up to connect to the SSH Server repeatedly to check if it's available. By default, this causes the SSH Server to log many trivial log entries corresponding to the health monitor's connections.

To avoid these trivial log entries, the IP addresses of health monitors may be configured in the SSH Server's Advanced settings, under Logging > Monitor IP whitelist. The connections will still be logged if they involve any SSH or FTPS protocol activity, but trivial connects and disconnects will be omitted.