InterWorx Cluster Installation Requirements

It is important to note that the Cluster Panel is essentially the InterWorx Control Panel in “cluster mode” - i.e. with clustering set up. Therefore, we encourage you to review the OS and VPS requirements found on this page in order to understand what is expected. This chapter will expand on those requirements. If the concept of clustering is foreign to you, or you don’t really understand what clustering means in the context of InterWorx, we encourage you to read our clustering guides, found in the documentation section of our website. Our clustering guides also cover the configurations we support in more depth.

Server Requirements

Clustering essentially involves spreading the workload of specific services across multiple servers to reduce the amount of work any given set of hardware is forced to process. It does this by load balancing incoming connections destined for specific services across the machines in your cluster. Therefore, you need a minimum of two servers in order to benefit from the clustering system.

The Cluster Manager

One of the servers in your cluster will be designated the cluster manager. This will be the point of entry for incoming connections to your cluster and where the load balancer will reside. The cluster manager is also responsible for keeping its child cluster nodes synchronized so that when, say, you add a new SiteWorx account, that account data is propagated to the nodes.

Minimum Responsibilities of a Cluster Manager

The load balancer will handle incoming connections and route them to the child nodes for processing. This is typically not a processor- or memory-intensive task, but it is important to note that even if you move all the other responsibilities of a typical webserver to other nodes, the cluster manager will still, at minimum, be acting as a load-balancing router.
In addition, the InterWorx MySQL database (note: this is not the same MySQL database that your SiteWorx accounts utilize) is used to maintain synchronization across the cluster. Email, FTP, and SiteWorx account metadata are stored in the InterWorx panel database residing on the cluster manager. This is also a responsibility that cannot be easily moved to another server. Lastly, the database stores the state of the command queue - i.e. the queue of actions the nodes need to perform in order to stay synchronized with the cluster manager.
Thus, if load balancing and running the InterWorx database are the only things your cluster manager is doing, the CPU and memory requirements are probably going to be quite reasonable.

Typical Responsibilities of a Cluster Manager

In practice, however, cluster managers typically do a lot more than load balancing and running the InterWorx database.
  • They are typically also acting as a node and handling part of the incoming connection workload. They handle HTTP/S requests much like the other nodes do.
  • The storage of SiteWorx user data - i.e. webpages and email - defaults to the cluster manager and is shared to the nodes via NFS. This means that the disks backing the SiteWorx user data are typically extremely busy, as they are responsible for handling read/write requests from the cluster manager itself and also from the nodes via NFS.
  • The SiteWorx databases by default use the localhost MySQL server, whose files reside in /var/lib/mysql. The CPU usage of a MySQL server plus the disk access can also be taxing on the server’s hardware.
  • The control panel and webmail are typically accessed mostly via the cluster manager. Fortunately, the control panel does not require much CPU/memory to run and is one of the more modestly used services on a given cluster. On the other hand, many users prefer webmail over an IMAP or POP3 client, and Roundcube, arguably the nicest of the three webmail clients InterWorx ships with, is known to be resource hungry when many clients are using it concurrently.

Spreading computation responsibility to other servers

There are many ways to keep the cluster manager from being crushed by high demand.
  • Set the load balancer to route all connections away from the cluster manager. This means sending all HTTP/S, FTP, SMTP, IMAP, and POP3 connections to the nodes so they are handled there. That way the cluster manager’s webserver, FTP server, and mail server are not using CPU/memory handling those requests. Using the load balancer is covered more extensively in our clustering documentation, found on our documentation site.
  • InterWorx Cluster Panel supports moving the /home partition where SiteWorx user data is stored (actually, it’s typically in /chroot/home/ and /home is a symlink to /chroot/home) to a storage appliance or file server elsewhere, which then shares it with the cluster via NFS. That way your user data can be stored on a high-performance RAID device with, say, RAID 5 for data protection, and the cluster manager doesn’t have to use local disk to service disk requests for the SiteWorx user data. A rough sketch of such a mount follows this list.
  • For MySQL, InterWorx supports setting up standalone MySQL servers and giving InterWorx the root MySQL password in order to create users and databases for SiteWorx accounts on them. These are called “remote MySQL servers” in InterWorx-speak. Essentially, InterWorx will have a SiteWorx account use a remote MySQL server instead of localhost, offloading that site’s MySQL workload to a server whose sole responsibility is servicing MySQL. In addition, that server’s MySQL data files can be backed by a fast, redundant RAID as well.
  • It’s a bit difficult to mitigate use of the control panel via the cluster manager. If clients use http(s)://domain.com/webmail, their connection to the webmail service will first be load balanced, since they are making their connection via port 80 or 443 - both of which are services/ports that InterWorx can load balance. On the other hand, InterWorx does not load balance connections to its panel web service on ports 2080 and 2443. That means clients connecting to NodeWorx or SiteWorx will be connecting directly to the cluster manager.
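As a rough sketch, mounting an external share of this kind on the cluster manager might look like the following. The fileserver hostname and export path here are assumptions for illustration; the supported procedure is covered in our clustering documentation.

    # Mount the fileserver's export as /chroot on the cluster manager.
    mount -t nfs fileserver.example.com:/exports/chroot /chroot
    # A matching /etc/fstab entry keeps the mount across reboots:
    # fileserver.example.com:/exports/chroot  /chroot  nfs  rw,hard,intr  0 0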

The Nodes

The cluster nodes are much simpler to describe. They receive connections from the load balancer on the cluster manager as if they came directly from the external client making the initial request. The service listening on the port for which the connection is destined handles and processes the connection and replies directly to the client. As such, most of the load is network-bound (and subsequently disk-bound on the device storing SiteWorx user data), since the SiteWorx user data is accessed via NFS.
PHP applications can still place high demands on CPU and memory, though, in the event that the code must process large data sets or run very CPU-intensive algorithms. An example would be a PHP application that needs to sort an array with a million entries every time a certain page is loaded - if that page receives 100 GET requests per second, your CPU/memory demands will be quite high. While you could always mitigate this by improving the design of the application, often you are acting only as the host and are unable to modify client code.
In any case, local disk use on the nodes is typically much lighter than on the cluster manager or the devices hosting the SiteWorx user data and MySQL server.

Resource Requirements

The resource requirements we have are quite modest and are comparable to what we require for a single server - a modern-ish 2005-or-later processor with 2 cores or hyperthreading, around 1-2GB of RAM, and 10-20GB of disk space. This is of course a very low “minimum requirements” figure that really answers the question “What is the minimum I need to invest in a machine to get InterWorx and CentOS running?” The answer is relatively little.
But if you intend to build a high-performance cluster, or a cluster that is going to service a heavy workload, you would be silly not to use multi-core, high-speed processors, high-speed memory, and the fastest disks available to ensure your clients never see any slowdowns. On the other hand, the nice thing about a cluster is that you can build a small cluster initially with modest systems and later, if necessary, buy a very powerful node and route most traffic to it if you find that your cluster is hurting. Ideally, we’d recommend you give each device the resources you would dedicate to a single-server setup.
Obviously the nodes can make do with less disk space, and the cluster manager can make do with less disk space if the responsibility of storing SiteWorx user data and the MySQL database is offloaded to other machines.

Network Requirements

The InterWorx Cluster Panel can operate in 3 different network topologies: the cluster manager and nodes all sitting on the public network; the cluster manager and nodes on both a public network and a private network for intra-cluster communication; or the cluster manager and nodes sitting on a private network behind a NAT.

All servers on a public network

This is probably the easiest to understand: connections come into the cluster manager, get routed to the nodes on the public network, and the nodes respond back over their public network connection. The requirements are:
  • The cluster manager needs 2 public IPs at minimum in order to perform its duties. One of the IPs will be where incoming connections arrive for the services hosted on the cluster. The other will be designated the cluster’s quorum IP, which is the main IP where all server-to-server communication occurs. No SiteWorx accounts can be placed on this IP. To understand more about the quorum, consult our cluster documentation.
  • The nodes need just a single public IP.
  • All IPs need to be within the same subnet.
  • If you want to host SSL sites that need their own dedicated IP, or just want more than a single IP, you can add additional IPs on the cluster manager.

Servers on both public and private network

If we were to recommend the best network topology for a cluster, it would be this one. It allows you to segment intra-cluster communication onto the private LAN and incoming/outgoing traffic onto the public network. The requirements are similar to those in the previous section, with some slight differences.
  • You don’t need 2 public IP addresses, because the cluster manager’s quorum IP can be its private LAN IP.
  • On the other hand, all public IPs still need to be within the same subnet.
  • You should have 2 NICs in each server - one to communicate on the public network and one to communicate on the private network.
  • Naturally, every server needs to be connected to the private network with a private IP.
The additional benefit of this setup is that if you intend to have a segregated MySQL server or keep SiteWorx user data on a dedicated storage appliance or file server, you can put these devices on only the private network and prevent direct access to them from outside the cluster. In addition, your outside traffic and inside traffic are segregated on different hardware and therefore aren’t competing for bandwidth or packet priority.

All servers on private network behind NAT

This makes it possible to run a cluster without multiple public IPs or exposing all your cluster servers to the internet. On the other hand, you are usually at the mercy of your NAT device, in that it will determine the capabilities of your cluster. Most SOHO NAT devices do not permit one-to-one NAT from a public IP to a private IP, so most private clusters behind NAT are limited to one public IP. This means that if you are hosting multiple sites on the cluster, none will be able to take advantage of SSL. Also, high traffic to your cluster may cripple your NAT device, as it may completely expend the memory available for the translation table.
In any case, the requirements are that:
  • The cluster manager needs 2 private IPs - one that will receive traffic from the NAT, the other acting as the quorum.
  • Each node needs just 1 private IP. When it sends packets out to clients, it spoofs the address of the cluster manager, so the NAT should correctly accept the packet and route it to the correct host on the outside.
  • The NAT needs to forward ports 20, 21, 22, 25, 53, 80, 110, 143, 443, 993, 995, 2080, and 2443 to the non-quorum IP of your cluster manager. In addition, you may want to forward 3306 if you intend to have outside clients connect to your MySQL server. A port-forwarding sketch for a Linux-based NAT device follows this list.
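For illustration only, here is a minimal sketch of those forwarding rules on a Linux-based NAT router using iptables. The interface name and the cluster manager's private IP (192.168.10.10) are assumptions; a SOHO router would express the same idea through its port-forwarding UI.

    # Forward the required service ports arriving on the public interface
    # to the cluster manager's non-quorum private IP (assumed 192.168.10.10).
    iptables -t nat -A PREROUTING -i eth0 -p tcp -m multiport \
        --dports 20,21,22,25,53,80,110,143,443,993,995,2080,2443 \
        -j DNAT --to-destination 192.168.10.10
    # DNS also uses UDP on port 53.
    iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 \
        -j DNAT --to-destination 192.168.10.10
    # Permit the forwarded traffic through the FORWARD chain.
    iptables -A FORWARD -d 192.168.10.10 -j ACCEPT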

Storage Options

As stated earlier in the section on spreading computation responsibility, you can split the SiteWorx user data and the MySQL server out from the cluster manager onto standalone servers or appliances. This is helpful in keeping the workload on the cluster manager reasonable on extremely high-throughput clusters.

Segregated Storage Requirements

The segregated storage solution, in order to integrate with InterWorx, must share its data partition with the cluster manager using NFS. This means the appliance or file server must support NFS, NFS quotas, and NFS file locking. This also means that the local filesystem backing the NFS share must support quotas and file locking. We recommend something common like EXT3 or EXT4.
In order to support quotas, InterWorx has developed a plugin that uses SSH to communicate with the storage server every time a SiteWorx account is created or deleted, so that it can create identical Unix users with identical UIDs and GIDs. This is necessary to ensure quotas work properly, as quotas are set via the normal quota utilities on the cluster manager. When using a standalone fileserver for SiteWorx user data, a command to set a quota limit eventually reaches the fileserver’s quota implementation as an instruction to set the quota on a specific GID. If that GID does not exist, the quota will not be set and thus will not be enforced.
Therefore, if you need storage restrictions on your cluster, you need a file server or appliance that supports:
  • InterWorx SSHing in and doing useradd/groupadd
  • A local filesystem that supports quotas and file locking
  • NFSv3 with locking and quota support
Or you should consider simply having the SiteWorx user data served from the cluster manager.
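To make the quota flow concrete, the steps boil down to roughly the following. The account name, UID/GID, and limits here are hypothetical examples; InterWorx’s plugin performs the equivalent steps automatically.

    # On the storage server (via SSH), create a group and user matching the
    # SiteWorx account's UID and GID on the cluster manager (example values).
    groupadd -g 1001 exampleuser
    useradd -u 1001 -g 1001 exampleuser
    # On the cluster manager, the normal quota utilities set the limit
    # (block limits in KB); it is ultimately enforced against that same
    # UID/GID on the fileserver.
    setquota -u exampleuser 10240000 10240000 0 0 /chroot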
When you mount the NFS share as /chroot, InterWorx will automatically detect that /chroot is an NFS share during cluster setup and instruct the nodes to mount the same NFS share as /chroot instead of trying to mount /chroot from the cluster manager (which is the case when SiteWorx user data is served from the cluster manager). This means that in /etc/exports on your fileserver or storage appliance, all InterWorx cluster nodes need to be permitted to access the share. Not allowing the nodes to access the standalone fileserver’s share will cause cluster setup to fail!
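As an illustration, the /etc/exports entry on a standalone fileserver might look something like the following. The export path and cluster subnet are assumptions; the key point is that every cluster member must be covered by the export.

    # Export the SiteWorx user data to every server in the cluster's subnet,
    # allowing root on the cluster members to act as root on the share.
    /exports/chroot  192.0.2.0/24(rw,sync,no_root_squash)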

Separate MySQL Server

The nice thing about a segregated MySQL server is that you can run whatever MySQL server you prefer on whatever OS you prefer - as long as InterWorx can access the server’s MySQL root user and issue commands via the PHP MySQL API, that server should suffice. It should be noted that InterWorx does not support clustered MySQL.
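A minimal sketch of granting that access on the remote MySQL server is shown below. The cluster manager IP and password are hypothetical; all InterWorx needs is a root-privileged account it can reach over the network.

    # On the remote MySQL server, allow root connections from the cluster
    # manager's IP (192.0.2.10 is an assumption) so InterWorx can create
    # databases and users for SiteWorx accounts.
    mysql -u root -p -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.0.2.10' IDENTIFIED BY 'a-strong-password' WITH GRANT OPTION;"
    mysql -u root -p -e "FLUSH PRIVILEGES;"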

Additional VPS Considerations

We occasionally have hosts set up cluster managers and nodes within VPS containers, which is fine. Note that you cannot cluster with InterWorx VPS licenses; you must use a regular unlimited-domain license in order to cluster. The VPS requirements are similar to those discussed earlier, with an additional caveat.
  • OpenVZ and Virtuozzo do not support kernel-level NFS by default.
This means that if you are a host without access to the hypervisor of your Virtuozzo/OpenVZ VPS instance, you will probably run into issues. Kernel-level NFS is required in order for the cluster manager to share the SiteWorx user data with the nodes. In addition, the nodes require kernel-level NFS to share Apache logs back to the cluster manager, which is required to calculate statistics and data usage on the cluster manager.
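If you are unsure whether kernel-level NFS is available inside your container, a quick (if rough) check from within the VPS is to look for NFS support in the kernel’s filesystem list; the exact output will vary by distribution and virtualization setup.

    # If "nfsd" is missing here, kernel-level NFS server support is not
    # available inside this container.
    grep nfsd /proc/filesystems
    # The NFS client side can be checked the same way:
    grep nfs /proc/filesystems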
