How to Install and Configure a Cluster in Linux

Installation and Configuration of a Two-Node Cluster in Linux

This article explains how to install and configure a cluster with two nodes in Linux.

For this setup, we have taken three machines, and their details are shown below.

  • Server: 192.168.5.111 (hostname: server)
  • Node1: 192.168.5.112 (hostname: node1)
  • Node2: 192.168.5.113 (hostname: node2)
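
If these hostnames are not resolvable through DNS, you may want to add entries for them to /etc/hosts on all three machines (a minimal sketch; adjust the addresses to your own network):

192.168.5.111   server
192.168.5.112   node1
192.168.5.113   node2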

To install and configure clustering on Linux, we need to install the following packages on all three machines.

  • mod_cluster (modcluster-0.16.2-29.el6.x86_64.rpm)
  • luci (luci-0.26.0-63.el6.centos.x86_64.rpm)
  • ccs (ccs-0.16.2-75.el6_6.2.x86_64.rpm)
  • clusterlib (clusterlib-3.0.12.1-68.el6.x86_64.rpm)
  • ricci (ricci-0.16.2-75.el6.x86_64.rpm)
  • cman (cman-3.0.12.1-68.el6.x86_64.rpm)
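
If you prefer, all of these packages and their dependencies can also be installed in a single yum transaction instead of the individual installs shown below:

[root@server ~]# yum install ricci luci ccs cman -y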


To Install a Cluster in Linux

Install the ricci package on all three machines using the yum package manager.

[root@server ~]# yum install ricci -y
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Install Process
Loading mirror speeds from cached hostfile
 * base: mirror.nbrc.ac.in
 * extras: mirror.nbrc.ac.in
 * updates: mirror.nbrc.ac.in
Resolving Dependencies
-->  Running transaction check
--->  Package ricci.x86_64 0:0.16.2-86.el6 will be installed
-->  Processing Dependency: modcluster for package: ricci-0.16.2-86.el6.x86_64
-->  Running transaction check
--->  Package modcluster.x86_64 0:0.16.2-35.el6 will be installed
.
.
.
Installed:
  ricci.x86_64 0:0.16.2-86.el6

Dependency Installed:
  clusterlib.x86_64 0:3.0.12.1-78.el6     corosync.x86_64 0:1.4.7-5.el6
  corosynclib.x86_64 0:1.4.7-5.el6        libibverbs.x86_64 0:1.1.8-4.el6
  librdmacm.x86_64 0:1.0.21-0.el6         modcluster.x86_64 0:0.16.2-35.el6
Complete!

Install luci using the following command.

[root@node1 ~]# yum install luci -y
Loaded plugins: aliases, changelog, fastestmirror, kabi, presto, refresh-
              : packagekit, security, tmprepo, verify, versionlock
Loading support for CentOS kernel ABI
Setting up Install Process
Loading mirror speeds from cached hostfile
 * base: centos.webwerks.com
 * epel: epel.mirror.net.in
 * extras: centos.webwerks.com
 * updates: mirror.nbrc.ac.in
Resolving Dependencies
-->  Running transaction check
.
.
.
Updated:
  luci.x86_64 0:0.26.0-78.el6.centos
Complete!

Run the following command to install the ccs package on all three machines.

[root@server ~]# yum install ccs -y
Loaded plugins: aliases, changelog, fastestmirror, kabi, presto, refresh-
              : packagekit, security, tmprepo, verify, versionlock
Loading support for CentOS kernel ABI
Setting up Install Process
Loading mirror speeds from cached hostfile
 * base: mirror.fibergrid.in
 * epel: epel.mirror.net.in
 * extras: mirror.fibergrid.in
 * updates: mirror.nbrc.ac.in
Resolving Dependencies
-->  Running transaction check
.
.
.
Updated:
  ccs.x86_64 0:0.16.2-86.el6
Complete!

Install cman on all three machines.

[root@server ~]# yum install cman
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Install Process
Loading mirror speeds from cached hostfile
 * base: mirror.nbrc.ac.in
 * extras: mirror.nbrc.ac.in
 * updates: mirror.nbrc.ac.in
Resolving Dependencies
-->  Running transaction check
--->  Package cman.x86_64 0:3.0.12.1-78.el6 will be installed
-->  Processing Dependency: openais >= 1.1.1-1 for package: cman-3.0.12.1-78.el6.x86_64
.
.
.
Installed:
  cman.x86_64 0:3.0.12.1-78.el6

Dependency Installed:
  fence-agents.x86_64 0:4.0.15-12.el6       ipmitool.x86_64 0:1.8.15-2.el6
  net-snmp-utils.x86_64 1:5.5-57.el6        openais.x86_64 0:1.1.1-7.el6
  openaislib.x86_64 0:1.1.1-7.el6

Complete!
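
Note that on CentOS 6 the cman init script refuses to start while NetworkManager is running, so NetworkManager is normally stopped and disabled on every cluster node:

[root@server ~]# service NetworkManager stop
[root@server ~]# chkconfig NetworkManager off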

Use the command below to confirm that the packages are installed.

[root@server ~]# rpm -qa | egrep "ricci|luci|modc|cluster|ccs|cman"
ricci-0.16.2-86.el6.x86_64
luci-0.26.0-78.el6.centos.x86_64
clusterlib-3.0.12.1-78.el6.x86_64
ccs-0.16.2-86.el6.x86_64
cman-3.0.12.1-78.el6.x86_64
modcluster-0.16.2-35.el6.x86_64
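
The ricci agent listens on TCP port 11111 and the luci web interface on TCP port 8084, so these ports must be reachable between the machines. A minimal sketch, assuming the default iptables firewall is in use:

[root@server ~]# iptables -I INPUT -p tcp --dport 11111 -j ACCEPT
[root@server ~]# iptables -I INPUT -p tcp --dport 8084 -j ACCEPT
[root@server ~]# service iptables save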


To Configure the Cluster in Linux

First, start the ricci service on all three machines.

[root@server ~]# service ricci start
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@node1 ~]# service ricci start
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@node2 ~]# service ricci start
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
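
To make ricci start automatically at boot, enable it with chkconfig on each of the three machines:

[root@server ~]# chkconfig ricci on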

To create the cluster, use the ccs commands or edit the "cluster.conf" file directly to add the nodes and other configuration.

[root@server ~]# cd /etc/cluster
[root@server cluster]# ls
cman-notify.d

Now set a password for the ricci user, as ccs uses it to authenticate with each host.

[root@server cluster]# passwd ricci
Changing password for user ricci.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
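
Set the ricci password on node1 and node2 in the same way, since ccs prompts for the ricci password of every host it connects to (for example, when the configuration is synced to the nodes later):

[root@node1 ~]# passwd ricci
[root@node2 ~]# passwd ricci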

Next, enter the command given below to create the cluster.

[root@server cluster]# ccs -h 192.168.5.111 --createcluster linuxhelp_com
192.168.5.111 password:

After entering the above command, the cluster.conf file is created in the /etc/cluster directory.

[root@server cluster]# ls -l
total 8
-rw-r-----. 1 root root  192 Jun 29 10:32 cluster.conf
drwxr-xr-x. 2 root root 4096 May 11 15:29 cman-notify.d

The default cluster.conf file is shown below.

[root@server cluster]# nano cluster.conf
<?xml version="1.0"?>
<cluster config_version="1" name="linuxhelp_com">
  <fence_daemon/>
  <clusternodes/>
  <cman/>
  <fencedevices/>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>

Add the two nodes to the cluster.

[root@server cluster]# ccs -h 192.168.5.111 --addnode 192.168.5.112
Node 192.168.5.112 added.
[root@server cluster]# ccs -h 192.168.5.111 --addnode 192.168.5.113
Node 192.168.5.113 added.

After adding the nodes, the cluster.conf file looks like the following (note that each ccs change increments config_version):

[root@server cluster]# nano cluster.conf
<?xml version="1.0"?>
<cluster config_version="3" name="linuxhelp_com">
  <fence_daemon/>
  <clusternodes>
    <clusternode name="192.168.5.112" nodeid="1"/>
    <clusternode name="192.168.5.113" nodeid="2"/>
  </clusternodes>
  <cman/>
  <fencedevices/>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>

To Verify Node Details

Enter the following command to verify the nodes.

[root@server cluster]# ccs -h 192.168.5.111 --lsnodes
192.168.5.112: nodeid=1
192.168.5.113: nodeid=2
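
At this point the configuration exists only on the server. As a sketch of the remaining steps, ccs can sync the configuration to the member nodes and then start the cluster services on all of them (you will be prompted for each node's ricci password):

[root@server cluster]# ccs -h 192.168.5.111 --sync --activate
[root@server cluster]# ccs -h 192.168.5.111 --startall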

Finally, run the ccs --help command to study further details.
FAQ

Q: Is the demo application available?
A: Yes, it is part of the mod_cluster download (under demo/client). The SessionDemo itself is not available, but it is a simple demo that adds data to an HTTP session.

Q: Is this a direct competitor to Terracotta's offering?
A: No; mod_cluster is about (1) dynamic discovery of workers, (2) web applications, and (3) intelligent load balancing. Clustering is an orthogonal aspect.

Q: Is there a separate logging mechanism for mod_cluster, like we used to have for mod_jk?
A: No; mod_cluster uses the normal httpd log, which is configured in httpd.conf (similar to mod_jk / mod_proxy). On the JBoss AS side, the normal AS logging is used (e.g. conf/log4j.xml).

Q: Is it possible to configure mod_cluster or mod_jk so that requests from certain IPs go to just a particular domain?
A: Not easily. One could configure virtual hosts in httpd.conf, with workers connecting to certain virtual hosts only, but there is no enforcement of which domains are hit from the httpd side.

Q: Is mod_cluster delivered as a native module in Apache, just like mod_proxy?
A: Yes, on the httpd side. On the JBoss AS side, we use a service archive (mod_cluster.sar), in /deploy.