PCS Cluster is required for:

  • The role LB (Load Balancer), if two load balancers are used for redundancy
  • The role STORE (file storage), if two file stores are set up using DRBD for redundancy

If the system is not configured for redundancy, do not install PCS. Install it only on machines with the roles listed above.

Install PCS Services (Both nodes)

Install the PCS packages:

Install packages
dnf config-manager --set-enabled HighAvailability
dnf -y install pacemaker pcs resource-agents fence-agents-all
systemctl enable pcsd.service
systemctl start pcsd.service

Disable Managed Services (Both nodes)

Disable the smb service, since it will be managed by Pacemaker:

Disable Managed Services
systemctl disable smb

Configuration Settings (Both nodes)

Next configure the names of the machines and the virtual IP address which will be shared in the cluster:

CAUTION: replace <password> below with the chosen secure password.

Set host and password variables
JT_HOST1=acd-lb1
JT_HOST2=acd-lb2
PASSWORD=<password>
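The node names must be resolvable from both machines. If they are not in DNS, they can be added to /etc/hosts on both nodes. A minimal sketch; the IP addresses below are placeholders, not part of the original instructions, and must be replaced with the real addresses of the two machines:

```shell
# Hypothetical example: make the cluster node names resolvable locally.
# Replace the placeholder IP addresses with the real addresses of the nodes.
cat >> /etc/hosts <<EOF
10.0.0.11   acd-lb1
10.0.0.12   acd-lb2
EOF
```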

Configure the Firewall (Both nodes)

Next, configure the firewall for the high-availability services:

Configure firewall
firewall-cmd --zone=public --add-service=high-availability --permanent
firewall-cmd --reload

Change user password (Both nodes)

Change the password of the hacluster user (this uses the PASSWORD variable set above):

Change password
echo ${PASSWORD} | passwd --stdin hacluster

Cluster Configuration (Only on one node!)

Now configure the cluster and set some basic options (this uses the variables set above):

Configure cluster
pcs host auth ${JT_HOST1} ${JT_HOST2} -u hacluster -p ${PASSWORD}
pcs cluster setup jtel_cluster ${JT_HOST1} ${JT_HOST2}
pcs cluster enable --all
pcs cluster start --all
pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore
pcs resource defaults migration-threshold=1
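The shared virtual IP address mentioned above is normally added to the cluster as a resource. A minimal sketch, assuming a placeholder address 10.0.0.10/24 and the resource name VirtualIP (both are assumptions, not values from this document):

```shell
# Hypothetical sketch: create the shared virtual IP as a pacemaker resource.
# 10.0.0.10 and cidr_netmask=24 are placeholders; use the real virtual IP here.
pcs resource create VirtualIP ocf:heartbeat:IPaddr2 \
    ip=10.0.0.10 cidr_netmask=24 op monitor interval=30s
```

Pacemaker then keeps the address on whichever node is active and moves it on failover.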

Test

Check the results on both machines:
Check the results on both machines:

Test
pcs status
 
# It might take a little time for the cluster to come online. Run the above command until the cluster is online on both nodes.

Expected output:

Cluster name: jtel_cluster
Cluster Summary:
  * Stack: corosync
  * Current DC: acd-lb1 (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
  * Last updated: Fri Oct  2 21:52:32 2020
  * Last change:  Fri Oct  2 21:52:25 2020 by hacluster via crmd on acd-lb1
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ acd-lb1 acd-lb2 ]

Full List of Resources:
  * No resources

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled