
PCS Cluster is required for:

  • The role LB (Load Balancer), if two load balancers are used for redundancy
  • The role STORE (file storage), if two file stores are set up using DRBD for redundancy

If the system is not set up redundantly, skip this installation. Install PCS only on the roles listed above.

Install PCS Services (Both nodes)

Install the PCS packages and stop the services.

Code Block
apt-get update
apt-get -y install pacemaker corosync pcs haveged

systemctl stop pcsd
systemctl stop pacemaker
systemctl stop corosync


Disable Managed Services (Both nodes)

Disable smbd, since this service will be managed by Pacemaker:

Code Block
systemctl disable smbd


Configuration Settings (Both nodes)

Next, configure the names of the machines and the password which will be used in the cluster:

Caution: replace <password> with a secure password and use the same value on both nodes.

Code Block
JT_HOST1=acd-lb1
JT_HOST2=acd-lb2
PASSWORD=<password>
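Before continuing, it is worth checking that all three variables are actually set in the current shell; a minimal sketch:

```shell
# Print a warning for any variable that is still empty or unset.
for v in JT_HOST1 JT_HOST2 PASSWORD; do
  eval "val=\$$v"
  [ -n "$val" ] || echo "WARNING: $v is not set"
done
```

If any warning is printed, re-set the variables before running the following steps.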


Configure the Firewall (Both nodes)

Next, configure the firewall for the HA services:

Code Block
ufw allow 2224/tcp
ufw allow 3121/tcp
ufw allow 5403/tcp
ufw allow 5404/udp
ufw allow 5405/udp



Change user password (Both nodes)

Change the password of the hacluster user (this uses the PASSWORD variable set above):

Code Block
echo hacluster:${PASSWORD} | chpasswd


Cluster Configuration

Node 1 - Create Cluster Key

Create a key for the cluster and copy to server 2:

Code Block
# On Server 1
corosync-keygen
scp /etc/corosync/authkey jtel@acd-lb2:/home/jtel/


Node 2 - Move Cluster Key

Move the cluster key to the configuration directory and set up its permissions:

Code Block
# On Server 2
mv /home/jtel/authkey /etc/corosync/
chown root:root /etc/corosync/authkey
chmod 400 /etc/corosync/authkey
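Since both nodes must share an identical key, a quick way to verify the copy succeeded is to compare checksums; a sketch:

```shell
# Run on both nodes; the printed hashes must be identical.
sha256sum /etc/corosync/authkey
```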


Both Nodes - Create Corosync Configuration

Warning

The hosts file must resolve both node names (acd-lb1 and acd-lb2) for this to work.
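For illustration, assuming the two nodes use the hypothetical addresses 10.1.1.11 and 10.1.1.12, the hosts file on both machines would contain entries like:

```
10.1.1.11   acd-lb1
10.1.1.12   acd-lb2
```

The names must match the ring0_addr and name values used in corosync.conf.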


Code Block
mv /etc/corosync/corosync.conf /etc/corosync/corosync.conf.orig
cat << EOFF > /etc/corosync/corosync.conf
totem {
  version: 2
  cluster_name: jtel_cluster
  transport: knet
  crypto_cipher: aes256
  crypto_hash: sha256
  token: 4000
}

nodelist {
  node {
    ring0_addr: acd-lb1
    name: acd-lb1
    nodeid: 1
  }

  node {
    ring0_addr: acd-lb2
    name: acd-lb2
    nodeid: 2
  }
}

quorum {
  provider: corosync_votequorum
  two_node: 1
}

logging {
  to_logfile: yes
  logfile: /var/log/corosync/corosync.log
  to_syslog: yes
  timestamp: on
}
EOFF


Start Cluster - Both Nodes

Code Block
systemctl enable corosync
systemctl enable pacemaker
systemctl enable pcsd

systemctl start corosync
systemctl start pacemaker
systemctl start pcsd


Resource Cleanup - One Node

Code Block
pcs resource cleanup
pcs status


Check that the output shows both nodes online and no failed resources.

Configure Cluster - One Node

Code Block
pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore
pcs resource defaults migration-threshold=1


Test

Check the results on both machines:

Code Block
root@test-lb1:/home/jtel# pcs status
Cluster name: jtel_cluster
Stack: corosync
Current DC: acd-lb1 (version 2.0.1-9e909a5bdd) - partition with quorum
Last updated: Tue Feb 23 07:49:26 2021
Last change: Tue Feb 23 07:40:58 2021 by root via cibadmin on acd-lb1

2 nodes configured
0 resources configured

Online: [ acd-lb1 acd-lb2 ]

No resources


Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled



Code Block
root@acd-store1-test:/home/jtel# pcs config
Cluster Name: jtel_cluster
Corosync Nodes:
 acd-store1-test acd-store2-test
Pacemaker Nodes:
 acd-store1-test acd-store2-test

Resources:

Stonith Devices:
Fencing Levels:

Location Constraints:
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:

Alerts:
 No alerts defined

Resources Defaults:
  Meta Attrs: rsc_defaults-meta_attributes
    migration-threshold=1
Operations Defaults:
  No defaults set

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: debian
 dc-version: 2.0.5-ba59be7122
 have-watchdog: false
 no-quorum-policy: ignore
 stonith-enabled: false

Tags:
 No tags defined

Quorum:
  Options:


