Creating the Store with LVM
Most of our Linux / CentOS installations use LVM. This guide describes how the STORE role can be installed on a system using LVM.
Step 1 - Determine the current configuration
# Show free space
df -h
# Show partitions
fdisk -l
# Show hard disks
ls /dev/sd*
# Show physical volumes managed by LVM
lvm pvs
# Show logical volumes managed by LVM
lvm lvs
# Show volume groups managed by LVM
lvm vgs
# Show what is mounted where
mount
The information above will be needed in the following steps. Check the configuration to make sure that an LVM volume has not already been provisioned for the storage.
Step 2 - Create a new partition
Here you need to know where the additional space is located. There are two variants - either a new disk, or an existing disk that has been extended.
In both cases a new partition is created. In this example a new disk is used, which can be found at /dev/sdb.
Adjust the commands below and the partition number accordingly (with an existing disk, the new partition is no longer necessarily number 1).
fdisk /dev/sdb
# --> Edit the partitions on /dev/sdb
n
# --> Create a new partition
p
# --> New primary partition
1
# --> Create new partition 1 (see the output of fdisk -l above)
Enter
# --> Confirm that the first available cylinder should be used
Enter
# --> Confirm that the last available cylinder should be used (together this gives the maximum size)
t
# --> Change the partition type
8e
# --> Linux LVM
w
# --> If everything is OK, write the changes
reboot now
Step 3 - Add the partition to LVM - create the physical volume
# See the output of ls /dev/sd* above --> this is the new disk (the 1st partition on /dev/sdb, i.e. the newly created partition on the second hard disk)
lvm pvcreate /dev/sdb1
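The new physical volume should now appear in the output of:
lvm pvs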
Step 4 - Create the volume group
lvm vgcreate "vg_jtelshared" /dev/sdb1
Step 5 - Create the logical volume
lvm lvcreate -l 100%FREE vg_jtelshared -n lv_jtelshared
Step 6 - Create the file system
mkfs.xfs -L data /dev/vg_jtelshared/lv_jtelshared
Step 7 - Prepare the mount point
mkdir /srv/jtel
mkdir /srv/jtel/shared
Step 8 - Add the mount point to fstab
vi /etc/fstab
...
(add the following line)
/dev/mapper/vg_jtelshared-lv_jtelshared /srv/jtel/shared xfs defaults 0 0
Step 9 - Check the end result
There should be an entry for /srv/jtel/shared with the corresponding amount of free space.
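A quick check (assuming the fstab entry from step 8 is in place):
mount /srv/jtel/shared
df -h /srv/jtel/shared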
Create DRBD Partitions on disk (Both Nodes)
The commands below assume that /dev/sdb will be used for the DRBD partition.
device=/dev/sdb
# Zero the first 100 sectors to remove any old partition table or file system signatures
dd if=/dev/zero of=${device} obs=512 count=100
# Zero the last 100 sectors (e.g. the GPT backup table)
dd if=/dev/zero of=${device} obs=512 count=100 seek=$(( $(blockdev --getsz ${device}) -100 ))
# Create a GPT label with a single partition spanning the whole device
parted ${device} "mklabel gpt"
parted ${device} "mkpart primary 0% 100%"
Verify that the partition is created:
fdisk -l /dev/sdb
-->
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sdb: 274.9 GB, 274877906944 bytes, 536870912 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: E7FF3D92-84BB-44E1-B0B0-26150DB80639
# Start End Size Type Name
 1        2048    536868863   256G  Microsoft basic primary
Install DRBD Repos (Both Nodes)
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
Install DRBD Modules (Both Nodes)
yum -y install drbd84-utils kmod-drbd84
Configure Firewall (Both Nodes)
firewall-cmd --zone=public --add-port=7788-7799/tcp --permanent
firewall-cmd --reload
Configure DRBD (Both Nodes)
NOTE: The following configuration requires the hostnames of both machines and their IP addresses. These are obtained as follows:
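For example, run on each node:
# Hostname (must match the "on" entries in the config below)
hostname
# IP addresses
ip a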
Create a DRBD config file for jtelshared on /dev/sdb
cat <<EOFF > /etc/drbd.d/jtelshared.res
resource jtelshared {
    protocol C;
    meta-disk internal;
    device /dev/drbd1;
    syncer {
        verify-alg sha1;
    }
    net {
        allow-two-primaries;
    }
    on acd-store1 {
        disk /dev/sdb1;
        address 10.4.8.71:7789;
    }
    on acd-store2 {
        disk /dev/sdb1;
        address 10.4.8.171:7789;
    }
    startup {
        become-primary-on both;
    }
}
EOFF
Create Metadata and start (Both Nodes)
drbdadm create-md jtelshared
drbdadm up jtelshared
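Once the resource is up on both nodes, the connection state can be checked; it should report Connected:
drbdadm cstate jtelshared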
Make one node primary (First Node)
drbdadm primary --force jtelshared
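The chosen node should now report the Primary role:
drbdadm role jtelshared
# --> Primary/Secondary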
Tune the transfer (Second Node)
drbdadm disk-options --c-plan-ahead=0 --resync-rate=110M jtelshared
Create filesystem (Primary Node)
mkfs.xfs -L data /dev/drbd1
Create fstab entry for file system (both nodes)
Add the following line to /etc/fstab. The noauto option ensures the file system is not mounted automatically at boot - it is mounted manually here and by the cluster later.
/dev/drbd/by-res/jtelshared/0 /srv/jtel/shared xfs noauto,noatime,nodiratime 0 0
Mount the file system (primary node)
Run the following commands:
mkdir /srv/jtel
mkdir /srv/jtel/shared
chown -R jtel:jtel /srv/jtel
mount /srv/jtel/shared
Wait for initial sync to complete
cat /proc/drbd
-->
# When not yet done:
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
1: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r-----
ns:0 nr:3955712 dw:3950592 dr:0 al:8 bm:0 lo:5 pe:0 ua:5 ap:0 ep:1 wo:f oos:264474588
[>....................] sync'ed: 1.5% (258272/262132)M
finish: 2:08:08 speed: 34,388 (25,652) want: 112,640 K/sec
-->
# When done:
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
1: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
ns:0 nr:15626582 dw:284051762 dr:0 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Untune the transfer (Second Node)
drbdadm adjust jtelshared
Make second node primary and mount the file system (Secondary node)
Run the following commands:
mkdir /srv/jtel
mkdir /srv/jtel/shared
chown -R jtel:jtel /srv/jtel
drbdadm primary jtelshared
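Since allow-two-primaries is configured, both nodes should now report the Primary role:
drbdadm role jtelshared
# --> Primary/Primary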
Install Samba and lsof (Both Nodes)
yum -y install samba samba-client lsof
Configure Samba (Both Nodes)
cat <<EOFF > /etc/samba/smb.conf
[global]
    workgroup = SAMBA
    security = user
    passdb backend = tdbsam
    printing = cups
    printcap name = cups
    load printers = yes
    cups options = raw
    min protocol = NT1
    ntlm auth = yes
[homes]
    comment = Home Directories
    valid users = %S, %D%w%S
    browseable = No
    read only = No
    inherit acls = Yes
[printers]
    comment = All Printers
    path = /var/tmp
    printable = Yes
    create mask = 0600
    browseable = No
[print$]
    comment = Printer Drivers
    path = /var/lib/samba/drivers
    write list = root
    create mask = 0664
    directory mask = 0775
[shared]
    comment = jtel ACD Shared Directory
    read only = no
    public = yes
    writable = yes
    locking = yes
    path = /srv/jtel/shared
    guest ok = yes
    create mask = 0644
    directory mask = 0755
    force user = jtel
    force group = jtel
    acl allow execute always = True
EOFF
sed -i -e "s/MYGROUP/WORKGROUP/g" /etc/samba/smb.conf
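The generated configuration can be validated with Samba's syntax checker:
testparm -s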
Setup SELinux, jtel User Access and Firewall for Samba (Both Nodes)
setsebool -P samba_enable_home_dirs=on samba_export_all_rw=on use_samba_home_dirs=on use_nfs_home_dirs=on
printf 'fireball\nfireball\n' | smbpasswd -a -s jtel
firewall-cmd --zone=public --add-port=445/tcp --add-port=139/tcp --add-port=138/udp --add-port=137/udp --permanent
firewall-cmd --reload
If necessary, add further users to samba:
useradd -m Administrator
printf 'F1r3B²11\nF1r3B²11\n' | smbpasswd -a -s Administrator
Test SAMBA (Both Nodes)
This test should be performed on the node which currently has /srv/jtel/shared mounted:
mount /srv/jtel/shared
service nmb start
service smb start
# Now check access to the SMB share via (for example) one of the windows machines.
service smb stop
service nmb stop
umount /srv/jtel/shared
# Do the same again on the other node
Unmount (Both Nodes), disable SAMBA
service smb stop
service nmb stop
umount /srv/jtel/shared
systemctl disable smb
Install PCS Services (Both Nodes)
See Redundancy - Installing PCS Cluster.
Setup virtual IP (One Node Only!)
Change the following to set the virtual IP which should be shared between the nodes.
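For example (10.4.8.70 is a placeholder - substitute the virtual IP for your installation):
KE_VIP=10.4.8.70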
Configure PCS Resources (One Node Only!)
Configure the PCS resources with the following commands:
pcs resource create ClusterDataJTELSharedMount ocf:heartbeat:Filesystem device="/dev/drbd/by-res/jtelshared/0" directory="/srv/jtel/shared" fstype="xfs" --group=jtel_portal_group
pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=${KE_VIP} cidr_netmask=32 op monitor interval=30s --group=jtel_portal_group
pcs resource create samba systemd:smb op monitor interval=30s --group=jtel_portal_group
pcs constraint order start ClusterDataJTELSharedMount then ClusterIP
pcs constraint order start ClusterIP then samba
Test
Test as follows:
pcs status
--> shows the status of the newly created resources on both nodes, one node should be active.
Cluster name: portal
Stack: corosync
Current DC: uk-acd-store2 (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
Last updated: Mon Mar 19 15:40:24 2018
Last change: Mon Mar 19 15:40:16 2018 by root via cibadmin on uk-acd-store1
2 nodes configured
3 resources configured
Online: [ uk-acd-store1 uk-acd-store2 ]
Full list of resources:
 Resource Group: jtel_portal_group
     ClusterDataJTELSharedMount (ocf::heartbeat:Filesystem): Started uk-acd-store1
     ClusterIP (ocf::heartbeat:IPaddr2): Started uk-acd-store1
     samba (systemd:smb): Started uk-acd-store1
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
Test the file mount:
# From the windows machines:
dir \\uk-acd-store\shared
Test manual failover:
# Failover to node 2
pcs cluster standby uk-acd-store1
# ... (wait)
pcs status
# Then test the availability of the files from the windows machines.
# Create a new file before failing back (to make sure DRBD is working correctly).
# Fail back to node 1
pcs cluster unstandby uk-acd-store1
pcs cluster standby uk-acd-store2
# ... (wait)
pcs status
# Then test the availability of the files from the windows machines.
# Check that the new file created above is available.
# Unstandby node 2
pcs cluster unstandby uk-acd-store2
Manually link /home/jtel/shared
ln -s /srv/jtel/shared /home/jtel/shared
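Check that the link resolves to the shared directory:
ls -ld /home/jtel/shared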