Devuan Cluster
Please excuse my grammatical errors in English.
This guide is very long and not perfectly organized, but with calm and patience perhaps you can appreciate the intention of this work.
This guide is here only to try to inspire all members of the Devuan Community to make Devuan better every day, because my knowledge is limited.
This cluster is not aimed at cybersecurity; it is only an attempt at clustering with Devuan. It tries to be an HA cluster with a shared, replicated file system, inspired by a concept where ALL nodes are masters. This cluster is not perfect, but a few goals were reached:
a) MariaDB cluster with 3 nodes (Galera Cluster)
b) Gluster shared file system with 3 replicas
c) Support for QEMU/KVM virtual machines
d) Apache server as a cluster resource in the HA cluster, with a virtual IP (floating IP) for administration through the "WEBMIN" GUI server manager.
These goals were NOT reached:
e) unfortunately, HA virtual machines (QEMU/KVM) with live migration ("virtual motion") were not achieved
f) unfortunately this cluster method is not automated; it means the IT system admin must pay attention to many details and start or restart services and mount or unmount file systems after boot.
Some details explained:
g) with GlusterFS there is a way, in case one of the nodes fails, that the replica keeps a clone of each virtual disk (VMs), and this method can be used to turn on the same VM on another node almost immediately (you must do this manually); I will show this in the next steps
h) this cluster runs with OpenRC; in my opinion (and experience), OpenRC is stable and solid, and when OpenRC does something, it does it well.
i) no iptables, UFW or firewalld, no SELinux (nothing related to security)
I am NOT an expert in administration, networking, or development; all the content of this guide must not be taken as an inflexible truth, and all of it could be changed. This guide is here only to try to inspire the Devuan Community, because my knowledge is limited.
Hardware
3 x SFF PCs, 3 x low-profile 1 Gbit/s network cards with 4 ports, 4 x 1 Gbit/s switches (min. 5 ports), 16 GB RAM per SFF, Intel Core Duo 3.00 GHz CPU, 250 GB SSD, many network cables. A router or hardware firewall with a local DNS to resolve names (OpenWRT for example). OpenWRT could be installed on an SFF PC with a network card. Let your creativity run. (I will not talk about the configuration of the firewall hardware here.)
Operating System
Devuan Daedalus 5.0
Packages :
Corosync // Pacemaker // crmsh // pssh (Parallel SSH) // Webmin // Qemu/KVM //
Virt-Manager (only on the side of ADMIN “client” ) // Apache // Gluster // MariaDB (GALERA CLUSTER). //
Networking:
Everyone can decide how to do the networking configuration; for example, I did it this way:
5 interfaces for the traffic of different services (maybe it is overkill), but I try to keep the packet traffic clean and efficient.
Eth0 = Administration (SSH) and Internet 22.220.200.0/24
Eth1 = Corosync 11.11.0.0/16
Eth2 = Gluster 22.22.0.0/16
Eth3 = for virtual machines (macvtap recommended); it could be put in the same IP range as SSH and Internet, or in another network segment.
Eth4 = Galera 33.33.0.0/16
In your local DHCP server you must configure pc1, pc2 and pc3 with a static IP address bound to each different MAC address, and in your local DNS server you will configure a domain
(any name you want). I take devucluster.any for SSH administration and Internet (apt install etc., or Lynx, a browser in your CLI "command line interface");
then give a resolvable name to each host:
for example:
IP 22.220.200.70 host devu1 >> Domain > devucluster.any > host devu1
IP 22.220.200.71 host devu2 >> Domain > devucluster.any > host devu2
IP 22.220.200.72 host devu3 >> Domain > devucluster.any > host devu3
==============================================================================================================================================================
1) log in to your node1 >>
ssh root@devu1.devucluster.any -p 22
2) generate an SSH key pair on your node1 >>
root@devu1:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:D6ftaBpL4DGK9PFycFPCN/xRz8iHfk/5x0+BS/LmuFU root@devu1
The key's randomart image is:
+---[RSA 3072]----+
| . |
| . . o = |
| o = . + + |
| + o o . . .|
| . o+o S..o + E |
|...o=+. * = =.o|
|. .o.oo . o = .=|
| o. o.o = .o|
| oo. .o.. .|
+----[SHA256]-----+
root@devu1:~#
3) send the public key from node1 to node2
root@devu1:~# ssh-copy-id root@devu2.devucluster.any
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'devu2.devucluster.any (22.220.200.71)' can't be established.
RSA key fingerprint is SHA256:Lk9F2848nHbgVQPuXe7Bs119LZrxKV3oOxXbE6SkbRM.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@devu2.devucluster.any's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@devu2.devucluster.any'"
and check to make sure that only the key(s) you wanted were added.
4) send pubkey from node1 to node3
root@devu1:~# ssh-copy-id root@devu3.devucluster.any
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'devu3.devucluster.any (22.220.200.72)' can't be established.
RSA key fingerprint is SHA256:Gb+x6CTRwRxYHot5bzYwGz+0Ug9m6C53s80wcniC0x4.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@devu3.devucluster.any's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@devu3.devucluster.any'"
and check to make sure that only the key(s) you wanted were added.
###### AND NOW THE CLI (COMMAND LINE INTERFACE) PARTY BEGINS #######
5) open 3 tabs in your terminal (if you want)
Tab 1: you are already logged in to node1 (devu1.devucluster.any)
Tab 2: log in to node2: ssh root@devu2.devucluster.any -p 22
Tab 3: log in to node3: ssh root@devu3.devucluster.any -p 22
6) repeat steps 2, 3 and 4 (generate the SSH key pair and send the keys to the other nodes): from node2 to node3 and node1, and from node3 to node1 and node2.
6a) do this on node2
6b) do the same on node3
7) create a file .dd on all nodes (this is for #pssh# parallel-ssh), but please pay attention, it is different for each node: the file lists the OTHER two nodes, not the node itself (you decide the name of this file; a very short name is better)
## >>>> node 1 >>>>
cd ~/ && nano .dd
root@devu2.devucluster.any:22
root@devu3.devucluster.any:22
## >>> save and close
## >>>> node 2 >>>>
cd ~/ && nano .dd
root@devu1.devucluster.any:22
root@devu3.devucluster.any:22
## >>> save and close
## >>>> node 3 >>>>
cd ~/ && nano .dd
root@devu1.devucluster.any:22
root@devu2.devucluster.any:22
## >>> save and close
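Before continuing, you can quickly test that parallel-ssh reaches the other nodes; a minimal check from node1 (assuming the .dd host file above and the SSH keys already exchanged):
root@devu1:~# parallel-ssh -i -h ~/.dd uptime
If both of the other nodes answer with [SUCCESS], the host file and the keys are working.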
8) now install all the .deb packages that your cluster needs, on each node (node1, node2, node3), and be patient…
!!!! Important !!!! before installing packages, edit /etc/apt/sources.list on all 3 nodes and comment out the cdrom line >> for example
#deb cdrom:[Devuan GNU/Linux 5.0.1 daedalus amd64 - server 20230914]/ daedalus contrib main non-free non-free-firmware
8a) example, the same command for each node
8b) continue to install >>>
apt update && apt install corosync pacemaker pcs crmsh pssh mariadb-server mariadb-client glusterfs-server glusterfs-client apache2 grub-firmware-qemu ipxe-qemu libnss-libvirt libqcow-utils libqcow1 libvirt-clients libvirt-clients-qemu libvirt-daemon libvirt-daemon-config-network libvirt-daemon-config-nwfilter libvirt-daemon-driver-lxc libvirt-daemon-driver-qemu libvirt-daemon-driver-storage-gluster libvirt-daemon-driver-storage-iscsi-direct libvirt-daemon-driver-storage-rbd libvirt-daemon-driver-storage-zfs libvirt-daemon-driver-vbox libvirt-daemon-driver-xen libvirt-daemon-system libvirt-daemon-system-sysv libvirt-login-shell libvirt-sanlock libvirt-wireshark libvirt0 qemu-block-extra qemu-efi qemu-efi-aarch64 qemu-efi-arm qemu-guest-agent qemu-system qemu-system-arm qemu-system-common qemu-system-data qemu-system-gui qemu-system-mips qemu-system-misc qemu-system-ppc qemu-system-sparc qemu-system-x86 qemu-system-xen qemu-user qemu-user-binfmt qemu-utils -y
8c) run the same install command on node1, node2 and node3
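Optionally, you can check which of these services OpenRC already has in the default runlevel (a quick check; the init script names are assumed to match the package names):
root@devu1:~# rc-update show default | grep -E 'corosync|pacemaker|glusterd|mariadb|libvirtd|apache2'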
9) and now from node1, reboot all your nodes
root@devu1:~# parallel-ssh -i -h ~/.dd reboot
[1] 13:35:00 [SUCCESS] root@devu2.devucluster.any:22
[2] 13:35:00 [SUCCESS] root@devu3.devucluster.any:22
root@devu1:~# reboot
Broadcast message from root@devu1 (pts/0) (Tue Dec 19 13:35:10 2023):
The system is going down for reboot NOW!
root@devu1:~# Connection to devu1.devucluster.any closed by remote host.
Connection to devu1.devucluster.any closed.
10) log in again to your node1 as root
ssh root@devu1.devucluster.any -p 22
11) now create a new edited /etc/hosts file in all your Cluster Nodes
root@devu1:~# ls -lsa /etc/hosts
4 -rw-r--r-- 1 root root 207 Dec 20 06:50 /etc/hosts
11a) first delete the old /etc/hosts file, then continue to create a new /etc/hosts file
root@devu1:~# rm /etc/hosts
11b) edit your new /etc/hosts file
root@devu1:~# nano /etc/hosts
12) check if all is ok
root@devu1:~# ls -lsa /etc/hosts
4 -rw-r--r-- 1 root root 831 Dec 20 08:07 /etc/hosts
root@devu1:~#
>>> and then edit…
nano /etc/hosts #####(edit with names and IP)#####
127.0.0.1 localhost
127.0.1.1 devu1.devucluster.any devu1
22.220.200.70 devu1.devucluster.any devu1
22.220.200.71 devu2.devucluster.any devu2
22.220.200.72 devu3.devucluster.any devu3
11.11.11.2 coro1.corocluster.cor coro1
11.11.11.3 coro2.corocluster.cor coro2
11.11.11.4 coro3.corocluster.cor coro3
22.22.22.2 glus1.gluscluster.glu glus1
22.22.22.3 glus2.gluscluster.glu glus2
22.22.22.4 glus3.gluscluster.glu glus3
33.33.33.2 gale1.galecluster.gal gale1
33.33.33.3 gale2.galecluster.gal gale2
33.33.33.4 gale3.galecluster.gal gale3
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
#######################################################################################################################################################################
save and close.
13) the example above is from node1
14) first delete the /etc/hosts file on node2 and node3, and then send the newly edited /etc/hosts file to node2 and node3 >>>>
root@devu1:~# parallel-ssh -i -h ~/.dd rm /etc/hosts
[1] 13:53:46 [SUCCESS] root@devu3.devucluster.any:22
[2] 13:53:46 [SUCCESS] root@devu2.devucluster.any:22
root@devu1:~# scp /etc/hosts root@devu2.devucluster.any:/etc/
hosts 100% 867 1.4MB/s 00:00
root@devu1:~# scp /etc/hosts root@devu3.devucluster.any:/etc/
hosts 100% 867 1.2MB/s 00:00
root@devu1:~#
15) check if all is ok! >>>
!!! pay attention !!! there are differences between node1, node2 and node3 in this line:
127.0.1.1 devu1.devucluster.any devu1
127.0.1.1 devu2.devucluster.any devu2
127.0.1.1 devu3.devucluster.any devu3
============================================================================================================================================================================
root@devu1:~# cat /etc/hosts && parallel-ssh -i -h ~/.dd cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 devu1.devucluster.any devu1
22.220.200.70 devu1.devucluster.any devu1
22.220.200.71 devu2.devucluster.any devu2
22.220.200.72 devu3.devucluster.any devu3
11.11.11.2 coro1.corocluster.cor coro1
11.11.11.3 coro2.corocluster.cor coro2
11.11.11.4 coro3.corocluster.cor coro3
22.22.22.2 glus1.gluscluster.glu glus1
22.22.22.3 glus2.gluscluster.glu glus2
22.22.22.4 glus3.gluscluster.glu glus3
33.33.33.2 gale1.galecluster.gal gale1
33.33.33.3 gale2.galecluster.gal gale2
33.33.33.4 gale3.galecluster.gal gale3
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
[1] 13:55:53 [SUCCESS] root@devu2.devucluster.any:22
127.0.0.1 localhost
127.0.1.1 devu2.devucluster.any devu2
22.220.200.70 devu1.devucluster.any devu1
22.220.200.71 devu2.devucluster.any devu2
22.220.200.72 devu3.devucluster.any devu3
11.11.11.2 coro1.corocluster.cor coro1
11.11.11.3 coro2.corocluster.cor coro2
11.11.11.4 coro3.corocluster.cor coro3
22.22.22.2 glus1.gluscluster.glu glus1
22.22.22.3 glus2.gluscluster.glu glus2
22.22.22.4 glus3.gluscluster.glu glus3
33.33.33.2 gale1.galecluster.gal gale1
33.33.33.3 gale2.galecluster.gal gale2
33.33.33.4 gale3.galecluster.gal gale3
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
[2] 13:55:53 [SUCCESS] root@devu3.devucluster.any:22
127.0.0.1 localhost
127.0.1.1 devu3.devucluster.any devu3
22.220.200.70 devu1.devucluster.any devu1
22.220.200.71 devu2.devucluster.any devu2
22.220.200.72 devu3.devucluster.any devu3
11.11.11.2 coro1.corocluster.cor coro1
11.11.11.3 coro2.corocluster.cor coro2
11.11.11.4 coro3.corocluster.cor coro3
22.22.22.2 glus1.gluscluster.glu glus1
22.22.22.3 glus2.gluscluster.glu glus2
22.22.22.4 glus3.gluscluster.glu glus3
33.33.33.2 gale1.galecluster.gal gale1
33.33.33.3 gale2.galecluster.gal gale2
33.33.33.4 gale3.galecluster.gal gale3
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
root@devu1:~#
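Besides comparing the files, you can also verify that the cluster names now resolve locally through /etc/hosts; a small check (getent uses the normal resolver order, so with the file above these names come from /etc/hosts):
root@devu1:~# getent hosts coro1.corocluster.cor glus1.gluscluster.glu gale1.galecluster.gal
The output should look similar to:
11.11.11.2      coro1.corocluster.cor coro1
22.22.22.2      glus1.gluscluster.glu glus1
33.33.33.2      gale1.galecluster.gal gale1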
15a) reboot all your nodes
15b) login again in your node1 as root
ssh root@devu1.devucluster.any -p 22
16) you need administration of your Devuan Cluster and Internet at the same time, "if you decide it"; so let us do the "networking" now in your Devuan Cluster >>>
!!! Important: the MAC addresses shown in the output in this guide are fake !!!
check that all interfaces are up
16a) root@devu1:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether aa:11:bb:22:cc:dd brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether aa:11:bb:22:cc:dd brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether aa:11:bb:22:cc:dd brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether aa:11:bb:22:cc:dd brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether aa:11:bb:22:cc:dd brd ff:ff:ff:ff:ff:ff
root@devu1:~#
16b) delete the old files, then create and edit new ones >>>
root@devu1:~# rm /etc/network/interfaces && parallel-ssh -i -h ~/.dd rm /etc/network/interfaces
[1] 17:15:52 [SUCCESS] root@devu2.devucluster.any:22
[2] 17:15:52 [SUCCESS] root@devu3.devucluster.any:22
root@devu1:~# nano /etc/network/interfaces
root@devu1:~# parallel-ssh -i -h ~/.dd rm /etc/iproute2/rt_tables
[1] 18:38:54 [SUCCESS] root@devu3.devucluster.any:22
[2] 18:38:54 [SUCCESS] root@devu2.devucluster.any:22
root@devu1:~#
###########################################################################################################################################################################
16c) now edit your /etc/network/interfaces file >>
nano /etc/network/interfaces
######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ########## #############################
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug eth0
iface eth0 inet static
address 22.220.200.70
netmask 255.255.255.0
gateway 22.220.200.1
broadcast 22.220.200.255
allow-hotplug eth1
iface eth1 inet static
address 11.11.11.2
netmask 255.255.0.0
#broadcast 11.11.255.255
#gateway 11.11.11.1
post-up ip route add 11.11.0.0/16 dev eth1 src 11.11.11.2 table dvu
post-up ip route add default via 11.11.11.1 dev eth1 table dvu
post-up ip rule add from 11.11.11.2/32 table dvu
post-up ip rule add to 11.11.11.2/32 table dvu
allow-hotplug eth2
iface eth2 inet static
address 22.22.22.2
netmask 255.255.0.0
#broadcast 22.22.255.255
#gateway 22.22.22.1
post-up ip route add 22.22.0.0/16 dev eth2 src 22.22.22.2 table dvv
post-up ip route add default via 22.22.22.1 dev eth2 table dvv
post-up ip rule add from 22.22.22.2/32 table dvv
post-up ip rule add to 22.22.22.2/32 table dvv
allow-hotplug eth4
iface eth4 inet static
address 33.33.33.2
netmask 255.255.0.0
#broadcast 33.33.255.255
#gateway 33.33.33.1
post-up ip route add 33.33.0.0/16 dev eth4 src 33.33.33.2 table dva
post-up ip route add default via 33.33.33.1 dev eth4 table dva
post-up ip rule add from 33.33.33.2/32 table dva
post-up ip rule add to 33.33.33.2/32 table dva
######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ##########
save and close
16d) now edit your /etc/iproute2/rt_tables file >>>> !!! remember you are still logged in on node1 !!!
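The content of rt_tables is not shown here; what matters is that the three custom table names used in /etc/network/interfaces (dvu, dvv, dva) exist. A minimal sketch of /etc/iproute2/rt_tables, with arbitrary free table numbers (any unused number from 1 to 252 works):
#
# reserved values
#
255     local
254     main
253     default
0       unspec
#
# local custom tables for the cluster networks
#
100     dvu
101     dvv
102     dva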
16e) now send your /etc/iproute2/rt_tables file and your /etc/network/interfaces file to node2 and node3
!!! Pay attention !!! Important: after you have sent those files you must modify the IP addresses in /etc/network/interfaces on node2 and node3 for each interface >> eth0 eth1 eth2 eth4 >> remember eth3 will be used for the VMs in the next steps
root@devu1:~# scp /etc/iproute2/rt_tables root@devu2.devucluster.any:/etc/iproute2
rt_tables 100% 109 160.1KB/s 00:00
root@devu1:~# scp /etc/iproute2/rt_tables root@devu3.devucluster.any:/etc/iproute2
rt_tables 100% 109 140.6KB/s 00:00
root@devu1:~# scp /etc/network/interfaces root@devu2.devucluster.any:/etc/network
interfaces 100% 1635 2.2MB/s 00:00
root@devu1:~# scp /etc/network/interfaces root@devu3.devucluster.any:/etc/network
interfaces 100% 1635 2.2MB/s 00:00
root@devu1:~#
17) now check if all is ok; first >> log in to node2
root@devu1:~# ssh devu2.devucluster.any
Linux devu2 6.1.0-16-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.67-1 (2023-12-12) x86_64
The programs included with the Devuan GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Devuan GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Dec 19 18:56:26 2023 from 22.220.200.70
root@devu2:~#
17a) Edit /etc/network/interfaces and change the IP address
######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ##########
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug eth0
iface eth0 inet static
address 22.220.200.71
netmask 255.255.255.0
gateway 22.220.200.1
broadcast 22.220.200.255
allow-hotplug eth1
iface eth1 inet static
address 11.11.11.3
netmask 255.255.0.0
#broadcast 11.11.255.255
#gateway 11.11.11.1
post-up ip route add 11.11.0.0/16 dev eth1 src 11.11.11.3 table dvu
post-up ip route add default via 11.11.11.1 dev eth1 table dvu
post-up ip rule add from 11.11.11.3/32 table dvu
post-up ip rule add to 11.11.11.3/32 table dvu
allow-hotplug eth2
iface eth2 inet static
address 22.22.22.3
netmask 255.255.0.0
#broadcast 22.22.255.255
#gateway 22.22.22.1
post-up ip route add 22.22.0.0/16 dev eth2 src 22.22.22.3 table dvv
post-up ip route add default via 22.22.22.1 dev eth2 table dvv
post-up ip rule add from 22.22.22.3/32 table dvv
post-up ip rule add to 22.22.22.3/32 table dvv
allow-hotplug eth4
iface eth4 inet static
address 33.33.33.3
netmask 255.255.0.0
#broadcast 33.33.255.255
#gateway 33.33.33.1
post-up ip route add 33.33.0.0/16 dev eth4 src 33.33.33.3 table dva
post-up ip route add default via 33.33.33.1 dev eth4 table dva
post-up ip rule add from 33.33.33.3/32 table dva
post-up ip rule add to 33.33.33.3/32 table dva
######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ##########
save and close and return to node1
root@devu2:~# exit
17b) now check out if all is ok >>> now login in node 3
root@devu1:~# ssh devu3.devucluster.any
Linux devu3 6.1.0-16-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.67-1 (2023-12-12) x86_64
The programs included with the Devuan GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Devuan GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Dec 19 11:35:38 2023 from 22.220.200.2
root@devu3:~#
17c) Edit /etc/network/interfaces and change the IP address
######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ##########
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug eth0
iface eth0 inet static
address 22.220.200.72
netmask 255.255.255.0
gateway 22.220.200.1
broadcast 22.220.200.255
allow-hotplug eth1
iface eth1 inet static
address 11.11.11.4
netmask 255.255.0.0
#broadcast 11.11.255.255
#gateway 11.11.11.1
post-up ip route add 11.11.0.0/16 dev eth1 src 11.11.11.4 table dvu
post-up ip route add default via 11.11.11.1 dev eth1 table dvu
post-up ip rule add from 11.11.11.4/32 table dvu
post-up ip rule add to 11.11.11.4/32 table dvu
allow-hotplug eth2
iface eth2 inet static
address 22.22.22.4
netmask 255.255.0.0
#broadcast 22.22.255.255
#gateway 22.22.22.1
post-up ip route add 22.22.0.0/16 dev eth2 src 22.22.22.4 table dvv
post-up ip route add default via 22.22.22.1 dev eth2 table dvv
post-up ip rule add from 22.22.22.4/32 table dvv
post-up ip rule add to 22.22.22.4/32 table dvv
allow-hotplug eth4
iface eth4 inet static
address 33.33.33.4
netmask 255.255.0.0
#broadcast 33.33.255.255
#gateway 33.33.33.1
post-up ip route add 33.33.0.0/16 dev eth4 src 33.33.33.4 table dva
post-up ip route add default via 33.33.33.1 dev eth4 table dva
post-up ip rule add from 33.33.33.4/32 table dva
post-up ip rule add to 33.33.33.4/32 table dva
######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ##########
save and close and return to node 1
17d ) check out if all is ok >>>
root@devu1:~# cat /etc/iproute2/rt_tables && parallel-ssh -i -h ~/.dd cat /etc/iproute2/rt_tables
root@devu1:~# ls -lsa /etc/iproute2/rt_tables && parallel-ssh -i -h ~/.dd ls -lsa /etc/iproute2/rt_tables
root@devu1:~# cat /etc/network/interfaces && parallel-ssh -i -h ~/.dd cat /etc/network/interfaces
root@devu1:~# ls -lsa /etc/network/interfaces && parallel-ssh -i -h ~/.dd ls -lsa /etc/network/interfaces
17e) add the routes and rules to the kernel routing tables (rt_tables) on all nodes (node1 // node2 // node3)
root@devu1:~# ip route add 11.11.0.0/16 dev eth1 src 11.11.11.2 table dvu
root@devu1:~# ip route add default via 11.11.11.1 dev eth1 table dvu
root@devu1:~# ip rule add from 11.11.11.2/32 table dvu
root@devu1:~# ip rule add to 11.11.11.2/32 table dvu
root@devu1:~# ip route add 22.22.0.0/16 dev eth2 src 22.22.22.2 table dvv
root@devu1:~# ip route add default via 22.22.22.1 dev eth2 table dvv
root@devu1:~# ip rule add from 22.22.22.2/32 table dvv
root@devu1:~# ip rule add to 22.22.22.2/32 table dvv
root@devu1:~# ip route add 33.33.0.0/16 dev eth4 src 33.33.33.2 table dva
root@devu1:~# ip route add default via 33.33.33.1 dev eth4 table dva
root@devu1:~# ip rule add from 33.33.33.2/32 table dva
root@devu1:~# ip rule add to 33.33.33.2/32 table dva
root@devu1:~#
17f) check out in node1 if all is ok >>
root@devu1:~# ip route list table dvu
default via 11.11.11.1 dev eth1
11.11.0.0/16 dev eth1 scope link src 11.11.11.2
root@devu1:~# ip route list table dvv
default via 22.22.22.1 dev eth2
22.22.0.0/16 dev eth2 scope link src 22.22.22.2
root@devu1:~# ip route list table dva
default via 33.33.33.1 dev eth4
33.33.0.0/16 dev eth4 scope link src 33.33.33.2
root@devu1:~# ip rule show
0: from all lookup local
32760: from all to 33.33.33.2 lookup dva
32761: from 33.33.33.2 lookup dva
32762: from all to 22.22.22.2 lookup dvv
32763: from 22.22.22.2 lookup dvv
32764: from all to 11.11.11.2 lookup dvu
32765: from 11.11.11.2 lookup dvu
32766: from all lookup main
32767: from all lookup default
root@devu1:~#
17g) !!! IMPORTANT !!! repeat step 17e on node2 and node3 !!! pay attention to each different IP ADDRESS
>>>>> in node2
root@devu2:~# ip route add 11.11.0.0/16 dev eth1 src 11.11.11.3 table dvu
root@devu2:~# ip route add default via 11.11.11.1 dev eth1 table dvu
root@devu2:~# ip rule add from 11.11.11.3/32 table dvu
root@devu2:~# ip rule add to 11.11.11.3/32 table dvu
root@devu2:~# ip route add 22.22.0.0/16 dev eth2 src 22.22.22.3 table dvv
root@devu2:~# ip route add default via 22.22.22.1 dev eth2 table dvv
root@devu2:~# ip rule add from 22.22.22.3/32 table dvv
root@devu2:~# ip rule add to 22.22.22.3/32 table dvv
root@devu2:~# ip route add 33.33.0.0/16 dev eth4 src 33.33.33.3 table dva
root@devu2:~# ip route add default via 33.33.33.1 dev eth4 table dva
root@devu2:~# ip rule add from 33.33.33.3/32 table dva
root@devu2:~# ip rule add to 33.33.33.3/32 table dva
root@devu2:~#
>>>> in node 3
root@devu3:~# ip route add 11.11.0.0/16 dev eth1 src 11.11.11.4 table dvu
root@devu3:~# ip route add default via 11.11.11.1 dev eth1 table dvu
root@devu3:~# ip rule add from 11.11.11.4/32 table dvu
root@devu3:~# ip rule add to 11.11.11.4/32 table dvu
root@devu3:~# ip route add 22.22.0.0/16 dev eth2 src 22.22.22.4 table dvv
root@devu3:~# ip route add default via 22.22.22.1 dev eth2 table dvv
root@devu3:~# ip rule add from 22.22.22.4/32 table dvv
root@devu3:~# ip rule add to 22.22.22.4/32 table dvv
root@devu3:~# ip route add 33.33.0.0/16 dev eth4 src 33.33.33.4 table dva
root@devu3:~# ip route add default via 33.33.33.1 dev eth4 table dva
root@devu3:~# ip rule add from 33.33.33.4/32 table dva
root@devu3:~# ip rule add to 33.33.33.4/32 table dva
root@devu3:~#
17h) repeat CHECK OUT if all is ok, in node2 and node3 with those commands
:~# ip route list table dvu
:~# ip route list table dvv
:~# ip route list table dva
:~# ip rule show
######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ##########
18 ) reboot all your nodes
root@devu1:~# parallel-ssh -i -h ~/.dd reboot
[1] 19:14:20 [SUCCESS] root@devu3.devucluster.any:22
[2] 19:14:20 [SUCCESS] root@devu2.devucluster.any:22
root@devu1:~# reboot
Broadcast message from root@devu1 (pts/0) (Tue Dec 19 19:14:27 2023):
The system is going down for reboot NOW!
root@devu1:~#
19) login again into node1
ssh root@devu1.devucluster.any -p 22
20) now test ping between all nodes on each interface: from node1 to node2 and node3, from node2 to node1 and node3, from node3 to node1 and node2 (see the example below)
20a) from node1 to node2 and node3
20b) from node2 to node1 and node3
20c) from node3 to node1 and node2
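The ping commands themselves are not shown here; a minimal example from node1, pinging node2 on every network (node3 and the other directions follow the same pattern with their own addresses):
root@devu1:~# ping -c 3 22.220.200.71
root@devu1:~# ping -c 3 11.11.11.3
root@devu1:~# ping -c 3 22.22.22.3
root@devu1:~# ping -c 3 33.33.33.3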
21) disable the qemu-guest-agent service on all nodes, if you don't need it (installed by "error" in this tutorial)
root@devu1:~# rc-update del qemu-guest-agent && parallel-ssh -i -h ~/.dd rc-update del qemu-guest-agent
28a) create the distributed folder for your Devuan-Cluster (Gluster) on all Cluster Nodes
root@devu1:~# mkdir -p /gluster-shared && parallel-ssh -i -h ~/.dd mkdir -p /gluster-shared
[1] 17:39:02 [SUCCESS] root@devu2.devucluster.any:22
[2] 17:39:02 [SUCCESS] root@devu3.devucluster.any:22
root@devu1:~#
28b) Check out if all is ok
root@devu1:~# ls -lsa /gluster-shared && parallel-ssh -i -h ~/.dd ls -lsa /gluster-shared
total 8
4 drwxr-xr-x 2 root root 4096 Dec 24 17:39 .
4 drwxr-xr-x 23 root root 4096 Dec 24 17:39 ..
[1] 17:39:53 [SUCCESS] root@devu3.devucluster.any:22
total 8
4 drwxr-xr-x 2 root root 4096 Dec 24 17:38 .
4 drwxr-xr-x 23 root root 4096 Dec 24 17:38 ..
[2] 17:39:53 [SUCCESS] root@devu2.devucluster.any:22
total 8
4 drwxr-xr-x 2 root root 4096 Dec 24 17:39 .
4 drwxr-xr-x 23 root root 4096 Dec 24 17:39 ..
root@devu1:~#
28c) now create your shared GlusterFS volume for all your nodes.
!! Important: choose a name for your volume !!
root@devu1:~# gluster volume create devuan-gluster replica 3 transport tcp glus1.gluscluster.glu:/gluster-shared glus2.gluscluster.glu:/gluster-shared glus3.gluscluster.glu:/gluster-shared force
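Before the volume can be created, the three Gluster peers must be in the trusted storage pool, and after creation the volume must be started before you can mount it. If these steps are still missing in your setup, a minimal sketch from node1 (assuming glusterd is already running on all three nodes):
root@devu1:~# gluster peer probe glus2.gluscluster.glu
root@devu1:~# gluster peer probe glus3.gluscluster.glu
root@devu1:~# gluster peer status
root@devu1:~# gluster volume start devuan-gluster
root@devu1:~# gluster volume info devuan-gluster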
Now test the complete cluster with all services and features, and simulate a failure situation
71) power off and power on your Cluster Nodes to check in which state your Cluster Nodes come back.
72) first check all services and features.
73) now that the Cluster Nodes are on again >> log in to your node1
- ssh root@devu1.devucluster.any -p 22
- check status of your Devuan-Cluster
root@devu1:~# crm status
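- the crm status output later in this test lists two resources, IP-apache (ocf:heartbeat:IPaddr2) and apache-rsc (ocf:heartbeat:apache). Their creation is not shown in this section; a minimal crmsh sketch that would define resources with those names, assuming the Webmin virtual IP 22.220.200.73 with a /24 netmask, the default Apache config file, and no fencing hardware (so STONITH disabled), could look like this:
root@devu1:~# crm configure property stonith-enabled=false
root@devu1:~# crm configure primitive IP-apache ocf:heartbeat:IPaddr2 params ip=22.220.200.73 cidr_netmask=24 op monitor interval=30s
root@devu1:~# crm configure primitive apache-rsc ocf:heartbeat:apache params configfile=/etc/apache2/apache2.conf op monitor interval=60s
root@devu1:~# crm configure colocation apache-with-ip inf: apache-rsc IP-apache
root@devu1:~# crm configure order ip-before-apache Mandatory: IP-apache apache-rsc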
- check that all your services run on all Cluster Nodes with >> rc-status >> and >> netstat -tulpn
root@devu1:~# rc-status && parallel-ssh -i -h ~/.dd rc-status
- now with netstat -tulpn
root@devu1:~# netstat -tulpn && parallel-ssh -i -h ~/.dd netstat -tulpn
- are GlusterFS and WEBMIN running? Then at the moment all is ok.
continue to mount your shared GlusterFS file system on all Cluster Nodes (remember you can also mount this from WEBMIN; you can use the virtual IP for more precision: https://22.220.200.73:10000/)
- first check whether GlusterFS was already mounted on boot
root@devu1:~# df -h && parallel-ssh -i -h ~/.dd df -h
- if not, mount it on each Cluster Node (see the example below)
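A minimal mount example (assuming the mount point /mnt/devuan-gluster used in the df output below; any node of the trusted pool can be used as the mount server):
root@devu1:~# mkdir -p /mnt/devuan-gluster
root@devu1:~# mount -t glusterfs glus1.gluscluster.glu:/devuan-gluster /mnt/devuan-gluster
If you want the mount to come back on boot, an /etc/fstab line like this is one option:
glus1.gluscluster.glu:/devuan-gluster /mnt/devuan-gluster glusterfs defaults,_netdev 0 0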
- when you are finished with the GlusterFS mount, check if all is ok again
root@devu1:~# df -h && parallel-ssh -i -h ~/.dd df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 824K 1.6G 1% /run
/dev/mapper/devu1--vg-root 233G 36G 186G 16% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 48M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
tmpfs 1.6G 0 1.6G 0% /run/user/0
glus3.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
[1] 18:12:16 [SUCCESS] root@devu2.devucluster.any:22
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 736K 1.6G 1% /run
/dev/mapper/devu2--vg-root 227G 36G 180G 17% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 33M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
glus1.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
tmpfs 1.6G 0 1.6G 0% /run/user/0
[2] 18:12:16 [SUCCESS] root@devu3.devucluster.any:22
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 736K 1.6G 1% /run
/dev/mapper/devu3--vg-root 233G 36G 186G 16% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 33M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
glus2.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
tmpfs 1.6G 0 1.6G 0% /run/user/0
root@devu1:~#
- now start your Galera-Cluster inside your Devuan-Cluster
- first check grastate.dat on all nodes; in the output below you can see that safe_to_bootstrap is = 0 everywhere >> this means that if you decide to start the first Galera-Cluster node on node1, you must change this value to = 1 on node1
root@devu1:~# cat /var/lib/mysql/grastate.dat && parallel-ssh -i -h ~/.dd cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 30c2c7d8-a332-11ee-8915-7f283c524539
seqno: 10
safe_to_bootstrap: 0
[1] 18:14:23 [SUCCESS] root@devu3.devucluster.any:22
# GALERA saved state
version: 2.1
uuid: 30c2c7d8-a332-11ee-8915-7f283c524539
seqno: 9
safe_to_bootstrap: 0
[2] 18:14:23 [SUCCESS] root@devu2.devucluster.any:22
# GALERA saved state
version: 2.1
uuid: 30c2c7d8-a332-11ee-8915-7f283c524539
seqno: -1
safe_to_bootstrap: 0
root@devu1:~#
- now change this value safe_to_bootstrap: 0 >> toggle to = 1 >> safe_to_bootstrap: 1 >> in node1
root@devu1:~# nano /var/lib/mysql/grastate.dat
- save and close
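Instead of editing with nano, the same toggle can be done in one line (assuming the grastate.dat layout shown above):
root@devu1:~# sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat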
- now run mariadb on node1
root@devu1:~# rc-service mariadb bootstrap
Bootstrapping the cluster: mariadbdStarting MariaDB database server: mariadbd.
root@devu1:~#
- check if all is ok in node1
root@devu1:~# rc-service mariadb status
/usr/bin/mariadb-admin Ver 9.1 Distrib 10.11.4-MariaDB, for debian-linux-gnu on x86_64
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Server version 10.11.4-MariaDB-1~deb12u1
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /run/mysqld/mysqld.sock
Uptime: 2 min 34 sec
Threads: 5 Questions: 70 Slow queries: 0 Opens: 40 Open tables: 30 Queries per second avg: 0.454.
root@devu1:~# mysql -p -u root
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 42
Server version: 10.11.4-MariaDB-1~deb12u1 Debian 12
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 1 |
+--------------------+-------+
1 row in set (0.001 sec)
MariaDB [(none)]>
- now start the other Galera-Cluster nodes on node2 and node3, and all should be ok
root@devu1:~# parallel-ssh -i -h ~/.dd rc-service mariadb start
[1] 18:23:05 [SUCCESS] root@devu2.devucluster.any:22
Starting MariaDB database server: mariadbd . . ..
[2] 18:23:05 [SUCCESS] root@devu3.devucluster.any:22
Starting MariaDB database server: mariadbd . . ..
root@devu1:~#
- now check again to compare that all your Galera-Cluster nodes are running in your Devuan-Cluster
root@devu1:~# mysql -p -u root
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 43
Server version: 10.11.4-MariaDB-1~deb12u1 Debian 12
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
1 row in set (0.001 sec)
MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| devuan_galera_test |
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
5 rows in set (0.001 sec)
MariaDB [(none)]>
- CONGRATULATIONS, your Galera-Cluster and your shared GlusterFS file system are running clean and solid.
- now let your Devuan-Test VM run on node1 (use Virt-Manager on the admin device)
- open your Virt-Manager and connect to all your Cluster Nodes
- as I said at the beginning of the tutorial, the eth3 interface will be used for the virtual machines; change this on all nodes (see the sketch below)
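How the VM is attached to eth3 is not shown here; with a macvtap "direct" attachment in libvirt, the relevant interface section of the VM definition could look roughly like this (the MAC address is only an example):
<interface type='direct'>
  <mac address='52:54:00:12:34:56'/>
  <source dev='eth3' mode='bridge'/>
  <model type='virtio'/>
</interface>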
- now run the VM on node1 (because node1 will simulate a failure situation with a poweroff)
- now create a test file inside your Devuan-Test VM
- you can see your Devuan-Test VM is running on node1 = devu1; now force a shutdown of node1 and see what happens in your
Devuan-Cluster "remember your virtual IP and Apache are active on node1"; it will be a good example.
root@devu1:~# crm status
Status of pacemakerd: 'Pacemaker is running' (last updated 2023-12-26 18:52:20 +01:00)
Cluster Summary:
* Stack: corosync
* Current DC: devu1.devucluster.any (version 2.1.5-a3f44794f94) - partition with quorum
* Last updated: Tue Dec 26 18:52:21 2023
* Last change: Tue Dec 26 17:51:43 2023 by root via cibadmin on devu1.devucluster.any
* 3 nodes configured
* 2 resource instances configured
Node List:
* Online: [ devu1.devucluster.any devu2.devucluster.any devu3.devucluster.any ]
Full List of Resources:
* IP-apache (ocf:heartbeat:IPaddr2): Started devu1.devucluster.any
* apache-rsc (ocf:heartbeat:apache): Started devu1.devucluster.any
root@devu1:~# poweroff -f
- please log in to your node2 in another terminal tab and check your cluster status.
ssh root@devu2.devucluster.any -p 22
- now you can see your node1 is down, your virtual IP and Apache resource are running on node2, and your Devuan-Test VM is down too.
root@devu2:~# crm status
Status of pacemakerd: 'Pacemaker is running' (last updated 2023-12-26 18:55:38 +01:00)
Cluster Summary:
* Stack: corosync
* Current DC: devu2.devucluster.any (version 2.1.5-a3f44794f94) - partition with quorum
* Last updated: Tue Dec 26 18:55:38 2023
* Last change: Tue Dec 26 17:51:43 2023 by root via cibadmin on devu1.devucluster.any
* 3 nodes configured
* 2 resource instances configured
Node List:
* Online: [ devu2.devucluster.any devu3.devucluster.any ]
* OFFLINE: [ devu1.devucluster.any ]
Full List of Resources:
* IP-apache (ocf:heartbeat:IPaddr2): Started devu2.devucluster.any
* apache-rsc (ocf:heartbeat:apache): Started devu2.devucluster.any
root@devu2:~#
- see your Devuan-Test VM is down
- now check the GlusterFS status and the Galera-Cluster status
- GlusterFS first
root@devu2:~# df -h && parallel-ssh -i -h ~/.dd df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 748K 1.6G 1% /run
/dev/mapper/devu2--vg-root 227G 36G 180G 17% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 48M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
glus1.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
tmpfs 1.6G 0 1.6G 0% /run/user/0
[1] 18:59:10 [SUCCESS] root@devu3.devucluster.any:22
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 740K 1.6G 1% /run
/dev/mapper/devu3--vg-root 233G 36G 186G 16% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 33M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
glus2.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
tmpfs 1.6G 0 1.6G 0% /run/user/0
[2] 18:59:13 [FAILURE] root@devu1.devucluster.any:22 Exited with error code 255
Stderr: ssh: connect to host devu1.devucluster.any port 22: No route to host
root@devu2:~#
- now check Galera-Cluster Nodes status
root@devu2:~# mysql -p -u root
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 40
Server version: 10.11.4-MariaDB-1~deb12u1 Debian 12
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 2 |
+--------------------+-------+
1 row in set (0.001 sec)
MariaDB [(none)]>
- now you can see you have only 2 nodes running in the Galera-Cluster
- now start the replicated clone of your Devuan-Test VM on node2 and check the TEST file and its integrity (see the sketch below)
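The exact start procedure is manual and not shown here; assuming the VM's disk image and its XML definition were saved on the shared Gluster mount (the file names below are hypothetical), a minimal sketch on node2 could be:
root@devu2:~# virsh define /mnt/devuan-gluster/devuan-test.xml
root@devu2:~# virsh start Devuan-Test
root@devu2:~# virsh list --all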
- now you can see your Devuan-Test VM is running clean and solid without problems on the other node (node2), because node1 is gone.
And the integrity of the file is complete.
- now turn your node1 on again and see what happens!
- first check status Cluster
root@devu1:~# crm status
Status of pacemakerd: 'Pacemaker is running' (last updated 2023-12-26 19:16:31 +01:00)
Cluster Summary:
* Stack: corosync
* Current DC: devu2.devucluster.any (version 2.1.5-a3f44794f94) - partition with quorum
* Last updated: Tue Dec 26 19:16:32 2023
* Last change: Tue Dec 26 17:51:43 2023 by root via cibadmin on devu1.devucluster.any
* 3 nodes configured
* 2 resource instances configured
Node List:
* Online : [ devu1.devucluster.any devu2.devucluster.any devu3.devucluster.any ]
Full List of Resources:
* IP-apache (ocf:heartbeat:IPaddr2): Started devu1.devucluster.any
* apache-rsc (ocf:heartbeat:apache): Started devu1.devucluster.any
root@devu1:~#
- now you can see your virtual IP and Apache resource have returned to node1
- Check GlusterFS status
root@devu1:~# df -h && parallel-ssh -i -h ~/.dd df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 812K 1.6G 1% /run
/dev/mapper/devu1--vg-root 233G 36G 186G 16% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 33M 3.1G 2% /dev/shm
glus3.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
tmpfs 1.6G 0 1.6G 0% /run/user/0
[1] 19:20:06 [SUCCESS] root@devu3.devucluster.any:22
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 740K 1.6G 1% /run
/dev/mapper/devu3--vg-root 233G 36G 186G 16% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 33M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
glus2.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
tmpfs 1.6G 0 1.6G 0% /run/user/0
[2] 19:20:06 [SUCCESS] root@devu2.devucluster.any:22
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 792K 1.6G 1% /run
/dev/mapper/devu2--vg-root 227G 36G 180G 17% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 48M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
glus1.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
tmpfs 1.6G 0 1.6G 0% /run/user/0
root@devu1:~#
- Ooh! There is a surprise for myself as the author of this tutorial: on node1 the Gluster shared file system was mounted automatically on boot.
- now, because your mariadb service does not run on boot, you must reconnect your Galera-Cluster nodes.
- first check the status of safe_to_bootstrap on all Cluster Nodes:
root@devu1:~# cat /var/lib/mysql/grastate.dat && parallel-ssh -i -h ~/.dd cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 30c2c7d8-a332-11ee-8915-7f283c524539
seqno: -1
safe_to_bootstrap: 0
[1] 19:30:45 [SUCCESS] root@devu2.devucluster.any:22
# GALERA saved state
version: 2.1
uuid: 30c2c7d8-a332-11ee-8915-7f283c524539
seqno: -1
safe_to_bootstrap: 0
[2] 19:30:45 [SUCCESS] root@devu3.devucluster.any:22
# GALERA saved state
version: 2.1
uuid: 30c2c7d8-a332-11ee-8915-7f283c524539
seqno: -1
safe_to_bootstrap: 0
root@devu1:~#
- now the value on your node1 is = 0; that means: toggle the value to = 1 and start the mariadb service again only on node1, because on node2 and node3 it is already running
root@devu1:~# nano /var/lib/mysql/grastate.dat
GNU nano 7.2 /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 30c2c7d8-a332-11ee-8915-7f283c524539
seqno: -1
safe_to_bootstrap: 1
###############################################################################################################################*
- save and close, and start your mariadb service on node1
root@devu1:~# rc-service mariadb start
Starting MariaDB database server: mariadbd . . ..
root@devu1:~#
- check now the status of Galera-Cluster Nodes again
root@devu1:~# mysql -p -u root
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 40
Server version: 10.11.4-MariaDB-1~deb12u1 Debian 12
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
1 row in set (0.001 sec)
MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| devuan_galera_test |
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
5 rows in set (0.001 sec)
MariaDB [(none)]>
- now you can see all Services and Features, in your Devuan-Cluster are running again.
################################################################################################################################
Voilà! You have a Devuan-Cluster (everything is not perfect, but it runs).