-- ElroijaH - 28 Dec 2023

Devuan Cluster

Excuse my grammatical errors in English.

This guide is very long and not perfectly organized, but with calm and patience perhaps you can appreciate the intention of this work.

This guide is here only to try to inspire all members of the Devuan Community to make Devuan a little better every day, because my knowledge is limited.

This cluster is not aimed at cybersecurity; it is only an attempt at clustering with Devuan. It tries to be an HA cluster with a shared, replicated file system, inspired by a concept where ALL nodes are masters. This cluster is not perfect, but a few goals were reached:

a) MariaDB cluster with 3 nodes (Galera Cluster)

b) Gluster shared file system with 3 replicas

c) Support for QEMU/KVM virtual machines

d) Apache server as a cluster resource in the HA cluster, with a virtual IP (floating IP) for administration through the WEBMIN GUI server manager.

These goals were NOT reached:

e) unfortunately, HA virtual machines (QEMU/KVM) with live migration were not achieved

f) unfortunately, this kind of cluster is not automated; the system administrator must pay attention to many details and start or restart services and mount or unmount file systems after boot.

Some details:

g) with GlusterFS there is a way, in case one of the nodes fails, to use the replica of each virtual disk (the VMs) to turn on the same VM on another node almost immediately (you must do this manually, see the sketch after this list); I will show this in the next steps

h) this cluster runs with OpenRC; in my opinion (and experience) OpenRC is stable and solid, and when OpenRC does something, it does it well

i) no iptables, UFW, firewalld or SELinux (nothing related to security)
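For example, starting a VM on a surviving node could look roughly like this (a minimal sketch; the path and VM name are hypothetical and assume the qcow2 disk and an exported XML definition live on the replicated Gluster volume):

virsh define /mnt/devuan-gluster/vm1.xml   # register the VM definition on this node (hypothetical path)
virsh start vm1                            # boot it from the replicated qcow2 disk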

I am NOT an expert in administration, networking or development. Nothing in this guide should be taken as an inflexible truth; all of it can be changed. This guide is here only to try to inspire the Devuan Community, because my knowledge is limited.

Hardware

3 x SFF PC, 3 x low-profile 1 Gbit/s network card with 4 ports, 4 x 1 Gbit/s switch (min. 5 ports), 16 GB RAM per SFF, Intel Core Duo 3.00 GHz CPU, 250 GB SSD, many network cables. A router or firewall appliance with a local DNS to resolve names (OpenWrt, for example; OpenWrt could be installed on an SFF PC with a network card, let your creativity run). I will not talk about firewall hardware configuration here.

Operating System

Devuan Daedalus 5.0

Packages :

Corosync // Pacemaker // crmsh // pssh (Parallel SSH) // Webmin // Qemu/KVM //

Virt-Manager (only on the side of ADMIN “client” ) // Apache // Gluster // MariaDB (GALERA CLUSTER). //

Networking:

Everyone can decide how to do the networking configuration; for example, I did it this way:

5 interfaces for the different kinds of service traffic (maybe it is wrong), but I tried to keep the traffic clean and efficiently separated:

Eth0 = Administration (SSH) and Internet, 22.220.200.0/24

Eth1 = Corosync, 11.11.0.0/16

Eth2 = Gluster, 22.22.0.0/16

Eth3 = for virtual machines (macvtap recommended); it can be put on the same IP range as SSH and Internet, or in another network segment

Eth4 = Galera, 33.33.0.0/16


In your local DHCP server you must configure pc 1, pc 2 and pc 3 with static IP addresses bound to each different MAC address, and in your local DNS server you configure a domain (any name you want). I use devucluster.any for SSH administration and internet (apt install etc., or Lynx, a browser for your CLI "command line interface").

Then give each host a resolvable name, for example:

IP 22.220.200.70 host devu1 >> Domain > devucluster.any > host devu1
IP 22.220.200.71 host devu2 >> Domain > devucluster.any > host devu2
IP 22.220.200.72 host devu3 >> Domain > devucluster.any > host devu3
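For example, with a dnsmasq-based router (OpenWrt uses dnsmasq) the static leases and host records could look roughly like this (a sketch only; the MAC addresses are placeholders and OpenWrt normally configures this through UCI/LuCI instead of a plain dnsmasq.conf):

# static DHCP leases, one per node
dhcp-host=aa:11:bb:22:cc:01,devu1,22.220.200.70
dhcp-host=aa:11:bb:22:cc:02,devu2,22.220.200.71
dhcp-host=aa:11:bb:22:cc:03,devu3,22.220.200.72
# local domain and name resolution
domain=devucluster.any
address=/devu1.devucluster.any/22.220.200.70
address=/devu2.devucluster.any/22.220.200.71
address=/devu3.devucluster.any/22.220.200.72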

==============================================================================================================================================================

1) Log in to your node1 >>

ssh root@devu1.devucluster.any -p 22

2) Generate an SSH key pair on node1 >>

root@devu1:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:D6ftaBpL4DGK9PFycFPCN/xRz8iHfk/5x0+BS/LmuFU root@devu1
The key's randomart image is:
+---[RSA 3072]----+
| . |
| . . o = |
| o = . + + |
| + o o . . .|
| . o+o S..o + E |
|...o=+. * = =.o|
|. .o.oo . o = .=|
| o. o.o = .o|
| oo. .o.. .|
+----[SHA256]-----+
root@devu1:~#

3) send the public key from node1 to node2

root@devu1:~# ssh-copy-id root@devu2.devucluster.any
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'devu2.devucluster.any (22.220.200.71)' can't be established.
RSA key fingerprint is SHA256:Lk9F2848nHbgVQPuXe7Bs119LZrxKV3oOxXbE6SkbRM.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

root@devu2.devucluster.any's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@devu2.devucluster.any'"
and check to make sure that only the key(s) you wanted were added.


4) send pubkey from node1 to node3

root@devu1:~# ssh-copy-id root@devu3.devucluster.any
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'devu3.devucluster.any (22.220.200.72)' can't be established.
RSA key fingerprint is SHA256:Gb+x6CTRwRxYHot5bzYwGz+0Ug9m6C53s80wcniC0x4.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@devu3.devucluster.any's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@devu3.devucluster.any'"
and check to make sure that only the key(s) you wanted were added.

###### AND NOW THE CLI (COMMAND LINE INTERFACE) PARTY BEGINS #######

5) open 3 tabs in your terminal (if you want)


Tab 1° you are already logged in to node1 ( devu1.devucluster.any )

Tab 2° log in to node2: ssh root@devu2.devucluster.any -p 22

Tab 3° log in to node3: ssh root@devu3.devucluster.any -p 22

6) Repeat steps 2, 3 and 4 on the other nodes: generate a key pair and send the public key to the other two nodes, from node2 to node3 and node1, and from node3 to node1 and node2.

7) Create a file named .dd on all nodes (this is for #pssh# parallel-ssh). Please pay attention: its content is different on each node (you decide the name of this file, a very short name is better; a quick test with parallel-ssh follows after the three examples below).

## >>>> node 1 >>>>


cd ~/ && nano .dd

root@devu2.devucluster.any:22

root@devu3.devucluster.any:22

## >>> save and close

## >>>> node 2 >>>>

cd ~/ && nano .dd

root@devu1.devucluster.any:22

root@devu3.devucluster.any:22

## >>> save and close

## >>>> node 3 >>>>

cd ~/ && nano .dd

root@devu1.devucluster.any:22

root@devu2.devucluster.any:22

## >>> save and close
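A quick check that the .dd host files and the SSH keys work could be this (a sketch, run it on each node):

parallel-ssh -i -h ~/.dd uptime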


8) Now install all the packages your cluster needs, on each node (node1, node2, node3), and be patient…

!!!! Important !!!! before installing packages, edit /etc/apt/sources.list on all 3 nodes and comment out the cdrom line >> for example

#deb cdrom:[Devuan GNU/Linux 5.0.1 daedalus amd64 - server 20230914]/ daedalus contrib main non-free non-free-firmware
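and make sure the normal Devuan repositories are enabled; a typical Daedalus sources.list could look roughly like this (an assumption, adjust mirror and components to your needs):

deb http://deb.devuan.org/merged daedalus main contrib non-free non-free-firmware
deb http://deb.devuan.org/merged daedalus-updates main contrib non-free non-free-firmware
deb http://deb.devuan.org/merged daedalus-security main contrib non-free non-free-firmware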

8a) Install the packages on each node >>>

apt update && apt install corosync pacemaker pcs crmsh pssh mariadb-server mariadb-client glusterfs-server glusterfs-client apache2 grub-firmware-qemu ipxe-qemu libnss-libvirt libqcow-utils libqcow1 libvirt-clients libvirt-clients-qemu libvirt-daemon libvirt-daemon-config-network libvirt-daemon-config-nwfilter libvirt-daemon-driver-lxc libvirt-daemon-driver-qemu libvirt-daemon-driver-storage-gluster libvirt-daemon-driver-storage-iscsi-direct libvirt-daemon-driver-storage-rbd libvirt-daemon-driver-storage-zfs libvirt-daemon-driver-vbox libvirt-daemon-driver-xen libvirt-daemon-system libvirt-daemon-system-sysv libvirt-login-shell libvirt-sanlock libvirt-wireshark libvirt0 qemu-block-extra qemu-efi qemu-efi-aarch64 qemu-efi-arm qemu-guest-agent qemu-system qemu-system-arm qemu-system-common qemu-system-data qemu-system-gui qemu-system-mips qemu-system-misc qemu-system-ppc qemu-system-sparc qemu-system-x86 qemu-system-xen qemu-user qemu-user-binfmt qemu-utils -y


9) and now from node1, reboot all your nodes

root@devu1:~# parallel-ssh -i -h ~/.dd reboot
[1] 13:35:00 [SUCCESS] root@devu2.devucluster.any:22
[2] 13:35:00 [SUCCESS] root@devu3.devucluster.any:22
root@devu1:~# reboot

Broadcast message from root@devu1 (pts/0) (Tue Dec 19 13:35:10 2023):

The system is going down for reboot NOW!
root@devu1:~# Connection to devu1.devucluster.any closed by remote host.
Connection to devu1.devucluster.any closed.


10) log in again to your node1 as root

ssh root@devu1.devucluster.any -p 22

11) now create a new, edited /etc/hosts file on all your cluster nodes

root@devu1:~# ls -lsa /etc/hosts
4 -rw-r--r-- 1 root root 207 Dec 20 06:50 /etc/hosts

11a) first delete the old /etc/hosts file, then create a new /etc/hosts file

root@devu1:~# rm /etc/hosts

11b) edit your new /etc/hosts file

root@devu1:~# nano /etc/hosts

12) check if all is ok

root@devu1:~# ls -lsa /etc/hosts
4 -rw-r--r-- 1 root root 831 Dec 20 08:07 /etc/hosts

root@devu1:~#

>>> the new /etc/hosts file should contain the following (edit with your own names and IPs):

127.0.0.1 localhost
127.0.1.1 devu1.devucluster.any devu1

22.220.200.70 devu1.devucluster.any devu1
22.220.200.71 devu2.devucluster.any devu2
22.220.200.72 devu3.devucluster.any devu3

11.11.11.2 coro1.corocluster.cor coro1
11.11.11.3 coro2.corocluster.cor coro2
11.11.11.4 coro3.corocluster.cor coro3

22.22.22.2 glus1.gluscluster.glu glus1
22.22.22.3 glus2.gluscluster.glu glus2
22.22.22.4 glus3.gluscluster.glu glus3

33.33.33.2 gale1.galecluster.gal gale1
33.33.33.3 gale2.galecluster.gal gale2
33.33.33.4 gale3.galecluster.gal gale3


# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters


#######################################################################################################################################################################

save and close.

14) first delete the /etc/hosts file on node2 and node3, then send the new edited /etc/hosts file from node1 to node2 and node3 >>>>

root@devu1:~# parallel-ssh -i -h ~/.dd rm /etc/hosts
[1] 13:53:46 [SUCCESS] root@devu3.devucluster.any:22
[2] 13:53:46 [SUCCESS] root@devu2.devucluster.any:22
root@devu1:~# scp /etc/hosts root@devu2.devucluster.any:/etc/
hosts 100% 867 1.4MB/s 00:00
root@devu1:~# scp /etc/hosts root@devu3.devucluster.any:/etc/
hosts 100% 867 1.2MB/s 00:00
root@devu1:~#

15) check that all is ok! >>>

!!! pay attention !!! there are differences between node1, node2 and node3:

127.0.1.1 devu1.devucluster.any devu1

127.0.1.1 devu2.devucluster.any devu2

127.0.1.1 devu3.devucluster.any devu3

============================================================================================================================================================================

root@devu1:~# cat /etc/hosts && parallel-ssh -i -h ~/.dd cat /etc/hosts

127.0.0.1 localhost
127.0.1.1 devu1.devucluster.any devu1

22.220.200.70 devu1.devucluster.any devu1
22.220.200.71 devu2.devucluster.any devu2
22.220.200.72 devu3.devucluster.any devu3

11.11.11.2 coro1.corocluster.cor coro1
11.11.11.3 coro2.corocluster.cor coro2
11.11.11.4 coro3.corocluster.cor coro3

22.22.22.2 glus1.gluscluster.glu glus1
22.22.22.3 glus2.gluscluster.glu glus2
22.22.22.4 glus3.gluscluster.glu glus3

33.33.33.2 gale1.galecluster.gal gale1
33.33.33.3 gale2.galecluster.gal gale2
33.33.33.4 gale3.galecluster.gal gale3


# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

[1] 13:55:53 [SUCCESS] root@devu2.devucluster.any:22
127.0.0.1 localhost
127.0.1.1 devu2.devucluster.any devu2

22.220.200.70 devu1.devucluster.any devu1
22.220.200.71 devu2.devucluster.any devu2
22.220.200.72 devu3.devucluster.any devu3

11.11.11.2 coro1.corocluster.cor coro1
11.11.11.3 coro2.corocluster.cor coro2
11.11.11.4 coro3.corocluster.cor coro3

22.22.22.2 glus1.gluscluster.glu glus1
22.22.22.3 glus2.gluscluster.glu glus2
22.22.22.4 glus3.gluscluster.glu glus3

33.33.33.2 gale1.galecluster.gal gale1
33.33.33.3 gale2.galecluster.gal gale2
33.33.33.4 gale3.galecluster.gal gale3


# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

[2] 13:55:53 [SUCCESS] root@devu3.devucluster.any:22
127.0.0.1 localhost
127.0.1.1 devu3.devucluster.any devu3

22.220.200.70 devu1.devucluster.any devu1
22.220.200.71 devu2.devucluster.any devu2
22.220.200.72 devu3.devucluster.any devu3

11.11.11.2 coro1.corocluster.cor coro1
11.11.11.3 coro2.corocluster.cor coro2
11.11.11.4 coro3.corocluster.cor coro3

22.22.22.2 glus1.gluscluster.glu glus1
22.22.22.3 glus2.gluscluster.glu glus2
22.22.22.4 glus3.gluscluster.glu glus3

33.33.33.2 gale1.galecluster.gal gale1
33.33.33.3 gale2.galecluster.gal gale2
33.33.33.4 gale3.galecluster.gal gale3


# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

root@devu1:~#

15a) reboot all your nodes

15b) log in again to your node1 as root

ssh root@devu1.devucluster.any -p 22

16) You need administration on your Devuan cluster and internet at the same time, "if you decide it", so let us do the networking of your Devuan cluster now >>>

!!! Important: the MAC addresses shown in the outputs in this guide are fake !!!

check that all interfaces are up

16a) root@devu1:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether aa:11:bb:22:cc:dd brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether aa:11:bb:22:cc:dd brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether aa:11:bb:22:cc:dd brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether aa:11:bb:22:cc:dd brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether aa:11:bb:22:cc:dd brd ff:ff:ff:ff:ff:ff
root@devu1:~#

16a) delete the old files, then create and edit the new ones >>>


16b) root@devu1:~# rm /etc/network/interfaces && parallel-ssh -i -h ~/.dd rm /etc/network/interfaces
[1] 17:15:52 [SUCCESS] root@devu2.devucluster.any:22
[2] 17:15:52 [SUCCESS] root@devu3.devucluster.any:22
root@devu1:~# nano /etc/network/interfaces

root@devu1:~# parallel-ssh -i -h ~/.dd rm /etc/iproute2/rt_tables
[1] 18:38:54 [SUCCESS] root@devu3.devucluster.any:22
[2] 18:38:54 [SUCCESS] root@devu2.devucluster.any:22
root@devu1:~#


###########################################################################################################################################################################


16c) now edit your /etc/network/interfaces file >>

nano /etc/network/interfaces

######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ########## #############################

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
    address 22.220.200.70
    netmask 255.255.255.0
    gateway 22.220.200.1
    broadcast 22.220.200.255

allow-hotplug eth1
iface eth1 inet static
    address 11.11.11.2
    netmask 255.255.0.0
    #broadcast 11.11.255.255
    #gateway 11.11.11.1
    post-up ip route add 11.11.0.0/16 dev eth1 src 11.11.11.2 table dvu
    post-up ip route add default via 11.11.11.1 dev eth1 table dvu
    post-up ip rule add from 11.11.11.2/32 table dvu
    post-up ip rule add to 11.11.11.2/32 table dvu

allow-hotplug eth2
iface eth2 inet static
    address 22.22.22.2
    netmask 255.255.0.0
    #broadcast 22.22.255.255
    #gateway 22.22.22.1
    post-up ip route add 22.22.0.0/16 dev eth2 src 22.22.22.2 table dvv
    post-up ip route add default via 22.22.22.1 dev eth2 table dvv
    post-up ip rule add from 22.22.22.2/32 table dvv
    post-up ip rule add to 22.22.22.2/32 table dvv

allow-hotplug eth4
iface eth4 inet static
    address 33.33.33.2
    netmask 255.255.0.0
    #broadcast 33.33.255.255
    #gateway 33.33.33.1
    post-up ip route add 33.33.0.0/16 dev eth4 src 33.33.33.2 table dva
    post-up ip route add default via 33.33.33.1 dev eth4 table dva
    post-up ip rule add from 33.33.33.2/32 table dva
    post-up ip rule add to 33.33.33.2/32 table dva

######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ##########

save and close


16c) now edit your /etc/iproute2/rt_tables file >>>> !!! remember, you are still logged in on node1 !!!
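A minimal sketch of /etc/iproute2/rt_tables that defines the three custom tables (dvu, dvv, dva) used by the post-up rules above (the table numbers are an assumption; any unused ID from 1 to 252 works):

#
# reserved values
#
255     local
254     main
253     default
0       unspec
#
# local custom tables
#
100     dvu
101     dvv
102     dva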



16d) now send your /etc/iproute2/rt_tables file and your /etc/network/interfaces to node2 and node3

!!! Pay attention !!! Important: after you send those files you must modify the IP addresses in /etc/network/interfaces on node2 and node3 for each interface >> eth0 eth1 eth2 eth4 >> remember, eth3 will be used for the VMs in the next steps

root@devu1:~# scp /etc/iproute2/rt_tables root@devu2.devucluster.any:/etc/iproute2
rt_tables 100% 109 160.1KB/s 00:00
root@devu1:~# scp /etc/iproute2/rt_tables root@devu3.devucluster.any:/etc/iproute2
rt_tables 100% 109 140.6KB/s 00:00
root@devu1:~# scp /etc/network/interfaces root@devu2.devucluster.any:/etc/network
interfaces 100% 1635 2.2MB/s 00:00
root@devu1:~# scp /etc/network/interfaces root@devu3.devucluster.any:/etc/network
interfaces 100% 1635 2.2MB/s 00:00
root@devu1:~#


17) now check that all is ok; first >> log in to node2

root@devu1:~# ssh devu2.devucluster.any
Linux devu2 6.1.0-16-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.67-1 (2023-12-12) x86_64

The programs included with the Devuan GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Devuan GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Dec 19 18:56:26 2023 from 22.220.200.70
root@devu2:~#

17a) Edit /etc/network/interfaces and change the IP address

######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ##########

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
    address 22.220.200.71
    netmask 255.255.255.0
    gateway 22.220.200.1
    broadcast 22.220.200.255

allow-hotplug eth1
iface eth1 inet static
    address 11.11.11.3
    netmask 255.255.0.0
    #broadcast 11.11.255.255
    #gateway 11.11.11.1
    post-up ip route add 11.11.0.0/16 dev eth1 src 11.11.11.3 table dvu
    post-up ip route add default via 11.11.11.1 dev eth1 table dvu
    post-up ip rule add from 11.11.11.3/32 table dvu
    post-up ip rule add to 11.11.11.3/32 table dvu

allow-hotplug eth2
iface eth2 inet static
    address 22.22.22.3
    netmask 255.255.0.0
    #broadcast 22.22.255.255
    #gateway 22.22.22.1
    post-up ip route add 22.22.0.0/16 dev eth2 src 22.22.22.3 table dvv
    post-up ip route add default via 22.22.22.1 dev eth2 table dvv
    post-up ip rule add from 22.22.22.3/32 table dvv
    post-up ip rule add to 22.22.22.3/32 table dvv

allow-hotplug eth4
iface eth4 inet static
    address 33.33.33.3
    netmask 255.255.0.0
    #broadcast 33.33.255.255
    #gateway 33.33.33.1
    post-up ip route add 33.33.0.0/16 dev eth4 src 33.33.33.3 table dva
    post-up ip route add default via 33.33.33.1 dev eth4 table dva
    post-up ip rule add from 33.33.33.3/32 table dva
    post-up ip rule add to 33.33.33.3/32 table dva

######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ##########

save and close and return to node1

root@devu2:~# exit

17b) now check that all is ok >>> log in to node3

root@devu1:~# ssh devu3.devucluster.any
Linux devu3 6.1.0-16-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.67-1 (2023-12-12) x86_64

The programs included with the Devuan GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Devuan GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Dec 19 11:35:38 2023 from 22.220.200.2
root@devu3:~#


17c) Edit /etc/network/interfaces and change the IP address

######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ##########

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
    address 22.220.200.72
    netmask 255.255.255.0
    gateway 22.220.200.1
    broadcast 22.220.200.255

allow-hotplug eth1
iface eth1 inet static
    address 11.11.11.4
    netmask 255.255.0.0
    #broadcast 11.11.255.255
    #gateway 11.11.11.1
    post-up ip route add 11.11.0.0/16 dev eth1 src 11.11.11.4 table dvu
    post-up ip route add default via 11.11.11.1 dev eth1 table dvu
    post-up ip rule add from 11.11.11.4/32 table dvu
    post-up ip rule add to 11.11.11.4/32 table dvu

allow-hotplug eth2
iface eth2 inet static
    address 22.22.22.4
    netmask 255.255.0.0
    #broadcast 22.22.255.255
    #gateway 22.22.22.1
    post-up ip route add 22.22.0.0/16 dev eth2 src 22.22.22.4 table dvv
    post-up ip route add default via 22.22.22.1 dev eth2 table dvv
    post-up ip rule add from 22.22.22.4/32 table dvv
    post-up ip rule add to 22.22.22.4/32 table dvv

allow-hotplug eth4
iface eth4 inet static
    address 33.33.33.4
    netmask 255.255.0.0
    #broadcast 33.33.255.255
    #gateway 33.33.33.1
    post-up ip route add 33.33.0.0/16 dev eth4 src 33.33.33.4 table dva
    post-up ip route add default via 33.33.33.1 dev eth4 table dva
    post-up ip rule add from 33.33.33.4/32 table dva
    post-up ip rule add to 33.33.33.4/32 table dva

######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ##########

save and close and return to node 1

17d ) check out if all is ok >>>

root@devu1:~# cat /etc/iproute2/rt_tables && parallel-ssh -i -h ~/.dd cat /etc/iproute2/rt_tables

root@devu1:~# ls -lsa /etc/iproute2/rt_tables && parallel-ssh -i -h ~/.dd ls -lsa /etc/iproute2/rt_tables

root@devu1:~# cat /etc/network/interfaces && parallel-ssh -i -h ~/.dd cat /etc/network/interfaces

root@devu1:~# ls -lsa /etc/network/interfaces && parallel-ssh -i -h ~/.dd ls -lsa /etc/network/interfaces

17e) add the routes and rules to the kernel routing tables (rt_tables) on all nodes (node1 // node2 // node3) so they are active without a reboot

root@devu1:~# ip route add 11.11.0.0/16 dev eth1 src 11.11.11.2 table dvu
root@devu1:~# ip route add default via 11.11.11.1 dev eth1 table dvu
root@devu1:~# ip rule add from 11.11.11.2/32 table dvu
root@devu1:~# ip rule add to 11.11.11.2/32 table dvu

root@devu1:~# ip route add 22.22.0.0/16 dev eth2 src 22.22.22.2 table dvv
root@devu1:~# ip route add default via 22.22.22.1 dev eth2 table dvv
root@devu1:~# ip rule add from 22.22.22.2/32 table dvv
root@devu1:~# ip rule add to 22.22.22.2/32 table dvv

root@devu1:~# ip route add 33.33.0.0/16 dev eth4 src 33.33.33.2 table dva
root@devu1:~# ip route add default via 33.33.33.1 dev eth4 table dva
root@devu1:~# ip rule add from 33.33.33.2/32 table dva
root@devu1:~# ip rule add to 33.33.33.2/32 table dva
root@devu1:~#


17f) check out in node1 if all is ok >>

root@devu1:~# ip route list table dvu
default via 11.11.11.1 dev eth1
11.11.0.0/16 dev eth1 scope link src 11.11.11.2
root@devu1:~# ip route list table dvv
default via 22.22.22.1 dev eth2
22.22.0.0/16 dev eth2 scope link src 22.22.22.2
root@devu1:~# ip route list table dva
default via 33.33.33.1 dev eth4
33.33.0.0/16 dev eth4 scope link src 33.33.33.2
root@devu1:~# ip rule show
0: from all lookup local
32760: from all to 33.33.33.2 lookup dva
32761: from 33.33.33.2 lookup dva
32762: from all to 22.22.22.2 lookup dvv
32763: from 22.22.22.2 lookup dvv
32764: from all to 11.11.11.2 lookup dvu
32765: from 11.11.11.2 lookup dvu
32766: from all lookup main
32767: from all lookup default
root@devu1:~#


17g) !!! IMPORTANT !!! repeat step 17e on node2 and node3 !!! pay attention to each different IP ADDRESS

>>>>> in node2

root@devu2:~# ip route add 11.11.0.0/16 dev eth1 src 11.11.11.3 table dvu
root@devu2:~# ip route add default via 11.11.11.1 dev eth1 table dvu
root@devu2:~# ip rule add from 11.11.11.3/32 table dvu
root@devu2:~# ip rule add to 11.11.11.3/32 table dvu

root@devu2:~# ip route add 22.22.0.0/16 dev eth2 src 22.22.22.3 table dvv
root@devu2:~# ip route add default via 22.22.22.1 dev eth2 table dvv
root@devu2:~# ip rule add from 22.22.22.3/32 table dvv
root@devu2:~# ip rule add to 22.22.22.3/32 table dvv

root@devu2:~# ip route add 33.33.0.0/16 dev eth4 src 33.33.33.3 table dva
root@devu2:~# ip route add default via 33.33.33.1 dev eth4 table dva
root@devu2:~# ip rule add from 33.33.33.3/32 table dva
root@devu2:~# ip rule add to 33.33.33.3/32 table dva
root@devu2:~#

>>>> in node 3

root@devu3:~# ip route add 11.11.0.0/16 dev eth1 src 11.11.11.4 table dvu
root@devu3:~# ip route add default via 11.11.11.1 dev eth1 table dvu
root@devu3:~# ip rule add from 11.11.11.4/32 table dvu
root@devu3:~# ip rule add to 11.11.11.4/32 table dvu

root@devu3:~# ip route add 22.22.0.0/16 dev eth2 src 22.22.22.4 table dvv
root@devu3:~# ip route add default via 22.22.22.1 dev eth2 table dvv
root@devu3:~# ip rule add from 22.22.22.4/32 table dvv
root@devu3:~# ip rule add to 22.22.22.4/32 table dvv

root@devu3:~# ip route add 33.33.0.0/16 dev eth4 src 33.33.33.4 table dva
root@devu3:~# ip route add default via 33.33.33.1 dev eth4 table dva
root@devu3:~# ip rule add from 33.33.33.4/32 table dva
root@devu3:~# ip rule add to 33.33.33.4/32 table dva
root@devu3:~#

17h) repeat the check on node2 and node3 with these commands (or with parallel-ssh, see below)

:~# ip route list table dvu

:~# ip route list table dvv

:~# ip route list table dva

:~# ip rule show
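Or run the same checks from node1 on the other nodes in one go (a sketch):

parallel-ssh -i -h ~/.dd "ip rule show"
parallel-ssh -i -h ~/.dd "ip route list table dvu"
parallel-ssh -i -h ~/.dd "ip route list table dvv"
parallel-ssh -i -h ~/.dd "ip route list table dva"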

######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ######## <<<<<<<< >>>>>>>> ######### = ##########

18 ) reboot all your nodes

root@devu1:~# parallel-ssh -i -h ~/.dd reboot
[1] 19:14:20 [SUCCESS] root@devu3.devucluster.any:22
[2] 19:14:20 [SUCCESS] root@devu2.devucluster.any:22
root@devu1:~# reboot

Broadcast message from root@devu1 (pts/0) (Tue Dec 19 19:14:27 2023):

The system is going down for reboot NOW!
root@devu1:~#

19) login again into node1

ssh root@devu1.devucluster.any -p 22

20) now test ping between all nodes, on every interface: from node1 to node2 and node3, from node2 to node1 and node3, and from node3 to node1 and node2 (an example follows below)

20a) from node1, ping node2 and node3

20b) from node2, ping node1 and node3

20c) from node3, ping node1 and node2
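For example, from node1 the tests could look roughly like this (a sketch; repeat the same idea from node2 and node3 with the matching addresses):

# administration / internet network (eth0)
ping -c 3 devu2.devucluster.any
ping -c 3 devu3.devucluster.any
# corosync network (eth1)
ping -c 3 11.11.11.3
ping -c 3 11.11.11.4
# gluster network (eth2)
ping -c 3 22.22.22.3
ping -c 3 22.22.22.4
# galera network (eth4)
ping -c 3 33.33.33.3
ping -c 3 33.33.33.4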

21) disable the qemu-guest-agent service on all nodes if you don't need it (it was installed by mistake in this tutorial)

root@devu1:~# rc-update del qemu-guest-agent && parallel-ssh -i -h ~/.dd rc-update del qemu-guest-agent
* service qemu-guest-agent removed from runlevel default
[1] 17:10:07 [SUCCESS] root@devu3.devucluster.any:22
* service qemu-guest-agent removed from runlevel default
[2] 17:10:07 [SUCCESS] root@devu2.devucluster.any:22
* service qemu-guest-agent removed from runlevel default
root@devu1:~#

22) unfortunately, when glusterfs-server is installed no init script is generated automatically in /etc/init.d/, so you must create an init script for the glusterd service yourself.

22a) check on all nodes before you continue >>

root@devu1:~# rc-status && parallel-ssh -i -h ~/.dd rc-status


22b) create your glusterd init script from this example >>

:~# nano /etc/init.d/glusterd

################################################################################################################################

#! /bin/sh
#
### BEGIN INIT INFO
# Provides:          glusterd
# Required-Start:    $network $remote_fs $syslog
# Required-Stop:     $network $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: glusterd
# Description:       glusterd
### END INIT INFO

# Author: yourself

# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/usr/sbin:/usr/bin:/sbin:/bin
DESC="glusterd daemon"
NAME=glusterd
DAEMON=/usr/sbin/$NAME
OPTIONS=""
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME
PIDFILE=/var/run/glusterd.pid
RARUNDIR=/var/run/resource-agents

# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0

# Read configuration variable file if it is present
[ -r /etc/default/glusterd ] && . /etc/default/glusterd

# Make sure the Resource Agents run dir exists. Otherwise create it.
[ -d "$RARUNDIR" ] || mkdir -p $RARUNDIR

# Define LSB log_* functions.
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
. /lib/lsb/init-functions

#
# Function that starts the daemon/service
#
do_start()
{
    # Return
    #   0 if daemon has been started
    #   1 if daemon was already running
    #   2 if daemon could not be started
    start-stop-daemon --start --quiet --exec $DAEMON --test > /dev/null \
        || return 1
    start-stop-daemon --start --quiet --exec $DAEMON -- $OPTIONS \
        || return 2
    # Add code here, if necessary, that waits for the process to be ready
    # to handle requests from services started subsequently which depend
    # on this one. As a last resort, sleep for some time.
    pidof glusterd > $PIDFILE
}

#
# Function that stops the daemon/service
#
do_stop()
{
    # Return
    #   0 if daemon has been stopped
    #   1 if daemon was already stopped
    #   2 if daemon could not be stopped
    #   other if a failure occurred
    start-stop-daemon --stop --quiet --retry forever/QUIT/1 --pidfile $PIDFILE
    RETVAL="$?"
    [ "$RETVAL" = 2 ] && return 2
    # Many daemons don't delete their pidfiles when they exit.
    rm -f $PIDFILE
    return "$RETVAL"
}

case "$1" in
  start)
    log_daemon_msg "Starting $DESC" "$NAME"
    do_start
    case "$?" in
        0|1) log_end_msg 0 ;;
        2) log_end_msg 1 ;;
    esac
    ;;
  stop)
    log_daemon_msg "Stopping $DESC" "$NAME"
    do_stop
    case "$?" in
        0|1) log_end_msg 0 ;;
        2) log_end_msg 1 ;;
    esac
    ;;
  restart|force-reload)
    log_daemon_msg "Restarting $DESC" "$NAME"
    do_stop
    case "$?" in
      0|1)
        do_start
        case "$?" in
            0) log_end_msg 0 ;;
            1) log_end_msg 1 ;; # Old process is still running
            *) log_end_msg 1 ;; # Failed to start
        esac
        ;;
      *)
        # Failed to stop
        log_end_msg 1
        ;;
    esac
    ;;
  status|monitor)
    status_of_proc -p $PIDFILE $DAEMON $NAME && exit 0 || exit $?
    ;;
  *)
    echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload}" >&2
    exit 3
    ;;
esac

:

################################################################################################################################

save and close

22c) !!! pay attention !!! make this script ( /etc/init.d/glusterd ) executable


root@devu1:~# ls -lsa /etc/init.d/glusterd
4 -rw-r--r-- 1 root root 2782 Dec 21 17:28 /etc/init.d/glusterd
root@devu1:~# chmod 744 /etc/init.d/glusterd
root@devu1:~# ls -lsa /etc/init.d/glusterd
4 -rwxr--r-- 1 root root 2782 Dec 21 17:28 /etc/init.d/glusterd
root@devu1:~#

22d) send this file ( /etc/init.d/glusterd ) to node2 and node3

root@devu1:~# scp /etc/init.d/glusterd root@devu2.devucluster.any:/etc/init.d/
glusterd 100% 2782 3.5MB/s 00:00
root@devu1:~# scp /etc/init.d/glusterd root@devu3.devucluster.any:/etc/init.d/
glusterd 100% 2782 3.8MB/s 00:00
root@devu1:~#

22e) check that all is ok on all your nodes (is the script executable?)

root@devu1:~# ls -lsa /etc/init.d/glusterd && parallel-ssh -i -h ~/.dd ls -lsa /etc/init.d/glusterd
4 -rwxr--r-- 1 root root 2782 Dec 21 17:28 /etc/init.d/glusterd
[1] 17:36:11 [SUCCESS] root@devu2.devucluster.any:22
4 -rwxr--r-- 1 root root 2782 Dec 21 17:34 /etc/init.d/glusterd
[2] 17:36:11 [SUCCESS] root@devu3.devucluster.any:22
4 -rwxr--r-- 1 root root 2782 Dec 21 17:34 /etc/init.d/glusterd
root@devu1:~#

22f) add glusterd to the default runlevel on all your nodes >>>

root@devu1:~# rc-update add glusterd && parallel-ssh -i -h ~/.dd rc-update add glusterd
* service glusterd added to runlevel default
[1] 17:46:08 [SUCCESS] root@devu2.devucluster.any:22
* service glusterd added to runlevel default
[2] 17:46:08 [SUCCESS] root@devu3.devucluster.any:22
* service glusterd added to runlevel default
root@devu1:~#


22g) if all is ok, your glusterd init script should now start GlusterFS on all your nodes

23 ) reboot all your nodes

root@devu1:~# parallel-ssh -i -h ~/.dd reboot
[1] 17:38:30 [SUCCESS] root@devu2.devucluster.any:22
[2] 17:38:30 [SUCCESS] root@devu3.devucluster.any:22
root@devu1:~# reboot

Broadcast message from root@devu1 (pts/0) (Thu Dec 21 17:38:32 2023):

The system is going down for reboot NOW!
root@devu1:~#

24) check out >> with rc-status and netstat -tulpn for all your nodes

log in to your node1 again

ssh root@devu1.devucluster.any -p 22

root@devu1:~# rc-status && parallel-ssh -i -h ~/.dd rc-status


24a) now run netstat -tulpn on all your nodes !!! Important: the configuration of the Gluster shared file system comes in the next step !!!

root@devu1:~# netstat -tulpn && parallel-ssh -i -h ~/.dd netstat -tulpn


25) Cluster configuration (Corosync and Pacemaker)

25a) First, change the password for the hacluster user !! on all nodes !!. Second, check whether the file /var/lib/pcsd/known-hosts exists (if not, please create it with nano /var/lib/pcsd/known-hosts or touch /var/lib/pcsd/known-hosts and leave it empty) >> save and close.

-!!! Apply this command in all your nodes !!!

root@devu1:~# passwd hacluster
New password:
Retype new password:
passwd: password updated successfully

- Check whether this file is there; if not, continue and create it on all your nodes (/var/lib/pcsd/known-hosts)

root@devu1:~# ls -lsa /var/lib/pcsd/known-hosts
ls: cannot access ' /var/lib/pcsd/known-hosts': No such file or directory
root@devu1:~# parallel-ssh -i -h ~/.dd ls -lsa /var/lib/pcsd/known-hosts
[1] 15:45:19 [FAILURE] root@devu3.devucluster.any:22 Exited with error code 127
Stderr: bash: line 1: ls: command not found
[2] 15:45:19 [FAILURE] root@devu2.devucluster.any:22 Exited with error code 127
Stderr: bash: line 1: ls: command not found
root@devu1:~#

- if the file is not there you must create it

root@devu1:~# touch /var/lib/pcsd/known-hosts && parallel-ssh -i -h ~/.dd touch /var/lib/pcsd/known-hosts
[1] 15:42:40 [SUCCESS] root@devu2.devucluster.any:22
[2] 15:42:40 [SUCCESS] root@devu3.devucluster.any:22
root@devu1:~# ls -lsa /var/lib/pcsd/known-hosts && parallel-ssh -i -h ~/.dd ls -lsa /var/lib/pcsd/known-hosts
0 -rw-r--r-- 1 root root 0 Dec 24 15:42 /var/lib/pcsd/known-hosts
[1] 15:43:14 [SUCCESS] root@devu2.devucluster.any:22
0 -rw-r--r-- 1 root root 0 Dec 24 15:42 /var/lib/pcsd/known-hosts
[2] 15:43:14 [SUCCESS] root@devu3.devucluster.any:22
0 -rw-r--r-- 1 root root 0 Dec 24 15:42 /var/lib/pcsd/known-hosts
root@devu1:~#

25b) continue with the Devuan Cluster Configuration

root@devu1:~# pcs host auth devu1.devucluster.any devu2.devucluster.any devu3.devucluster.any


25c) It is a good idea to run cat /var/lib/pcsd/known-hosts on node1 to check the result


25d) make a copy of your corosync.conf file on all cluster nodes, then remove the original corosync.conf on all cluster nodes and create a new, empty corosync.conf file on all cluster nodes.

root@devu1:~# cp /etc/corosync/corosync.conf /etc/corosync/corosync.conf.old && parallel-ssh -i -h ~/.dd cp /etc/corosync/corosync.conf /etc/corosync/corosync.conf.old
[1] 16:03:07 [SUCCESS] root@devu2.devucluster.any:22
[2] 16:03:07 [SUCCESS] root@devu3.devucluster.any:22
root@devu1:~# rm /etc/corosync/corosync.conf && parallel-ssh -i -h ~/.dd rm /etc/corosync/corosync.conf
[1] 16:03:55 [SUCCESS] root@devu2.devucluster.any:22
[2] 16:03:55 [SUCCESS] root@devu3.devucluster.any:22
root@devu1:~# touch /etc/corosync/corosync.conf && parallel-ssh -i -h ~/.dd touch /etc/corosync/corosync.conf
[1] 16:04:07 [SUCCESS] root@devu3.devucluster.any:22
[2] 16:04:08 [SUCCESS] root@devu2.devucluster.any:22
root@devu1:~# ls -lsa /etc/corosync/corosync.conf && parallel-ssh -i -h ~/.dd ls -lsa /etc/corosync/corosync.conf
0 -rw-r--r-- 1 root root 0 Dec 24 16:04 /etc/corosync/corosync.conf
[1] 16:04:27 [SUCCESS] root@devu3.devucluster.any:22
0 -rw-r--r-- 1 root root 0 Dec 24 16:04 /etc/corosync/corosync.conf
[2] 16:04:27 [SUCCESS] root@devu2.devucluster.any:22
0 -rw-r--r-- 1 root root 0 Dec 24 16:04 /etc/corosync/corosync.conf

25e) check your configuration status with the CRM cluster manager on all your nodes

>>> in node1


root@devu1:~# crm configure show
node 1: node1
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=2.1.5-a3f44794f94 \
cluster-infrastructure=corosync \
cluster-name=debian

>>> in node2

root@devu2:~# crm configure show
node 1: node1
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=2.1.5-a3f44794f94 \
cluster-infrastructure=corosync \
cluster-name=debian

>>> in node3

root@devu3:~# crm configure show
node 1: node1
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=2.1.5-a3f44794f94 \
cluster-infrastructure=corosync \
cluster-name=debian

25f) Next, erase the original configuration and leave it empty, on all your nodes

- Apply these commands on all your nodes

>>> in node1

root@devu1:~# crm configure
crm(live/devu1)configure# erase
crm(live/devu1)configure# show
node 1: node1
crm(live/devu1)configure# commit
crm(live/devu1)configure# quit
bye
root@devu1:~#

>>> in node2

root@devu2:~# crm configure
crm(live/devu1)configure# erase
crm(live/devu1)configure# show
node 1: node1
crm(live/devu1)configure# commit
crm(live/devu1)configure# quit
bye
root@devu2:~#

>>> in node3

root@devu3:~# crm configure
crm(live/devu1)configure# erase
crm(live/devu1)configure# show
node 1: node1
crm(live/devu1)configure# commit
crm(live/devu1)configure# quit
bye
root@devu3:~#

25g) Check the change again, on all your nodes

>>> in node1

root@devu1:~# crm configure show
node 1: node1
root@devu1:~#

>>> in node2

root@devu2:~# crm configure show
node 1: node1
root@devu2:~#

>>> in node3

root@devu3:~# crm configure show
node 1: node1
root@devu3:~#

26) Set up your cluster !! Pay attention !! choose the name of your cluster

root@devu1:~# pcs cluster setup Devuan-Cluster devu1.devucluster.any devu2.devucluster.any devu3.devucluster.any --force


26a) Enable your Devuan-Cluster

root@devu1:~# pcs cluster enable --all


26b) !! Important !! change the ring0_addr entries in your corosync.conf from the host names to the Corosync IP addresses, see the example:

from devu1.devucluster.any to 11.11.11.2

from devu2.devucluster.any to 11.11.11.3

from devu3.devucluster.any to 11.11.11.4

The nodelist generated by pcs looks like this:

nodelist {
node {
ring0_addr: devu1.devucluster.any
name: devu1.devucluster.any
nodeid: 1
}

node {
ring0_addr: devu2.devucluster.any
name: devu2.devucluster.any
nodeid: 2
}

node {
ring0_addr: devu3.devucluster.any
name: devu3.devucluster.any
nodeid: 3
}
}
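After the edit, the nodelist could look roughly like this (a sketch based on the mapping above; only ring0_addr changes):

nodelist {
    node {
        ring0_addr: 11.11.11.2
        name: devu1.devucluster.any
        nodeid: 1
    }

    node {
        ring0_addr: 11.11.11.3
        name: devu2.devucluster.any
        nodeid: 2
    }

    node {
        ring0_addr: 11.11.11.4
        name: devu3.devucluster.any
        nodeid: 3
    }
}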

26c) it should look like this; first on node1

root@devu1:~# cat /etc/corosync/corosync.conf


26d) remove the /etc/corosync/corosync.conf file on node2 and node3, then send your new /etc/corosync/corosync.conf from node1 to node2 and node3

root@devu1:~# parallel-ssh -i -h ~/.dd rm /etc/corosync/corosync.conf
[1] 16:40:21 [SUCCESS] root@devu3.devucluster.any:22
[2] 16:40:21 [SUCCESS] root@devu2.devucluster.any:22
root@devu1:~# scp /etc/corosync/corosync.conf root@devu2.devucluster.any:/etc/corosync
corosync.conf 100% 661 978.2KB/s 00:00
root@devu1:~# scp /etc/corosync/corosync.conf root@devu3.devucluster.any:/etc/corosync
corosync.conf 100% 661 958.5KB/s 00:00
root@devu1:~#

26e) Reboot all your nodes !!

root@devu1:~# parallel-ssh -i -h ~/.dd reboot
[1] 16:41:51 [SUCCESS] root@devu2.devucluster.any:22
[2] 16:41:51 [SUCCESS] root@devu3.devucluster.any:22
root@devu1:~# reboot

Broadcast message from root@devu1 (pts/0) (Sun Dec 24 16:41:54 2023):

The system is going down for reboot NOW!
root@devu1:~# Connection to devu1.devucluster.any closed by remote host.
Connection to devu1.devucluster.any closed.

27) log in to your node1 again:

ssh root@devu1.devucluster.any -p 22

27a) et voilà! The first part of your Devuan-Cluster is done; now continue to the next steps
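A quick way to check the cluster state at this point could be (a sketch, using the tools installed above):

pcs status
crm status
corosync-cfgtool -s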


28) Configuration of the shared file system (Gluster)

- First step: peer your nodes

root@devu1:~# gluster peer probe glus2.gluscluster.glu
peer probe: success
root@devu1:~# gluster peer probe glus3.gluscluster.glu
peer probe: success
root@devu1:~#


- Check Peer Status in all your Nodes

root@devu1:~# gluster peer status && parallel-ssh -i -h ~/.dd gluster peer status
Number of Peers: 2

Hostname: glus2.gluscluster.glu
Uuid: 0883688d-7da4-4bb6-8cc0-cbf4b3c7b0fb
State: Peer in Cluster (Connected)

Hostname: glus3.gluscluster.glu
Uuid: 333d983f-fff8-4b01-95a6-3a6592296542
State: Peer in Cluster (Connected)
[1] 17:28:44 [SUCCESS] root@devu2.devucluster.any:22
Number of Peers: 2

Hostname: glus1.gluscluster.glu
Uuid: c79180b6-6eaa-42be-8b61-5d8a443171c8
State: Peer in Cluster (Connected)

Hostname: glus3.gluscluster.glu
Uuid: 333d983f-fff8-4b01-95a6-3a6592296542
State: Peer in Cluster (Connected)
[2] 17:28:44 [SUCCESS] root@devu3.devucluster.any:22
Number of Peers: 2

Hostname: glus1.gluscluster.glu
Uuid: c79180b6-6eaa-42be-8b61-5d8a443171c8
State: Peer in Cluster (Connected)

Hostname: glus2.gluscluster.glu
Uuid: 0883688d-7da4-4bb6-8cc0-cbf4b3c7b0fb
State: Peer in Cluster (Connected)
root@devu1:~#


28a) Create a brick directory for your Devuan-Cluster (Gluster) on all cluster nodes


root@devu1:~# mkdir -p /gluster-shared && parallel-ssh -i -h ~/.dd mkdir -p /gluster-shared
[1] 17:39:02 [SUCCESS] root@devu2.devucluster.any:22
[2] 17:39:02 [SUCCESS] root@devu3.devucluster.any:22
root@devu1:~#

28b) Check out if all is ok

root@devu1:~# ls -lsa /gluster-shared && parallel-ssh -i -h ~/.dd ls -lsa /gluster-shared
total 8
4 drwxr-xr-x 2 root root 4096 Dec 24 17:39 .
4 drwxr-xr-x 23 root root 4096 Dec 24 17:39 ..
[1] 17:39:53 [SUCCESS] root@devu3.devucluster.any:22
total 8
4 drwxr-xr-x 2 root root 4096 Dec 24 17:38 .
4 drwxr-xr-x 23 root root 4096 Dec 24 17:38 ..
[2] 17:39:53 [SUCCESS] root@devu2.devucluster.any:22
total 8
4 drwxr-xr-x 2 root root 4096 Dec 24 17:39 .
4 drwxr-xr-x 23 root root 4096 Dec 24 17:39 ..
root@devu1:~#


28c) Now create your shared GlusterFS volume for all your nodes. !! Important: choose a name for your volume !!

root@devu1:~# gluster volume create devuan-gluster replica 3 transport tcp glus1.gluscluster.glu:/gluster-shared glus2.gluscluster.glu:/gluster-shared glus3.gluscluster.glu:/gluster-shared force

28d) Start your Gluster Volume ( devuan-gluster )

root@devu1:~# gluster volume start devuan-gluster

28e) Check if all is ok

root@devu1:~# gluster volume info devuan-gluster && parallel-ssh -i -h ~/.dd gluster volume info devuan-gluster


29) Create a mount folder for your Gluster shared volume on all cluster nodes (you decide where it will be mounted)

root@devu1:~# mkdir -p /mnt/devuan-gluster && parallel-ssh -i -h ~/.dd mkdir -p /mnt/devuan-gluster
[1] 18:13:45 [SUCCESS] root@devu3.devucluster.any:22
[2] 18:13:45 [SUCCESS] root@devu2.devucluster.any:22
root@devu1:~#

29a) Check out if all is ok

root@devu1:~# ls -lsa /mnt/devuan-gluster && parallel-ssh -i -h ~/.dd ls -lsa /mnt/devuan-gluster
total 8
4 drwxr-xr-x 2 root root 4096 Dec 24 18:13 .
4 drwxr-xr-x 3 root root 4096 Dec 24 18:13 ..
[1] 18:14:42 [SUCCESS] root@devu2.devucluster.any:22
total 8
4 drwxr-xr-x 2 root root 4096 Dec 24 18:13 .
4 drwxr-xr-x 3 root root 4096 Dec 24 18:13 ..
[2] 18:14:42 [SUCCESS] root@devu3.devucluster.any:22
total 8
4 drwxr-xr-x 2 root root 4096 Dec 24 18:13 .
4 drwxr-xr-x 3 root root 4096 Dec 24 18:13 ..
root@devu1:~#


30) Mount your Gluster shared volume on all your nodes >>> !! Important: I decided to mount the Gluster shared volumes this way (maybe it is wrong) >>> node1 mounts the volume from node3, node3 mounts from node2, and node2 mounts from node1

>>> in node1

root@devu1:~# mount -t glusterfs glus3.gluscluster.glu:/devuan-gluster /mnt/devuan-gluster

- check if all is ok

root@devu1:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 824K 1.6G 1% /run
/dev/mapper/devu1--vg-root 233G 3.9G 217G 2% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 48M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
tmpfs 1.6G 0 1.6G 0% /run/user/0
glus3.gluscluster.glu:/devuan-gluster 227G 6.2G 212G 3% /mnt/devuan-gluster
root@devu1:~#


- login from node1 to node2

root@devu1:~# ssh devu2.devucluster.any
Linux devu2 6.1.0-16-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.67-1 (2023-12-12) x86_64

The programs included with the Devuan GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Devuan GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Dec 24 17:30:15 2023 from 22.220.200.2
root@devu2:~#

- now mount Gluster Shared Volume

root@devu2:~# mount -t glusterfs glus1.gluscluster.glu:/devuan-gluster /mnt/devuan-gluster

- check it out

root@devu2:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 748K 1.6G 1% /run
/dev/mapper/devu2--vg-root 227G 3.9G 212G 2% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 33M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
tmpfs 1.6G 0 1.6G 0% /run/user/0
glus1.gluscluster.glu:/devuan-gluster 227G 6.2G 212G 3% /mnt/devuan-gluster
root@devu2:~#

root@devu2:~# exit
logout

- login from node1 to node3

root@devu1:~# ssh devu3.devucluster.any
Linux devu3 6.1.0-16-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.67-1 (2023-12-12) x86_64

The programs included with the Devuan GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Devuan GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Dec 24 17:31:18 2023 from 22.220.200.2
root@devu3:~#

- now mount Gluster Shared Volume

root@devu3:~# mount -t glusterfs glus2.gluscluster.glu:/devuan-gluster /mnt/devuan-gluster

- check it out

root@devu3:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 744K 1.6G 1% /run
/dev/mapper/devu3--vg-root 233G 3.9G 217G 2% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 33M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
tmpfs 1.6G 0 1.6G 0% /run/user/0
glus2.gluscluster.glu:/devuan-gluster 227G 6.2G 212G 3% /mnt/devuan-gluster
root@devu3:~#

30a) try to edit /etc/fstab so your Gluster shared file system is mounted at boot; for me it did not always work, maybe I edited something wrong (see the workaround sketch after the three entries below)

>> in node1

root@devu1:~# nano /etc/fstab

glus3.gluscluster.glu:/devuan-gluster /mnt/devuan-gluster glusterfs defaults,_netdev 0 0

>> in node2

root@devu2:~# nano /etc/fstab

glus1.gluscluster.glu:/devuan-gluster /mnt/devuan-gluster glusterfs defaults,_netdev 0 0

>> in node3

root@devu3:~# nano /etc/fstab

glus2.gluscluster.glu:/devuan-gluster /mnt/devuan-gluster glusterfs defaults,_netdev 0 0
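If the fstab entry does not mount the volume at boot (glusterd may not be ready yet when the file systems are mounted), a possible workaround under OpenRC is a small /etc/local.d script that mounts it late in the boot; for example on node1 (a sketch, not tested here, adjust the server name per node):

nano /etc/local.d/gluster-mount.start

#!/bin/sh
# mount the replicated Gluster volume once glusterd is up
sleep 10
mountpoint -q /mnt/devuan-gluster || mount -t glusterfs glus3.gluscluster.glu:/devuan-gluster /mnt/devuan-gluster

save and close, make it executable and make sure the local service is in the default runlevel:

chmod +x /etc/local.d/gluster-mount.start
rc-update add local default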

########################## >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<< ####################################

31) Check a little detail with netstat -tulpn on all cluster nodes >> glusterfsd should be running >>


32) Now create a test file and check that it is replicated on all your nodes.


root@devu1:~# touch /mnt/devuan-gluster/Super_Secret_Devuan_Files

- Check it

root@devu1:~# ls -lsa /mnt/devuan-gluster && parallel-ssh -i -h ~/.dd ls -lsa /mnt/devuan-gluster
total 8
4 drwxr-xr-x 4 root root 4096 Dec 24 19:19 .
4 drwxr-xr-x 3 root root 4096 Dec 24 18:13 ..
0 -rw-r--r-- 1 root root 0 Dec 24 19:19 Super_Secret_Devuan_Files
[1] 19:21:24 [SUCCESS] root@devu2.devucluster.any:22
total 8
4 drwxr-xr-x 4 root root 4096 Dec 24 19:19 .
4 drwxr-xr-x 3 root root 4096 Dec 24 18:13 ..
0 -rw-r--r-- 1 root root 0 Dec 24 19:19 Super_Secret_Devuan_Files
[2] 19:21:24 [SUCCESS] root@devu3.devucluster.any:22
total 8
4 drwxr-xr-x 4 root root 4096 Dec 24 19:19 .
4 drwxr-xr-x 3 root root 4096 Dec 24 18:13 ..
0 -rw-r--r-- 1 root root 0 Dec 24 19:19 Super_Secret_Devuan_Files
root@devu1:~#


33) Voilà! The shared file system with GlusterFS for your Devuan-Cluster is ready; now continue to the next steps

34) Configuration of the Galera Cluster nodes with Devuan.

- First, remove mariadb from the default runlevel on all nodes (because unfortunately there is a problem if the Galera Cluster tries to start at boot) >> rc-update del mariadb

root@devu1:~# rc-update del mariadb && parallel-ssh -i -h ~/.dd rc-update del mariadb
* service mariadb removed from runlevel default
[1] 11:44:31 [SUCCESS] root@devu2.devucluster.any:22
* service mariadb removed from runlevel default
[2] 11:44:31 [SUCCESS] root@devu3.devucluster.any:22
* service mariadb removed from runlevel default
root@devu1:~#

- now continue with the Galera Cluster configuration; apply this process on all your nodes, with this command >>> root@devu1:~# mysql_secure_installation

root@devu1:~# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
haven't set the root password yet, you should just press enter here.

Enter current password for root (enter for none): **********
OK, successfully used password, moving on...

Setting the root password or using the unix_socket ensures that nobody
can log into the MariaDB root user without the proper authorisation.

You already have your root account protected, so you can safely answer 'n'.

Switch to unix_socket authentication [Y/n] y
Enabled successfully!
Reloading privilege tables..
... Success!


You already have your root account protected, so you can safely answer 'n'.

Change the root password? [Y/n] y
New password: ***********
Re-enter new password: ************

Password updated successfully!
Reloading privilege tables..
... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
root@devu1:~#


-now stop mariadb in all Cluster Nodes

root@devu1:~# rc-service mariadb stop && parallel-ssh -i -h ~/.dd rc-service mariadb stop
Stopping MariaDB database server: mariadbd.
[1] 13:52:06 [SUCCESS] root@devu3.devucluster.any:22
Stopping MariaDB database server: mariadbd.
[2] 13:52:06 [SUCCESS] root@devu2.devucluster.any:22
Stopping MariaDB database server: mariadbd.
root@devu1:~#

- now change a few files for mariadb (Galera Cluster) on all cluster nodes: first make a copy of the original files 50-server.cnf and 60-galera.cnf, then delete those files and write new 50-server.cnf and 60-galera.cnf files.

root@devu1:~# cp /etc/mysql/mariadb.conf.d/50-server.cnf /etc/mysql/mariadb.conf.d/50-server.cnf.old && parallel-ssh -i -h ~/.dd cp /etc/mysql/mariadb.conf.d/50-server.cnf /etc/mysql/mariadb.conf.d/50-server.cnf.old
[1] 12:37:17 [SUCCESS] root@devu3.devucluster.any:22
[2] 12:37:17 [SUCCESS] root@devu2.devucluster.any:22
root@devu1:~# cp /etc/mysql/mariadb.conf.d/60-galera.cnf /etc/mysql/mariadb.conf.d/60-galera.cnf.old && parallel-ssh -i -h ~/.dd cp /etc/mysql/mariadb.conf.d/60-galera.cnf /etc/mysql/mariadb.conf.d/60-galera.cnf.old
[1] 12:42:24 [SUCCESS] root@devu2.devucluster.any:22
[2] 12:42:24 [SUCCESS] root@devu3.devucluster.any:22
root@devu1:~# rm /etc/mysql/mariadb.conf.d/50-server.cnf && parallel-ssh -i -h ~/.dd rm /etc/mysql/mariadb.conf.d/50-server.cnf
[1] 12:44:07 [SUCCESS] root@devu2.devucluster.any:22
[2] 12:44:07 [SUCCESS] root@devu3.devucluster.any:22
root@devu1:~# rm /etc/mysql/mariadb.conf.d/60-galera.cnf && parallel-ssh -i -h ~/.dd rm /etc/mysql/mariadb.conf.d/60-galera.cnf
[1] 12:44:55 [SUCCESS] root@devu2.devucluster.any:22
[2] 12:44:55 [SUCCESS] root@devu3.devucluster.any:22
root@devu1:~#


- Edit the new files 50-server.cnf and 60-galera.cnf.

>>> for root@devu1:~# nano /etc/mysql/mariadb.conf.d/50-server.cnf use this example:

################################################################################################################################

#

# These groups are read by MariaDB server.

# Use it for options that only the server (but not clients) should see

# this is read by the standalone daemon and embedded servers

[server]

# this is only for the mariadbd daemon

[mariadbd]

#

# * Basic Settings

#

user = mysql

#pid-file = /run/mysqld/mysqld.sock

#pid-file = /usr/bin/mariadb

basedir = /usr

datadir = /var/lib/mysql

#socket = /var/lib/mysql/mysql.pid

socket = /run/mysqld/mysqld.sock

#tmpdir = /tmp

# Broken reverse DNS slows down connections considerably and name resolve is

# safe to skip if there are no "host by domain name" access grants

#skip-name-resolve

# Instead of skip-networking the default is now to listen only on

# localhost which is more compatible and is not less secure.

#bind-address = 127.0.0.1

bind-address = 0.0.0.0

#

# * Fine Tuning

#

#key_buffer_size = 128M

#max_allowed_packet = 1G

#thread_stack = 192K

#thread_cache_size = 8

# This replaces the startup script and checks MyISAM tables if needed

# the first time they are touched

#myisam_recover_options = BACKUP

#max_connections = 100

#table_cache = 64

#

# * Logging and Replication

#

# Both location gets rotated by the cronjob.

# Be aware that this log type is a performance killer.

# Recommend only changing this at runtime for short testing periods if needed!

#general_log_file = /var/log/mysql/mysql.log

#general_log = 1

# Error logging goes via stdout/stderr, which on systemd systems goes to

# journald.

# Enable this if you want to have error logging into a separate file

#log_error = /var/log/mysql/error.log

# Enable the slow query log to see queries with especially long duration

#log_slow_query_file = /var/log/mysql/mariadb-slow.log

#log_slow_query_time = 10

#log_slow_verbosity = query_plan,explain

#log-queries-not-using-indexes

#log_slow_min_examined_row_limit = 1000

# The following can be used as easy to replay backup logs or for replication.

# note: if you are setting up a replica, see README.Debian about other

# settings you may need to change.

#server-id = 1

#log_bin = /var/log/mysql/mysql-bin.log

expire_logs_days = 10

#max_binlog_size = 100M

#

# * SSL/TLS

#

# For documentation, please read

# https://mariadb.com/kb/en/securing-connections-for-client-and-server/

#ssl-ca = /etc/mysql/cacert.pem

#ssl-cert = /etc/mysql/server-cert.pem

#ssl-key = /etc/mysql/server-key.pem

#require-secure-transport = on

#

# * Character sets

#

# MySQL /MariaDB default is Latin1, but in Debian we rather default to the full

# utf8 4-byte character set. See also client.cnf

character-set-server = utf8mb4

collation-server = utf8mb4_general_ci

#

# * InnoDB

#

default_storage_engine=InnoDB

innodb_autoinc_lock_mode=2

innodb_flush_log_at_trx_commit=0

innodb_buffer_pool_size=128M

binlog_format=ROW

log-error=/var/log/mysqld.log

# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.

# Read the manual for more InnoDB related options. There are many!

# Most important is to give InnoDB 80 % of the system RAM for buffer use:

# https://mariadb.com/kb/en/innodb-system-variables/#innodb_buffer_pool_size

#innodb_buffer_pool_size = 8G

# this is only for embedded server

[embedded]

# This group is only read by MariaDB servers, not by MySQL.

# If you use the same .cnf file for MySQL and MariaDB,

# you can put MariaDB -only options here

[mariadbd]

# This group is only read by MariaDB -11.2 servers.

# If you use the same .cnf file for MariaDB of different versions,

# use this group for options that older servers don't understand

[mariadb-11.2]

################################################################################################################################

!! Important: pay attention to the name of each node, the name of the Galera Cluster and the IP ADDRESS; the node name and node address must be changed on each node !!

Name of Galera Cluster = Devuan-Galera

node1 = gale1 // node2 = gale2 // node3 = gale3

for node1 = wsrep_node_address="22.22.22.2"

for node2 = wsrep_node_address="22.22.22.3"

for node3 = wsrep_node_address="22.22.22.4"

>>> for root@devu1:~# nano /etc/mysql/mariadb.conf.d/60-galera.cnf, this example:

################################################################################################################################


#
# * Galera-related settings
#
# See the examples of server wsrep.cnf files in /usr/share/mariadb
# and read more at https://mariadb.com/kb/en/galera-cluster/

[galera]
# Mandatory settings
wsrep_on = ON
wsrep_cluster_name = "Devuan-Galera"
wsrep_node_name='gale1'
wsrep_node_address="22.22.22.2"
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://22.22.22.2,22.22.22.3,22.22.22.4"
wsrep_provider_options="gcache.size=300M;gcache.page_size=300M"

#binlog_format = row
#default_storage_engine = InnoDB
#innodb_autoinc_lock_mode = 2

# Allow server to accept connections on all interfaces.
#bind-address = 0.0.0.0

# Optional settings
wsrep_slave_threads = 3
wsrep_sst_method=rsync
#innodb_flush_log_at_trx_commit = 0

################################################################################################################################

- now send those files ( 50-server.cnf and 60-galera.cnf ) from node1 to node2 and node3

root@devu1:~# scp /etc/mysql/mariadb.conf.d/50-server.cnf root@devu2.devucluster.any:/etc/mysql/mariadb.conf.d/
50-server.cnf 100% 3745 4.5MB/s 00:00
root@devu1:~# scp /etc/mysql/mariadb.conf.d/50-server.cnf root@devu3.devucluster.any:/etc/mysql/mariadb.conf.d/
50-server.cnf 100% 3745 4.4MB/s 00:00
root@devu1:~# scp /etc/mysql/mariadb.conf.d/60-galera.cnf root@devu2.devucluster.any:/etc/mysql/mariadb.conf.d/
60-galera.cnf 100% 783 1.0MB/s 00:00
root@devu1:~# scp /etc/mysql/mariadb.conf.d/60-galera.cnf root@devu3.devucluster.any:/etc/mysql/mariadb.conf.d/
60-galera.cnf 100% 783 1.0MB/s 00:00
root@devu1:~#
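- as an alternative, since pssh is already installed, you could push both files to all other nodes in one go with parallel-scp (a sketch; ~/.dd is the same host list used throughout this guide):

root@devu1:~# parallel-scp -h ~/.dd /etc/mysql/mariadb.conf.d/50-server.cnf /etc/mysql/mariadb.conf.d/
root@devu1:~# parallel-scp -h ~/.dd /etc/mysql/mariadb.conf.d/60-galera.cnf /etc/mysql/mariadb.conf.d/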

- now check if all is ok in all Cluster Nodes.

root@devu1:~# ls -lsa /etc/mysql/mariadb.conf.d/ && parallel-ssh -i -h ~/.dd ls -lsa /etc/mysql/mariadb.conf.d/

53

- continue checking that all is OK on all Cluster Nodes, now with cat /etc/mysql/mariadb.conf.d/…, for example:

root@devu1:~# parallel-ssh -i -h ~/.dd cat /etc/mysql/mariadb.conf.d/50-server.cnf

54

root@devu1:~# parallel-ssh -i -h ~/.dd cat /etc/mysql/mariadb.conf.d/60-galera.cnf

55

!! Important, don't forget: change the node name and the node IP ADDRESS in the 60-galera.cnf file on node2 and node3 !!
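- for example, on node2 the node-specific lines in 60-galera.cnf would look like this (a sketch based on the names and addresses listed above; node3 uses gale3 and 22.22.22.4, and wsrep_cluster_address stays the same on every node):

wsrep_node_name='gale2'
wsrep_node_address="22.22.22.3"
wsrep_cluster_address="gcomm://22.22.22.2,22.22.22.3,22.22.22.4"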

root@devu1:~# ssh devu2.devucluster.any
Linux devu2 6.1.0-16-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.67-1 (2023-12-12) x86_64

The programs included with the Devuan GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Devuan GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Dec 25 12:00:59 2023 from 22.220.200.2
root@devu2:~# nano /etc/mysql/mariadb.conf.d/60-galera.cnf

56

root@devu1:~# ssh devu3.devucluster.any
Linux devu3 6.1.0-16-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.67-1 (2023-12-12) x86_64

The programs included with the Devuan GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Devuan GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Dec 25 12:02:11 2023 from 22.220.200.2
root@devu3:~# nano /etc/mysql/mariadb.conf.d/60-galera.cnf

57

35) Start the Galera Cluster on your Devuan servers for the !! first time !!.

!! In the next steps you will create the file >> /var/log/mysqld.log >> in all Cluster Nodes, but not yet; it is good to see the verbose output on the console for this first start. !!

!! Please open a new tab in your terminal and log in again to your node1 = ssh root@devu1.devucluster.any -p 22

58

!! Important: starting the Galera Cluster for the first time behaves in a strange way that I cannot explain, due to my limited knowledge. You must first run >> mariadbd --wsrep-new-cluster >>, then wait approximately 40 seconds until it exits with an error output; that is expected. Next run >> rc-service mariadb bootstrap >>, but the output will say mariadb is already running, so stop the service with >> rc-service mariadb stop >> and then run the first command again >> mariadbd --wsrep-new-cluster. Now you can start mariadb on your node2 and node3 !!

!! PLEASE SEE THE SCREENSHOTS TO UNDERSTAND !!

- first command result

root@devu1:~# mariadbd --wsrep-new-cluster

59

- second command result

root@devu1:~# rc-service mariadb bootstrap

60

- third command result

root@devu1:~# rc-service mariadb stop

- and finally fourth command result

root@devu1:~# mariadbd --wsrep-new-cluster

61

36) now start your mariadb service on node2 and node3, and then check your Galera-Cluster

>> in node2

root@devu2:~# rc-service mariadb start

>> in node3

root@devu3:~# rc-service mariadb start

37) now check, in the other tab on your node1, whether all 3 nodes of the Galera-Cluster are running

root@devu1:~# mysql -p -u root

>>> inside mysql run this command >>> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';

62

38) create the file /var/log/mysqld.log in all Cluster Nodes and check that all is OK

root@devu1:~# touch /var/log/mysqld.log && parallel-ssh -i -h ~/.dd touch /var/log/mysqld.log
[1] 16:28:17 [SUCCESS] root@devu3.devucluster.any:22
[2] 16:28:17 [SUCCESS] root@devu2.devucluster.any:22
root@devu1:~# ls -lsa /var/log/mysqld.log && parallel-ssh -i -h ~/.dd ls -lsa /var/log/mysqld.log
0 -rw-r--r-- 1 root root 0 Dec 25 16:28 /var/log/mysqld.log
[1] 16:28:39 [SUCCESS] root@devu2.devucluster.any:22
0 -rw-r--r-- 1 root root 0 Dec 25 16:28 /var/log/mysqld.log
[2] 16:28:39 [SUCCESS] root@devu3.devucluster.any:22
0 -rw-r--r-- 1 root root 0 Dec 25 16:28 /var/log/mysqld.log
root@devu1:~#


39 ) now reboot all your nodes; remember that the Galera-Cluster will not start on boot.

root@devu1:~# parallel-ssh -i -h ~/.dd reboot
[1] 16:30:04 [SUCCESS] root@devu2.devucluster.any:22
[2] 16:30:04 [SUCCESS] root@devu3.devucluster.any:22
root@devu1:~# reboot

Broadcast message from root@devu1 (pts/1) (Mon Dec 25 16:30:06 2023):

The system is going down for reboot NOW!
root@devu1:~#


40 ) log in again to your node1

ssh root@devu1.devucluster.any -p 22

40a) start your Galera-Cluster again, but this time manually

- first check the file >> /var/lib/mysql/grastate.dat >> in all Cluster Nodes

root@devu1:~# cat /var/lib/mysql/grastate.dat && parallel-ssh -i -h ~/.dd cat /var/lib/mysql/grastate.dat

- compare the output: if you want to start the Galera-Cluster again from node1, that node must show >>> safe_to_bootstrap: 1 >>> in the file >>> /var/lib/mysql/grastate.dat

63
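- for reference, a grastate.dat that is safe to bootstrap from looks roughly like this (a sketch; the uuid and seqno will of course differ on your cluster):

# GALERA saved state
version: 2.1
uuid: <your-cluster-uuid>
seqno: <last committed transaction, or -1>
safe_to_bootstrap: 1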

40b) start Galera on node1 again, but now with this command >>> rc-service mariadb bootstrap

root@devu1:~# rc-service mariadb bootstrap
Bootstrapping the cluster: mariadbdStarting MariaDB database server: mariadbd.
root@devu1:~#


40c ) then in node2 and node3

>>> in node2

Linux devu2 6.1.0-16-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.67-1 (2023-12-12) x86_64

The programs included with the Devuan GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Devuan GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Dec 25 15:23:40 2023 from 22.220.200.2
root@devu2:~# rc-service mariadb start
Starting MariaDB database server: mariadbd . . ..
root@devu2:~#

>>> in node3

Linux devu3 6.1.0-16-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.67-1 (2023-12-12) x86_64

The programs included with the Devuan GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Devuan GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Dec 25 15:24:43 2023 from 22.220.200.2
root@devu3:~# rc-service mariadb start
Starting MariaDB database server: mariadbd . . ..
root@devu3:~#

40d) check in your node1 if all 3 nodes run in the Galera-Cluster

root@devu1:~# mysql -p -u root
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 40
Server version: 10.11.4-MariaDB-1~deb12u1 Debian 12

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
1 row in set (0.001 sec)

MariaDB [(none)]>

41) Voilà ! The Galera-Cluster in your Devuan-Cluster is ready. Now continue

42) if you want to check the replication in the Galera-Cluster, you can create a DATABASE and then check it on another node (a slightly more thorough test is sketched below).

>>> in node1 for example: MariaDB [(none)]> CREATE DATABASE devuan_galera_test;
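- if you want to go one step further, you can also create a table and insert a row on node1 before checking on node3 (a sketch; the table name and values are only examples):

MariaDB [(none)]> CREATE TABLE devuan_galera_test.nodes (id INT PRIMARY KEY, name VARCHAR(32));
MariaDB [(none)]> INSERT INTO devuan_galera_test.nodes VALUES (1, 'devu1');
-- then on node3:
MariaDB [(none)]> SELECT * FROM devuan_galera_test.nodes;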

- check it on node3, for example

root@devu1:~# ssh devu3.devucluster.any
Linux devu3 6.1.0-16-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.67-1 (2023-12-12) x86_64

The programs included with the Devuan GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Devuan GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Dec 25 16:41:52 2023 from 22.220.200.2
root@devu3:~# mysql -p -u root
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 40
Server version: 10.11.4-MariaDB-1~deb12u1 Debian 12

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SHOW DATABASES;

64

43) Add the resources ( APACHE WEB SERVER and a Virtual IP ) to the HA Devuan-Cluster and install WEBMIN. The address 22.220.200.73 is used as the Virtual IP, which means: do not use this IP ADDRESS for any other host in your local area network.

- check cluster status

root@devu1:~# crm status

- edit configuration

root@devu1:~# crm configure
crm(live/devu1)configure# property stonith-enabled=no
crm(live/devu1)configure# property no-quorum-policy=ignore
crm(live/devu1)configure# primitive IP-apache ocf:heartbeat:IPaddr2 \
> params ip="22.220.200.73" nic="eth0" cidr_netmask="24" \
> meta migration-threshold=2 \
> op monitor interval=15 timeout=50 on-fail=restart
crm(live/devu1)configure# primitive apache-rsc ocf:heartbeat:apache \
> meta migration-threshold=2 \
> op monitor interval=15 timeout=50 on-fail=restart
crm(live/devu1)configure# colocation lb-loc inf: IP-apache apache-rsc
crm(live/devu1)configure# order lb-ord inf: IP-apache apache-rsc
crm(live/devu1)configure# commit

crm(live/devu1)configure# quit

- after the commit you will see this output >>> WARNING: (unpack_config) warning: Blind faith: not fencing unseen nodes >>> but continue and create a new fencing shadow CIB

root@devu1:~# crm
crm(live/devu1)# cib new fencing
INFO: cib.new: fencing shadow CIB created

crm(live/devu1)# quit

- check if all is ok

root@devu1:~# crm configure
crm(live/devu1)configure# show
node 1: devu1.devucluster.any
node 2: devu2.devucluster.any
node 3: devu3.devucluster.any
primitive IP-apache IPaddr2 \
params ip=22.220.200.73 nic=eth0 cidr_netmask=24 \
meta migration-threshold=2 \
op monitor interval=15 timeout=50 on-fail=restart
primitive apache-rsc apache \
meta migration-threshold=2 \
op monitor interval=15 timeout=50 on-fail=restart
colocation lb-loc inf: IP-apache apache-rsc
order lb-ord Mandatory: IP-apache apache-rsc
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=2.1.5-a3f44794f94 \
cluster-infrastructure=corosync \
cluster-name=Devuan-Cluster \
stonith-enabled=no \
no-quorum-policy=ignore
crm(live/devu1)configure# quit

- check it with crm status

root@devu1:~# crm status
Status of pacemakerd: 'Pacemaker is running' (last updated 2023-12-25 17:42:47 +01:00)
Cluster Summary:
* Stack: corosync
* Current DC: devu2.devucluster.any (version 2.1.5-a3f44794f94) - partition with quorum
* Last updated: Mon Dec 25 17:42:47 2023
* Last change: Mon Dec 25 17:31:05 2023 by root via cibadmin on devu1.devucluster.any
* 3 nodes configured
* 2 resource instances configured

Node List:
* Online: [ devu1.devucluster.any devu2.devucluster.any devu3.devucluster.any ]

Full List of Resources:
* IP-apache (ocf:heartbeat:IPaddr2): Started devu3.devucluster.any
* apache-rsc (ocf:heartbeat:apache): Started devu3.devucluster.any

root@devu1:~#
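- to double-check which node currently holds the floating IP, you can also look at the interface directly on the node reported by crm status (a quick sketch):

root@devu3:~# ip -4 addr show dev eth0 | grep 22.220.200.73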

44) Install WEBMIN; you can visit >> https://webmin.com/download/

- Install curl in all Cluster Nodes >> apt install curl -y

- add the Webmin repositories; apply this process on all your Cluster Nodes

root@devu1:~# curl -o setup-repos.sh https://raw.githubusercontent.com/webmin/webmin/master/setup-repos.sh
sh setup-repos.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5268 100 5268 0 0 1016 0 0:00:05 0:00:05 --:--:-- 1250
Setup repository? (y/N) y
Downloading Webmin key ..
.. done
Installing Webmin key ..
.. done
Setting up Webmin repository ..
.. done
Cleaning repository metadata ..
.. done
Downloading repository metadata ..
.. done
Webmin package can now be installed using
apt-get install --install-recommends webmin command.
root@devu1:~#

- Now install WEBMIN in your node1

root@devu1:~# apt-get install --install-recommends webmin -y

>> in node2

root@devu2:~# apt-get install --install-recommends webmin -y

>> in node3

root@devu3:~# apt-get install --install-recommends webmin -y

45) it would be good to check your CRM STATUS again on node1, just to compare it in the next steps

46) reboot all your nodes, but remember: the Galera-Cluster does not start on boot and GlusterFS does not mount the Shared File System on boot.

root@devu1:~# parallel-ssh -i -h ~/.dd reboot
[1] 18:00:02 [SUCCESS] root@devu3.devucluster.any:22
[2] 18:00:02 [SUCCESS] root@devu2.devucluster.any:22
root@devu1:~# reboot

Broadcast message from root@devu1 (pts/0) (Mon Dec 25 18:00:04 2023):

The system is going down for reboot NOW!
root@devu1:~# Connection to devu1.devucluster.any closed by remote host.
Connection to devu1.devucluster.any closed.

47) Check now where your Virtual IP and your Apache resource started in the Devuan-Cluster

root@devu1:~# crm status
Status of pacemakerd: 'Pacemaker is running' (last updated 2023-12-25 18:02:12 +01:00)
Cluster Summary:
* Stack: corosync
* Current DC: devu2.devucluster.any (version 2.1.5-a3f44794f94) - partition with quorum
* Last updated: Mon Dec 25 18:02:12 2023
* Last change: Mon Dec 25 17:31:04 2023 by root via cibadmin on devu1.devucluster.any
* 3 nodes configured
* 2 resource instances configured

Node List:
* Online: [ devu1.devucluster.any devu2.devucluster.any devu3.devucluster.any ]

Full List of Resources:
* IP-apache (ocf:heartbeat:IPaddr2): Started devu2.devucluster.any
* apache-rsc (ocf:heartbeat:apache): Started devu2.devucluster.any

root@devu1:~#

48) Check with >> netstat -tulpn >> whether WEBMIN is running; if it is not running, add WEBMIN to the default runlevel.

root@devu1:~# netstat -tulpn && parallel-ssh -i -h ~/.dd netstat -tulpn

65
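- if you only want to see the Webmin listener (Webmin serves on TCP port 10000), you can filter the output on each node, for example:

root@devu1:~# netstat -tulpn | grep :10000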

49) add WEBMIN to the default runlevel


root@devu1:~# rc-update add webmin && parallel-ssh -i -h ~/.dd rc-update add webmin
* service webmin added to runlevel default
[1] 18:12:32 [SUCCESS] root@devu2.devucluster.any:22
* service webmin added to runlevel default
[2] 18:12:32 [SUCCESS] root@devu3.devucluster.any:22
* service webmin added to runlevel default
root@devu1:~#

50 ) I recommend rebooting all your Cluster Nodes again.



51) check again whether WEBMIN is running >> netstat -tulpn

root@devu1:~# netstat -tulpn && parallel-ssh -i -h ~/.dd netstat -tulpn

66

52 ) log in with the Virtual IP of the Devuan-Cluster to test whether it is running; use port 10000 >> example >> https://22.220.200.73:10000

67

53) Voilà ! The WEBMIN GUI MANAGER SERVER is running in your Devuan-Cluster. Now continue

68

54) Now test your HA Devuan-Cluster >>> and shut down the node where the Apache resource for WEBMIN is running

root@devu1:~# crm status
Status of pacemakerd: 'Pacemaker is running' (last updated 2023-12-25 18:26:18 +01:00)
Cluster Summary:
* Stack: corosync
* Current DC: devu3.devucluster.any (version 2.1.5-a3f44794f94) - partition with quorum
* Last updated: Mon Dec 25 18:26:19 2023
* Last change: Mon Dec 25 18:01:18 2023 by root via cibadmin on devu1.devucluster.any
* 3 nodes configured
* 2 resource instances configured

Node List:
* Online: [ devu1.devucluster.any devu2.devucluster.any devu3.devucluster.any ]

Full List of Resources:
* IP-apache (ocf:heartbeat:IPaddr2): Started devu3.devucluster.any
* apache-rsc (ocf:heartbeat:apache): Started devu3.devucluster.any

root@devu1:~#

- the Apache resource is running on node3; shut down node3 and check again on node1 with >> crm status
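- for example, you can power node3 off over SSH from node1 (a sketch; any clean shutdown of node3 works here):

root@devu1:~# ssh root@devu3.devucluster.any poweroff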

root@devu1:~# crm status
Status of pacemakerd: 'Pacemaker is running' (last updated 2023-12-25 18:28:44 +01:00)
Cluster Summary:
* Stack: corosync
* Current DC: devu2.devucluster.any (version 2.1.5-a3f44794f94) - partition with quorum
* Last updated: Mon Dec 25 18:28:45 2023
* Last change: Mon Dec 25 18:01:18 2023 by root via cibadmin on devu1.devucluster.any
* 3 nodes configured
* 2 resource instances configured

Node List:
* Online: [ devu1.devucluster.any devu2.devucluster.any ]
* OFFLINE: [ devu3.devucluster.any ]

Full List of Resources:
* IP-apache (ocf:heartbeat:IPaddr2): Started devu1.devucluster.any
* apache-rsc (ocf:heartbeat:apache): Started devu1.devucluster.any

root@devu1:~#


- now you can see the Apache resource for WEBMIN is running on node1; test now how solid your Devuan-Cluster is.

Log in again with the Virtual IP Address https://22.220.200.73:10000 in your Browser. ( you must wait a few seconds )

69

55) CONGRATULATIONS !!! Enjoy your WEBMIN GUI in the Devuan-Cluster >> log in with ROOT and your root password *******

Don't forget to turn your node3 on again :-)

70

56 ) turn your node3 on again

- check that all nodes are online

root@devu1:~# crm status
Status of pacemakerd: 'Pacemaker is running' (last updated 2023-12-25 18:41:39 +01:00)
Cluster Summary:
* Stack: corosync
* Current DC: devu2.devucluster.any (version 2.1.5-a3f44794f94) - partition with quorum
* Last updated: Mon Dec 25 18:41:39 2023
* Last change: Mon Dec 25 18:01:18 2023 by root via cibadmin on devu1.devucluster.any
* 3 nodes configured
* 2 resource instances configured

Node List:
* Online: [ devu1.devucluster.any devu2.devucluster.any devu3.devucluster.any ]

Full List of Resources:
* IP-apache (ocf:heartbeat:IPaddr2): Started devu3.devucluster.any
* apache-rsc (ocf:heartbeat:apache): Started devu3.devucluster.any

root@devu1:~#

- you can see your Apache resource for WEBMIN has returned to node3, which means it would be good to refresh the Virtual IP in your Browser and log in again. ( you can also use, for example, https://devu3.devucluster.any:10000/ )

57) Connect the Cluster Nodes to each other in WEBMIN.

- click on Webmin >> go to Webmin Servers Index >> type the LAN range where the Cluster Nodes live, 22.220.200.0, then type user = root and password = ********* >> click on Scan

71

57a) check if the result is right and click on Return to servers

72

57b) it should look something like this; normally you see only 2 servers, because you are already logged into one of the nodes. (I recommend adding all your nodes)

73

57c) repeat the same process on your other Cluster Nodes. For example, if the Virtual IP resource for Apache runs on node1, do this process first on node1 and then continue with the same Webmin Servers Index process on node2 and node3.

58) Make a Webmin Cluster of all Cluster Nodes

- first return to the node where the Virtual IP and the Apache resource of your Cluster are currently running.
- click on Cluster >> next click on Cluster Webmin Servers (add all Cluster Nodes to build the Webmin Cluster correctly)

74

59) show the result

75

###########################################################################################################################################################################

76

60) continue with the other nodes; you should repeat this process on all Cluster Nodes.

61) Perhaps due to a mistake in my configuration or my limited knowledge, remember that GlusterFS does not mount the Gluster File System automatically on boot in this Cluster, but you can mount GlusterFS from WEBMIN (a command-line alternative is sketched after the screenshots below).

- Click on Disk and Network Filesystems >> next click on Show All File Systems >> find your GLUSTERFS file system and, in the "Use?" column, click on NO to toggle it to YES

77

- Toggle to yes ( to mount glusterfs )

78
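- if you prefer the command line over WEBMIN, the same mount can also be done manually (a sketch using the volume name and mount point seen earlier in this guide; adjust the Gluster peer name per node):

root@devu1:~# mount -t glusterfs glus1.gluscluster.glu:/devuan-gluster /mnt/devuan-gluster
root@devu1:~# df -h | grep devuan-gluster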

62) repeat the same process on all Cluster Nodes to mount the Replicant GlusterFS (node1, node2, node3)

63) now make a folder for your Virtual Machines inside the Replicant GlusterFS folder >> /mnt/devuan-gluster >> (apply this on all Cluster Nodes)

root@devu1:~# mkdir -p /mnt/devuan-gluster/qemu-kvm && parallel-ssh -i -h ~/.dd mkdir -p /mnt/devuan-gluster/qemu-kvm
[1] 13:39:13 [SUCCESS] root@devu3.devucluster.any:22
[2] 13:39:13 [SUCCESS] root@devu2.devucluster.any:22
root@devu1:~# ls -lsa /mnt/devuan-gluster/qemu-kvm && parallel-ssh -i -h ~/.dd ls -lsa /mnt/devuan-gluster/qemu-kvm
total 8
4 drwxr-xr-x 2 root root 4096 Dec 26 13:39 .
4 drwxr-xr-x 5 root root 4096 Dec 26 13:39 ..
[1] 13:40:44 [SUCCESS] root@devu3.devucluster.any:22
total 8
4 drwxr-xr-x 2 root root 4096 Dec 26 13:39 .
4 drwxr-xr-x 3 root root 4096 Dec 26 13:39 ..
[2] 13:40:44 [SUCCESS] root@devu2.devucluster.any:22
total 8
4 drwxr-xr-x 2 root root 4096 Dec 26 13:39 .
4 drwxr-xr-x 5 root root 4096 Dec 26 13:39 ..
root@devu1:~#

79

63a) change the owner and group of this folder to prepare it for QEMU-KVM.

root@devu1:~# chown libvirt-qemu:libvirt-qemu -R /mnt/devuan-gluster/qemu-kvm && parallel-ssh -i -h ~/.dd chown libvirt-qemu:libvirt-qemu -R /mnt/devuan-gluster/qemu-kvm
[1] 13:51:30 [SUCCESS] root@devu3.devucluster.any:22
[2] 13:51:30 [SUCCESS] root@devu2.devucluster.any:22

63b) check if all is ok

root@devu1:~# ls -lsa /mnt/devuan-gluster/ && parallel-ssh -i -h ~/.dd ls -lsa /mnt/devuan-gluster/
total 12
4 drwxr-xr-x 5 root root 4096 Dec 26 13:39 .
4 drwxr-xr-x 3 root root 4096 Dec 24 18:13 ..
4 drwxr-xr-x 2 libvirt-qemu libvirt-qemu 4096 Dec 26 13:39 qemu-kvm
0 -rw-r--r-- 1 root root 0 Dec 24 19:19 Super_Secret_Devuan_Files
[1] 13:53:08 [SUCCESS] root@devu3.devucluster.any:22
total 12
4 drwxr-xr-x 3 root root 4096 Dec 26 13:39 .
4 drwxr-xr-x 3 root root 4096 Dec 24 18:13 ..
4 drwxr-xr-x 2 libvirt-qemu libvirt-qemu 4096 Dec 26 13:39 qemu-kvm
[2] 13:53:08 [SUCCESS] root@devu2.devucluster.any:22
total 12
4 drwxr-xr-x 5 root root 4096 Dec 26 13:39 .
4 drwxr-xr-x 3 root root 4096 Dec 24 18:13 ..
4 drwxr-xr-x 2 libvirt-qemu libvirt-qemu 4096 Dec 26 13:39 qemu-kvm
0 -rw-r--r-- 1 root root 0 Dec 24 19:19 Super_Secret_Devuan_Files
root@devu1:~#

64) Download an .ISO image to install a Virtual Machine and then upload it into your GLUSTERFS folder, or download the .ISO image directly into your GLUSTERFS folder with the command >> wget.

- it is easiest if you stay inside >> /mnt/devuan-gluster/qemu-kvm/ >>

root@devu1:~# cd /mnt/devuan-gluster/qemu-kvm
root@devu1:/mnt/devuan-gluster/qemu-kvm# wget https://ftp.nluug.nl/pub/os/Linux/distr/devuan/devuan_daedalus/installer-iso/devuan_daedalus_5.0.1_amd64_server.iso

- !! very important !! make sure to change the owner of the .ISO image to libvirt-qemu

root@devu1:/mnt/devuan-gluster/qemu-kvm# chown libvirt-qemu:libvirt-qemu devuan_daedalus_5.0.1_amd64_server.iso

- check if all is ok

root@devu1:/mnt/devuan-gluster/qemu-kvm# ls -lsa /mnt/devuan-gluster/qemu-kvm && parallel-ssh -i -h ~/.dd ls -lsa /mnt/devuan-gluster/qemu-kvm
total 778056
4 drwxr-xr-x 2 libvirt-qemu libvirt-qemu 4096 Dec 26 14:04 .
4 drwxr-xr-x 5 root root 4096 Dec 26 13:39 ..
778048 -rw-r--r-- 1 libvirt-qemu libvirt-qemu 796721152 Sep 14 10:22 devuan_daedalus_5.0.1_amd64_server.iso
[1] 14:45:05 [SUCCESS] root@devu3.devucluster.any:22
total 8
4 drwxr-xr-x 2 libvirt-qemu libvirt-qemu 4096 Dec 26 13:39 .
4 drwxr-xr-x 3 root root 4096 Dec 26 13:39 ..
[2] 14:45:05 [SUCCESS] root@devu2.devucluster.any:22
total 778056
4 drwxr-xr-x 2 libvirt-qemu libvirt-qemu 4096 Dec 26 14:04 .
4 drwxr-xr-x 5 root root 4096 Dec 26 13:39 ..
778048 -rw-r--r-- 1 libvirt-qemu libvirt-qemu 796721152 Sep 14 10:22 devuan_daedalus_5.0.1_amd64_server.iso

80

65) on the Administrator laptop or PC, connect to the node where you downloaded the .ISO image for your Virtual Machine.

- Open your Virt-Manager ( ignore any error message about libvirtd, because your Admin machine is not a server )

81

- Add a New Connection for node1

- type YES first and then type your password ******* (like a normal ssh connection)

- click on your devu1.devucluster.any connection, then create a new storage pool for QEMU: click on EDIT, next click on CONNECTION DETAILS, next click on STORAGE, then add a new Storage Pool. !! Pay attention !! the target path must be your GlusterFS folder for qemu-kvm. Then click on Finish. (a CLI alternative using virsh is sketched after the screenshot below)

- you must create this new Storage Pool for every node connection, which means: repeat this process in every Admin connection to the Cluster Nodes

- at the moment the interface for all virtual machines is ETH0, but it will be changed to ETH3 later

82
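- if you prefer the CLI over the Virt-Manager GUI, an equivalent directory storage pool can be created with virsh on each node (a sketch; the pool name devuan-gluster-pool is just an example):

root@devu1:~# virsh pool-define-as devuan-gluster-pool dir --target /mnt/devuan-gluster/qemu-kvm
root@devu1:~# virsh pool-start devuan-gluster-pool
root@devu1:~# virsh pool-autostart devuan-gluster-pool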

- make sure the QEMU connection for node1 is still selected, then click on FILE and create a new Virtual Machine with the .ISO image from your GlusterFS shared folder on node1.

83

- Continue to Create your Devuan-Test Virtual Machine and be patient

84

###############################################################################################################################

85

###############################################################################################################################

86

###############################################################################################################################

87

###############################################################################################################################

88

###############################################################################################################################

89

###############################################################################################################################

- at the moment the interface for all virtual machines is ETH0, but it will be changed to ETH3 later

90

###############################################################################################################################

91

###############################################################################################################################

!!! Sorry, but here there is a problem that I cannot explain: when you run your Devuan-Test VM this way, you must type the user and password and log in 6 times... it is really crazy !!!

92

###############################################################################################################################

93

###############################################################################################################################

66) Voilà ! Your Devuan-Cluster supports QEMU/KVM Virtual Machines on all your Cluster Nodes with GlusterFS.

94

67) Check now on all Cluster Nodes whether you have a clone of the Devuan-Test VM inside the Replicant Gluster Shared File System

root@devu1:~# ls -lsa /mnt/devuan-gluster/qemu-kvm && parallel-ssh -i -h ~/.dd ls -lsa /mnt/devuan-gluster/qemu-kvm
total 32235336
4 drwxr-xr-x 2 libvirt-qemu libvirt-qemu 4096 Dec 26 14:59 .
4 drwxr-xr-x 5 libvirt-qemu libvirt-qemu 4096 Dec 26 13:39 ..
778048 -rw-r--r-- 1 libvirt-qemu libvirt-qemu 796721152 Sep 14 10:22 devuan_daedalus_5.0.1_amd64_server.iso
31457280 -rw------- 1 libvirt-qemu libvirt-qemu 32212254720 Dec 26 15:48 devuan-test.img
[1] 15:52:51 [SUCCESS] root@devu3.devucluster.any:22
total 32235336
4 drwxr-xr-x 2 libvirt-qemu libvirt-qemu 4096 Dec 26 14:59 .
4 drwxr-xr-x 5 libvirt-qemu libvirt-qemu 4096 Dec 26 13:39 ..
778048 -rw-r--r-- 1 libvirt-qemu libvirt-qemu 796721152 Sep 14 10:22 devuan_daedalus_5.0.1_amd64_server.iso
31457280 -rw------- 1 libvirt-qemu libvirt-qemu 32212254720 Dec 26 15:48 devuan-test.img
[2] 15:52:51 [SUCCESS] root@devu2.devucluster.any:22
total 32235336
4 drwxr-xr-x 2 libvirt-qemu libvirt-qemu 4096 Dec 26 14:59 .
4 drwxr-xr-x 5 libvirt-qemu libvirt-qemu 4096 Dec 26 13:39 ..
778048 -rw-r--r-- 1 libvirt-qemu libvirt-qemu 796721152 Sep 14 10:22 devuan_daedalus_5.0.1_amd64_server.iso
31457280 -rw------- 1 libvirt-qemu libvirt-qemu 32212254720 Dec 26 15:48 devuan-test.img
root@devu1:~#

95

68) Unfortunately, due to my limited knowledge, an HA cluster for QEMU/KVM with virtual motion between the Cluster Nodes was not reached. This means: if one of the nodes fails, you must manually start one of the clones of your Devuan-Test VM. !! Important: never run the same virtual disk in two Virtual Machines on different nodes at the same time; it will fail or completely crash one of the VMs !! Please see the screenshots to understand this better.

69) make new connections on the Admin side device (laptop or PC) for the other nodes

96

70) now create new Virtual Machines in your different administration connections, using the same virtual disk that was created before on node1 in the folder >> /mnt/devuan-gluster/qemu-kvm >> with the same disk >> devuan-test.img.

If you want the other replicants or clones of the Devuan-Test VM to keep the same IP ADDRESS when you turn them on after one of the nodes fails, then you must first enable the EDIT feature in your QEMU/KVM connection on all Cluster Nodes and then copy and paste the virtual MAC address of the original VM on node1. ( see the screenshot, and the sketch after it )

97
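- if you prefer editing the domain XML directly, the MAC address lives in the <interface> element of the VM definition (a sketch; the VM name and the MAC shown are placeholders, copy the real MAC from the VM on node1):

root@devu2:~# virsh edit Devuan-Test

<interface type='direct'>
  <mac address='52:54:00:xx:xx:xx'/>
  <source dev='eth3' mode='bridge'/>
  <model type='virtio'/>
</interface>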

################################################################################################################################

- in node1

99

- now you will see the same virtual MAC address in the other Virtual Machines on the other Cluster Nodes.

- in node2

100

- in node3

101

################################################################################################################################

################################################################################################################################

################################################################################################################################

Now test this Cluster completely, with all services and features, and simulate a failure situation

71) Power your Cluster Nodes off and on again to check in which state they come back.

72) Check all services and features first.

73) now that the Cluster Nodes are on again >> log in to your node1

- ssh root@devu1.devucluster.any -p 22

- check status of your Devuan-Cluster

root@devu1:~# crm status

- check that all your services run on all Cluster Nodes with >> rc-status >> and >> netstat -tulpn

root@devu1:~# rc-status && parallel-ssh -i -h ~/.dd rc-status

- now with netstat -tulpn

root@devu1:~# netstat -tulpn && parallel-ssh -i -h ~/.dd netstat -tulpn

- are GlusterFS and WEBMIN running? Then, for the moment, all is OK.

- continue to mount your GlusterFS Shared File System on all Cluster Nodes ( remember you can mount this in WEBMIN; you can use the Virtual IP for convenience, https://22.220.200.73:10000/ )

- first check whether GlusterFS was already mounted on boot somewhere

root@devu1:~# df -h && parallel-ssh -i -h ~/.dd df -h

- if not ( continue to mount on all Cluster Nodes )

- when you are finished with the GlusterFS mounts, check again that all is OK

root@devu1:~# df -h && parallel-ssh -i -h ~/.dd df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 824K 1.6G 1% /run
/dev/mapper/devu1--vg-root 233G 36G 186G 16% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 48M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
tmpfs 1.6G 0 1.6G 0% /run/user/0
glus3.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
[1] 18:12:16 [SUCCESS] root@devu2.devucluster.any:22
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 736K 1.6G 1% /run
/dev/mapper/devu2--vg-root 227G 36G 180G 17% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 33M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
glus1.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
tmpfs 1.6G 0 1.6G 0% /run/user/0
[2] 18:12:16 [SUCCESS] root@devu3.devucluster.any:22
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 736K 1.6G 1% /run
/dev/mapper/devu3--vg-root 233G 36G 186G 16% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 33M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
glus2.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
tmpfs 1.6G 0 1.6G 0% /run/user/0
root@devu1:~#

- now run your Galera-Cluster in your Devuan-Cluster

root@devu1:~# cat /var/lib/mysql/grastate.dat && parallel-ssh -i -h ~/.dd cat /var/lib/mysql/grastate.dat

- you can see in the output that all results are = 0 >> this means: if you decide to start the first Galera-Cluster node on node1, you must change this value to = 1 there

root@devu1:~# cat /var/lib/mysql/grastate.dat && parallel-ssh -i -h ~/.dd cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 30c2c7d8-a332-11ee-8915-7f283c524539
seqno: 10
safe_to_bootstrap: 0
[1] 18:14:23 [SUCCESS] root@devu3.devucluster.any:22
# GALERA saved state
version: 2.1
uuid: 30c2c7d8-a332-11ee-8915-7f283c524539
seqno: 9
safe_to_bootstrap: 0
[2] 18:14:23 [SUCCESS] root@devu2.devucluster.any:22
# GALERA saved state
version: 2.1
uuid: 30c2c7d8-a332-11ee-8915-7f283c524539
seqno: -1
safe_to_bootstrap: 0
root@devu1:~#

- now change the value safe_to_bootstrap: 0 >> toggle it to = 1 >> safe_to_bootstrap: 1 >> on node1

root@devu1:~# nano /var/lib/mysql/grastate.dat

- save and close
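- instead of nano, the same toggle can be done with a one-line sed (a sketch; run it only on the node you intend to bootstrap from):

root@devu1:~# sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat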

- now run mariadb on node1


root@devu1:~# rc-service mariadb bootstrap
Bootstrapping the cluster: mariadbdStarting MariaDB database server: mariadbd.
root@devu1:~#

- check if all is ok in node1

root@devu1:~# rc-service mariadb status
/usr/bin/mariadb-admin Ver 9.1 Distrib 10.11.4-MariaDB, for debian-linux-gnu on x86_64
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Server version 10.11.4-MariaDB-1~deb12u1
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /run/mysqld/mysqld.sock
Uptime: 2 min 34 sec

Threads: 5 Questions: 70 Slow queries: 0 Opens: 40 Open tables: 30 Queries per second avg: 0.454.
root@devu1:~# mysql -p -u root
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 42
Server version: 10.11.4-MariaDB-1~deb12u1 Debian 12

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 1 |
+--------------------+-------+
1 row in set (0.001 sec)

MariaDB [(none)]>

- now start the remaining Galera-Cluster nodes on node2 and node3, and all should be OK

root@devu1:~# parallel-ssh -i -h ~/.dd rc-service mariadb start
[1] 18:23:05 [SUCCESS] root@devu2.devucluster.any:22
Starting MariaDB database server: mariadbd . . ..
[2] 18:23:05 [SUCCESS] root@devu3.devucluster.any:22
Starting MariaDB database server: mariadbd . . ..
root@devu1:~#

- now check again to confirm that all your Galera-Cluster nodes are running in your Devuan-Cluster

root@devu1:~# mysql -p -u root
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 43
Server version: 10.11.4-MariaDB-1~deb12u1 Debian 12

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
1 row in set (0.001 sec)

MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| devuan_galera_test |
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
5 rows in set (0.001 sec)

MariaDB [(none)]>


- CONGRATULATIONS, your Galera-Cluster and your GlusterFS Shared File System are running clean and solid.

- now start your Devuan-Test VM on node1 ( use your Virt-Manager on the Admin side device )

- Open your Virt-Manager and connect to all your Cluster Nodes

102

- as I said at the beginning of the tutorial, the eth3 interface will be used for the Virtual Machines; change this on all nodes

103

- now run the VM on node1 ( because node1 will simulate a failure situation with poweroff )

104

- now create a test file inside your Devuan-Test VM

105

- you can see your Devuan-Test VM is running on node1 = devu1; now force a shutdown of node1 and see what happens in your Devuan-Cluster. ( remember your Virtual IP and Apache are active on node1, so it will be a good example )

root@devu1:~# crm status
Status of pacemakerd: 'Pacemaker is running' (last updated 2023-12-26 18:52:20 +01:00)
Cluster Summary:
* Stack: corosync
* Current DC: devu1.devucluster.any (version 2.1.5-a3f44794f94) - partition with quorum
* Last updated: Tue Dec 26 18:52:21 2023
* Last change: Tue Dec 26 17:51:43 2023 by root via cibadmin on devu1.devucluster.any
* 3 nodes configured
* 2 resource instances configured

Node List:
* Online: [ devu1.devucluster.any devu2.devucluster.any devu3.devucluster.any ]

Full List of Resources:
* IP-apache (ocf:heartbeat:IPaddr2): Started devu1.devucluster.any
* apache-rsc (ocf:heartbeat:apache): Started devu1.devucluster.any

root@devu1:~# poweroff -f

- please log in to your node2 in another terminal tab and check your Cluster status.

ssh root@devu2.devucluster.any -p 22

- now you can see your node1 is down, your Virtual IP and Apache resource are running on node2, and your Devuan-Test VM is down too.

root@devu2:~# crm status
Status of pacemakerd: 'Pacemaker is running' (last updated 2023-12-26 18:55:38 +01:00)
Cluster Summary:
* Stack: corosync
* Current DC: devu2.devucluster.any (version 2.1.5-a3f44794f94) - partition with quorum
* Last updated: Tue Dec 26 18:55:38 2023
* Last change: Tue Dec 26 17:51:43 2023 by root via cibadmin on devu1.devucluster.any
* 3 nodes configured
* 2 resource instances configured

Node List:
* Online: [ devu2.devucluster.any devu3.devucluster.any ]
* OFFLINE: [ devu1.devucluster.any ]

Full List of Resources:
* IP-apache (ocf:heartbeat:IPaddr2): Started devu2.devucluster.any
* apache-rsc (ocf:heartbeat:apache): Started devu2.devucluster.any

root@devu2:~#


- see, your Devuan-Test VM is down

106

- now check the GlusterFS status and the Galera-Cluster status

- GlusterFS first


root@devu2:~# df -h && parallel-ssh -i -h ~/.dd df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 748K 1.6G 1% /run
/dev/mapper/devu2--vg-root 227G 36G 180G 17% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 48M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
glus1.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
tmpfs 1.6G 0 1.6G 0% /run/user/0
[1] 18:59:10 [SUCCESS] root@devu3.devucluster.any:22
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 740K 1.6G 1% /run
/dev/mapper/devu3--vg-root 233G 36G 186G 16% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 33M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
glus2.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
tmpfs 1.6G 0 1.6G 0% /run/user/0
[2] 18:59:13 [FAILURE] root@devu1.devucluster.any:22 Exited with error code 255
Stderr: ssh: connect to host devu1.devucluster.any port 22: No route to host
root@devu2:~#

- now check Galera-Cluster Nodes status


root@devu2:~# mysql -p -u root
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 40
Server version: 10.11.4-MariaDB-1~deb12u1 Debian 12

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 2 |
+--------------------+-------+
1 row in set (0.001 sec)

MariaDB [(none)]>

- now you can see that only 2 nodes are running in the Galera-Cluster

- now start the replicant clone of your Devuan-Test VM on node2 and check the TEST file and its integrity

107

- now you can see your Devuan-Test VM is running clean and solid, without problems, on node2, because node1 is gone.
And the integrity of the file is intact.

- now turn your node1 on again and see what happens!

- first check the Cluster status

root@devu1:~# crm status
Status of pacemakerd: 'Pacemaker is running' (last updated 2023-12-26 19:16:31 +01:00)
Cluster Summary:
* Stack: corosync
* Current DC: devu2.devucluster.any (version 2.1.5-a3f44794f94) - partition with quorum
* Last updated: Tue Dec 26 19:16:32 2023
* Last change: Tue Dec 26 17:51:43 2023 by root via cibadmin on devu1.devucluster.any
* 3 nodes configured
* 2 resource instances configured

Node List:
* Online: [ devu1.devucluster.any devu2.devucluster.any devu3.devucluster.any ]

Full List of Resources:
* IP-apache (ocf:heartbeat:IPaddr2): Started devu1.devucluster.any
* apache-rsc (ocf:heartbeat:apache): Started devu1.devucluster.any

root@devu1:~#


- now you can see your Virtual IP and Apache resource have returned to node1

- Check GlusterFS status

root@devu1:~# df -h && parallel-ssh -i -h ~/.dd df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 812K 1.6G 1% /run
/dev/mapper/devu1--vg-root 233G 36G 186G 16% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 33M 3.1G 2% /dev/shm
glus3.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
tmpfs 1.6G 0 1.6G 0% /run/user/0
[1] 19:20:06 [SUCCESS] root@devu3.devucluster.any:22
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 740K 1.6G 1% /run
/dev/mapper/devu3--vg-root 233G 36G 186G 16% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 33M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
glus2.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
tmpfs 1.6G 0 1.6G 0% /run/user/0
[2] 19:20:06 [SUCCESS] root@devu2.devucluster.any:22
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 792K 1.6G 1% /run
/dev/mapper/devu2--vg-root 227G 36G 180G 17% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.1G 48M 3.1G 2% /dev/shm
/dev/sda1 455M 90M 341M 21% /boot
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
glus1.gluscluster.glu:/devuan-gluster 227G 38G 180G 18% /mnt/devuan-gluster
tmpfs 1.6G 0 1.6G 0% /run/user/0
root@devu1:~#


- Ooh ! There is a surprise for myself as the author of this tutorial: on node1 the Gluster Shared File System was mounted automatically on boot.

- now, because your mariadb service does not run on boot, you must reconnect your Galera-Cluster nodes.

- first check the value of safe_to_bootstrap in all Cluster Nodes:

root@devu1:~# cat /var/lib/mysql/grastate.dat && parallel-ssh -i -h ~/.dd cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 30c2c7d8-a332-11ee-8915-7f283c524539
seqno: -1
safe_to_bootstrap: 0
[1] 19:30:45 [SUCCESS] root@devu2.devucluster.any:22
# GALERA saved state
version: 2.1
uuid: 30c2c7d8-a332-11ee-8915-7f283c524539
seqno: -1
safe_to_bootstrap: 0
[2] 19:30:45 [SUCCESS] root@devu3.devucluster.any:22
# GALERA saved state
version: 2.1
uuid: 30c2c7d8-a332-11ee-8915-7f283c524539
seqno: -1
safe_to_bootstrap: 0
root@devu1:~#

- the value on your node1 is = 0, which means: toggle the value to = 1 and start the mariadb service only on node1, because on node2 and node3 it is already running

root@devu1:~# nano /var/lib/mysql/grastate.dat


GNU nano 7.2 /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 30c2c7d8-a332-11ee-8915-7f283c524539
seqno: -1
safe_to_bootstrap: 1


###############################################################################################################################*

- save and close, and start your mariadb service on node1

root@devu1:~# rc-service mariadb start
Starting MariaDB database server: mariadbd . . ..
root@devu1:~#

- check now the status of Galera-Cluster Nodes again

root@devu1:~# mysql -p -u root
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 40

Server version: 10.11.4-MariaDB-1~deb12u1 Debian 12

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
1 row in set (0.001 sec)

MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| devuan_galera_test |
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
5 rows in set (0.001 sec)

MariaDB [(none)]>

- now you can see that all services and features in your Devuan-Cluster are running again.

################################################################################################################################

################################################################################################################################

################################################################################################################################

Voilà ! You have a Devuan-Cluster ( not everything is perfect, but it runs )

<<<<<<<< >>>>>>>> ######### = ##########
I Attachment Action Size Date Who Comment
1EXT 1 manage 59 K 28 Dec 2023 - 16:58 ElroijaH  
10EXT 10 manage 43 K 28 Dec 2023 - 17:22 ElroijaH  
100EXT 100 manage 99 K 28 Dec 2023 - 20:03 ElroijaH  
101EXT 101 manage 105 K 28 Dec 2023 - 20:04 ElroijaH  
102EXT 102 manage 48 K 28 Dec 2023 - 20:06 ElroijaH  
103EXT 103 manage 37 K 28 Dec 2023 - 20:08 ElroijaH  
104EXT 104 manage 21 K 28 Dec 2023 - 20:09 ElroijaH  
105EXT 105 manage 17 K 28 Dec 2023 - 20:12 ElroijaH  
106EXT 106 manage 26 K 28 Dec 2023 - 20:14 ElroijaH  
107EXT 107 manage 17 K 28 Dec 2023 - 20:15 ElroijaH  
11EXT 11 manage 97 K 28 Dec 2023 - 17:24 ElroijaH  
12EXT 12 manage 49 K 28 Dec 2023 - 17:26 ElroijaH  
13EXT 13 manage 51 K 28 Dec 2023 - 17:27 ElroijaH  
14EXT 14 manage 49 K 28 Dec 2023 - 17:29 ElroijaH  
15EXT 15 manage 33 K 28 Dec 2023 - 17:30 ElroijaH  
16EXT 16 manage 75 K 28 Dec 2023 - 17:31 ElroijaH  
17EXT 17 manage 86 K 28 Dec 2023 - 17:36 ElroijaH  
18EXT 18 manage 39 K 28 Dec 2023 - 17:38 ElroijaH  
19EXT 19 manage 27 K 28 Dec 2023 - 17:40 ElroijaH  
2EXT 2 manage 46 K 28 Dec 2023 - 17:06 ElroijaH  
20EXT 20 manage 25 K 28 Dec 2023 - 17:41 ElroijaH  
21EXT 21 manage 47 K 28 Dec 2023 - 17:44 ElroijaH  
22EXT 22 manage 51 K 28 Dec 2023 - 17:46 ElroijaH  
23EXT 23 manage 67 K 28 Dec 2023 - 17:51 ElroijaH  
24EXT 24 manage 70 K 28 Dec 2023 - 17:52 ElroijaH  
25EXT 25 manage 122 K 28 Dec 2023 - 17:53 ElroijaH  
26EXT 26 manage 74 K 28 Dec 2023 - 17:54 ElroijaH  
27EXT 27 manage 73 K 28 Dec 2023 - 17:56 ElroijaH  
28EXT 28 manage 121 K 28 Dec 2023 - 17:57 ElroijaH  
29EXT 29 manage 73 K 28 Dec 2023 - 17:58 ElroijaH  
3EXT 3 manage 46 K 28 Dec 2023 - 17:08 ElroijaH  
30EXT 30 manage 73 K 28 Dec 2023 - 18:01 ElroijaH  
31EXT 31 manage 121 K 28 Dec 2023 - 18:02 ElroijaH  
32EXT 32 manage 121 K 28 Dec 2023 - 18:09 ElroijaH  
33EXT 33 manage 73 K 28 Dec 2023 - 18:10 ElroijaH  
34EXT 34 manage 73 K 28 Dec 2023 - 18:12 ElroijaH  
35EXT 35 manage 95 K 28 Dec 2023 - 18:16 ElroijaH  
36EXT 36 manage 94 K 28 Dec 2023 - 18:18 ElroijaH  
37EXT 37 manage 132 K 28 Dec 2023 - 18:19 ElroijaH  
38EXT 38 manage 39 K 28 Dec 2023 - 18:20 ElroijaH  
39EXT 39 manage 64 K 28 Dec 2023 - 18:21 ElroijaH  
4EXT 4 manage 25 K 28 Dec 2023 - 17:12 ElroijaH  
40EXT 40 manage 123 K 28 Dec 2023 - 18:26 ElroijaH  
41EXT 41 manage 24 K 28 Dec 2023 - 18:28 ElroijaH  
42EXT 42 manage 59 K 28 Dec 2023 - 18:29 ElroijaH  
43EXT 43 manage 76 K 28 Dec 2023 - 18:30 ElroijaH  
44EXT 44 manage 22 K 28 Dec 2023 - 18:31 ElroijaH  
45EXT 45 manage 83 K 28 Dec 2023 - 18:33 ElroijaH  
46EXT 46 manage 57 K 28 Dec 2023 - 18:34 ElroijaH  
47EXT 47 manage 96 K 28 Dec 2023 - 18:36 ElroijaH  
48EXT 48 manage 57 K 28 Dec 2023 - 18:37 ElroijaH  
49EXT 49 manage 132 K 28 Dec 2023 - 18:38 ElroijaH  
5EXT 5 manage 102 K 28 Dec 2023 - 17:14 ElroijaH  
50EXT 50 manage 65 K 28 Dec 2023 - 18:45 ElroijaH  
51EXT 51 manage 87 K 28 Dec 2023 - 18:46 ElroijaH  
52EXT 52 manage 82 K 28 Dec 2023 - 18:47 ElroijaH  
53EXT 53 manage 134 K 28 Dec 2023 - 18:49 ElroijaH  
54EXT 54 manage 39 K 28 Dec 2023 - 18:50 ElroijaH  
55EXT 55 manage 53 K 28 Dec 2023 - 18:51 ElroijaH  
56EXT 56 manage 49 K 28 Dec 2023 - 18:54 ElroijaH  
57EXT 57 manage 50 K 28 Dec 2023 - 18:55 ElroijaH  
58EXT 58 manage 14 K 28 Dec 2023 - 18:57 ElroijaH  
59EXT 59 manage 73 K 28 Dec 2023 - 18:58 ElroijaH  
6EXT 6 manage 115 K 28 Dec 2023 - 17:15 ElroijaH  
60EXT 60 manage 70 K 28 Dec 2023 - 18:58 ElroijaH  
61EXT 61 manage 71 K 28 Dec 2023 - 18:59 ElroijaH  
62EXT 62 manage 58 K 28 Dec 2023 - 19:00 ElroijaH  
63EXT 63 manage 61 K 28 Dec 2023 - 19:01 ElroijaH  
64EXT 64 manage 87 K 28 Dec 2023 - 19:04 ElroijaH  
65EXT 65 manage 131 K 28 Dec 2023 - 19:06 ElroijaH  
66EXT 66 manage 128 K 28 Dec 2023 - 19:08 ElroijaH  
67EXT 67 manage 68 K 28 Dec 2023 - 19:10 ElroijaH  
68EXT 68 manage 43 K 28 Dec 2023 - 19:11 ElroijaH  
69EXT 69 manage 43 K 28 Dec 2023 - 19:12 ElroijaH  
7EXT 7 manage 115 K 28 Dec 2023 - 17:16 ElroijaH  
70EXT 70 manage 135 K 28 Dec 2023 - 19:13 ElroijaH  
71EXT 71 manage 114 K 28 Dec 2023 - 19:14 ElroijaH  
72EXT 72 manage 98 K 28 Dec 2023 - 19:15 ElroijaH  
73EXT 73 manage 122 K 28 Dec 2023 - 19:17 ElroijaH  
74EXT 74 manage 95 K 28 Dec 2023 - 19:19 ElroijaH  
75EXT 75 manage 55 K 28 Dec 2023 - 19:20 ElroijaH  
76EXT 76 manage 64 K 28 Dec 2023 - 19:21 ElroijaH  
77EXT 77 manage 62 K 28 Dec 2023 - 19:29 ElroijaH  
78EXT 78 manage 62 K 28 Dec 2023 - 19:31 ElroijaH  
79EXT 79 manage 66 K 28 Dec 2023 - 19:32 ElroijaH  
8EXT 8 manage 44 K 28 Dec 2023 - 17:18 ElroijaH  
80EXT 80 manage 58 K 28 Dec 2023 - 19:33 ElroijaH  
81EXT 81 manage 41 K 28 Dec 2023 - 19:35 ElroijaH  
82EXT 82 manage 60 K 28 Dec 2023 - 19:37 ElroijaH  
83EXT 83 manage 53 K 28 Dec 2023 - 19:38 ElroijaH  
84EXT 84 manage 46 K 28 Dec 2023 - 19:39 ElroijaH  
85EXT 85 manage 44 K 28 Dec 2023 - 19:41 ElroijaH  
86EXT 86 manage 47 K 28 Dec 2023 - 19:42 ElroijaH  
87EXT 87 manage 57 K 28 Dec 2023 - 19:43 ElroijaH  
88EXT 88 manage 54 K 28 Dec 2023 - 19:45 ElroijaH  
89EXT 89 manage 49 K 28 Dec 2023 - 19:46 ElroijaH  
9EXT 9 manage 43 K 28 Dec 2023 - 17:20 ElroijaH  
90EXT 90 manage 59 K 28 Dec 2023 - 19:47 ElroijaH  
91EXT 91 manage 84 K 28 Dec 2023 - 19:49 ElroijaH  
92EXT 92 manage 38 K 28 Dec 2023 - 19:53 ElroijaH  
93EXT 93 manage 50 K 28 Dec 2023 - 19:54 ElroijaH  
94EXT 94 manage 35 K 28 Dec 2023 - 19:56 ElroijaH  
95EXT 95 manage 86 K 28 Dec 2023 - 19:57 ElroijaH  
96EXT 96 manage 47 K 28 Dec 2023 - 19:59 ElroijaH  
97EXT 97 manage 62 K 28 Dec 2023 - 20:00 ElroijaH  
99EXT 99 manage 86 K 28 Dec 2023 - 20:01 ElroijaH  