Public:Esdcluster
Pay special attention to the documentation, in particular to the order of operations:
https://docs.linbit.com/man/v9/
On both machines, configure the main DRBD configuration file:
root@esd1-deb10:/mnt# cat /etc/drbd.d/global_common.conf
# DRBD is the result of over a decade of development by LINBIT.
# In case you need professional services for DRBD or have
# feature requests visit http://www.linbit.com

global {
        usage-count yes;

        # Decide what kind of udev symlinks you want for "implicit" volumes
        # (those without explicit volume <vnr> {} block, implied vnr=0):
        # /dev/drbd/by-resource/<resource>/<vnr>   (explicit volumes)
        # /dev/drbd/by-resource/<resource>         (default for implict)
        ### udev-always-use-vnr; # treat implicit the same as explicit volumes

        # minor-count dialog-refresh disable-ip-verification
        # cmd-timeout-short 5; cmd-timeout-medium 121; cmd-timeout-long 600;
}

common {
        handlers {
                # These are EXAMPLE handlers only.
                # They may have severe implications,
                # like hard resetting the node under certain circumstances.
                # Be careful when choosing your poison.

                # pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                # pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                # local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
                # quorum-lost "/usr/lib/drbd/notify-quorum-lost.sh root";
        }

        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        }

        options {
                # cpu-mask on-no-data-accessible

                # RECOMMENDED for three or more storage nodes with DRBD 9:
                # quorum majority;
                # on-no-quorum suspend-io | io-error;
        }

        disk {
                # size on-io-error fencing disk-barrier disk-flushes
                # disk-drain md-flushes resync-rate resync-after al-extents
                # c-plan-ahead c-delay-target c-fill-target c-max-rate
                # c-min-rate disk-timeout
        }

        net {
                protocol C;
                ### verify-alg crc32c;
                # protocol timeout max-epoch-size max-buffers
                # connect-int ping-int sndbuf-size rcvbuf-size ko-count
                # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
                # after-sb-1pri after-sb-2pri always-asbp rr-conflict
                # ping-timeout data-integrity-alg tcp-cork on-congestion
                # congestion-fill congestion-extents csums-alg verify-alg
                # use-rle
        }
}
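A quick sanity check after editing: have drbdadm parse the configuration back on each node (drbdadm dump is part of the standard drbd-utils package; it aborts with an error message on any syntax problem):

# Parse /etc/drbd.conf and everything it includes, then print the effective configuration
drbdadm dump all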
... and an additional file, in which we pay particular attention to the fencing option:
root@esd1-deb10:/mnt# cat /etc/drbd.d/r0.res
resource r0 {

        device    /dev/drbd0;
        meta-disk internal;

        on esd1-deb10 {
                disk    /dev/mapper/esd1--deb10--vg-drbd0;
                address 192.168.102.200:7789;
        }

        on esd2-deb10 {
                disk    /dev/mapper/esd2--deb10--vg-drbd0;
                address 192.168.102.201:7789;
        }

        disk {
                fencing resource-and-stonith;
        }

        handlers {
                split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh";
                # after-resync-target "/usr/lib/drbd/unsnapshot-resync-target-lvm.sh";
        }

        net {
                allow-two-primaries yes;
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
        }

        startup {
                become-primary-on both;
        }

}
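A note on the fencing option: with "fencing resource-and-stonith", DRBD suspends I/O on the resource when the peer becomes unreachable and resumes it only once the peer is confirmed fenced, which is normally the job of a fence-peer handler. A minimal sketch, assuming a Pacemaker-managed cluster (the crm-fence-peer.sh and crm-unfence-peer.sh helpers ship with drbd-utils; whether they apply here depends on the cluster manager actually in use):

handlers {
        # Hypothetical addition: outdate/fence the peer via Pacemaker constraints
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # ... and lift the constraint once resync has completed
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
}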
Now run the following commands on both servers, executing each command on both servers at the same time:
drbdadm disconnect r0
drbdadm connect r0
drbdadm primary r0
/etc/init.d/drbd restart
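After the restart, it is worth confirming on both nodes that the resource is connected and both sides are primary; a minimal check with standard drbdadm subcommands:

# Expected: Primary/Primary, Connected, UpToDate/UpToDate
drbdadm role r0
drbdadm cstate r0
drbdadm dstate r0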
Now we have the block device /dev/drbd0!
root@esd1-deb10:/mnt# cat /proc/drbd
version: 8.4.10 (api:1/proto:86-101)
srcversion: 9B4D87C5E865DF526864868

 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:210772764 nr:529007 dw:1593007 dr:210779726 al:1826 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
root@esd2-deb10:/mnt# cat /proc/drbd
version: 8.4.10 (api:1/proto:86-101)
srcversion: 15055BDD6F0D23278182874

 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:529014 nr:210772771 dw:211301785 dr:1061397 al:15 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Now create the shared cluster file system on top of the Distributed Replicated Block Device (DRBD):
mkfs.ocfs2 -N 2 -L ocfs2_drbd0_o2cb --cluster-stack o2cb --cluster-name esdcluster --global-heartbeat -F /dev/drbd0
mkfs.ocfs2 1.8.5
Cluster stack: o2cb
Cluster name: esdcluster
Stack Flags: 0x1
NOTE: Feature extended slot map may be enabled
Overwriting existing ocfs2 partition.
WARNING: Cluster check disabled.
Proceed (y/N): y
Label: ocfs2_drbd0_o2cb
Features: sparse extended-slotmap backup-super unwritten inline-data strict-journal-super xattr indexed-dirs refcount discontig-bg append-dio
Block size: 4096 (12 bits)
Cluster size: 4096 (12 bits)
Volume size: 214741774336 (52427191 clusters) (52427191 blocks)
Cluster groups: 1626 (tail covers 11191 clusters, rest cover 32256 clusters)
Extent allocator size: 109051904 (26 groups)
Journal size: 268435456
Node slots: 2
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 4 block(s)
Formatting Journals: done
Growing extent allocator: done
Formatting slot map: done
Formatting quota files: done
Writing lost+found: done
mkfs.ocfs2 successful
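Before the checks below, the volume has to be mounted on both nodes with the o2cb cluster stack online; a minimal sketch, assuming /mnt as the mount point (the shell prompts in this article suggest it, but the mount point is not stated explicitly):

# On each node, with the o2cb cluster stack running
mount -t ocfs2 /dev/drbd0 /mnt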
Full output confirming that the file system is up on both machines:
root@esd1-deb10:/mnt# mounted.ocfs2 -f
Device                             Stack  Cluster  F  Nodes
/dev/mapper/esd1--deb10--vg-drbd0  o2cb   esd         esd1-deb10, esd2-deb10
/dev/drbd0                         o2cb   esd         esd1-deb10, esd2-deb10

root@esd2-deb10:/mnt# mounted.ocfs2 -f
Device                             Stack  Cluster  F  Nodes
/dev/mapper/esd2--deb10--vg-drbd0  o2cb   esd         esd1-deb10, esd2-deb10
/dev/drbd0                         o2cb   esd         esd1-deb10, esd2-deb10
... and the brief version:
root@esd1-deb10:/mnt# mounted.ocfs2 -d
Device                             Stack  Cluster  F  UUID                              Label
/dev/mapper/esd1--deb10--vg-drbd0  o2cb   esd         6043DDD2A6444C4CA370575F40E50723  ocfs2_drbd0
/dev/drbd0                         o2cb   esd         6043DDD2A6444C4CA370575F40E50723  ocfs2_drbd0

root@esd2-deb10:/mnt# mounted.ocfs2 -d
Device                             Stack  Cluster  F  UUID                              Label
/dev/mapper/esd2--deb10--vg-drbd0  o2cb   esd         6043DDD2A6444C4CA370575F40E50723  ocfs2_drbd0
/dev/drbd0                         o2cb   esd         6043DDD2A6444C4CA370575F40E50723  ocfs2_drbd0
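To bring the volume up automatically at boot, one option is an /etc/fstab entry; a sketch, assuming the /mnt mount point from above (_netdev defers the mount until network-dependent services are up, nofail keeps a failed mount from blocking boot):

# /etc/fstab
/dev/drbd0  /mnt  ocfs2  _netdev,nofail  0  0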