On all the systems where I am running DRBD, a verification run leaves many messages like this in the log:
kernel: block drbd0: Out of sync: start=403446112, size=328 (sectors)
On some systems one might blame the workload, but some of these machines are doing almost no work at all.
The machines are connected over a 1 Gb network.
These messages do not give me much confidence in the system, and in the end they force me to use cron to check synchronization and resync the faulty blocks, which effectively turns a supposedly synchronous system into an asynchronous one.
Is this normal? Is there a solution? Am I doing something wrong?
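For reference, the cron-based check mentioned above can be sketched as a crontab fragment like the following. This is only an illustration: the file location is hypothetical and the schedule is arbitrary; `r0` is the resource name from the config below, and `drbdadm verify` is the standard command that triggers an online verify using the configured `verify-alg`.

```
# /etc/cron.d/drbd-verify -- hypothetical location; adjust for your distro.
# Run an online verify of resource r0 every Sunday at 03:00.
# Mismatching blocks are reported by the kernel as "Out of sync" messages
# and marked in the bitmap; they are NOT resynced automatically.
0 3 * * 0  root  /sbin/drbdadm verify r0
```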
common {
    protocol C;
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    }
    syncer {
        # rate after al-extents use-rle cpu-mask verify-alg csums-alg
        verify-alg sha1;
        rate 40M;
    }
}
resource r0 {
    protocol C;
    startup {
        wfc-timeout 15; # non-zero wfc-timeout can be dangerous (http://forum.proxmox.com/threads/3465-Is-it-safe-to-use-wfc-timeout-in-DRBD-configuration)
        degr-wfc-timeout 60;
    }
    net {
        cram-hmac-alg sha1;
        shared-secret "XXXXXXXXXX";
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    on pro01 {
        device /dev/drbd0;
        disk /dev/pve/vm-100-disk-1;
        address YYY.YYY.YYY.YYY:7788;
        meta-disk internal;
    }
    on pro02 {
        device /dev/drbd0;
        disk /dev/pve/vm-100-disk-1;
        address YYY.YYY.YYY.YYY:7788;
        meta-disk internal;
    }
}
This can happen from time to time and is normal.
Just disconnect and reconnect the resource - the blocks marked out of sync will then be resynced.
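That suggestion amounts to the following commands, run after a verify has marked blocks out of sync (a sketch; `r0` is the resource name from the config above, and `drbdadm disconnect`/`connect` are the standard commands for tearing down and re-establishing the replication link):

```
# Re-establishing the connection triggers a resync of exactly the
# blocks that the verify run marked out of sync.
drbdadm disconnect r0
drbdadm connect r0

# Watch the resync progress:
cat /proc/drbd
```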
DRBD - online verify