Latency measurements have objectives completely different from those of throughput benchmarks: in an I/O latency test, one writes a very small chunk of data (ideally the smallest chunk of data that the system can deal with) and observes the time it takes to complete that write. The process is usually repeated several times to account for normal statistical fluctuations.
As with throughput measurements, I/O latency measurements may be performed using the ubiquitous dd utility, albeit with different settings and an entirely different focus of observation.
Provided below is a simple dd-based latency micro-benchmark, assuming you have a scratch resource named test which is currently connected and in the secondary role on both nodes:
# TEST_RESOURCE=test
# TEST_DEVICE=$(drbdadm sh-dev $TEST_RESOURCE)
# TEST_LL_DEVICE=$(drbdadm sh-ll-dev $TEST_RESOURCE)
# drbdadm primary $TEST_RESOURCE
# dd if=/dev/zero of=$TEST_DEVICE bs=512 count=1000 oflag=direct
# drbdadm down $TEST_RESOURCE
# dd if=/dev/zero of=$TEST_LL_DEVICE bs=512 count=1000 oflag=direct
This test writes 1,000 512-byte chunks of data to your DRBD device, and then to its backing device for comparison. 512 bytes is the smallest block size a Linux system (on all architectures except s390) is expected to handle.
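As noted above, latency figures fluctuate from run to run, so it is worth repeating the measurement a few times. A minimal sketch, assuming the TEST_DEVICE variable set above and the resource still in the primary role (that is, before the drbdadm down step), might look like this:

# for run in 1 2 3 4 5; do dd if=/dev/zero of=$TEST_DEVICE bs=512 count=1000 oflag=direct; done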
It is important to understand that the throughput figures reported by dd are completely irrelevant for this test; what matters is the time elapsed during the completion of these 1,000 writes. Dividing this time by 1,000 gives the average latency of a single sector write.
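If you prefer to have that per-write figure computed for you, the elapsed time can be captured around the dd invocation. The following is merely one way to do so, a sketch that assumes GNU date with nanosecond resolution and the bc calculator are available; it is not part of the standard procedure above:

# TEST_RESOURCE=test
# TEST_DEVICE=$(drbdadm sh-dev $TEST_RESOURCE)
# drbdadm primary $TEST_RESOURCE
# START=$(date +%s.%N)
# dd if=/dev/zero of=$TEST_DEVICE bs=512 count=1000 oflag=direct
# END=$(date +%s.%N)
# # total elapsed time divided by 1,000 writes = average single-write latency, in seconds
# echo "scale=6; ($END - $START) / 1000" | bc

Repeating this a few times, and comparing the result against the same measurement taken on the backing device (with the resource down), gives a reasonable picture of the latency overhead that DRBD adds to a single write.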