author    Shivasharan S <shivasharan.srikanteshwara@broadcom.com>  2017-02-10 11:59:13 +0300
committer Martin K. Petersen <martin.petersen@oracle.com>          2017-02-13 15:26:22 +0300
commit    a48ba0eca0456d45e920169930569caa3fc57124
tree      bfd7c0d825c967213b213c5ca1e5fa706d7c4598 /scripts/conmakehash.c
parent    33203bc4d61b33f1f7bb736eac0c6fdd20b92397
download  linux-a48ba0eca0456d45e920169930569caa3fc57124.tar.xz
scsi: megaraid_sas: raid 1 write performance for large io
Avoid the host-side PCI bandwidth bottleneck by hinting FW to do write
buffering via the RaidFlag MR_RAID_FLAGS_IO_SUB_TYPE_LDIO_BW_LIMIT. Once
an IO lands in FW with MR_RAID_FLAGS_IO_SUB_TYPE_LDIO_BW_LIMIT set, FW
does a single DMA from the host and buffers the write payload. On the
back end, FW DMAs the same buffer to both the mirror arm and the data
arm. This improves large-block IO performance, which was previously
bottlenecked by host-side PCI bandwidth.
Consistent ~4000 MB/s throughput for 256K block size is the expected
performance number. IOPS for small block sizes should be on par with
disk performance. (E.g. 42 SAS disks in JBOD mode give 3700 MB/s
throughput; the same drives in R1 WT mode should give ~1800 MB/s.)
With this patch, 24 R1 VDs (HDD) give the performance below for
sequential write. Without this patch, we cannot reach above 3200 MB/s.
(Throughput is in MB/s.)
Block Size   50% 256K and 50% 4K   100% 256K
4K           3100                  2030
8K           3140                  2740
16K          3140                  3140
32K          3400                  3240
64K          3500                  3700
128K         3870                  3870
256K         3920                  3920
Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com>
Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Tomas Henzl <thenzl@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>