author | James Smart <jsmart2021@gmail.com> | 2017-11-21 03:00:36 +0300 |
---|---|---|
committer | Martin K. Petersen <martin.petersen@oracle.com> | 2017-12-05 04:32:54 +0300 |
commit | bcb24f6577b9461267f350d11e1bb6dda470f241 (patch) | |
tree | e1f91806e17fac8f258bb10d11af4a82b13a5345 /drivers/scsi/lpfc/lpfc_attr.c | |
parent | 07d494f7533e6d9c22931f6e4a2e048560063081 (diff) | |
download | linux-bcb24f6577b9461267f350d11e1bb6dda470f241.tar.xz | |
scsi: lpfc: Adjust default value of lpfc_nvmet_mrq
The current default for async hw receive queues is 1, which presents
issues under heavy load, as the number of queues influences the
available async receive buffer limits.

Raise the default to either the current hw limit (16) or the number of
hw queues configured (the io channel value), whichever is smaller.

Revise the attribute definition for mrq to better reflect what is done
for hw queues, i.e. 0 means default to optimal (# of cpus), while a
non-zero value specifies an explicit limit. Before this change, mrq=0
meant target mode was disabled. As 0 now has a different meaning,
rework the if tests to use the more appropriate nvmet_support check.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
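
To make the new semantics concrete, here is a minimal standalone sketch of the default selection the patch produces. It is not the driver code itself: the helper name lpfc_pick_nvmet_mrq is hypothetical, and the constant values are inferred from the literals they replace in the diff below (1, 1, 16 becoming AUTO, AUTO, MAX).

```c
/* Minimal sketch, not driver code. Assumes, per the diff below,
 * LPFC_NVMET_MRQ_AUTO == 0 (replaces the old default of 1) and
 * LPFC_NVMET_MRQ_MAX == 16 (replaces the old maximum of 16).
 */
#define LPFC_NVMET_MRQ_AUTO	0
#define LPFC_NVMET_MRQ_MAX	16

static int lpfc_pick_nvmet_mrq(int cfg_nvmet_mrq, int cfg_nvme_io_channel)
{
	/* AUTO (0): start from the number of configured hw queues */
	if (cfg_nvmet_mrq == LPFC_NVMET_MRQ_AUTO)
		cfg_nvmet_mrq = cfg_nvme_io_channel;

	/* Clamp to the io channel count, then to the hw limit of 16 */
	if (cfg_nvmet_mrq > cfg_nvme_io_channel)
		cfg_nvmet_mrq = cfg_nvme_io_channel;
	if (cfg_nvmet_mrq > LPFC_NVMET_MRQ_MAX)
		cfg_nvmet_mrq = LPFC_NVMET_MRQ_MAX;

	return cfg_nvmet_mrq;
}
```

In effect, lpfc_nvmet_mrq=0 now resolves to min(io channel count, 16), which is exactly what the two clamps added to lpfc_nvme_mod_param_dep() implement.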
Diffstat (limited to 'drivers/scsi/lpfc/lpfc_attr.c')
-rw-r--r-- | drivers/scsi/lpfc/lpfc_attr.c | 11 |
1 file changed, 9 insertions, 2 deletions
```diff
diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
index 82f6e219ee34..5d83734f6c68 100644
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -3366,12 +3366,13 @@ LPFC_ATTR_R(suppress_rsp, 1, 0, 1,
 
 /*
  * lpfc_nvmet_mrq: Specify number of RQ pairs for processing NVMET cmds
+ * lpfc_nvmet_mrq = 0  driver will calculate optimal number of RQ pairs
  * lpfc_nvmet_mrq = 1  use a single RQ pair
  * lpfc_nvmet_mrq >= 2  use specified RQ pairs for MRQ
  *
  */
 LPFC_ATTR_R(nvmet_mrq,
-	    1, 1, 16,
+	    LPFC_NVMET_MRQ_AUTO, LPFC_NVMET_MRQ_AUTO, LPFC_NVMET_MRQ_MAX,
 	    "Specify number of RQ pairs for processing NVMET cmds");
 
 /*
@@ -6362,6 +6363,9 @@ lpfc_nvme_mod_param_dep(struct lpfc_hba *phba)
 			phba->cfg_nvmet_fb_size = LPFC_NVMET_FB_SZ_MAX;
 		}
 
+		if (!phba->cfg_nvmet_mrq)
+			phba->cfg_nvmet_mrq = phba->cfg_nvme_io_channel;
+
 		/* Adjust lpfc_nvmet_mrq to avoid running out of WQE slots */
 		if (phba->cfg_nvmet_mrq > phba->cfg_nvme_io_channel) {
 			phba->cfg_nvmet_mrq = phba->cfg_nvme_io_channel;
@@ -6369,10 +6373,13 @@ lpfc_nvme_mod_param_dep(struct lpfc_hba *phba)
 					"6018 Adjust lpfc_nvmet_mrq to %d\n",
 					phba->cfg_nvmet_mrq);
 		}
+		if (phba->cfg_nvmet_mrq > LPFC_NVMET_MRQ_MAX)
+			phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_MAX;
+
 	} else {
 		/* Not NVME Target mode.  Turn off Target parameters. */
 		phba->nvmet_support = 0;
-		phba->cfg_nvmet_mrq = 0;
+		phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_OFF;
 		phba->cfg_nvmet_fb_size = 0;
 	}
```
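
The commit message also mentions reworking if tests that used to treat cfg_nvmet_mrq == 0 as "target mode disabled"; that rework lands outside lpfc_attr.c (this diffstat is limited to that file). A hedged sketch of the described change, with a hypothetical helper standing in for the real target-mode setup:

```c
/* Before: a zero RQ-pair count doubled as "target mode disabled" */
if (phba->cfg_nvmet_mrq)
	lpfc_setup_target_mode(phba);	/* hypothetical helper */

/* After: 0 now means AUTO, so gate on the explicit support flag */
if (phba->nvmet_support)
	lpfc_setup_target_mode(phba);
```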