| author | Alison Schofield <alison.schofield@intel.com> | 2026-01-16 07:58:37 +0300 |
|---|---|---|
| committer | Dave Jiang <dave.jiang@intel.com> | 2026-01-23 02:58:14 +0300 |
| commit | b51792fd9168e581e51be98e22df5f79454e22de (patch) | |
| tree | 5b510aa783b98cbb0db033ee6e9a6826ae45bd0f | |
| parent | e639055f1f30311db91cafb36e408cc727c7d445 (diff) | |
| download | linux-b51792fd9168e581e51be98e22df5f79454e22de.tar.xz | |
cxl/region: Translate HPA to DPA and memdev in unaligned regions
The CXL driver supports an expert-user debugfs interface to inject and
clear poison by region offset. That feature requires translating an
HPA (the region address) to a DPA and a memdev to perform the poison
operation.
Unaligned regions do not have an algebraically invertible mapping
from HPA to DPA due to the region offset skew: the region base is not
aligned to a full interleave set. Add a helper that performs the
unaligned translation by first calculating the DPA offset and then
testing it against each candidate endpoint decoder.
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: Alison Schofield <alison.schofield@intel.com>
Link: https://patch.msgid.link/f338b7aff7e4574fcc525b1a0d4f09786bfb6489.1768538962.git.alison.schofield@intel.com
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
| -rw-r--r-- | drivers/cxl/core/region.c | 46 |
1 file changed, 46 insertions(+), 0 deletions(-)
```diff
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index cdfa454b940d..d5979000fba1 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -3314,6 +3314,48 @@ struct dpa_result {
 	u64 dpa;
 };
 
+static int unaligned_region_offset_to_dpa_result(struct cxl_region *cxlr,
+						 u64 offset,
+						 struct dpa_result *result)
+{
+	struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(cxlr->dev.parent);
+	struct cxl_decoder *cxld = &cxlrd->cxlsd.cxld;
+	struct cxl_region_params *p = &cxlr->params;
+	u64 interleave_width, interleave_index;
+	u64 gran, gran_offset, dpa_offset;
+	u64 hpa = p->res->start + offset;
+
+	/*
+	 * Unaligned addresses are not algebraically invertible. Calculate
+	 * a dpa_offset independent of the target device and then enumerate
+	 * and test that dpa_offset against each candidate endpoint decoder.
+	 */
+	gran = cxld->interleave_granularity;
+	interleave_width = gran * cxld->interleave_ways;
+	interleave_index = div64_u64(offset, interleave_width);
+	div64_u64_rem(offset, gran, &gran_offset);
+
+	dpa_offset = interleave_index * gran + gran_offset;
+
+	for (int i = 0; i < p->nr_targets; i++) {
+		struct cxl_endpoint_decoder *cxled = p->targets[i];
+		int pos = cxled->pos;
+		u64 test_hpa;
+
+		test_hpa = unaligned_dpa_to_hpa(cxld, p, pos, dpa_offset);
+		if (test_hpa == hpa) {
+			result->cxlmd = cxled_to_memdev(cxled);
+			result->dpa =
+				cxl_dpa_resource_start(cxled) + dpa_offset;
+			return 0;
+		}
+	}
+	dev_err(&cxlr->dev,
+		"failed to resolve HPA %#llx in unaligned MOD3 region\n", hpa);
+
+	return -ENXIO;
+}
+
 static int region_offset_to_dpa_result(struct cxl_region *cxlr, u64 offset,
 				       struct dpa_result *result)
 {
@@ -3343,6 +3385,10 @@ static int region_offset_to_dpa_result(struct cxl_region *cxlr, u64 offset,
 		hpa_offset = offset;
 	}
 
+	if (region_is_unaligned_mod3(cxlr))
+		return unaligned_region_offset_to_dpa_result(cxlr, offset,
+							     result);
+
 	pos = cxl_calculate_position(hpa_offset, eiw, eig);
 	if (pos < 0 || pos >= p->nr_targets) {
 		dev_dbg(&cxlr->dev, "Invalid position %d for %d targets\n",
```
