path: root/arch/s390/oprofile/hwsampler.h
author    Yinghai Lu <yinghai@kernel.org>  2011-05-02 19:24:49 +0400
committer Tejun Heo <tj@kernel.org>        2011-05-02 19:24:49 +0400
commit    e5a10c1bd12a5d71bbb6406c1b0dbbc9d8958397 (patch)
tree      1b8ee2a5cddd890e2058f167155f396ae9d69f40 /arch/s390/oprofile/hwsampler.h
parent    a56bca80db8903bb557b9ac38da68dc5b98ea672 (diff)
x86, NUMA: Trim numa meminfo with max_pfn in a separate loop
While testing tj's 32-bit NUMA unification code, one system with more than 64 GiB of memory failed to use NUMA. It turned out we do not trim numa meminfo correctly against max_pfn when the start address of a node is higher than 64 GiB. That bug fix made it to the tip tree.

This patch moves the checking and trimming into a separate loop, so we don't need to compare low/high in the following merge loops. It makes the code more readable, and it makes the node merge printouts less strange.

On a 512 GiB NUMA system with 32-bit, before:

> NUMA: Node 0 [0,a0000) + [100000,80000000) -> [0,80000000)
> NUMA: Node 0 [0,80000000) + [100000000,1080000000) -> [0,1000000000)

after:

> NUMA: Node 0 [0,a0000) + [100000,80000000) -> [0,80000000)
> NUMA: Node 0 [0,80000000) + [100000000,1000000000) -> [0,1000000000)

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
[Updated patch description and comment slightly.]
Signed-off-by: Tejun Heo <tj@kernel.org>
Diffstat (limited to 'arch/s390/oprofile/hwsampler.h')
0 files changed, 0 insertions, 0 deletions