| author | Christoph Lameter <clameter@sgi.com> | 2008-04-28 13:12:43 +0400 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2008-04-28 19:58:21 +0400 |
| commit | 308c05e35e3517d19bb67a7e97772235c9e15cd7 (patch) | |
| tree | 75d0eae800ef1fc7297f97262b42ddbd1347cad0 /include | |
| parent | 2301696932b55e2ea2085cefc84f7b94fa2dd54b (diff) | |
| download | linux-308c05e35e3517d19bb67a7e97772235c9e15cd7.tar.xz | |
sparsemem: vmemmap does not need section bits
A set of patches that attempts to improve page flag handling. First of all, a
method is introduced to generate the page flag functions using macros. Then
the number of page flags used by sparsemem is reduced. Page flag operations
will no longer be macros; all flags will use inline functions.
Finally, we add a way to export enum constants to the preprocessor, which allows us
to get rid of __ZONE_COUNT and use NR_PAGEFLAGS for the dynamic
calculation of the page flag bits actually available for fields.
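As an illustration of what macro-generated, inline page flag accessors look like, here is a minimal user-space sketch; the DEFINE_PAGE_FLAG macro, the struct page stand-in and the flag names are hypothetical, not the kernel's actual page-flags.h definitions:

```c
#include <stdio.h>

/* Hypothetical stand-in for struct page; only the flags word matters here. */
struct page { unsigned long flags; };

/* Flag bits expressed as enum constants rather than #defines, so the count
 * (NR_PAGEFLAGS) is available to later layout calculations. */
enum pageflags { PG_locked, PG_dirty, PG_uptodate, NR_PAGEFLAGS };

/* One macro expands to the test/set/clear inline functions for a flag,
 * replacing several hand-written macros per flag. */
#define DEFINE_PAGE_FLAG(uname, lname)					\
static inline int Page##uname(struct page *page)			\
{ return (page->flags >> PG_##lname) & 1UL; }				\
static inline void SetPage##uname(struct page *page)			\
{ page->flags |= 1UL << PG_##lname; }					\
static inline void ClearPage##uname(struct page *page)			\
{ page->flags &= ~(1UL << PG_##lname); }

DEFINE_PAGE_FLAG(Locked, locked)
DEFINE_PAGE_FLAG(Dirty, dirty)

int main(void)
{
	struct page p = { 0 };

	SetPageDirty(&p);
	printf("dirty=%d locked=%d, %d flag bits defined\n",
	       PageDirty(&p), PageLocked(&p), NR_PAGEFLAGS);
	return 0;
}
```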
This patch:
Sparsemem vmemmap does not need any section bits. This patch has the effect
of reducing the number of bits used in page->flags by at least 6.
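The reason the section bits can go: with sparsemem vmemmap the mem_map is one virtually contiguous array, so a page's pfn is plain pointer arithmetic from the vmemmap base, whereas classic sparsemem has to pull the section number back out of page->flags to find the right per-section mem_map. A rough user-space sketch of the two lookups (the geometry constants, the struct page stand-in and the helper names are simplified assumptions for a 64-bit unsigned long, not the kernel's code):

```c
#include <stdio.h>

struct page { unsigned long flags; };

/* Assumed geometry: 6 section bits stored at the top of a 64-bit flags word. */
#define SECTIONS_SHIFT    6
#define SECTIONS_MASK     ((1UL << SECTIONS_SHIFT) - 1)
#define SECTIONS_PGSHIFT  (64 - SECTIONS_SHIFT)
#define PAGES_PER_SECTION 32768UL

/* Classic sparsemem: each section has its own mem_map, so the section number
 * must be recovered from page->flags before the pfn can be computed. */
static unsigned long classic_page_to_pfn(struct page *page,
					 struct page *section_mem_map)
{
	unsigned long section = (page->flags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;

	return section * PAGES_PER_SECTION +
	       (unsigned long)(page - section_mem_map);
}

/* Sparsemem vmemmap: one virtually contiguous struct page array, so the pfn
 * is pointer arithmetic alone and no section bits are needed in flags. */
static unsigned long vmemmap_page_to_pfn(struct page *page, struct page *vmemmap)
{
	return (unsigned long)(page - vmemmap);
}

int main(void)
{
	static struct page fake_map[4];
	struct page *p = &fake_map[3];

	p->flags = 2UL << SECTIONS_PGSHIFT;	/* pretend p sits in section 2 */
	printf("classic pfn: %lu\n", classic_page_to_pfn(p, fake_map));
	printf("vmemmap pfn: %lu\n", vmemmap_page_to_pfn(p, fake_map));
	return 0;
}
```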
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include')
| -rw-r--r-- | include/linux/mm.h | 13 |
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ca973359fe5f..24659ed06bae 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -395,11 +395,11 @@ static inline void set_compound_order(struct page *page, unsigned long order)
  * we have run out of space and have to fall back to an
  * alternate (slower) way of determining the node.
  *
- * No sparsemem:                      | NODE | ZONE | ... | FLAGS |
- * with space for node:               | SECTION | NODE | ZONE | ... | FLAGS |
- * no space for node:                 | SECTION | ZONE | ... | FLAGS |
+ * No sparsemem or sparsemem vmemmap: | NODE | ZONE | ... | FLAGS |
+ * classic sparse with space for node:| SECTION | NODE | ZONE | ... | FLAGS |
+ * classic sparse no space for node:  | SECTION | ZONE | ... | FLAGS |
  */
-#ifdef CONFIG_SPARSEMEM
+#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define SECTIONS_WIDTH		SECTIONS_SHIFT
 #else
 #define SECTIONS_WIDTH		0
@@ -410,6 +410,9 @@ static inline void set_compound_order(struct page *page, unsigned long order)
 #if SECTIONS_WIDTH+ZONES_WIDTH+NODES_SHIFT <= FLAGS_RESERVED
 #define NODES_WIDTH		NODES_SHIFT
 #else
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+#error "Vmemmap: No space for nodes field in page flags"
+#endif
 #define NODES_WIDTH		0
 #endif
 
@@ -502,10 +505,12 @@ static inline struct zone *page_zone(struct page *page)
 	return &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)];
 }
 
+#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 static inline unsigned long page_to_section(struct page *page)
 {
 	return (page->flags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
 }
+#endif
 
 static inline void set_page_zone(struct page *page, enum zone_type zone)
 {
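For a sense of what the width arithmetic above decides, here is a stand-alone sketch of the same calculation; FLAGS_RESERVED and the shift values are made-up illustrative inputs, not what any real configuration computes:

```c
#include <stdio.h>

/* Illustrative inputs; in mm.h these come from the architecture and config. */
#define FLAGS_RESERVED	9	/* bits reserved at the top of page->flags */
#define NODES_SHIFT	6	/* bits needed to encode a node id */
#define ZONES_SHIFT	2	/* bits needed to encode a zone */
#define SECTIONS_SHIFT	6	/* bits needed to encode a section number */

/* Mirror of the decision the patch changes: only classic sparsemem stores the
 * section number in page->flags; vmemmap (and flatmem) stores none of it. */
static int sections_width(int classic_sparsemem)
{
	return classic_sparsemem ? SECTIONS_SHIFT : 0;
}

/* Mirror of the NODES_WIDTH fallback: keep the node id in flags only if it
 * still fits under the reserved budget, otherwise drop it to 0 bits. */
static int nodes_width(int sec_width)
{
	return (sec_width + ZONES_SHIFT + NODES_SHIFT <= FLAGS_RESERVED)
		? NODES_SHIFT : 0;
}

static void show(const char *model, int classic_sparsemem)
{
	int sec = sections_width(classic_sparsemem);
	int node = nodes_width(sec);

	printf("%-22s SECTION=%d NODE=%d ZONE=%d (uses %d of %d reserved bits)\n",
	       model, sec, node, ZONES_SHIFT, sec + node + ZONES_SHIFT,
	       FLAGS_RESERVED);
}

int main(void)
{
	show("classic sparsemem:", 1);
	show("sparsemem vmemmap:", 0);
	return 0;
}
```

With these inputs the classic layout has no room left for the node id, while dropping the section bits under vmemmap lets the node id move back into page->flags, which is also why the patch can insist, via the new #error, that the node field must always fit when CONFIG_SPARSEMEM_VMEMMAP is set.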