path: root/arch/x86/kernel/vmlinux_64.lds.S
Format of each entry: Age  Commit message  (Author, files changed, -lines/+lines)

2009-04-29  x86, vmlinux.lds: unify remaining parts  (Sam Ravnborg, 1 file, -24/+0)

    32 bit:
     - explicit page align .bss
     - move ALIGN() out of .brk output section
     - discard *(.eh_frame)

    64 bit:
     - move ALIGN() out of .bss output section
     - move ALIGN() out of .brk output section
     - use a dedicated section to define _end

    [ Impact: unify and fix section alignments in linker script ]

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Cc: Tim Abbott <tabbott@MIT.EDU>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    LKML-Reference: <1240991249-27117-13-git-send-email-sam@ravnborg.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

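Several commits in this series apply the same "move ALIGN() out of the output section" transformation. A minimal sketch of the pattern in linker-script form (illustrative only, not the actual hunk):

        /* before: alignment hidden inside the output section */
        .bss : AT(ADDR(.bss) - LOAD_OFFSET) {
                . = ALIGN(PAGE_SIZE);
                *(.bss.page_aligned)
                *(.bss)
        }

        /* after: align the location counter before opening the section,
         * so the section start and its start symbol always agree */
        . = ALIGN(PAGE_SIZE);
        .bss : AT(ADDR(.bss) - LOAD_OFFSET) {
                *(.bss.page_aligned)
                *(.bss)
        }
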
2009-04-29  x86, vmlinux.lds: unify percpu  (Sam Ravnborg, 1 file, -26/+0)

    32 bit:
     - move __init_end outside the .bss output section
       It really did not belong in there

    [ Impact: 64-bit: cleanup, 32-bit: refactor linker script ]

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Cc: Tim Abbott <tabbott@MIT.EDU>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    LKML-Reference: <1240991249-27117-12-git-send-email-sam@ravnborg.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-04-29  x86, vmlinux.lds: unify .exit.* and .init.ramfs  (Sam Ravnborg, 1 file, -21/+0)

    [ Impact: cleanup ]

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Cc: Tim Abbott <tabbott@MIT.EDU>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    LKML-Reference: <1240991249-27117-11-git-send-email-sam@ravnborg.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-04-29  x86, vmlinux.lds: unify parainstructions  (Sam Ravnborg, 1 file, -18/+0)

    32 bit:
     - increase alignment from 4 to 8 for .parainstructions
     - increase alignment from 4 to 8 for .altinstructions

    64 bit:
     - move ALIGN() outside output section for .altinstructions

    None of the above should result in any functional change.

    [ Impact: refactor and unify linker script ]

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Cc: Tim Abbott <tabbott@MIT.EDU>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    LKML-Reference: <1240991249-27117-10-git-send-email-sam@ravnborg.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-04-29  x86, vmlinux.lds: unify first part of initdata  (Sam Ravnborg, 1 file, -59/+0)

    32-bit:
     - Move definition of __init_begin outside the output section, because it covers more than one section
     - Move ALIGN() for end-of-section inside the .smp_locks output section. Same effect, but the intent that we need both start and end aligned is better documented.

    64-bit:
     - Move ALIGN() outside the output section in .init.setup
     - Delete unused __smp_alt_* symbols

    None of the above should result in any functional change.

    [ Impact: refactor and unify linker script ]

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Cc: Tim Abbott <tabbott@MIT.EDU>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    LKML-Reference: <1240991249-27117-9-git-send-email-sam@ravnborg.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-04-29  x86, vmlinux.lds: move vsyscall output sections  (Sam Ravnborg, 1 file, -68/+0)

    [ Impact: cleanup ]

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Cc: Tim Abbott <tabbott@MIT.EDU>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    LKML-Reference: <1240991249-27117-8-git-send-email-sam@ravnborg.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-04-29  x86, vmlinux.lds: unify data output sections  (Sam Ravnborg, 1 file, -28/+0)

    For 64 bit the following functional changes are introduced:
     - .data.page_aligned has moved
     - .data.cacheline_aligned has moved
     - .data.read_mostly has moved
     - ALIGN() moved out of output section for .data.cacheline_aligned
     - ALIGN() moved out of output section for .data.page_aligned

    Notice that 32 bit and 64 bit have different locations of _edata. .data_nosave is 32 bit only, as 64 bit is special due to PERCPU.

    [ Impact: 32-bit: cleanup, 64-bit: use 32-bit linker script ]

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Cc: Tim Abbott <tabbott@MIT.EDU>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    LKML-Reference: <1240991249-27117-7-git-send-email-sam@ravnborg.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-04-29  x86, vmlinux.lds: unify exception table  (Sam Ravnborg, 1 file, -10/+0)

    [ Impact: cleanup ]

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Cc: Tim Abbott <tabbott@MIT.EDU>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    LKML-Reference: <1240991249-27117-6-git-send-email-sam@ravnborg.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-04-29  x86, vmlinux.lds: unify .text output sections  (Sam Ravnborg, 1 file, -20/+0)

    32 bit x86 had a dedicated .text.head output section, whereas 64 bit had it all in a single output section. In the unified version the dedicated .text.head output section was kept to have full control over the head code.

    32 bit:
     - Moved definition of _stext to the linker script. The definition is located _after_ .text.page_aligned as this is what 32 bit did before. The ALIGN(8) was introduced so we hit the exact same address (on the tested config) before and after the move. I assume that it is a bug that _stext did not cover the .text.page_aligned section - if this is true it can be fixed in a follow-up patch (and the ugly ALIGN() can be dropped).

    [ Impact: 64-bit: cleanup, 32-bit: use the 64-bit linker script ]

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Cc: Tim Abbott <tabbott@MIT.EDU>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    LKML-Reference: <1240991249-27117-5-git-send-email-sam@ravnborg.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

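The _stext placement described above looks roughly like this in the unified script. This is only a sketch: the real .text section also pulls in the scheduler/lock text sections, and the head code handling differs between the two arches.

        .text : AT(ADDR(.text) - LOAD_OFFSET) {
                *(.text.page_aligned)   /* 32 bit only */
                . = ALIGN(8);
                _stext = .;             /* defined after .text.page_aligned, as 32 bit did before */
                *(.text)                /* the bulk of the kernel text */
                _etext = .;
        } :text
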
2009-04-29  x86, vmlinux.lds: unify start/end of SECTIONS  (Sam Ravnborg, 1 file, -9/+0)

    [ Impact: cleanup ]

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Cc: Tim Abbott <tabbott@MIT.EDU>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    LKML-Reference: <1240991249-27117-4-git-send-email-sam@ravnborg.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-04-29  x86, vmlinux.lds: unify PHDRS  (Sam Ravnborg, 1 file, -11/+0)

    PHDRS are not equal for the two - so use ifdefs to cover up for that. On the assumption that they may become equal, the ifdef is inside the PHDRS definition.

    [ Impact: cleanup ]

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Cc: Tim Abbott <tabbott@MIT.EDU>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    LKML-Reference: <1240991249-27117-3-git-send-email-sam@ravnborg.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

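A rough sketch of such an ifdef-inside-PHDRS layout. The program header names and FLAGS values are illustrative, based on the general shape of the x86 scripts, not the exact patch:

        PHDRS {
                text PT_LOAD FLAGS(5);          /* R_E */
                data PT_LOAD FLAGS(7);          /* RWE */
        #ifdef CONFIG_X86_64
                user PT_LOAD FLAGS(7);          /* RWE: vsyscall page */
                percpu PT_LOAD FLAGS(7);        /* RWE: zero-based percpu area */
        #endif
                note PT_NOTE FLAGS(0);          /* ___ */
        }
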
2009-04-29  x86, vmlinux.lds: unify header/footer  (Sam Ravnborg, 1 file, -42/+0)

    Merge everything except PHDRS and SECTIONS into vmlinux.lds.S.

    [ Impact: cleanup ]

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Cc: Tim Abbott <tabbott@MIT.EDU>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    LKML-Reference: <1240991249-27117-2-git-send-email-sam@ravnborg.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-04-27  x86: beautify vmlinux_64.lds.S  (Sam Ravnborg, 1 file, -205/+243)

    Beautify vmlinux_64.lds.S:
     - Use tabs for indent
     - Locate curly braces like in C code
     - Rearrange a few comments

    There are no functional changes in this patch. The beautification is done to prepare a unification of the _32 and the _64 variants of the linker scripts.

    [ Impact: cleanup ]

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Cc: Tim Abbott <tabbott@MIT.EDU>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    LKML-Reference: <20090426210742.GA3464@uranus.ravnborg.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-03-17  x86/brk: put the brk reservations in their own section  (Jeremy Fitzhardinge, 1 file, -1/+3)

    Impact: disambiguate real .bss variables from .brk storage

    Add a .brk section after the .bss section. This has no effect on the final vmlinux, but it more clearly distinguishes the space taken by actual .bss symbols, and the variable space reserved by .brk users.

    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

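What the separate .brk output section looks like, sketched from the description above. The __brk_base/__brk_limit names follow the convention of the brk patches; the slop-space size is an illustrative assumption:

        .bss : AT(ADDR(.bss) - LOAD_OFFSET) {
                __bss_start = .;
                *(.bss.page_aligned)
                *(.bss)
                __bss_stop = .;
        }

        . = ALIGN(PAGE_SIZE);
        .brk : AT(ADDR(.brk) - LOAD_OFFSET) {
                __brk_base = .;
                . += 64 * 1024;             /* slop space for early allocations */
                *(.brk_reservation)         /* areas brk users have reserved */
                __brk_limit = .;
        }
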
2009-03-15  x86: allow extend_brk users to reserve brk space  (Jeremy Fitzhardinge, 1 file, -1/+3)

    Impact: new interface; remove hard-coded limit

    Add a RESERVE_BRK(name, size) macro to reserve space in the brk area. This should be a conservative (ie, larger) estimate of how much space might possibly be required from the brk area. Any unused space will be freed, so there's no real downside on making the reservation too large (within limits). The name should be unique within a given file, and somewhat descriptive.

    The C definition of RESERVE_BRK() ends up being more complex than one would expect, to work around a cluster of gcc infelicities:

      The first attempt was to simply try putting __section(.brk_reservation) on a variable. This doesn't work because it ends up making it a @progbits section, which gets actual space allocated in the vmlinux executable.

      The second attempt was to emit the space into a section using asm, but gcc doesn't allow arguments to be passed to file-level asm() statements, making it hard to pass in the size.

      The final attempt is to wrap the asm() in a function to allow it to have arguments, and put the function itself into the .discard section, which vmlinux*.lds drops entirely from the emitted vmlinux.

    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>

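On the linker-script side this relies on dropping the dummy wrapper functions while keeping the reserved space. A minimal sketch of the discard rule, assuming the section names described above (the .brk output section that collects *(.brk_reservation) is shown after the previous entry):

        /DISCARD/ : {
                *(.discard)     /* dummy functions emitted by RESERVE_BRK() are thrown away */
        }
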
2009-03-15  x86: add brk allocation for very, very early allocations  (Jeremy Fitzhardinge, 1 file, -0/+5)

    Impact: new interface

    Add a brk()-like allocator which effectively extends the bss in order to allow very early code to do dynamic allocations. This is better than using statically allocated arrays for data in subsystems which may never get used.

    The space for brk allocations is in the bss ELF segment, so that the space is mapped properly by the code which maps the kernel, and so that bootloaders keep the space free rather than putting a ramdisk or something into it.

    The bss itself, delimited by __bss_stop, ends before the brk area (__brk_base to __brk_limit). The kernel text, data and bss are reserved up to __bss_stop. Any brk-allocated data is reserved separately just before the kernel pagetable is built, as that code allocates from unreserved spaces in the e820 map, potentially allocating from any unused brk memory. Ultimately any unused memory in the brk area is used in the general kernel memory pool.

    Initially the brk space is set to 1MB, which is probably much larger than any user needs (the largest current user is i386 head_32.S's code to build the pagetables to map the kernel, which can get fairly large with a big kernel image and no PSE support). So long as the system has sufficient memory for the bootloader to reserve the kernel+1MB brk, there are no bad effects resulting from an over-large brk.

    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>

2009-03-15  x86: make section delimiter symbols part of their section  (Jeremy Fitzhardinge, 1 file, -40/+45)

    Impact: cleanup

    Make the symbols delimiting a section part of the section itself (section relative) rather than absolute. This avoids any unexpected gaps between the section-start symbol and the first data in the section, which could be caused by implicit alignment of the section data. It also makes the general form of vmlinux_64.lds.S consistent with vmlinux_32.lds.S.

    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Cc: Yinghai Lu <yinghai@kernel.org>
    Cc: "Eric W. Biederman" <ebiederm@xmission.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>

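The difference the commit describes, sketched in linker-script form (section and symbol names are generic examples, not the exact hunk):

        /* before: the delimiter is defined outside the output section, so
         * implicit alignment of the section can leave a gap after it */
        __bss_start = .;
        .bss : AT(ADDR(.bss) - LOAD_OFFSET) {
                *(.bss)
        }
        __bss_stop = .;

        /* after: the delimiters live inside the section and are section relative */
        .bss : AT(ADDR(.bss) - LOAD_OFFSET) {
                __bss_start = .;
                *(.bss)
                __bss_stop = .;
        }
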
2009-03-11  x86, kexec: x86_64: add kexec jump support for x86_64  (Huang Ying, 1 file, -0/+7)

    Impact: New major feature

    This patch adds kexec jump support for x86_64. More information about kexec jump can be found in the corresponding x86_32 support patch.

    Signed-off-by: Huang Ying <ying.huang@intel.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>

2009-02-13  x86: use _types.h headers in asm where available  (Jeremy Fitzhardinge, 1 file, -1/+1)

    In general, the only definitions that assembly files can use are in _types.h headers (where available), so convert them.

    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

2009-02-09  x86: use linker to offset symbols by __per_cpu_load  (Brian Gerst, 1 file, -0/+8)

    Impact: cleanup and bug fix

    Use the linker to create symbols for certain per-cpu variables that are offset by __per_cpu_load. This allows the removal of the runtime fixup of the GDT pointer, which fixes a bug with resume reported by Jiri Slaby.

    Reported-by: Jiri Slaby <jirislaby@gmail.com>
    Signed-off-by: Brian Gerst <brgerst@gmail.com>
    Acked-by: Jiri Slaby <jirislaby@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

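The idea is that the linker, rather than runtime code, computes the load-time address of selected per-cpu variables. A sketch of how such symbol definitions can look in the (preprocessed) script; the INIT_PER_CPU name and the variable chosen are illustrative assumptions, not necessarily the exact macro added here:

        /* define init_per_cpu__<var> as the per-cpu symbol shifted by the
         * address where the initial per-cpu data is loaded */
        #define INIT_PER_CPU(x) init_per_cpu__##x = per_cpu__##x + __per_cpu_load

        INIT_PER_CPU(gdt_page);     /* lets early code load the GDT without a runtime fixup */
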
2009-02-03  x86, percpu: fix kexec with vmlinux  (Yinghai Lu, 1 file, -2/+3)

    Impact: fix regression with kexec with vmlinux

    Split data.init into data.init, percpu, data.init2 sections instead of letting data.init wrap the percpu section. Thus kexec loading will be happy, because the sections will not overlap.

    Before the patch we have:

        Elf file type is EXEC (Executable file)
        Entry point 0x200000
        There are 6 program headers, starting at offset 64

        Program Headers:
          Type  Offset             VirtAddr           PhysAddr           FileSiz            MemSiz             Flags  Align
          LOAD  0x0000000000200000 0xffffffff80200000 0x0000000000200000 0x0000000000ca6000 0x0000000000ca6000 R E    200000
          LOAD  0x0000000000ea6000 0xffffffff80ea6000 0x0000000000ea6000 0x000000000014dfe0 0x000000000014dfe0 RWE    200000
          LOAD  0x0000000001000000 0xffffffffff600000 0x0000000000ff4000 0x0000000000000888 0x0000000000000888 RWE    200000
          LOAD  0x00000000011f6000 0xffffffff80ff6000 0x0000000000ff6000 0x0000000000073086 0x0000000000a2d938 RWE    200000
          LOAD  0x0000000001400000 0x0000000000000000 0x000000000106a000 0x00000000001d2ce0 0x00000000001d2ce0 RWE    200000
          NOTE  0x00000000009e2c1c 0xffffffff809e2c1c 0x00000000009e2c1c 0x0000000000000024 0x0000000000000024        4

        Section to Segment mapping:
          Segment Sections...
           00     .text .notes __ex_table .rodata __bug_table .pci_fixup .builtin_fw __ksymtab __ksymtab_gpl __ksymtab_strings __init_rodata __param
           01     .data .init.rodata .data.cacheline_aligned .data.read_mostly
           02     .vsyscall_0 .vsyscall_fn .vsyscall_gtod_data .vsyscall_1 .vsyscall_2 .vgetcpu_mode .jiffies
           03     .data.init_task .smp_locks .init.text .init.data .init.setup .initcall.init .con_initcall.init .x86_cpu_dev.init .altinstructions .altinstr_replacement .exit.text .init.ramfs .bss
           04     .data.percpu
           05     .notes

    After the patch we've got:

        Elf file type is EXEC (Executable file)
        Entry point 0x200000
        There are 7 program headers, starting at offset 64

        Program Headers:
          Type  Offset             VirtAddr           PhysAddr           FileSiz            MemSiz             Flags  Align
          LOAD  0x0000000000200000 0xffffffff80200000 0x0000000000200000 0x0000000000ca6000 0x0000000000ca6000 R E    200000
          LOAD  0x0000000000ea6000 0xffffffff80ea6000 0x0000000000ea6000 0x000000000014dfe0 0x000000000014dfe0 RWE    200000
          LOAD  0x0000000001000000 0xffffffffff600000 0x0000000000ff4000 0x0000000000000888 0x0000000000000888 RWE    200000
          LOAD  0x00000000011f6000 0xffffffff80ff6000 0x0000000000ff6000 0x0000000000073086 0x0000000000073086 RWE    200000
          LOAD  0x0000000001400000 0x0000000000000000 0x000000000106a000 0x00000000001d2ce0 0x00000000001d2ce0 RWE    200000
          LOAD  0x000000000163d000 0xffffffff8123d000 0x000000000123d000 0x0000000000000000 0x00000000007e6938 RWE    200000
          NOTE  0x00000000009e2c1c 0xffffffff809e2c1c 0x00000000009e2c1c 0x0000000000000024 0x0000000000000024        4

        Section to Segment mapping:
          Segment Sections...
           00     .text .notes __ex_table .rodata __bug_table .pci_fixup .builtin_fw __ksymtab __ksymtab_gpl __ksymtab_strings __init_rodata __param
           01     .data .init.rodata .data.cacheline_aligned .data.read_mostly
           02     .vsyscall_0 .vsyscall_fn .vsyscall_gtod_data .vsyscall_1 .vsyscall_2 .vgetcpu_mode .jiffies
           03     .data.init_task .smp_locks .init.text .init.data .init.setup .initcall.init .con_initcall.init .x86_cpu_dev.init .altinstructions .altinstr_replacement .exit.text .init.ramfs
           04     .data.percpu
           05     .bss
           06     .notes

    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-01-20  x86: move stack_canary into irq_stack  (Brian Gerst, 1 file, -2/+6)

    Impact: x86_64 percpu area layout change, irq_stack now at the beginning

    Now that the PDA is empty except for the stack canary, it can be removed. The irqstack is moved to the start of the per-cpu section. If the stack protector is enabled, the canary overlaps the bottom 48 bytes of the irqstack.

    tj: * updated subject
        * dropped asm relocation of irq_stack_ptr
        * updated comments a bit
        * rebased on top of stack canary changes

    Signed-off-by: Brian Gerst <brgerst@gmail.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>

2009-01-16  x86: convert pda ops to wrappers around x86 percpu accessors  (Tejun Heo, 1 file, -1/+0)

    pda is now a percpu variable and there's no reason it can't use plain x86 percpu accessors. Add x86_test_and_clear_bit_percpu() and replace pda op implementations with wrappers around x86 percpu accessors.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-01-16  x86: make pda a percpu variable  (Tejun Heo, 1 file, -1/+3)

    [ Based on original patch from Christoph Lameter and Mike Travis. ]

    As pda is now allocated in the percpu area, it can easily be made a proper percpu variable. Make it so by defining the per-cpu symbol from the linker script and declaring it in C code for SMP, and by simply defining it for UP.

    This change cleans up code and brings SMP and UP a bit closer.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-01-16  x86: fold pda into percpu area on SMP  (Tejun Heo, 1 file, -2/+4)

    [ Based on original patch from Christoph Lameter and Mike Travis. ]

    Currently pdas and percpu areas are allocated separately. %gs points to the local pda and the percpu area can be reached using pda->data_offset. This patch folds the pda into the percpu area.

    Due to a strange gcc requirement, the pda needs to be at the beginning of the percpu area so that pda->stack_canary is at %gs:40. To achieve this, a new percpu output section macro - PERCPU_VADDR_PREALLOC() - is added and used to reserve a pda-sized chunk at the start of the percpu area.

    After this change, for the boot cpu, %gs first points to the pda in the data.init area and later, during setup_per_cpu_areas(), gets updated to point to the actual pda. This means that setup_per_cpu_areas() needs to reload %gs for CPU0 while clearing the pda area for the other cpus, as cpu0 has already modified it by the time control reaches setup_per_cpu_areas().

    This patch also removes the now unnecessary get_local_pda() and its call sites.

    A lot of this patch is taken from Mike Travis' "x86_64: Fold pda into per cpu area" patch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-01-16  x86: make percpu symbols zerobased on SMP  (Tejun Heo, 1 file, -1/+16)

    [ Based on original patch from Christoph Lameter and Mike Travis. ]

    This patch makes percpu symbols zerobased on x86_64 SMP by adding PERCPU_VADDR() to vmlinux.lds.h, which helps setting an explicit vaddr on the percpu output section, and using it in vmlinux_64.lds.S. A new PHDR is added as existing ones cannot contain sections near address zero. PERCPU_VADDR() also adds a new symbol __per_cpu_load which always points to the vaddr of the loaded percpu data.init region.

    The following adjustments have been made to accommodate the address change.

    * code to locate percpu gdt_page in head_64.S is updated to add the load address to the gdt_page offset.

    * __per_cpu_load is used in places where access to the init data area is necessary.

    * pda->data_offset is initialized soon after C code is entered, as a zero value doesn't work anymore.

    This patch is mostly taken from Mike Travis' "x86_64: Base percpu variables at zero" patch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

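In the 64-bit script this boils down to a single macro invocation placing the percpu output section at virtual address zero and in its own program header. A sketch of the usage (the exact PERCPU_VADDR() argument list in vmlinux.lds.h and the phdr name are assumptions here):

        /* percpu symbols start at vaddr 0; a dedicated PT_LOAD header is
         * needed because no existing segment can contain sections near 0 */
        PERCPU_VADDR(0, :percpu)
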
2008-12-12  tracing/function-graph-tracer: add a new .irqentry.text section  (Frederic Weisbecker, 1 file, -0/+1)

    Impact: let the function-graph-tracer be aware of the irq entrypoints

    Add a new .irqentry.text section to store the irq entrypoint functions inside the same section. This way, the tracer will be able to signal an interrupt triggering on its output by recognizing these entrypoints. Also, make this section recordable for dynamic tracing.

    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

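In the linker script this amounts to grouping the irq entry functions and bracketing them with start/end symbols the tracer can compare against. A sketch; the __irqentry_text_* symbol names follow the usual convention and are assumed here:

        .text : AT(ADDR(.text) - LOAD_OFFSET) {
                /* ... regular kernel text ... */
                __irqentry_text_start = .;
                *(.irqentry.text)       /* irq entry points, kept contiguous for the tracer */
                __irqentry_text_end = .;
        } :text
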
2008-10-12  x86: fix early panic on amd64 due to typo in supported CPU section  (Petr Vandrovec, 1 file, -1/+1)

    Do not crash when enumerating supported CPU architectures.

    SECURITY_INIT somehow ended up in the x86_cpu_dev.init section. That caused the printk in the code which prints supported architectures to hit a #GP due to a non-canonical address being used.

    Signed-off-by: Petr Vandrovec <petr@vandrovec.name>
    Cc: thomas.petazzoni@free-electrons.com
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2008-09-04  x86: remove cpu_vendor_dev  (Yinghai Lu, 1 file, -5/+4)

    1. add c_x86_vendor into cpu_dev
    2. change cpu_devs to static
    3. check c_x86_vendor before putting that cpu_dev into the array
    4. remove alignment for 64bit
    5. order the sequence in cpu_devs according to link sequence... so Intel can be put first, then AMD...

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2008-07-15  Merge branch 'core/rodata' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds, 1 file, -6/+4)

    * 'core/rodata' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
      move BUG_TABLE into RODATA

2008-07-08  Merge branches 'x86/numa-fixes', 'x86/apic', 'x86/apm', 'x86/bitops', 'x86/build', 'x86/cleanups', 'x86/cpa', 'x86/cpu', 'x86/defconfig', 'x86/gart', 'x86/i8259', 'x86/intel', 'x86/irqstats', 'x86/kconfig', 'x86/ldt', 'x86/mce', 'x86/memtest', 'x86/pat', 'x86/ptemask', 'x86/resumetrace', 'x86/threadinfo', 'x86/timers', 'x86/vdso' and 'x86/xen' into x86/devel  (Ingo Molnar, 1 file, -7/+1)

2008-07-08  x86: make 64bit identify_cpu use cpu_dev v2  (Yinghai Lu, 1 file, -0/+1)

    v2: fix early_panic on this config:
        http://redhat.com/~mingo/misc/config-Thu_Jun_19_14_22_37_CEST_2008.bad

    reason: struct cpu_vendor_dev size is 16, so the table needs to be 16-byte aligned. Also print out the supported cpus...

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Cc: Dave Jones <davej@redhat.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2008-05-25  x86: move tracedata to RODATA  (Jan Beulich, 1 file, -7/+0)

    .. allowing it to be write-protected just as other read-only data under CONFIG_DEBUG_RODATA.

    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2008-05-25  move BUG_TABLE into RODATA  (Jan Beulich, 1 file, -6/+4)

    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2008-04-29  x86_64 vDSO: use initdata  (Roland McGrath, 1 file, -6/+0)

    The 64-bit vDSO image is in a special ".vdso" section for no reason I can determine. Furthermore, the location of the vdso_end symbol includes some wrongly-calculated padding space in the image, which is then (correctly) rounded to page size, resulting in an extra page of zeros in the image mapped in to user processes.

    This changes it to put the vdso.so image into normal initdata as we have always done for the 32-bit vDSO images. The extra padding is gone, so the user VMA is one page instead of two. The image that was already copied around at boot time is now in initdata, so we recover that wasted space after boot.

    Signed-off-by: Roland McGrath <roland@redhat.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2008-04-17  x86: use ELF section to list CPU vendor specific code  (Thomas Petazzoni, 1 file, -0/+5)

    Replace the hardcoded list of initialization functions for each CPU vendor by a list in an ELF section, which is read at initialization in arch/x86/kernel/cpu/cpu.c to fill the cpu_devs[] array. The ELF section, named .x86cpuvendor.init, is reclaimed after boot, and contains entries of type "struct cpu_vendor_dev" which associate a vendor number with a pointer to a "struct cpu_dev" structure.

    This first modification allows removing all the VENDOR_init_cpu() functions.

    This patch also removes the hardcoded calls to early_init_amd() and early_init_intel(). Instead, we add a "c_early_init" member to the cpu_dev structure, which is then called, if not NULL, by the generic CPU initialization code. Unfortunately, in early_cpu_detect(), this_cpu is not yet set, so we have to use the cpu_devs[] array directly.

    This patch is part of the Linux Tiny project, and is needed for a further patch that will allow disabling compilation of unused CPU support code.

    Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

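In the linker script such a vendor table is a small init-data section bracketed by start/end symbols that cpu.c can iterate over. A sketch; the __x86cpuvendor_start/_end symbol names are an assumption for illustration:

        .x86cpuvendor.init : AT(ADDR(.x86cpuvendor.init) - LOAD_OFFSET) {
                __x86cpuvendor_start = .;
                *(.x86cpuvendor.init)   /* one struct cpu_vendor_dev entry per vendor */
                __x86cpuvendor_end = .;
        }
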
2008-04-17  x86: check vmlinux limits, 64-bit  (Ingo Molnar, 1 file, -0/+6)

    These build-time and link-time checks would have prevented the vmlinux size regression.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

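Link-time size checks of this kind are expressed with ld's ASSERT(). A sketch of the idea; the symbols and the limit name are illustrative, not necessarily the ones added by this commit:

        /* fail the link if the kernel image outgrows the area it must fit in */
        ASSERT((_end - _text <= KERNEL_IMAGE_SIZE),
               "kernel image bigger than KERNEL_IMAGE_SIZE")
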
2008-02-19  x86: lds - Use THREAD_SIZE instead of numeric constant  (Cyrill Gorcunov, 1 file, -1/+1)

    Though we use the PDA for the regular task stacks, that is not acceptable for init_task, which is a special one. We still have to allocate init_task's stack in that manner.

    Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
    Acked-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2008-02-19  x86: lds - Use PAGE_SIZE instead of numeric constant  (Cyrill Gorcunov, 1 file, -14/+14)

    It's much better to use PAGE_SIZE than a magic 4096 (though it's almost a synonym in most cases on x86, it is not for *all* cases ;)

    Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
    Acked-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

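The change is mechanical; the typical before/after in the script looks like this (illustrative):

        . = ALIGN(4096);            /* before: magic number */
        . = ALIGN(PAGE_SIZE);       /* after: symbolic page-size constant */
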
2008-01-30  x86: provide __parainstructions section  (Glauber de Oliveira Costa, 1 file, -0/+8)

    This patch adds the __parainstructions section to vmlinux.lds.S. It's needed for the patching system.

    Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

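The paravirt patching code walks a table of patch sites between two symbols, so the section looks roughly like this. A sketch; the alignment and exact symbol placement may differ from the actual hunk:

        . = ALIGN(8);
        .parainstructions : AT(ADDR(.parainstructions) - LOAD_OFFSET) {
                __parainstructions = .;
                *(.parainstructions)        /* paravirt patch-site entries */
                __parainstructions_end = .;
        }
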
2008-01-30  x86-64: clean up linker script  (Jan Beulich, 1 file, -8/+7)

    Remove the dead .text.lock. Move _etext and __{start,stop}___ex_table into their sections.

    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2008-01-29  all archs: consolidate init and exit sections in vmlinux.lds.h  (Sam Ravnborg, 1 file, -6/+13)

    This patch consolidates all definitions of the .init.text, .init.data and .exit.text, .exit.data section definitions in the generic vmlinux.lds.h.

    This is a preparatory patch - alone it does not buy us much good.

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>

2007-10-11  x86_64: move kernel  (Thomas Gleixner, 1 file, -0/+235)

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>