| author | Bjorn Helgaas <bjorn.helgaas@hp.com> | 2006-05-06 03:19:50 +0400 |
|---|---|---|
| committer | Tony Luck <tony.luck@intel.com> | 2006-05-09 03:32:05 +0400 |
| commit | 32e62c636a728cb39c0b3bd191286f2ca65d4028 (patch) | |
| tree | 656454a01e720819103c172daae15b5f2fd85d68 /arch/ia64/pci | |
| parent | 6810b548b25114607e0814612d84125abccc0a4f (diff) | |
| download | linux-32e62c636a728cb39c0b3bd191286f2ca65d4028.tar.xz | |
[IA64] rework memory attribute aliasing
This closes a couple of holes in our attribute aliasing avoidance scheme:
- The current kernel fails mmaps of some /dev/mem MMIO regions because
they don't appear in the EFI memory map. This keeps X from working
on the Intel Tiger box.
- The current kernel allows UC mmap of the 0-1MB region of
/sys/.../legacy_mem even when the chipset doesn't support UC
access. This causes an MCA when starting X on HP rx7620 and rx8620
boxes in the default configuration.
There's more detail in the Documentation/ia64/aliasing.txt file this
adds, but the general idea is that if a region might be covered by
a granule-sized kernel identity mapping, any access via /dev/mem or
mmap must use the same attribute as the identity mapping.
Otherwise, we fall back to using an attribute that is supported
according to the EFI memory map, or to using UC if the EFI memory
map doesn't mention the region.
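
In rough terms, that policy reduces to a three-way decision. The following is a minimal stand-alone C sketch of that decision, not the kernel's actual code; every helper below (`covered_by_identity_mapping()`, `identity_mapping_attr()`, `efi_memmap_attr()`) is a hypothetical stub invented for illustration:

```c
/*
 * Illustrative sketch of the attribute-selection policy described above.
 * This is NOT the kernel implementation; all helpers are hypothetical stubs.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum { ATTR_WB, ATTR_UC } memattr_t;	/* write-back vs. uncacheable */

/* Stub: might a granule-sized kernel identity mapping cover this range? */
static bool covered_by_identity_mapping(unsigned long phys, unsigned long size)
{
	(void)phys; (void)size;
	return false;
}

/* Stub: attribute used by that identity mapping. */
static memattr_t identity_mapping_attr(unsigned long phys)
{
	(void)phys;
	return ATTR_WB;
}

/* Stub: if the EFI memory map describes the range, report a supported attribute. */
static bool efi_memmap_attr(unsigned long phys, unsigned long size, memattr_t *attr)
{
	(void)phys; (void)size;
	*attr = ATTR_UC;
	return false;
}

/* Attribute a /dev/mem or mmap access must use, following the rules above. */
static memattr_t choose_access_attr(unsigned long phys, unsigned long size)
{
	memattr_t attr;

	if (covered_by_identity_mapping(phys, size))
		return identity_mapping_attr(phys);	/* never alias the identity mapping */
	if (efi_memmap_attr(phys, size, &attr))
		return attr;				/* attribute the EFI map supports */
	return ATTR_UC;					/* unknown to EFI: fall back to UC */
}

int main(void)
{
	printf("chosen attribute: %d\n", choose_access_attr(0xff000000UL, 0x1000UL));
	return 0;
}
```

In the diff below, pci_mmap_legacy_page_range() applies the same idea: it asks phys_mem_access_prot() for the attribute the region may safely use and refuses the mapping with -EINVAL unless that attribute is the uncacheable one the legacy window requires.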
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Diffstat (limited to 'arch/ia64/pci')
-rw-r--r-- | arch/ia64/pci/pci.c | 17 |
1 file changed, 15 insertions, 2 deletions
diff --git a/arch/ia64/pci/pci.c b/arch/ia64/pci/pci.c
index ab829a22f8a4..30d148f34042 100644
--- a/arch/ia64/pci/pci.c
+++ b/arch/ia64/pci/pci.c
@@ -645,18 +645,31 @@ char *ia64_pci_get_legacy_mem(struct pci_bus *bus)
 int
 pci_mmap_legacy_page_range(struct pci_bus *bus, struct vm_area_struct *vma)
 {
+	unsigned long size = vma->vm_end - vma->vm_start;
+	pgprot_t prot;
 	char *addr;
 
+	/*
+	 * Avoid attribute aliasing. See Documentation/ia64/aliasing.txt
+	 * for more details.
+	 */
+	if (!valid_mmap_phys_addr_range(vma->vm_pgoff << PAGE_SHIFT, size))
+		return -EINVAL;
+	prot = phys_mem_access_prot(NULL, vma->vm_pgoff, size,
+				    vma->vm_page_prot);
+	if (pgprot_val(prot) != pgprot_val(pgprot_noncached(vma->vm_page_prot)))
+		return -EINVAL;
+
 	addr = pci_get_legacy_mem(bus);
 	if (IS_ERR(addr))
 		return PTR_ERR(addr);
 
 	vma->vm_pgoff += (unsigned long)addr >> PAGE_SHIFT;
-	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+	vma->vm_page_prot = prot;
 	vma->vm_flags |= (VM_SHM | VM_RESERVED | VM_IO);
 
 	if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
-			    vma->vm_end - vma->vm_start, vma->vm_page_prot))
+			    size, vma->vm_page_prot))
 		return -EAGAIN;
 
 	return 0;
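
From user space, the new checks surface as an mmap() failure. Below is a hypothetical C example of how an X-server-like program might map the legacy 0-1MB window through sysfs; the sysfs path is an assumption (it varies by platform and bus), and the example only shows that EINVAL is now the reported error when a safe uncacheable mapping cannot be granted:

```c
/*
 * Hypothetical user-space example: map the legacy 0-1MB memory window.
 * The sysfs path below is an assumption and differs between systems;
 * the point is only that mmap() now fails with EINVAL when the kernel
 * cannot grant a safe uncacheable mapping for the requested range.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define LEGACY_MEM_PATH "/sys/class/pci_bus/0000:00/legacy_mem"	/* assumed path */
#define LEGACY_MEM_SIZE (1024 * 1024)					/* 0-1MB window */

int main(void)
{
	int fd = open(LEGACY_MEM_PATH, O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	void *p = mmap(NULL, LEGACY_MEM_SIZE, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		/* With this patch, EINVAL indicates the attribute check refused the mapping. */
		fprintf(stderr, "mmap legacy_mem: %s\n", strerror(errno));
		close(fd);
		return 1;
	}

	/* ... access the VGA frame buffer / BIOS area through p here ... */

	munmap(p, LEGACY_MEM_SIZE);
	close(fd);
	return 0;
}
```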