Commit b38af47

Hugh Dickins authored and torvalds committed
x86,mm: fix pte_special versus pte_numa
Sasha Levin has shown oopses on ffffea0003480048 and ffffea0003480008 at
mm/memory.c:1132, running Trinity on different 3.16-rc-next kernels: where
zap_pte_range() checks page->mapping to see if PageAnon(page).

Those addresses fit struct pages for pfns d2001 and d2000, and in each dump a
register or a stack slot showed d2001730 or d2000730: pte flags 0x730 are PCD
ACCESSED PROTNONE SPECIAL IOMAP; and Sasha's e820 map has a hole between
cfffffff and 100000000, which would need special access.

Commit c46a7c8 ("x86: define _PAGE_NUMA by reusing software bits on the PMD
and PTE levels") has broken vm_normal_page(): a PROTNONE SPECIAL pte no
longer passes the pte_special() test, so zap_pte_range() goes on to try to
access a non-existent struct page.

Fix this by refining pte_special() (SPECIAL with PRESENT or PROTNONE) to
complement pte_numa() (SPECIAL with neither PRESENT nor PROTNONE).

A hint that this was a problem was that c46a7c8 added a pte_numa() test to
vm_normal_page(), and moved its is_zero_pfn() test from slow to fast path:
this was papering over a pte_special() snag when the zero page was
encountered during zap. This patch reverts vm_normal_page() to how it was
before, relying on pte_special().

It still appears that this patch may be incomplete: aren't there other places
which need to be handling PROTNONE along with PRESENT? For example,
pte_mknuma() clears _PAGE_PRESENT and sets _PAGE_NUMA, but on a PROT_NONE
area that would make it pte_special(). This is side-stepped by the fact that
NUMA hinting faults skip PROT_NONE VMAs, and there are no grounds on which a
NUMA hinting fault on a PROT_NONE VMA would be interesting.

Fixes: c46a7c8 ("x86: define _PAGE_NUMA by reusing software bits on the PMD and PTE levels")
Reported-by: Sasha Levin <[email protected]>
Tested-by: Sasha Levin <[email protected]>
Signed-off-by: Hugh Dickins <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Cyrill Gorcunov <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: <[email protected]> [3.16]
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 7ea8574

2 files changed: +10 −6

arch/x86/include/asm/pgtable.h (+7 −2)
@@ -131,8 +131,13 @@ static inline int pte_exec(pte_t pte)
 
 static inline int pte_special(pte_t pte)
 {
-	return (pte_flags(pte) & (_PAGE_PRESENT|_PAGE_SPECIAL)) ==
-		 (_PAGE_PRESENT|_PAGE_SPECIAL);
+	/*
+	 * See CONFIG_NUMA_BALANCING pte_numa in include/asm-generic/pgtable.h.
+	 * On x86 we have _PAGE_BIT_NUMA == _PAGE_BIT_GLOBAL+1 ==
+	 * __PAGE_BIT_SOFTW1 == _PAGE_BIT_SPECIAL.
+	 */
+	return (pte_flags(pte) & _PAGE_SPECIAL) &&
+		(pte_flags(pte) & (_PAGE_PRESENT|_PAGE_PROTNONE));
 }
 
 static inline unsigned long pte_pfn(pte_t pte)

mm/memory.c (+3 −4)
@@ -751,7 +751,7 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 	unsigned long pfn = pte_pfn(pte);
 
 	if (HAVE_PTE_SPECIAL) {
-		if (likely(!pte_special(pte) || pte_numa(pte)))
+		if (likely(!pte_special(pte)))
 			goto check_pfn;
 		if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
 			return NULL;
@@ -777,15 +777,14 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 		}
 	}
 
+	if (is_zero_pfn(pfn))
+		return NULL;
check_pfn:
 	if (unlikely(pfn > highest_memmap_pfn)) {
 		print_bad_pte(vma, addr, pte, NULL);
 		return NULL;
 	}
 
-	if (is_zero_pfn(pfn))
-		return NULL;
-
 	/*
 	 * NOTE! We still have PageReserved() pages in the page tables.
 	 * eg. VDSO mappings can cause them to exist.