author     alc <alc@FreeBSD.org>   2012-03-22 04:52:51 +0000
committer  alc <alc@FreeBSD.org>   2012-03-22 04:52:51 +0000
commit     e02fd6b8423e63f1fdbfc1f984d7c7291a1bacd1 (patch)
tree       6628d621fc3b1f91d03b83dcd89b48528e3132fe /sys/vm/vm_fault.c
parent     1c754fb497b9c43672483c1ed5abe4f9176325cd (diff)
Handle spurious page faults that may occur in no-fault sections of the
kernel.
When access restrictions are added to a page table entry, we flush the
corresponding virtual address mapping from the TLB. In contrast, when
access restrictions are removed from a page table entry, we do not
flush the virtual address mapping from the TLB. This is exactly as
recommended in AMD's documentation. In effect, when access
restrictions are removed from a page table entry, AMD's MMUs will
transparently refresh a stale TLB entry. In short, this saves us from
having to perform potentially costly TLB flushes. In contrast,
Intel's MMUs are allowed to generate a spurious page fault based upon
the stale TLB entry. Usually, such spurious page faults are handled
by vm_fault() without incident. However, when we are executing
no-fault sections of the kernel, we are not allowed to execute
vm_fault(). This change introduces special-case handling for spurious
page faults that occur in no-fault sections of the kernel.
In collaboration with: kib
Tested by: gibbs (an earlier version)
I would also like to acknowledge Hiroki Sato's assistance in
diagnosing this problem.
MFC after: 1 week
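
As background for the API being extended here, the following sketch shows how a kernel caller might bracket a no-fault section. The helper name copy_in_nofault and the include list are illustrative assumptions, not code from this commit. Inside the bracketed region vm_fault() returns KERN_PROTECTION_FAILURE instead of being entered, so an access that would otherwise fault fails fast, and with this change a spurious fault caused by a stale TLB entry is absorbed by machine-dependent code rather than reported as an error.

#include <sys/param.h>
#include <sys/systm.h>		/* copyin() */
#include <vm/vm.h>
#include <vm/vm_extern.h>	/* vm_fault_{disable,enable}_pagefaults() */

/*
 * Hypothetical helper: copy user memory while page faults are disabled
 * for the calling thread.  Any access that would require vm_fault()
 * fails fast instead of entering the MI fault handler, so the caller
 * can fall back to a slower path that is allowed to fault.
 */
static int
copy_in_nofault(const void *uaddr, void *kaddr, size_t len)
{
	int error, save;

	save = vm_fault_disable_pagefaults();
	error = copyin(uaddr, kaddr, len);
	vm_fault_enable_pagefaults(save);
	return (error);
}

Returning the previous state from the disable call, rather than unconditionally clearing the flags on enable, is what makes nested no-fault sections safe.
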
Diffstat (limited to 'sys/vm/vm_fault.c')
-rw-r--r--   sys/vm/vm_fault.c   8
1 file changed, 7 insertions, 1 deletion
diff --git a/sys/vm/vm_fault.c b/sys/vm/vm_fault.c
index 8c98b26..8477c35 100644
--- a/sys/vm/vm_fault.c
+++ b/sys/vm/vm_fault.c
@@ -1468,11 +1468,17 @@ vm_fault_additional_pages(m, rbehind, rahead, marray, reqpage)
 	return i;
 }
 
+/*
+ * Block entry into the machine-independent layer's page fault handler by
+ * the calling thread.  Subsequent calls to vm_fault() by that thread will
+ * return KERN_PROTECTION_FAILURE.  Enable machine-dependent handling of
+ * spurious page faults.
+ */
 int
 vm_fault_disable_pagefaults(void)
 {
 
-	return (curthread_pflags_set(TDP_NOFAULTING));
+	return (curthread_pflags_set(TDP_NOFAULTING | TDP_RESETSPUR));
 }
 
 void
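
The save value returned by vm_fault_disable_pagefaults() comes from curthread_pflags_set(), which records which of the requested flags were not already set so that the paired restore clears only those, letting nested no-fault sections compose. Below is a minimal user-space model of that save/restore behavior; the flag values and the exact encoding of the save word are illustrative, not the ones in sys/proc.h.

#include <stdio.h>

#define	TDP_NOFAULTING	0x1	/* illustrative values, not sys/proc.h's */
#define	TDP_RESETSPUR	0x2

static int td_pflags;		/* stands in for curthread->td_pflags */

/* Set the requested flags; return the subset that was previously clear. */
static int
pflags_set(int flags)
{
	int save;

	save = ~td_pflags & flags;
	td_pflags |= flags;
	return (save);
}

/* Clear only the flags that the paired pflags_set() actually turned on. */
static void
pflags_restore(int save)
{
	td_pflags &= ~save;
}

int
main(void)
{
	int outer, inner;

	outer = pflags_set(TDP_NOFAULTING | TDP_RESETSPUR);
	inner = pflags_set(TDP_NOFAULTING);	/* nested section: no new bits */
	pflags_restore(inner);			/* flags remain set */
	pflags_restore(outer);			/* flags fully cleared */
	printf("final flags: %#x\n", td_pflags);
	return (0);
}

Compile and run with any C compiler: the nested restore leaves the flags in place, and only the outermost restore clears them.
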