author    | dillon <dillon@FreeBSD.org> | 2001-02-04 06:19:28 +0000
committer | dillon <dillon@FreeBSD.org> | 2001-02-04 06:19:28 +0000
commit    | c8a95a285d887ca5e3be08f3c04f4efde25a5bf1
tree      | 9e2c267eac87bf24403df67147f7cc32082387c6 /sys/kern/vfs_bio.c
parent    | bb46130566593c534d92d1f9327521daa1faa946
This commit represents work mainly submitted by Tor and slightly modified
by myself. It solves a serious vm_map corruption problem that can occur
with the buffer cache when block sizes > 64K are used. This code has been
heavily tested in -stable but only somewhat tested on -current. An MFC
will occur in a few days. My additions include the vm_map_simplify_entry()
calls and a minor buffer cache boundary case fix.
Make the buffer cache use a system map for its KVM rather than a
normal map.
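For illustration only, a minimal sketch of what "use a system map" means here, assuming the buffer map is carved out with kmem_suballoc() and that struct vm_map carries a system_map flag; the actual change lives in machine-dependent startup code, not in vfs_bio.c, and buffer_sva/buffer_eva and the size expression are placeholders:

```c
/*
 * Sketch (not the committed hunk): create the buffer cache submap and
 * mark it as a system map.  A system map is kernel-only, never backed
 * by pageable objects, and the VM code is expected not to sleep while
 * manipulating it.
 */
buffer_map = kmem_suballoc(kernel_map, &buffer_sva, &buffer_eva,
    (vm_size_t)nbuf * BKVASIZE);
buffer_map->system_map = 1;	/* assumption: flag on struct vm_map */
```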
Ensure that VM objects are not allocated for system maps. There were cases
where a buffer map could wind up with a backing VM object -- normally
harmless, but this could also result in the buffer cache blocking in places
where it assumes no blocking will occur, possibly resulting in corrupted
maps.
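A rough sketch of the guard being described, assuming it sits where map entries get their backing object; the real fix is in the VM map code, outside this file, so the placement and surrounding names are illustrative:

```c
/*
 * Sketch (illustrative placement): never hand a backing VM object to an
 * entry in a system map such as buffer_map.  Allocating the object can
 * sleep, and system-map callers assume the operation will not block.
 */
if (map->system_map) {
	object = NULL;
} else if (object == NULL) {
	/* normal maps may lazily allocate a default object */
	object = vm_object_allocate(OBJT_DEFAULT, atop(end - start));
}
```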
Fix a minor boundary case, hit when the buffer cache size limit is reached,
that could result in non-optimal code.
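To make the boundary case concrete, a small self-contained illustration with made-up numbers (not taken from the commit): the old test let getnewbuf() take an EMPTY buffer even when wiring up its KVA would push bufspace past hibufspace, after which the allocator immediately had to turn around and flush.

```c
#include <stdio.h>

int
main(void)
{
	long hibufspace = 64L * 1024 * 1024;	/* buffer space limit */
	long bufspace   = hibufspace - 4096;	/* nearly at the limit */
	long maxsize    = 128L * 1024;		/* KVA the new buffer needs */

	/* old test: admits the buffer, then overshoots the limit */
	printf("bufspace < hibufspace:           %d\n",
	    bufspace < hibufspace);
	/* new test: charges the prospective KVA up front, skips QUEUE_EMPTY */
	printf("bufspace + maxsize < hibufspace: %d\n",
	    bufspace + maxsize < hibufspace);
	return (0);
}
```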
Add vm_map_simplify_entry() calls to prevent 'creeping proliferation'
of vm_map_entry structures in the buffer cache's vm_map. Previously only a
simple linear optimization was made. (The buffer vm_map typically has only
a handful of vm_map_entry structures; this stabilizes it at that level
permanently.)
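Roughly what such a call looks like, sketched under the assumption that it runs after the buffer map has been modified; vm_map_simplify_entry() is the existing VM routine that merges an entry with compatible neighbours, but the placement and the addr/entry locals here are assumptions, not the committed hunk:

```c
/*
 * Sketch: after changing a range of buffer_map, look up the entry
 * covering the range and let the VM system coalesce it with adjacent
 * entries, so the map stays at a handful of entries instead of
 * fragmenting a little more on every buffer KVA cycle.
 * (addr and entry are locals assumed for this sketch.)
 */
vm_map_lock(buffer_map);
if (vm_map_lookup_entry(buffer_map, addr, &entry))
	vm_map_simplify_entry(buffer_map, entry);
vm_map_unlock(buffer_map);
```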
PR: 20609
Submitted by: (Tor Egge) tegge
Diffstat (limited to 'sys/kern/vfs_bio.c')
-rw-r--r-- | sys/kern/vfs_bio.c | 19
1 file changed, 13 insertions, 6 deletions
diff --git a/sys/kern/vfs_bio.c b/sys/kern/vfs_bio.c
index 14ae4a8..a0d693c 100644
--- a/sys/kern/vfs_bio.c
+++ b/sys/kern/vfs_bio.c
@@ -1235,9 +1235,8 @@ brelse(struct buf * bp)
 		bufcountwakeup();
 
 	/*
-	 * Something we can maybe free.
+	 * Something we can maybe free or reuse
 	 */
-
 	if (bp->b_bufsize || bp->b_kvasize)
 		bufspacewakeup();
@@ -1304,7 +1303,7 @@ bqrelse(struct buf * bp)
 	}
 
 	/*
-	 * Something we can maybe wakeup
+	 * Something we can maybe free or reuse.
 	 */
 	if (bp->b_bufsize && !(bp->b_flags & B_DELWRI))
 		bufspacewakeup();
@@ -1551,10 +1550,13 @@ restart:
 	}
 
 	/*
-	 * Nada.  If we are allowed to allocate an EMPTY
-	 * buffer, go get one.
+	 * If we could not find or were not allowed to reuse a
+	 * CLEAN buffer, check to see if it is ok to use an EMPTY
+	 * buffer.  We can only use an EMPTY buffer if allocating
+	 * its KVA would not otherwise run us out of buffer space.
 	 */
-	if (nbp == NULL && defrag == 0 && bufspace < hibufspace) {
+	if (nbp == NULL && defrag == 0 &&
+	    bufspace + maxsize < hibufspace) {
 		nqindex = QUEUE_EMPTY;
 		nbp = TAILQ_FIRST(&bufqueues[QUEUE_EMPTY]);
 	}
@@ -1686,6 +1688,11 @@ restart:
 		goto restart;
 	}
 
+	/*
+	 * If we are overcomitted then recover the buffer and its
+	 * KVM space.  This occurs in rare situations when multiple
+	 * processes are blocked in getnewbuf() or allocbuf().
+	 */
 	if (bufspace >= hibufspace)
 		flushingbufs = 1;
 	if (flushingbufs && bp->b_kvasize != 0) {