| Commit message | Author | Age | Files | Lines |
| |
should be updated.
Helped by: andre
| |
number of segments it will hold.
The following tunables and sysctls control the behaviour of the TCP
segment reassembly queue:
net.inet.tcp.reass.maxsegments (loader tunable)
  specifies the maximum number of segments all TCP reassembly queues can
  hold (defaults to 1/16 of nmbclusters).
net.inet.tcp.reass.maxqlen
  specifies the maximum number of segments any individual TCP session queue
  can hold (defaults to 48).
net.inet.tcp.reass.cursegments (read-only)
  counts the number of segments currently in all reassembly queues.
net.inet.tcp.reass.overflows (read-only)
  counts how often either the global or the local queue limit has been
  reached.
Tested by: bms, silby
Reviewed by: bms, silby
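
A minimal userland sketch (not part of the commit) of how such counters can
be read, using sysctlbyname(3); it assumes a kernel that exposes the OIDs
under exactly the names listed above:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	int maxqlen, cursegments;
	size_t len;

	len = sizeof(maxqlen);
	if (sysctlbyname("net.inet.tcp.reass.maxqlen", &maxqlen, &len,
	    NULL, 0) == -1)
		perror("net.inet.tcp.reass.maxqlen");
	else
		printf("per-queue segment limit: %d\n", maxqlen);

	len = sizeof(cursegments);
	if (sysctlbyname("net.inet.tcp.reass.cursegments", &cursegments,
	    &len, NULL, 0) == -1)
		perror("net.inet.tcp.reass.cursegments");
	else
		printf("segments currently queued: %d\n", cursegments);

	return (0);
}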
| |
address, even if we subsequently ignore its value by applying a >>8
to it.
Reported by: "Ted Unangst" <tedu@coverity.com>
Approved by: rwatson (mentor), {ume, suz} (KAME)
| |
would be changed in subsequent patches, after further verification.
Approved by: imp (mentor)
| |
Approved by: scottl (mentor)
| |
rollover resulting in duplicate keypress events.
PR: 57273
PR: 63171
Submitted by: plasma <plasma at freebsd.sinica.edu.tw>
Submitted by: Brian Candler <B.Candler at pobox.com>
MFC after: 1 week
| |
Submitted by: Jon Noack <noackjr@alumni.rice.edu>
| |
The nonstandard formatting made my mega-patch scripts miss it.
Retire the static major number while we're here anyway.
Reported by: Niels Chr. Bank-Pedersen <ncbp@bank-pedersen.dk>
| |
the cdevsw{}.
Submitted by: tegge
| |
AFTER the call to vn_start_write(), not before it. Otherwise, it is
possible to unlock it multiple times if vn_start_write() fails.
Submitted by: Juergen Hannken-Illjes <hannken@eis.cs.tu-bs.de>
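
A hedged sketch of the safe ordering described above (not the committed
diff; the vn_start_write()/VOP_UNLOCK() signatures vary between FreeBSD
versions, and the helper name is hypothetical; kernel context, usual
sys/vnode.h and sys/mount.h headers assumed):

/*
 * Sketch only: "vp" is a vnode the caller holds locked.  Enter write
 * suspension first, and drop the vnode lock only after vn_start_write()
 * has succeeded; if it fails, the vnode has not been unlocked yet, so
 * whoever cleans up unlocks it exactly once.
 */
static int
example_suspend_then_unlock(struct vnode *vp)
{
	struct mount *mp;
	int error;

	error = vn_start_write(vp, &mp, V_WAIT);
	if (error != 0)
		return (error);		/* vnode still locked once */
	VOP_UNLOCK(vp);			/* now safe to drop the vnode lock */
	/* ... do the work, then call vn_finished_write(mp) ... */
	return (0);
}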
| |
In ufs_lock, check for attempts to acquire shared locks on
snapshot files and change them to be exclusive locks. This
change eliminates deadlocks and machine lockups reported in
-current since most read requests started using shared locks.
Submitted by: Jun Kuriyama <kuriyama@imgsrc.co.jp>
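
A hedged sketch of the idea (not the committed code; LK_*, SF_SNAPSHOT and
VTOI() are as used in the lockmgr/UFS code of that era, the function name
is hypothetical, and the usual kernel and UFS inode headers are assumed):

/*
 * Sketch only: if a shared lock is requested on a snapshot vnode,
 * rewrite the request into an exclusive one before handing it to the
 * generic lock routine, so snapshot handling never runs under a
 * shared lock.
 */
static int
example_ufs_lock(struct vop_lock_args *ap)
{
	struct vnode *vp = ap->a_vp;
	int flags = ap->a_flags;

	if ((flags & LK_TYPE_MASK) == LK_SHARED &&
	    (VTOI(vp)->i_flags & SF_SNAPSHOT) != 0)
		flags = (flags & ~LK_TYPE_MASK) | LK_EXCLUSIVE;

	/* ... pass "flags" on to the underlying lock implementation ... */
	return (0);		/* placeholder return for the sketch */
}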
| |
allocate via DRI on r128 devices.
Obtained from: Thomas Biege <thomas@suse.de>
Reviewed by: scottl
| |
swap_pager_putpages()'s buffer completion code. Note: the only
difference between swp_pager_sync_iodone() and bdone(), aside from
the locking in the latter, was the unnecessary clearing of B_ASYNC.
- Remove an unnecessary pmap_page_protect() from
swp_pager_async_iodone().
Reviewed by: tegge
| |
u_long ** not u_long *.
| |
Allocating it with the wrong size could have caused corruption on
64-bit architectures.
| |
of all, PIPE_EOF is not checked pervasively after everything that can drop
the pipe mutex and msleep(), so fix that. Additionally, though it might not
harm anything, pipelock() and pipeunlock() are not used consistently.
Third, the kqueue support functions do not use the pipe mutex correctly.
Last, but absolutely not least, is a race: if pipe_busy is not set on
the closing side of the pipe, the other side trying to write to it
will crash BECAUSE PIPE_EOF IS NOT SET! Unconditionally set
PIPE_EOF, and get rid of all the lockups/crashes I have seen while
trying to build ports.
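
A hedged sketch of the last point (not the committed diff; PIPE_LOCK(),
PIPE_EOF and pipe_state are from sys/pipe.h, the helper name is
hypothetical, and the usual kernel headers are assumed):

/*
 * Sketch only: in the close path, mark the pipe as at EOF unconditionally
 * (previously this was skipped when pipe_busy was clear) and wake any
 * sleeping reader or writer, so the peer always observes EOF instead of
 * racing with the teardown.
 */
static void
example_pipe_mark_eof(struct pipe *cpipe)
{
	PIPE_LOCK(cpipe);
	cpipe->pipe_state |= PIPE_EOF;
	wakeup(cpipe);
	PIPE_UNLOCK(cpipe);
}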
| |
and pid.
| |
an integer type and a cast to (void *) was added in the
definition of NULL for the kernel, we need to use 0 here instead.
Partly submitted by: cperciva
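
The practical consequence: with NULL spelled ((void *)0), it can no longer
be used where a plain integer is expected. A minimal sketch (hypothetical
names):

/*
 * Sketch only, hypothetical names.  With NULL defined as ((void *)0),
 * using NULL to initialize the integer member would assign a pointer
 * constant to an int and draw a compiler diagnostic; plain 0 is the
 * correct spelling there, while pointer members may keep NULL.
 */
#define NULL	((void *)0)	/* as in the kernel headers */

struct example {
	int	timeout;	/* integer member: initialize with 0 */
	char	*name;		/* pointer member: NULL is fine here */
};

static struct example e = {
	0,	/* using NULL here would assign a pointer to an int */
	NULL,
};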
| |
Now I believe it is done in the right way.
Removed some XXMAC cases; we now assume a 'high' integrity level for all
sysctls, except those with the CTLFLAG_ANYBODY flag set. No more magic.
Reviewed by: rwatson
Approved by: rwatson, scottl (mentor)
Tested with: LINT (compilation), mac_biba(4) (functionality)
| |
Reported by: "Ted Unangst" <tedu@coverity.com>
Approved by: rwatson (mentor)
| |
Reported by: "Ted Unangst" <tedu@coverity.com>
Approved by: rwatson (mentor)
| |
two free commands.
Reported by: "Ted Unangst" <tedu@coverity.com>
Approved by: rwatson (mentor), scottl
| |
Reported by: "Ted Unangst" <tedu@coverity.com>
Approved by: rwatson (mentor), scottl
| |
could result in a dirty page being unintentionally freed.
Reviewed by: tegge
MFC after: 7 days
| |
with a memory-mapped I/O range that's immediately before it and is
not 256MB aligned. As a result, when an address is accessed in the
memory-mapped range and a direct mapping is added for it, it overlaps
with the pre-mapped I/O port space and causes a machine check.
Based on a patch from: arun@
| |
free it again.
Reported by: "Ted Unangst" <tedu@coverity.com>
Approved by: rwatson (mentor)
| |
Reported by: "Ted Unangst" <tedu@coverity.com>
Approved by: rwatson (mentor), ken (scsi@)
| |
Reported by: "Ted Unangst" <tedu@coverity.com>
Approved by: rwatson (mentor)
| |
Reported by: "Ted Unangst" <tedu@coverity.com>
Approved by: rwatson (mentor)
| |
try to dereference logData.
Reported by: "Ted Unangst" <tedu@coverity.com>
Approved by: rwatson (mentor), scottl
| |
sizeof(struct foo *) bytes.
Reported by: "Ted Unangst" <tedu@coverity.com>
Approved by: rwatson (mentor), scottl
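
This is the classic allocate-sizeof-the-pointer bug; a minimal userland
sketch (struct foo here is hypothetical) of the wrong and right spellings:

/*
 * Sketch only, hypothetical type: allocating sizeof(struct foo *) reserves
 * room for a pointer (4 or 8 bytes), not for the structure itself, so any
 * write past the first word corrupts adjacent memory.
 */
#include <stdlib.h>

struct foo {
	int	a;
	int	b;
	char	name[32];
};

int
main(void)
{
	struct foo *p;

	/* WRONG: only sizeof(struct foo *) bytes would be allocated. */
	/* p = malloc(sizeof(struct foo *)); */

	/* RIGHT: allocate the size of what p points at. */
	p = malloc(sizeof(*p));
	if (p == NULL)
		return (1);
	free(p);
	return (0);
}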
| |
Reported by: "Ted Unangst" <tedu@coverity.com>
Approved by: rwatson (mentor)
| |
to use the "year1-year3" format, as opposed to "year1, year2, year3".
This seems to make lawyers happier, but also prevents the
lines from getting excessively long as the years start to add up.
Suggested by: imp
| |
different alignments due to the SSE fxsave dump area.
| |
i386 version and were not merged over.
| |
of vm_pageout_flush(). Instead, assert that the page is still write
protected.
Discussed with: tegge
| |
idmap_add failure case (found by Ted Unangst via Colin Percival).
Also convert idmap_hashf() to return void, since it can't fail,
and change some panics to error returns.
| |
by 1 u_int if the number of clusters was 1 more than a multiple of
(8 * sizeof(u_int)). The bitmap is malloced and large (often huge), so
fatal overrun probably only occurred if the number of clusters was 1
more than a multiple of PAGE_SIZE/8.
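
For reference, a small sketch (hypothetical names, not the committed fix)
of sizing such a bitmap so the allocation is rounded up to whole u_ints,
using the howmany() idiom from <sys/param.h>:

#include <sys/param.h>		/* howmany(), NBBY, u_int, u_long */
#include <stddef.h>

/*
 * Sketch only: bytes needed for a bitmap holding one bit per cluster,
 * allocated in whole u_int words.  A plain truncating division would
 * undersize the buffer whenever the cluster count is not an exact
 * multiple of 8 * sizeof(u_int).
 */
static size_t
example_bitmap_bytes(u_long nclusters)
{
	return (howmany(nclusters, NBBY * sizeof(u_int)) * sizeof(u_int));
}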
| |
attrs on one or more entries