should be placed before a routing header, unless a routing header
really exists.
Obtained from: KAME

device ID list, probably a 5705 ASIC).
Submitted by: Marcel Prisi <marcel@virtua.ch>

mbuf cluster, copy the data to a separate mbuf that does not use a
cluster. This change will reduce the possibility of packet loss
in the socket layer.
Obtained from: KAME
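
A minimal sketch of the idea using standard mbuf(9) primitives; the
helper name is hypothetical, and the committed KAME change applies this
in the protocol input path rather than as a separate function:

/*
 * If a small payload rides in a cluster mbuf, copy it into an
 * ordinary mbuf so the 2k cluster is not pinned in a socket
 * buffer at a few percent utilization.
 */
#include <sys/param.h>
#include <sys/mbuf.h>

static struct mbuf *
m_copy_from_cluster(struct mbuf *m)	/* hypothetical helper */
{
	struct mbuf *n;

	if ((m->m_flags & M_EXT) == 0 || m->m_pkthdr.len > MHLEN)
		return (m);		/* nothing to gain */
	MGETHDR(n, M_DONTWAIT, m->m_type);
	if (n == NULL)
		return (m);		/* keep the cluster on failure */
	m_move_pkthdr(n, m);		/* take over the packet header */
	m_copydata(m, 0, n->m_pkthdr.len, mtod(n, caddr_t));
	n->m_len = n->m_pkthdr.len;
	m_freem(m);			/* releases the cluster */
	return (n);
}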

Obtained from: KAME

Obtained from: KAME

Obtained from: KAME

in6_nigroup_detach().
Obtained from: KAME

- Miscellaneous style fixes in the f00f hack code and some nearby code.
Submitted by: bde

256 raw receive buffers (96 bytes each) fit into one page. This breaks the
limit imposed by the usage of a uint8_t for the buffer number. Restrict
the allocation size for buffers to a maximum of 8192.
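
Capping the allocation at 8192 makes the 8-bit buffer number safe by
construction; in kernel terms the guard amounts to a compile-time
assertion like this (constant names invented for the sketch):

#include <sys/param.h>
#include <sys/systm.h>

#define	RBUF_SIZE	96	/* one raw receive buffer */
#define	RBUF_ALLOC	8192	/* maximum allocation size */

/* An 8-bit buffer number can address at most 256 buffers. */
CTASSERT(RBUF_ALLOC / RBUF_SIZE <= 256);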

- Add an IPI-based mechanism for migrating kses. This mechanism is
  broken down into several components. This is intended to reduce cache
  thrashing by eliminating most cases where one cpu touches another's
  run queues.
- kseq_notify() appends a kse to a lockless singly linked list and
  conditionally sends an IPI to the target processor. Right now this is
  protected by sched_lock, but at some point I'd like to get rid of the
  global lock. This is why I used something more complicated than a
  standard queue. (A sketch of the lockless hand-off follows this list.)
- kseq_assign() processes our list of kses that have been assigned to us
  by other processors. This simply calls sched_add() for each item on the
  list after clearing the new KEF_ASSIGNED flag. This flag is used to
  indicate that we have been appended to the assigned queue but not
  added to the run queue yet.
- In sched_add(), instead of adding a KSE to another processor's queue we
  use kseq_notify() so that we don't touch their queue. Also in
  sched_add(), if KEF_ASSIGNED is already set, return immediately. This
  can happen if a thread is removed and readded so that the priority is
  recorded properly.
- In sched_rem(), return immediately if KEF_ASSIGNED is set. All callers
  immediately readd simply to adjust priorities etc.
- In sched_choose(), if we're running an IDLE task or the per-cpu idle
  thread, set our cpumask bit in 'kseq_idle' so that other processors may
  know that we are idle. Before this, make a single pass through the run
  queues of other processors so that we may find work more immediately if
  it is available.
- In sched_runnable(), don't scan each processor's run queue; they will
  IPI us if they have work for us to do.
- In sched_add(), if we're adding a thread that can be migrated and we
  have plenty of work to do, try to migrate the thread to an idle kseq.
- Simplify the logic in sched_prio() and take the KEF_ASSIGNED flag into
  consideration.
- No longer use kseq_choose() to steal threads; it can lose its last
  argument.
- Create a new function runq_steal() which operates like runq_choose()
  but skips threads based on some criteria. Currently it will not steal
  PRI_ITHD threads. In the future this will be used for CPU binding.
- Create a kseq_steal() that checks each run queue with runq_steal(); use
  kseq_steal() in the places where we previously used kseq_choose() to
  steal.
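
The lockless hand-off in kseq_notify()/kseq_assign() can be modeled in
a few lines. This sketch uses C11 atomics in place of the kernel's
atomic(9) operations, and the struct layout and field names are
assumptions, not the committed code:

#include <stdatomic.h>
#include <stddef.h>

struct kse {
	struct kse *ke_assign;		/* next kse in the assigned list */
	/* ... */
};

struct kseq {
	_Atomic(struct kse *) ksq_assigned;	/* incoming kses, LIFO */
};

/* Producer: push one kse; returns nonzero if the list was empty,
 * in which case the caller sends the IPI. */
static int
kseq_notify_push(struct kseq *kseq, struct kse *ke)
{
	struct kse *head;

	do {
		head = atomic_load(&kseq->ksq_assigned);
		ke->ke_assign = head;
	} while (!atomic_compare_exchange_weak(&kseq->ksq_assigned,
	    &head, ke));
	return (head == NULL);
}

/* Consumer (IPI handler): detach the whole list at once, then
 * sched_add() each entry after clearing KEF_ASSIGNED. */
static struct kse *
kseq_assign_detach(struct kseq *kseq)
{
	return (atomic_exchange(&kseq->ksq_assigned, NULL));
}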

the rstack functionality:
1. Fix a KASSERT that tests for the address to be above the upward
   growable stack. Typically for rstack, the faulting address can be
   identical to the record end of the upward growable entry, and
   very likely is on ia64. The KASSERT tested for greater than, not
   greater than or equal, so whenever the register stack had to be
   grown the assertion fired.
2. When we grow the upward growable stack entry and adjust the
   underlying object, don't forget to adjust the size of the VM map.
   Not doing so would trigger an assert in vm_map_zdtor().
Pointy hat: marcel (for not testing with INVARIANTS).

those cylinder groups that have at least 75% of the average free
space per cylinder group for that file system are considered as
candidates for the creation of a new directory. The previous formula
for minbfree would set it to zero if the file system was more than
75% full, which allowed cylinder groups with no free space at all
to be chosen as candidates for directory creation, which resulted
in an expensive search for free blocks for each file that was
subsequently created in that directory.
Modify the calculation of minifree in the same way.
Decrease maxcontigdirs as the file system fills to decrease the
likelihood that a cluster of directories will overflow the available
space in a cylinder group.
Reviewed by: mckusick
Tested by: kmarx@vicor.com
MFC after: 2 weeks
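
The thresholds described above come out to roughly the following; the
summary structure here is invented for the sketch, and only the
75%-of-average rule is taken from the message:

#include <stdint.h>

struct fs_summary {
	int64_t	nbfree;	/* free blocks, file system wide */
	int64_t	nifree;	/* free inodes, file system wide */
	int32_t	ncg;	/* number of cylinder groups */
};

static void
dirpref_thresholds(const struct fs_summary *fs,
    int64_t *minbfree, int64_t *minifree)
{
	int64_t avgbfree = fs->nbfree / fs->ncg;
	int64_t avgifree = fs->nifree / fs->ncg;

	/* 75% of the average, but never zero, so a nearly full file
	 * system cannot admit cylinder groups with no free space. */
	*minbfree = avgbfree - avgbfree / 4;
	if (*minbfree < 1)
		*minbfree = 1;
	*minifree = avgifree - avgifree / 4;
	if (*minifree < 1)
		*minifree = 1;
}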

that libtool-using packages seem to love using this flag.
/usr/include/sys/cdefs.h:184:5: warning: "__STDC_VERSION__" is not defined
/usr/include/sys/cdefs.h:372:5: warning: "_POSIX_C_SOURCE" is not defined
/usr/include/sys/cdefs.h:378:5: warning: "_POSIX_C_SOURCE" is not defined

- Remove several instances of GIANT_REQUIRED.

An invalid OHCI version indicates that the OHCI registers are not
mapped correctly in the PCI or CardBus layer.

get the softc.

routine that takes a locked routing table reference and removes all
references to the entry in the various data structures. This
eliminates instances of recursive locking and also closes races
where the lock on the entry had to be dropped prior to calling
rtrequest(RTM_DELETE). This also cleans up confusion where the
caller held a reference to an entry that might have been reclaimed
(and in some cases used that reference).
Supported by: FreeBSD Foundation
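
In miniature, the pattern looks like this; every name is illustrative,
not the committed API, and the sketch assumes the entry-then-table lock
order is safe in this code base:

#include <pthread.h>
#include <stddef.h>

struct entry {
	pthread_mutex_t	 lock;		/* held by the caller */
	struct entry	*next;
	struct entry	**prevp;	/* back-link into the table */
};

struct table {
	pthread_mutex_t	 lock;
	struct entry	*head;
};

/* Caller holds e->lock and a reference. The entry is unlinked
 * without ever dropping its lock, so there is no window in which
 * another thread can reclaim it out from under the caller. */
static void
entry_expunge(struct table *t, struct entry *e)
{
	pthread_mutex_lock(&t->lock);
	if (e->prevp != NULL) {			/* still linked? */
		*e->prevp = e->next;
		if (e->next != NULL)
			e->next->prevp = e->prevp;
		e->next = NULL;
		e->prevp = NULL;
	}
	pthread_mutex_unlock(&t->lock);
}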

Supported by: FreeBSD Foundation

pmap == kernel_pmap rather than pmap->pm_active == -1. gcc's inliner
can remove more code that way. Only kernel_pmap has a pm_active of -1.
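
A compressed illustration; declarations are abridged and the helper
name is made up:

struct pmap {
	int	pm_active;	/* cpu mask; -1 only for kernel_pmap */
	/* ... */
};
typedef struct pmap *pmap_t;

struct pmap kernel_pmap_store;
#define	kernel_pmap	(&kernel_pmap_store)

static __inline int
pmap_is_kernel(pmap_t pmap)
{
	/* Comparing against the well-known kernel_pmap address lets
	 * the inliner fold the test away when the pmap is statically
	 * known; loading pm_active does not. */
	return (pmap == kernel_pmap);	/* was: pmap->pm_active == -1 */
}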

order to keep all of the opt_pmap.h options together.

machine/segments.h.

without a detach call in between, so don't try to deal with that
possibility.
This is a diff-reduction commit for the upcoming if_xname conversion.

- don't call malloc with M_WAITOK within lock context.

always should be entered with the mutex locked.

while not being safe in the general case. Thanks to David Schultz
<das@freebsd.org> for help.

the same effect as the ACPI_NO_RESET_VIDEO kernel option.

RFC3484.
Obtained from: KAME

Clean up the SATA support a bit now that we are here anyway.

are now in the header of the external buffer itself, which allows us
to manipulate them in the free routine without having to lock the softc
structure or the free list. To get space for these flags, the chunk
number is reduced to 8 bits, which amounts to a maximum of 256 chunks
per allocated page. This restriction is now enforced by a CTASSERT.
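
A sketch of the resulting chunk layout; the field and constant names
are assumed, not taken from the driver:

#include <stdint.h>

#define	CHUNK_SIZE	96	/* one raw receive buffer */

struct chunk_header {
	uint8_t	ch_chunkno;	/* 8-bit chunk number: <= 256 per page */
	uint8_t	ch_flags;	/* e.g. in use, on the free list; updated
				 * by the free routine without taking the
				 * softc or free-list lock */
};

struct chunk {
	struct chunk_header	ch_hdr;
	uint8_t			ch_data[CHUNK_SIZE -
				    sizeof(struct chunk_header)];
};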

- Remove several instances of GIANT_REQUIRED.

- Use swp_sizecheck() rather than assignment to swap_pager_full in
swaponsomething().

blocks in the page fault handler, and an upcall thread can be scheduled.
It is useful if a process is doing lots of mmap-based I/O.