path: root/sys
Commit message  [Author, Date, Files, Lines]
* - Fixes a case where doing a sysctl would leave locks held when  [rrs, 2007-06-06, 2 files, -1/+13]
  copying out association data.
  - Fixes a small bug that prevented the SCTP_UNORDERED indication from
  going up to the app in the sinfo_flags field on a recv.
* Add more IDs for the uftdi driver. Slight tweaks to the patch by me.  [imp, 2007-06-05, 2 files, -1/+17]
  Submitted by: Thorsten Trampisch
  PR: 113384
* - Do triple reads on the reset register to detect the read-register  [ariff, 2007-06-05, 1 file, -12/+15]
  bug; 2 reads are not enough to verify its consistency.
  - Define AC97_MIXER_SIZE as SOUND_MIXER_NRDEVICES (25), since we don't
  need more than that. Stop making wild, random guesses about its size
  since we're strictly bound to it.
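  A minimal sketch of the triple-read check (ac97_rdcd() stands in for
  the driver's own codec-register accessor; treat the names as
  assumptions):

      /* Read the reset register three times and only trust the value
       * if all three reads agree; two matching reads can still be a
       * coincidence on codecs with the read-register bug. */
      static int
      ac97_reset_consistent(struct ac97_info *codec)
      {
              uint32_t a, b, c;

              a = ac97_rdcd(codec, AC97_REG_RESET);
              b = ac97_rdcd(codec, AC97_REG_RESET);
              c = ac97_rdcd(codec, AC97_REG_RESET);
              return (a == b && b == c);
      }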
* Fix (enable) phone out for laptops with ALC655, specifically for the  [ariff, 2007-06-05, 1 file, -0/+9]
  Amilo Pro V2055.
  PR: kern/113101
  Submitted by: konrad@egipt-medytacje.pl
  MFC after: 3 days
* Move a warning under bootverbose, as no machines that trigger it have  [jhb, 2007-06-05, 2 files, -2/+2]
  ended up being broken.
* Fix a problem with non-preemptive kernels coming from a mis-merge of  [attilio, 2007-06-05, 1 file, -47/+0]
  existing code with the new thread_lock patch. This also cleans up the
  unlock operation for mutexes a bit.
  Approved by: jhb, jeff (mentor)
* MFp4: When querying the operating condition of SD cards (using the  [imp, 2007-06-05, 1 file, -1/+1]
  application-specific SEND_OP_COND (CMD55 + ACMD41)), go ahead and
  allow 100 tries. This gives a timeout of a second rather than the
  ~100ms the old style produced. I've had one old 16MB SD card which
  needs the extra time, and I've now had reports from the field that
  other cards need this too.
  Originally done at BSDCan 2007 while waiting to give my embedding
  madness minitalk.
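  The shape of the change, as a sketch (sd_send_op_cond() is a
  hypothetical wrapper for the CMD55 + ACMD41 exchange; the constants
  are from mmcreg.h as I recall them):

      /* Poll SEND_OP_COND until the card reports ready.  100 tries at
       * ~10ms apiece gives roughly a one-second budget instead of the
       * ~100ms the old retry count allowed. */
      err = MMC_ERR_TIMEOUT;
      for (try = 0; try < 100; try++) {
              err = sd_send_op_cond(sc, ocr, &rocr);
              if (err != MMC_ERR_NONE)
                      break;                  /* bus error: give up */
              if ((rocr & MMC_OCR_CARD_BUSY) != 0)
                      break;                  /* card finished powering up */
              DELAY(10000);
      }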
* Use pmap_change_attr() to set up a write-combining attribute for our  [gallatin, 2007-06-05, 1 file, -1/+14]
  device memory, rather than relying on the less reliable MTRR method
  used by mem_range_attr_set().
  Glanced at by: jhb
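  A minimal sketch of the call, assuming the kernel virtual address and
  length of the mapped device memory are already at hand (the exact
  name of the write-combining mode constant is an assumption):

      /* Mark the region write-combining through the page tables (PAT)
       * instead of programming an MTRR via mem_range_attr_set(). */
      err = pmap_change_attr(kva, len, PAT_WRITE_COMBINING);
      if (err != 0)
              device_printf(dev, "pmap_change_attr failed: %d\n", err);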
* Restore the non-SMP build.  [kib, 2007-06-05, 1 file, -1/+2]
  Reviewed by: attilio
* Remove GIANT_REQUIRED for upcoming changes in the FireWire stack.  [simokawa, 2007-06-05, 1 file, -4/+0]
* MFi386: revision 1.656  [nyan, 2007-06-05, 1 file, -1/+7]
  Add the machine-specific definitions for configuring the new physical
  memory allocator. Set the size of phys_avail[] and dump_avail[] using
  one of these definitions.
* Add the machine-specific definitions for configuring the new physical  [alc, 2007-06-05, 2 files, -1/+44]
  memory allocator. Set the size of phys_avail[] and dump_avail[] using
  one of these definitions.
  Approved by: re
* Satisfy witness during shutdown.  [scottl, 2007-06-05, 1 file, -0/+2]
* - Better fix for the previous error; use DEVOLATILE on the td_lock  [jeff, 2007-06-05, 2 files, -2/+2]
  pointer, as it can actually sometimes be something other than
  sched_lock, even on schedulers which rely on a global scheduler lock.
  Tested by: kan
* - Pass &sched_lock as the third argument to cpu_switch(), as this  [jeff, 2007-06-05, 2 files, -2/+2]
  will always be the correct lock and we don't get volatile warnings
  this way.
  Pointed out by: kan
* - Define TDQ_ID() for the !SMP case.  [jeff, 2007-06-05, 1 file, -1/+2]
  - Default pick_pri to off; it is not faster in most cases.
* - Added a new Ethernet media type (2500BaseSX) to support BCM5708  [davidch, 2007-06-05, 1 file, -0/+4]
  controllers which support a 2.5Gbps mode over fiber using next-page
  extensions during autonegotiation. Typically only found in blade
  systems which also include a Broadcom 2.5Gbps-capable switch.
  MFC after: 2 weeks
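  A driver advertises the new media word through the usual ifmedia
  path; a sketch (the softc field name is an assumption):

      /* Register 2500BaseSX, full duplex, as selectable media. */
      ifmedia_add(&sc->bce_ifmedia,
          IFM_ETHER | IFM_2500_SX | IFM_FDX, 0, NULL);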
* - Add a new argument to cpu_switch(): a pointer to a mutex that the  [jeff, 2007-06-05, 1 file, -14/+27]
  old thread's td_lock should point at before we return.
  - When cpu_switch() is called, the td_lock pointer in the old thread
  may point at the blocked lock. This prevents other processors from
  switching into this thread while we're still switching out. Wait
  until we're done deactivating the vmspace before we release the
  thread by assigning to td_lock.
  - Before we can activate the new vmspace we must make sure that the
  new thread is not assigned to the blocked lock; it may be in the
  process of switching out on another CPU. Spin until the new thread
  is available.
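  In C terms the handoff looks roughly like this (a sketch of the
  protocol described above, not the actual MD assembly; 'oldtd' and
  'newtd' are illustrative names):

      /* Switching out: until the vmspace is deactivated, td_lock
       * carries &blocked_lock so no other CPU can take the thread. */
      oldtd->td_lock = &blocked_lock;
      /* ... save registers, deactivate old vmspace ... */
      oldtd->td_lock = mtx;           /* release: oldtd is now stealable */

      /* Switching in: the new thread may still be mid-switch-out on
       * another CPU.  Spin until its td_lock leaves &blocked_lock. */
      while (newtd->td_lock == &blocked_lock)
              cpu_spinwait();
      /* ... activate new vmspace, restore registers ... */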
* - Expose td_lock to assembly so it may be used in cpu_switch().  [jeff, 2007-06-05, 1 file, -0/+1]
* - Remove sched_core.c. The maintainer has lost interest in pursuing  [jeff, 2007-06-05, 4 files, -1809/+0]
  this, and it has been neglected in the recent ksegrp removal as well
  as the thread_lock() changes.
  Discussed with: davidxu
* Commit 14/14 of sched_lock decomposition.  [jeff, 2007-06-05, 50 files, -318/+373]
  - Use thread_lock() rather than sched_lock for per-thread scheduling
  synchronization.
  - Use the per-process spinlock rather than the sched_lock for
  per-process scheduling synchronization.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
  each)
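  The conversion is mostly mechanical at call sites; a sketch of the
  pattern (the direct td_priority store is purely illustrative):

      /* Before: every scheduling-state change funneled through the
       * single global spin lock. */
      mtx_lock_spin(&sched_lock);
      td->td_priority = pri;
      mtx_unlock_spin(&sched_lock);

      /* After: lock only the container (run queue, sleep queue, or
       * turnstile) that currently owns this thread. */
      thread_lock(td);
      td->td_priority = pri;
      thread_unlock(td);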
* Commit 13/14 of sched_lock decomposition.  [jeff, 2007-06-04, 1 file, -1/+1]
  - Add a new parameter to cpu_switch() that is used to release the
  lock on the outgoing thread and properly acquire the lock on the
  incoming thread. This parameter is not required for schedulers that
  don't do per-cpu locking, and architectures which do not support it
  may continue to use the 4BSD scheduler. This feature is presently
  not supported on ia64.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
  each)
* - Change comments and asserts to reflect the removal of the global  [jeff, 2007-06-04, 11 files, -20/+19]
  scheduler lock.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
  each)
* Commit 11/14 of sched_lock decomposition.  [jeff, 2007-06-04, 2 files, -60/+0]
  - There is no globally visible scheduler lock any longer. For now the
  watchdog can only check Giant. This model of checking particular
  locks is flawed and should be revisited; other metrics should be
  considered.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
  each)
* Commit 10/14 of sched_lock decomposition.  [jeff, 2007-06-04, 5 files, -76/+16]
  - Use sched_throw() rather than replicating the same cpu_throw()
  code for each architecture. This also allows the scheduler to use
  any locking it may want to.
  - Use thread_lock() rather than sched_lock when preempting.
  - The scheduler lock is not required to synchronize release_aps.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
  each)
* Commit 10/14 of sched_lock decomposition.  [jeff, 2007-06-04, 1 file, -6/+11]
  - Add new spinlocks to support thread_lock() and adjust ordering.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
  each)
* Commit 9/14 of sched_lock decomposition.  [jeff, 2007-06-04, 1 file, -57/+117]
  - Attempt to return the ttyinfo() selection algorithm to something
  sane, as it has been broken and disabled for some time. Adapt this
  algorithm in such a way that it does not conflict with per-cpu
  scheduler locking.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
  each)
* Commit 8/14 of sched_lock decomposition.  [jeff, 2007-06-04, 1 file, -33/+52]
  - Use a global umtx spinlock to protect the sleep queues now that
  there is no global scheduler lock.
  - Use thread_lock() to protect thread state.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
  each)
* Commit 7/14 of sched_lock decomposition.  [jeff, 2007-06-04, 1 file, -58/+82]
  - Use thread_lock() rather than sched_lock for per-thread scheduling
  synchronization.
  - Use the per-process spinlock rather than the sched_lock for
  per-process scheduling synchronization.
  - Use a global kse spinlock to protect upcall and thread assignment.
  The per-process spinlock cannot be used because this lock must be
  acquired via mi_switch() where we already hold a thread lock. The
  kse spinlock is a leaf lock ordered after the process and thread
  spinlocks.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
  each)
* Commit 6/14 of sched_lock decomposition.  [jeff, 2007-06-04, 1 file, -30/+14]
  - Use thread_lock() rather than sched_lock for per-thread scheduling
  synchronization.
  - Use the per-process spinlock rather than the sched_lock for
  per-process scheduling synchronization.
  - Replace the tail end of fork_exit() with a scheduler-specific
  routine which can do the appropriate lock manipulations.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
  each)
* Commit 5/14 of sched_lock decomposition.  [jeff, 2007-06-04, 1 file, -35/+33]
  - Protect the cp_time tick counts with atomics instead of a global
  lock. There will only be one atomic per tick, and this allows all
  processors to execute softclock concurrently.
  - In softclock, protect access to rusage and td_*tick data with
  thread_lock(), expanding the scope of the thread lock over the whole
  function.
  - Do some creative re-arranging in hardclock() to avoid excess
  locking.
  - Protect the p_timer fields with the per-process spinlock.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
  each)
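  The cp_time change boils down to one atomic add per tick; a sketch
  (the bucket selection is illustrative):

      #include <machine/atomic.h>

      /* Charge the tick to the right CPU-state bucket with a single
       * atomic; concurrent ticks on other CPUs never serialize. */
      atomic_add_long(&cp_time[usermode ? CP_USER : CP_SYS], 1);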
* Commit 4/14 of sched_lock decomposition.  [jeff, 2007-06-04, 2 files, -129/+163]
  - Use thread_lock() rather than sched_lock for per-thread scheduling
  synchronization.
  - Use the per-process spinlock rather than the sched_lock for
  per-process scheduling synchronization.
  - Move some common code into thread_suspend_switch() to handle the
  mechanics of suspending a thread. The locking here is incredibly
  convoluted and should be simplified.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
  each)
* Commit 3/14 of sched_lock decomposition.  [jeff, 2007-06-04, 4 files, -192/+305]
  - Add a per-turnstile spinlock to solve potential priority
  propagation deadlocks that are possible with thread_lock().
  - The turnstile lock order is defined as the exact opposite of the
  lock order used with the sleep locks they represent. This allows us
  to walk in reverse order in propagate_priority(), and this is the
  only place we wish to multiply acquire turnstile locks.
  - Use the turnstile_chain lock to protect assigning mutexes to
  turnstiles.
  - Change the turnstile interface to pass back turnstile pointers to
  the consumers. This allows us to reduce some locking and makes it
  easier to cancel turnstile assignment while the turnstile chain
  lock is held.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
  each)
* Commit 2/14 of sched_lock decomposition.  [jeff, 2007-06-04, 3 files, -115/+106]
  - Adapt sleepqueues to the new thread_lock() mechanism.
  - Delay assigning the sleep queue spinlock as the thread lock until
  after we've checked for signals. It is illegal for a thread to
  return in mi_switch() with any lock assigned to td_lock other than
  the scheduler locks.
  - Change sleepq_catch_signals() to do the switch if necessary, to
  simplify the callers.
  - Simplify timeout handling now that locking a sleeping thread has
  the side effect of locking the sleepqueue. Some previous races are
  no longer possible.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
  each)
* Commit 1/14 of sched_lock decomposition.  [jeff, 2007-06-04, 7 files, -164/+460]
  - Move all scheduler locking into the schedulers, utilizing a
  technique similar to Solaris's container locking.
  - A per-process spinlock is now used to protect the queue of
  threads, thread count, suspension count, p_sflags, and other
  process-related scheduling fields.
  - The new thread lock is actually a pointer to a spinlock for the
  container that the thread is currently owned by. The container may
  be a turnstile, sleepqueue, or run queue.
  - thread_lock() is now used to protect access to thread-related
  scheduling fields. thread_unlock() unlocks the lock, and
  thread_set_lock() implements the transition from one lock to
  another.
  - A new "blocked_lock" is used in cases where it is not safe to hold
  the actual thread's lock yet we must prevent access to the thread.
  - sched_throw() and sched_fork_exit() are introduced to allow the
  schedulers to fix up locking at these points.
  - Add some minor infrastructure for optionally exporting scheduler
  statistics that were invaluable in solving performance problems
  with this patch. Generally these statistics allow you to
  differentiate between different causes of context switches.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
  each)
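  The heart of the container-locking scheme, sketched in C (simplified;
  the real routine also manages critical sections and contention
  statistics):

      /* td_lock points at the spin lock of whatever container owns the
       * thread right now (run queue, turnstile, or sleep queue), so the
       * lock can change identity while we wait for it. */
      void
      thread_lock(struct thread *td)
      {
              struct mtx *lock;

              for (;;) {
                      lock = td->td_lock;
                      mtx_lock_spin(lock);
                      if (lock == td->td_lock)
                              return;         /* still the owning lock */
                      mtx_unlock_spin(lock);  /* it moved; chase it */
              }
      }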
* Do proper "locking" for missing vmmeters part.attilio2007-06-0410-34/+44
| | | | | | | | Now, we assume no more sched_lock protection for some of them and use the distribuited loads method for vmmeter (distribuited through CPUs). Reviewed by: alc, bde Approved by: jeff (mentor)
* Rework the PCPU_* (MD) interface:  [attilio, 2007-06-04, 24 files, -48/+99]
  - Rename PCPU_LAZY_INC to PCPU_INC.
  - Add the PCPU_ADD interface, which just does an add on the pcpu
  member given a specific value.
  Note that for most architectures PCPU_INC and PCPU_ADD are not safe.
  This is a point that needs some discussion/work in the next days.
  Reviewed by: alc, bde
  Approved by: jeff (mentor)
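  Typical usage, as a sketch (the vmmeter fields shown are the familiar
  per-CPU counters of that era):

      /* Bump this CPU's syscall count: no global lock, and the update
       * stays on the local CPU's own cache line. */
      PCPU_INC(cnt.v_syscall);

      /* Accumulate an arbitrary delta into a per-CPU field. */
      PCPU_ADD(cnt.v_intr, 1);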
* Despite several examples in the kernel, the third argument of  [dwmalone, 2007-06-04, 40 files, -77/+77]
  sysctl_handle_int is not the sizeof the int type you want to export;
  the type must always be an int or an unsigned int.
  Remove the instances where a sizeof(variable) is passed, to stop
  people accidentally cutting and pasting these examples.
  In a few places sysctl_handle_int was being used on 64-bit types,
  which would truncate the value being exported. In these cases use
  sysctl_handle_quad to export them, and change the format to Q so
  that sysctl(1) can still print them.
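  The distinction, sketched as two handlers (the names and counters are
  hypothetical; a hand-rolled SYSCTL_PROC is used for illustration):

      #include <sys/param.h>
      #include <sys/kernel.h>
      #include <sys/sysctl.h>

      static u_int small_counter;
      static uint64_t big_counter;

      static int
      sysctl_small(SYSCTL_HANDLER_ARGS)
      {
              /* The third argument is NOT sizeof(small_counter); it is
               * an optional value, and the object must be int-sized. */
              return (sysctl_handle_int(oidp, &small_counter, 0, req));
      }

      static int
      sysctl_big(SYSCTL_HANDLER_ARGS)
      {
              /* 64-bit object: sysctl_handle_int would truncate it, so
               * hand it to sysctl_handle_quad and use format "Q". */
              return (sysctl_handle_quad(oidp, &big_counter, 0, req));
      }

      SYSCTL_PROC(_debug, OID_AUTO, big_counter,
          CTLTYPE_QUAD | CTLFLAG_RD, NULL, 0, sysctl_big, "Q",
          "64-bit example counter");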
* Add a function for exporting 64-bit types.  [dwmalone, 2007-06-04, 2 files, -0/+26]
* Revert to the previous version, where the return value of  [marcel, 2007-06-04, 1 file, -1/+2]
  uart_getenv() is ignored. It's optional, and the lack of an
  environment variable is not an error condition.
* Add in a couple of things:  [ambrisko, 2007-06-04, 2 files, -19/+33]
  - In the ioctl path, let commands get queued up and return when
  complete _without_ blocking the driver waiting for the response.
  This way the driver doesn't "lock up" for ~30s during a flash
  command. Submitted by scottl.
  - Add a guard so that if a DCMD of 0 is sent down the ioctl path, we
  don't send it to the controller and instead return with a status of
  OK. This is a little strange, since MegaCli doesn't seem to like
  something and will issue some DCMD of 0. This doesn't happen under
  Linux, so the emulation needs to be improved, but I'm not sure what
  needs to change. Another strange thing is that when a DCMD of 0 gets
  issued under i386 the controller returns OK, but on amd64 the
  context is messed up.
  - Add a guard so the context has to be within the legal limit, so we
  get a reasonable error assertion versus a random panic.
  It's going to be a challenge to figure out why MegaCli is not
  totally happy and then sends some bogus commands. This means that
  flashing firmware via the Linux tool won't work, since it generates
  a DCMD of 0 when it should be opening the firmware for a flash
  update. Without this problem, flashing works fine. This means there
  is no publicly available tool to upgrade the RAID firmware under
  FreeBSD right now.
  I plan to MFC all of the mfi changes to 6.X shortly. This might not
  include the SCSI pass-through changes.
  Submitted by: scottl
  Reviewed by: scottl
  MFC after: 3 days
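  The two guards amount to a few lines at the top of the ioctl
  dispatch; a sketch with hypothetical field names for the DCMD opcode
  and context:

      /* MegaCli sometimes emits a DCMD opcode of 0; don't hand it to
       * the firmware, just report success back to the tool. */
      if (ioc->mfi_dcmd_opcode == 0) {
              ioc->mfi_status = MFI_STAT_OK;
              return (0);
      }

      /* Reject out-of-range contexts up front so a bad command yields
       * a clean error instead of a random panic later. */
      if (ioc->mfi_context >= sc->mfi_max_fw_cmds)
              return (EINVAL);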
* No need to update link queue stats when the round-robin algorithm is  [mav, 2007-06-04, 1 file, -9/+10]
  enabled.
  Approved by: glebius (mentor)
* Reimplement the traverse() helper function:  [pjd, 2007-06-04, 6 files, -40/+46]
  1. Pass locking flags to VFS_ROOT().
  2. Check v_mountedhere while the vnode is locked.
  3. Always return a locked vnode on success.
  Change 1 fixes a problem reported by Stephen M. Rumble: after the
  zfs_vfsops.c,1.9 change, zfs_root() no longer locks the vnode
  unconditionally, and traverse() didn't pass the right lock type to
  VFS_ROOT(). The result was that the kernel panicked when the .zfs/
  directory was accessed via NFS.
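  The crux of fix 1, sketched against the four-argument VFS_ROOT() of
  that era (which, as I recall, still took a struct thread pointer;
  the surrounding traverse() logic is elided):

      /* Ask the mounted filesystem for its root vnode, passing the
       * caller's lock type through instead of assuming the FS picks
       * one for us. */
      error = VFS_ROOT(vp->v_mountedhere, lktype, &tvp, td);
      if (error != 0)
              return (error);
      /* tvp comes back locked with 'lktype'; return it locked. */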
* Now that tone & delay times are correct (independent of hz), adjust  [brian, 2007-06-04, 1 file, -2/+2]
  playtone() so that it uses times of 1/100ths of a second. Now
  'time echo T60ABC >/dev/speaker' takes ~3 seconds.
  MFC after: 2 weeks
  Problem noted by: dwmalone
* Speaker durations are specified in 1/100ths of a second, according  [brian, 2007-06-04, 1 file, -15/+19]
  to spkr(4).
  PR: 70610, 67995
  Submitted by: dada at sbox dot tugraz dot at (modulo one fix)
  MFC after: 2 weeks
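  The underlying arithmetic: a duration in hundredths of a second must
  be scaled by the running tick rate before being handed to a timeout,
  otherwise the audible length silently depends on hz. A sketch:

      /* Convert 1/100ths of a second into clock ticks at the current
       * hz.  At tempo T60 a quarter note is 100 hundredths = 1s, so
       * the three notes of 'T60ABC' play for ~3 seconds. */
      nticks = duration_hundredths * hz / 100;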
* Add the machine-specific definitions for configuring the new  [alc, 2007-06-04, 1 file, -0/+15]
  physical memory allocator.
  Approved by: re
* Track an update in the MPI headers that was missed earlier.  [scottl, 2007-06-04, 1 file, -1/+1]
* Clean up the reassembly structures and routine:  [jinmei, 2007-06-04, 2 files, -25/+16]
  - Removed unused structure members.
  - Fixed a minor bug where the ECN code point may not be restored
  correctly.
  Approved by: ume (mentor)
  MFC after: 1 week
* o Implemented Rx/Tx checksum offload. The simple checksum logic in  [yongari, 2007-06-04, 3 files, -224/+420]
  GEM is unable to discriminate UDP from TCP packets, such that it can
  generate a 0x0000 checksum value for a UDP datagram. So the UDP
  checksum offload is disabled by default; you can enable it by
  setting the link0 flag with ifconfig(8).
  o bus_dma(9) cleanup. It now correctly sets the number of required
  DMA segments/size, and removes an incorrect use of the
  BUS_DMA_ALLOCNOW flag in static allocations done via
  bus_dmamem_alloc(9).
  o Implemented ALTQ(9) support.
  o Implemented Tx-side bus_dmamap_load_mbuf_sg(9), which removes
  several bookkeeping chores originating from the call-back mechanism.
  Therefore gem_txdma_callback() was removed and its functionality
  reimplemented in gem_load_txmbuf().
  o Don't set the GEM_TD_START_OF_PACKET flag until all remaining mbuf
  chains are set. I think it was a long-standing bug, and it caused
  fluctuating interrupt/CPU usage patterns while a netperf test was in
  progress. Previously it seems that we raced with the device. Because
  I don't have documentation for GEM I'm not sure this is correct, but
  almost all other documentation I have states these implications of
  setting the SOP mark in the descriptor ring (e.g. hme(4)).
  o Borrowed gem_defrag() from ath(4), which is supposed to be much
  faster than m_defrag(9) since it doesn't need to defrag all mbuf
  chains.
  o gem_load_txmbuf() was changed to allow the passed mbuf chains to
  be freed; callers of gem_load_txmbuf() correctly handle freed mbuf
  chains.
  o In gem_start_locked(), added checks for the availability of Tx
  descriptors before trying to load DMA maps, which can save CPU
  cycles when the number of available descriptors is low. Also,
  simplify the IFF_DRV_OACTIVE detection logic.
  o Removed hard-coded function names in CTR macros and replaced them
  with __func__.
  o Moved statistics counter register access to gem_tick() to reduce
  the number of PCI bus accesses. There is no reason to update
  statistics counters in the interrupt handler.
  o Removed an unnecessary call of gem_start_locked() in gem_ioctl().
  Reviewed by: grehan (initial version), marius (with improvements and
  suggestions)
  Tested by: grehan (ppc), marius (sparc64)
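  The Tx-side load path described above follows a common pattern; a
  sketch (error handling trimmed; the softc/descriptor field names and
  the GEM_NTXSEGS limit are assumptions, and gem_defrag() is assumed to
  take the ath(4)-style arguments):

      bus_dma_segment_t segs[GEM_NTXSEGS];
      int error, nsegs;

      /* Map the chain straight into a segment array; no callback. */
      error = bus_dmamap_load_mbuf_sg(sc->sc_tdmatag, txd->tx_dmamap,
          m, segs, &nsegs, BUS_DMA_NOWAIT);
      if (error == EFBIG) {
              /* Too many segments: defragment just enough and retry. */
              m = gem_defrag(m, M_DONTWAIT, GEM_NTXSEGS);
              if (m == NULL)
                      return (ENOBUFS);   /* caller knows chain is gone */
              error = bus_dmamap_load_mbuf_sg(sc->sc_tdmatag,
                  txd->tx_dmamap, m, segs, &nsegs, BUS_DMA_NOWAIT);
      }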
* Migrate from setting a CARD_OK flag in a shared word to setting its  [imp, 2007-06-04, 3 files, -32/+21]
  own entry in the softc. This should allow more of cbb_pci_intr() to
  migrate to a new cbb_pci_filt(), so that we don't have to run cbb's
  ISR in almost every case where we get an interrupt. We can't just
  move cbb_pci_intr() into cbb_pci_filt() because it does things that
  aren't safe to do from a fast interrupt handler, err, I mean from a
  filter. This is an important first step.
  # I wonder if I need to make cardok volatile or not.
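  The filter/ithread split the commit is working toward looks roughly
  like this under the then-new interrupt filtering API (a sketch;
  card_present() and the softc fields are hypothetical, and cbb's real
  logic is more involved):

      static int
      cbb_pci_filt(void *arg)
      {
              struct cbb_softc *sc = arg;

              /* Runs with interrupts blocked: only cheap, lock-free
               * work here.  Real socket-event processing is punted to
               * the ithread half. */
              if (!card_present(sc))
                      return (FILTER_STRAY);
              return (FILTER_SCHEDULE_THREAD);
      }

      /* Registration: the filter runs first; cbb_pci_intr() runs only
       * when the filter asks for the ithread. */
      bus_setup_intr(dev, sc->irq_res, INTR_TYPE_AV | INTR_MPSAFE,
          cbb_pci_filt, cbb_pci_intr, sc, &sc->intrhand);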