path: root/sys/conf/files.arm
Commit message | Author | Date | Files | Lines
* Make arm/disassem.c depend on DDB. | cognet | 2005-10-04 | 1 | -3/+3
    Also make arm/in_cksum.c and arm/in_cksum_asm.S depend on INET.
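    In files.arm terms such a dependency is expressed by tagging the source
    file "optional" on the corresponding kernel option.  A rough sketch of
    what such entries look like (paths and option names are illustrative of
    the sys/conf/files.* syntax, not the exact lines of this commit):

        # <relative path>       <rule>
        arm/arm/disassem.c      optional        ddb
        arm/arm/in_cksum.c      optional        inet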
* MFP4: | jkoshy | 2005-06-09 | 1 | -0/+1
    - Implement sampling modes and logging support in hwpmc(4).
    - Separate MI and MD parts of hwpmc(4) and allow sharing of PMC
      implementations across different architectures.  Add support for
      P4 (EMT64) style PMCs to the amd64 code.
    - New pmcstat(8) options: -E (exit time counts), -W (counts every
      context switch), -R (print log file).
    - pmc(3) API changes to improve our ability to keep ABI compatibility
      in the future.  Add more 'alias' names for commonly used events
      (see the sketch after this entry).
    - Bug fixes & documentation.
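    As a rough illustration of the counting-mode side of the pmc(3) API
    referred to above (a sketch, not part of this commit: the "instructions"
    event alias, system-scope mode, and the five-argument pmc_allocate() of
    this era are assumptions; build with -lpmc and run as root):

        #include <sys/types.h>

        #include <err.h>
        #include <pmc.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <unistd.h>

        int
        main(void)
        {
                pmc_id_t pmcid;
                pmc_value_t v;

                if (pmc_init() < 0)
                        err(1, "pmc_init");
                /* Allocate a system-scope counting PMC on CPU 0. */
                if (pmc_allocate("instructions", PMC_MODE_SC, 0, 0, &pmcid) < 0)
                        err(1, "pmc_allocate");
                if (pmc_start(pmcid) < 0)
                        err(1, "pmc_start");
                sleep(1);                       /* let the counter run */
                if (pmc_read(pmcid, &v) < 0)
                        err(1, "pmc_read");
                printf("events counted on CPU 0: %ju\n", (uintmax_t)v);
                (void)pmc_stop(pmcid);
                (void)pmc_release(pmcid);
                return (0);
        }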
* We have an asm version of bcmp(), so we could use it as well. | cognet | 2005-04-12 | 1 | -1/+0
* Get more love from GEOM on arm. | cognet | 2005-04-07 | 1 | -0/+4
* Divorce critical sections from spinlocks. | jhb | 2005-04-04 | 1 | -1/+0
    Critical sections, as denoted by critical_enter() and critical_exit(),
    are now solely a mechanism for deferring kernel preemptions.  They no
    longer have any effect on interrupts.  This means that standalone
    critical sections are now very cheap, as they are simply unlocked
    integer increments and decrements for the common case.

    Spin mutexes now use a separate KPI implemented in MD code:
    spinlock_enter() and spinlock_exit().  This KPI is responsible for
    providing whatever MD guarantees are needed to ensure that a thread
    holding a spin lock won't be preempted by any other code that will
    try to lock the same lock.  For now all archs continue to block
    interrupts in a "spinlock section" as they did formerly in all
    critical sections.  (A sketch of the KPI's typical shape is shown
    after this entry.)

    Note that I've also taken this opportunity to push a few things into
    MD code rather than MI.  For example, critical_fork_exit() no longer
    exists.  Instead, MD code ensures that new threads have the correct
    state when they are created.  Also, we no longer try to fix up the
    idle threads for APs in MI code.  Instead, each arch sets the initial
    curthread and adjusts the state of the idle thread it borrows in
    order to perform the initial context switch.

    This change is largely a big NOP, but the cleaner separation it
    provides will allow for more efficient alternative locking schemes in
    other parts of the kernel (bare critical sections rather than per-CPU
    spin mutexes for per-CPU data, for example).

    Reviewed by:  grehan, cognet, arch@, others
    Tested on:    i386, alpha, sparc64, powerpc, arm, possibly more
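    A minimal sketch of the shape such an MD implementation takes (not this
    commit's code; the md_spinlock_count/md_saved_flags field names and the
    intr_disable()/intr_restore() primitives follow one arch's convention
    and differ per platform):

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/proc.h>

        #include <machine/cpufunc.h>

        void
        spinlock_enter(void)
        {
                struct thread *td;

                td = curthread;
                if (td->td_md.md_spinlock_count == 0) {
                        /* Block interrupts, remembering their prior state. */
                        td->td_md.md_saved_flags = intr_disable();
                        td->td_md.md_spinlock_count = 1;
                        critical_enter();       /* also defer preemption */
                } else
                        td->td_md.md_spinlock_count++;
        }

        void
        spinlock_exit(void)
        {
                struct thread *td;

                td = curthread;
                td->td_md.md_spinlock_count--;
                if (td->td_md.md_spinlock_count == 0) {
                        critical_exit();
                        /* Restore the state saved by the outermost enter. */
                        intr_restore(td->td_md.md_saved_flags);
                }
        }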
* Add arm/mem.c. | cognet | 2004-11-22 | 1 | -0/+1
* Remove libkern/mem* | cognet | 2004-05-14 | 1 | -2/+0
* Remove libkern/bzero.S and libkern/memset.S. | cognet | 2004-05-14 | 1 | -2/+0
* Add config magic for arm. | cognet | 2004-05-14 | 1 | -0/+69