path: root/sys/powerpc
Commit log; each entry ends with [author, date, files changed, lines removed/added]:
* Apply missing s/rv/res/g in previous commit.
  [marcel, 2007-12-21, 4 files, -4/+4]
* MFamd64/ia64/i386: Only set the rman bus tags and handles in
  bus_activate_resource() methods instead of splitting it up between
  bus_alloc_resource() and bus_activate_resource().
  Glanced at by: marcel
  [jhb, 2007-12-20, 4 files, -8/+4]
* Redefine bus_space_tag_t on PowerPC from a 32-bit integral to a pointer
  to struct bus_space. The structure contains function pointers that do
  the actual bus space access.
  The reason for this change is that previously all bus space accesses
  were little endian (i.e. had an explicit byte-swap for multi-byte
  accesses), because all busses on Macs are little endian. The upcoming
  support for Book E, and in particular the E500 core, requires support
  for big-endian busses because all embedded peripherals are in the
  native byte-order.
  With this change, there's no distinction between I/O port space and
  memory mapped I/O. PowerPC doesn't have I/O port space. Busses assign
  tags based on the byte-order only. For that purpose, two global
  structures exist (bs_be_tag and bs_le_tag), of which the address can be
  taken to get a valid tag.
  Obtained from: Juniper, Semihalf
  [marcel, 2007-12-19, 9 files, -651/+1146]
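  To illustrate the shape of the new interface, here is a minimal sketch of
  a function-pointer-based tag; the member and helper names below are
  assumptions for illustration, not the actual sys/powerpc definitions:

      /* Illustrative sketch only; names and layout are assumed. */
      #include <sys/types.h>
      #include <stdint.h>

      typedef uintptr_t bus_space_handle_t;   /* stand-in types for the sketch */
      typedef size_t    bus_size_t;

      struct bus_space {
              /* One function pointer per access method, so a tag can be
               * either big-endian or little-endian. */
              uint8_t  (*bs_read_1)(bus_space_handle_t, bus_size_t);
              uint32_t (*bs_read_4)(bus_space_handle_t, bus_size_t);
              void     (*bs_write_4)(bus_space_handle_t, bus_size_t, uint32_t);
              /* ... remaining read/write/barrier methods ... */
      };

      /* bus_space_tag_t is now a pointer to such a structure. */
      typedef struct bus_space *bus_space_tag_t;

      /* Two global instances; a bus picks one based on byte order alone,
       * since PowerPC has no separate I/O port space. */
      extern struct bus_space bs_be_tag;      /* native (big-endian) peripherals */
      extern struct bus_space bs_le_tag;      /* little-endian busses */

      static inline uint32_t
      bus_space_read_4(bus_space_tag_t bst, bus_space_handle_t bsh, bus_size_t off)
      {
              return ((*bst->bs_read_4)(bsh, off));
      }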
* Rename OEA to AIM. The former means nothing as it applies to all
  processors (it's the PowerPC Operating Environment Architecture). AIM
  designates the processors made by the Apple-IBM-Motorola alliance and
  those we typically support.
  While here, remove the NetBSD option IPKDB. It's not an option used by
  us. Also, PPC_HAVE_FPU is not used by us either. Remove that too.
  Obtained from: Juniper, Semihalf
  [marcel, 2007-12-16, 3 files, -7/+3]
* This file was repocopied to src/sys/powerpc/aim, where it will live
  on -- an afterlife.
  [marcel, 2007-12-14, 15 files, -7483/+0]
* Forced commit to record that this file was repocopied from
  src/sys/powerpc/powerpc and modified for its new location.
  [marcel, 2007-12-14, 2 files, -2/+2]
* Remove unused file.
  [marcel, 2007-12-14, 1 file, -110/+0]
* Add stubs to unbreak LINT.
  [jkoshy, 2007-12-07, 1 file, -0/+4]
* Break out stack(9) from ddb(4):
  - Introduce per-architecture stack_machdep.c to hold stack_save(9).
  - Introduce per-architecture machine/stack.h to capture any common
    definitions required between db_trace.c and stack_machdep.c.
  - Add new kernel option "options STACK"; we will build in stack(9) if
    it is defined, or also if "options DDB" is defined to provide
    compatibility with existing users of stack(9).
  Add new stack_save_td(9) function, which allows the capture of a
  stacktrace of another thread rather than the current thread, which the
  existing stack_save(9) was limited to. It requires that the thread be
  neither swapped out nor running, which is the responsibility of the
  consumer to enforce.
  Update stack(9) man page.
  Build tested: amd64, arm, i386, ia64, powerpc, sparc64, sun4v
  Runtime tested: amd64 (rwatson), arm (cognet), i386 (rwatson)
  [rwatson, 2007-12-02, 4 files, -38/+135]
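  A minimal sketch of how a consumer might use the stack(9) interface
  described above, assuming the FreeBSD 7-era prototypes (stack_create()
  taking no arguments); treat the details as illustrative:

      #include <sys/param.h>
      #include <sys/proc.h>
      #include <sys/stack.h>

      /* Capture and print the current thread's kernel stack. */
      static void
      show_my_stack(void)
      {
              struct stack *st;

              st = stack_create();    /* allocate an empty trace */
              stack_save(st);         /* record the current thread's frames */
              stack_print(st);        /* resolve symbols and print */
              stack_destroy(st);
      }

      /* Capture another thread's stack; per the text above the thread must
       * be neither running nor swapped out, which the caller enforces. */
      static void
      show_thread_stack(struct thread *td)
      {
              struct stack *st;

              st = stack_create();
              stack_save_td(st, td);
              stack_print(st);
              stack_destroy(st);
      }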
* Define atomic_readandclear_ptr.
  [jasone, 2007-11-27, 1 file, -0/+1]
* Implement the _long functions using u_long rather than trying to cast
  as uint32_t, which is defined as unsigned int. gcc doesn't want to
  consider that there might not be much difference between an int and a
  long on a 32 bit architecture.
  [jb, 2007-11-26, 1 file, -5/+43]
* Extend critical section coverage in the low-level interrupt handlers to
  include the ithread scheduling step. Without this, a preemption might
  occur in between the interrupt getting masked and the ithread getting
  scheduled. Since the interrupt handler runs in the context of
  curthread, the scheduler might see it as having such a low priority on
  a busy system that it doesn't get to run for a _long_ time, leaving the
  interrupt stranded in a disabled state. The only way that the
  preemption can happen is by a fast/filter handler triggering a
  scheduling event earlier in the handler, so this problem can only
  happen for cases where an interrupt is being shared by both a
  fast/filter handler and an ithread handler. Unfortunately, it seems to
  be common for this sharing to happen with network and USB devices, for
  example.
  This fixes many of the mysterious TCP session timeouts and NIC
  watchdogs that were being reported. Many thanks to Sam Leffler for
  getting to the bottom of this problem.
  Reviewed by: jhb, jeff, silby
  [scottl, 2007-11-21, 1 file, -1/+1]
* Define atomic_cmpset_acq_long and atomic_cmpset_rel_long so that they
  use casts rather than just assuming that the compiler will DTRT without
  complaining.
  [jb, 2007-11-19, 1 file, -2/+4]
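  Since long and int are both 32 bits wide on this architecture, the _long
  variants can wrap the 32-bit primitives with explicit casts; a sketch of
  the kind of mapping meant (assumed, not the literal <machine/atomic.h>
  contents):

      /* Illustrative sketch; assumes atomic_cmpset_{acq,rel}_32() exist. */
      #include <sys/types.h>

      static __inline int
      atomic_cmpset_acq_long(volatile u_long *p, u_long cmpval, u_long newval)
      {
              /* The casts keep the compiler from complaining about the
               * int/long mismatch rather than assuming it will DTRT. */
              return (atomic_cmpset_acq_32((volatile uint32_t *)p,
                  (uint32_t)cmpval, (uint32_t)newval));
      }

      static __inline int
      atomic_cmpset_rel_long(volatile u_long *p, u_long cmpval, u_long newval)
      {
              return (atomic_cmpset_rel_32((volatile uint32_t *)p,
                  (uint32_t)cmpval, (uint32_t)newval));
      }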
* Prevent the leakage of wired pages in the following circumstances:
  First, a file is mmap(2)ed and then mlock(2)ed. Later, it is truncated.
  Under "normal" circumstances, i.e., when the file is not mlock(2)ed,
  the pages beyond the EOF are unmapped and freed. However, when the file
  is mlock(2)ed, the pages beyond the EOF are unmapped but not freed
  because they have a non-zero wire count. This can be a mistake.
  Specifically, it is a mistake if the sole reason why the pages are
  wired is because of wired, managed mappings. Previously, unmapping the
  pages destroys these wired, managed mappings, but does not reduce the
  pages' wire count. Consequently, when the file is unmapped, the pages
  are not unwired because the wired mapping has been destroyed. Moreover,
  when the vm object is finally destroyed, the pages are leaked because
  they are still wired.
  The fix is to reduce the pages' wired count by the number of wired,
  managed mappings destroyed. To do this, I introduce a new pmap function
  pmap_page_wired_mappings() that returns the number of managed mappings
  to the given physical page that are wired, and I use this function in
  vm_object_page_remove().
  Reviewed by: tegge
  MFC after: 6 weeks
  [alc, 2007-11-17, 4 files, -0/+66]
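  Roughly, the new hook lets the VM layer credit back the wirings that
  existed only because of managed mappings; a sketch of the intended use
  (illustrative, not the actual vm_object_page_remove() diff):

      #include <vm/vm.h>
      #include <vm/pmap.h>
      #include <vm/vm_page.h>

      /* Illustrative helper; locking elided for brevity. */
      static void
      remove_page_mappings(vm_page_t m)
      {
              int wirings;

              /* How many of m's managed mappings are wired? */
              wirings = pmap_page_wired_mappings(m);
              pmap_remove_all(m);             /* destroys those mappings */
              /* Drop the wirings those mappings accounted for so the page
               * can eventually be unwired and freed. */
              if (m->wire_count >= wirings)
                      m->wire_count -= wirings;
      }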
* o Rename cpu_thread_setup() to cpu_thread_alloc() to better
    communicate that it relates to (is called by) thread_alloc()
  o Add cpu_thread_free() which is called from thread_free() to
    counter-act cpu_thread_alloc().
  i386: Have cpu_thread_free() call cpu_thread_clean() to preserve
  behaviour.
  ia64: Have cpu_thread_free() call mtx_destroy() for the mutex
  initialized in cpu_thread_alloc().
  PR: ia64/118024
  [marcel, 2007-11-14, 2 files, -2/+12]
* A bunch more files that should probably print out a thread name instead
  of a process name.
  [julian, 2007-11-14, 2 files, -6/+6]
* Generally we are interested in what thread did something as opposed to
  what process. Since threads by default have the name of the process
  unless over-written with more useful information, just print the thread
  name instead.
  [julian, 2007-11-14, 2 files, -2/+2]
* Split decr_init() into two, with the section that reads the timebase
  frequency from OpenFirmware moved out and into a routine that is called
  from cpu_startup(). This allows correct reporting of the CPU clockspeed
  when printing out CPU information at boot time.
  Reported by: numerous
  Reviewed by: marcel
  MFC after: 1 day
  [grehan, 2007-11-13, 5 files, -8/+27]
* Fix for the panic("vm_thread_new: kstack allocation failed") and silent
  NULL pointer dereference in the i386 and sparc64 pmap_pinit() when the
  kmem_alloc_nofault() failed to allocate address space. Both functions
  now return an error instead of panicking or dereferencing NULL.
  As a consequence, vmspace_exec() and vmspace_unshare() return the errno
  int. struct vmspace arg was added to vm_forkproc() to avoid dealing
  with failed allocation when most of the fork1() job is already done.
  The kernel stack for the thread is now set up in thread_alloc(), which
  itself may return NULL. Also, allocation of the first process thread is
  performed in fork1() to properly deal with stack allocation failure.
  proc_linkup() is separated into proc_linkup() called from fork1(), and
  proc_linkup0(), that is used to set up the kernel process (was known as
  swapper).
  In collaboration with: Peter Holm
  Reviewed by: jhb
  [kib, 2007-11-05, 3 files, -3/+4]
* Cut over to ULE on PowerPC
  kern/sched_ule.c - Add __powerpc__ to the list of supported
    architectures
  powerpc/conf/GENERIC - Swap SCHED_4BSD with SCHED_ULE
  powerpc/powerpc/genassym.c - Export TD_LOCK field of thread struct
  powerpc/powerpc/swtch.S - Handle new 3rd parameter to cpu_switch() by
    updating the old thread's lock. Note: uniprocessor-only, will require
    modification for MP support.
  powerpc/powerpc/vm_machdep.c - Set 3rd param of cpu_switch to mutex of
    old thread's lock, making the call a no-op.
  Reviewed by: marcel, jeffr (slightly older version)
  [grehan, 2007-10-23, 6 files, -7/+16]
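  In C terms the new contract looks roughly like the prototype below;
  cpu_switch() itself is assembly, so this is only a sketch of the calling
  convention:

      struct thread;
      struct mtx;

      /*
       * The third argument is the lock to store into the old thread's
       * td_lock once its state has been saved, handing the thread off to
       * the scheduler lock ULE wants it blocked on. In the
       * uniprocessor-only PowerPC code the caller passes the old thread's
       * own lock, which makes the store a no-op.
       */
      void cpu_switch(struct thread *old, struct thread *new, struct mtx *mtx);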
* Make the PCI code aware of PCI domains (aka PCI segments) so we can
  support machines having multiple independently numbered PCI domains and
  don't support reenumeration without ambiguity amongst the devices as
  seen by the OS and represented by PCI location strings.
  This includes introducing a function pci_find_dbsf(9) which works like
  pci_find_bsf(9) but additionally takes a domain number argument and
  limiting pci_find_bsf(9) to only search devices in domain 0 (the only
  domain in single-domain systems). Bge(4) and ofw_pcibus(4) are changed
  to use pci_find_dbsf(9) instead of pci_find_bsf(9) in order to no
  longer report false positives when searching for siblings and dupe
  devices in the same domain respectively.
  Along with this change the sole host-PCI bridge driver converted to
  actually make use of PCI domain support is uninorth(4), the others
  continue to use domain 0 only for now and need to be converted as
  appropriate later on.
  Note that this means that the format of the location strings as used by
  pciconf(8) has been changed and that consumers of <sys/pciio.h>
  potentially need to be recompiled.
  Suggested by: jhb
  Reviewed by: grehan, jhb, marcel
  Approved by: re (kensmith), jhb (PCI maintainer hat)
  [marius, 2007-09-30, 2 files, -2/+6]
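  For consumers the difference looks roughly like this (the prototypes are
  assumed from the description above; see <dev/pci/pcivar.h> for the
  authoritative ones):

      #include <sys/param.h>
      #include <sys/bus.h>
      #include <dev/pci/pcivar.h>

      static device_t
      find_example_device(void)
      {
              /*
               * Old, domain-unaware lookup, now restricted to domain 0:
               *      dev = pci_find_bsf(bus, slot, func);
               *
               * New, domain-aware lookup (hypothetical location:
               * domain 1, bus 2, slot 3, function 0):
               */
              return (pci_find_dbsf(1, 2, 3, 0));
      }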
* o Revert the part of if_gem.c rev. 1.35 which added a call to
    gem_stop() to gem_attach() as the former accesses softc members not
    yet initialized at that time and gem_reset() actually is enough to
    stop the chip. [1]
  o Revise the use of gem_bitwait(); add bus_barrier() calls before
    calling gem_bitwait() to ensure the respective bit has been written
    before we start polling on it and poll for the right bits to change,
    f.e. even though we only reset RX we have to actually wait for both
    GEM_RESET_RX and GEM_RESET_TX to clear. Add some additional
    gem_bitwait() calls in places we've been missing them according to
    the GEM documentation. Along with this some excessive DELAYs, which
    probably only were added because of bugs in gem_bitwait() and its use
    in the first place, as well as half of a gem_bitwait()
    reimplementation in gem_reset_tx() were removed.
  o Add gem_reset_rxdma() and use it to deal with GEM_MAC_RX_OVERFLOW
    errors more gracefully as unlike gem_init_locked() it resets the RX
    DMA engine only, causing no link loss and the FIFOs not to be
    cleared. Also use it to deal with GEM_INTR_RX_TAG_ERR errors, which
    previously were unhandled. This was based on information obtained
    from the Linux GEM and OpenSolaris ERI drivers.
  o Turn on workarounds for silicon bugs in the Apple GMAC variants. This
    was based on information obtained from the Darwin GMAC and Linux GEM
    drivers.
  o Turn on "infinite" (i.e. maximum 31 * 64 bytes in length) DMA bursts.
    This greatly improves especially RX performance.
  o Optimize the RX path, this consists of:
    - kicking the receiver as soon as we've a spare descriptor in
      gem_rint() again instead of just once after all the ready ones have
      been handled;
    - kicking the receiver the right way, i.e. as outlined in the GEM
      documentation in batches of 4 and by pointing it to the descriptor
      after the last valid one;
    - calling gem_rint() before gem_tint() in gem_intr() as gem_tint()
      may take quite a while;
    - doubling the size of the RX ring to 256 descriptors.
    Overall the RX performance of a GEM in a 1GHz Sun Fire V210 was
    improved from ~100Mbit/s to ~850Mbit/s.
  o In gem_add_rxbuf() don't assign the newly allocated mbuf to rxs_mbuf
    before calling bus_dmamap_load_mbuf_sg(); if bus_dmamap_load_mbuf_sg()
    fails we'll free the newly allocated mbuf, be unable to recycle the
    previous one and trigger a NULL pointer dereference instead.
  o In gem_init_locked() honor the return value of gem_meminit().
  o Simplify gem_ringsize() and don't return garbage in the default case.
    Based on OpenBSD.
  o Don't turn on MAC control, MIF and PCS interrupts unless GEM_DEBUG is
    defined as we don't need/use these interrupts for operation.
  o In gem_start_locked() sync the DMA maps of the descriptor rings
    before every kick of the transmitter and not just once after
    enqueuing all packets as the NIC might instantly start transmitting
    after we kicked it the first time.
  o Keep state of the link state and use it to enable or disable the MAC
    in gem_mii_statchg() accordingly as well as to return early from
    gem_start_locked() in case the link is down. [3]
  o Initialize the maximum frame size to a sane value.
  o In gem_mii_statchg() enable carrier extension if appropriate.
  o Increment if_ierrors in case of a GEM_MAC_RX_OVERFLOW error and in
    gem_eint(). [3]
  o Handle IFF_ALLMULTI correctly; don't set it if we've turned
    promiscuous group mode on and don't clear the flag if we've disabled
    promiscuous group mode (these were mostly NOPs though). [2]
  o Let gem_eint() also report GEM_INTR_PERR errors.
  o Move setting sc_variant from gem_pci_probe() to gem_pci_attach() as
    device probe methods are not supposed to touch the softc.
  o Collapse sc_inited and sc_pci into bits for sc_flags.
  o Add CTASSERTs ensuring that GEM_NRXDESC and GEM_NTXDESC are set to
    legal values.
  o Correctly set up for 802.3x flow control, though #ifdef out the code
    that actually enables it as this needs more testing and mainly a
    proper framework to support it.
  o Correct and add some conversions from hard-coded function names to
    __func__ which were borked or forgotten in if_gem.c rev. 1.42.
  o Use PCIR_BAR instead of a homegrown macro.
  o Replace sc_enaddr[6] with sc_enaddr[ETHER_ADDR_LEN].
  o In gem_pci_attach() in case attaching fails release the resources in
    the opposite order they were allocated.
  o Make gem_reset() static to if_gem.c as it's not needed outside that
    module.
  o Remove the GEM_GIGABIT flag and the associated code; GEM_GIGABIT was
    never set and the associated code was in the wrong place.
  o Remove sc_mif_config; it was only used to cache the contents of the
    respective register within gem_attach().
  o Remove the #ifdef'ed out NetBSD/OpenBSD code for establishing a
    suspend hook as it will never be used on FreeBSD.
  o Also probe Apple Intrepid 2 GMAC and Apple Shasta GMAC, add support
    for Apple K2 GMAC. Based on OpenBSD.
  o Add support for Sun GBE/P cards, or in other words actually add
    support for cards based on GEM to gem(4). This mainly consists of
    adding support for the TBI of these chips. Along with this the PHY
    selection code was rewritten to hardcode the PHY number for certain
    configurations as for example the PHY of the on-board ERI of Blade
    1000 shows up twice, causing no link as the second incarnation is
    isolated. These changes were ported from OpenBSD with some additional
    improvements and modulo some bugs.
  o Add code to if_gem_pci.c allowing to read the MAC-address from the
    VPD on systems without Open Firmware. This is an improved version of
    my variant of the respective code in if_hme_pci.c
  o Now that gem(4) is MI enable it for all archs.
  Pointed out by: yongari [1]
  Suggested by: rwatson [2], yongari [3]
  Tested on: i386 (GEM), powerpc (GMACs by marcel and yongari), sparc64
    (ERI and GEM)
  Reviewed by: yongari
  Approved by: re (kensmith)
  [marius, 2007-09-26, 1 file, -1/+0]
* Use the correct expanded name for SCTP.
  PR: 116496
  Submitted by: koitsu
  Reviewed by: rrs
  Approved by: re (kensmith)
  [brueffer, 2007-09-26, 1 file, -1/+1]
* Change the management of cached pages (PQ_CACHE) in two fundamental
  ways:
  (1) Cached pages are no longer kept in the object's resident page splay
  tree and memq. Instead, they are kept in a separate per-object splay
  tree of cached pages. However, access to this new per-object splay tree
  is synchronized by the _free_ page queues lock, not to be confused with
  the heavily contended page queues lock. Consequently, a cached page can
  be reclaimed by vm_page_alloc(9) without acquiring the object's lock or
  the page queues lock.
  This solves a problem independently reported by tegge@ and Isilon.
  Specifically, they observed the page daemon consuming a great deal of
  CPU time because of pages bouncing back and forth between the cache
  queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
  this problem turned out to be a deadlock avoidance strategy employed
  when selecting a cached page to reclaim in vm_page_select_cache().
  However, the root cause was really that reclaiming a cached page
  required the acquisition of an object lock while the page queues lock
  was already held. Thus, this change addresses the problem at its root,
  by eliminating the need to acquire the object's lock.
  Moreover, keeping cached pages in the object's primary splay tree and
  memq was, in effect, optimizing for the uncommon case. Cached pages are
  reclaimed far, far more often than they are reactivated. Instead, this
  change makes reclamation cheaper, especially in terms of
  synchronization overhead, and reactivation more expensive, because
  reactivated pages will have to be reentered into the object's primary
  splay tree and memq.
  (2) Cached pages are now stored alongside free pages in the physical
  memory allocator's buddy queues, increasing the likelihood that large
  allocations of contiguous physical memory (i.e., superpages) will
  succeed.
  Finally, as a result of this change long-standing restrictions on when
  and where a cached page can be reclaimed and returned by
  vm_page_alloc(9) are eliminated. Specifically, calls to vm_page_alloc(9)
  specifying VM_ALLOC_INTERRUPT can now reclaim and return a formerly
  cached page. Consequently, a call to malloc(9) specifying M_NOWAIT is
  less likely to fail.
  Discussed with: many over the course of the summer, including jeff@,
    Justin Husted @ Isilon, peter@, tegge@
  Tested by: an earlier version by kris@
  Approved by: re (kensmith)
  [alc, 2007-09-25, 1 file, -2/+3]
* It has been observed on the mailing lists that the different categories
  of pages don't sum to anywhere near the total number of pages on amd64.
  This is for the most part because uma_small_alloc() pages have never
  been counted as wired pages, like their kmem_malloc() brethren. They
  should be. This change fixes that.
  It is no longer necessary for the page queues lock to be held to free
  pages allocated by uma_small_alloc(). I removed the acquisition and
  release of the page queues lock from uma_small_free() on amd64 and ia64
  weeks ago. This patch updates the other architectures that have
  uma_small_alloc() and uma_small_free().
  Approved by: re (kensmith)
  [alc, 2007-09-15, 2 files, -8/+8]
* Revamp the interrupt handling in support of INTR_FILTER. This includes:
  o Revamp the PIC I/F to only abstract the PIC hardware. The resource
    handling has been moved to nexus, where it belongs.
  o Include EOI and MASK+EOI methods to the PIC I/F in support of
    INTR_FILTER.
  o With the allocation of interrupt resources and setup of interrupt
    handlers in the common platform code we can delay talking to the PIC
    hardware after enumeration of all devices. Introduce a call to
    powerpc_intr_enable() in configure_final() to achieve that and have
    powerpc_setup_intr() only program the PIC when !cold.
  o As a consequence of the above, remove all early_attach() glue from
    the OpenPIC and Heathrow PIC drivers and have them register
    themselves when they're found during enumeration.
  o Decouple the interrupt vector from the interrupt request line.
    Allocate vectors increasingly so that they can be used for the
    intrcnt index as well. Extend the Heathrow PIC driver to translate
    between IRQ and vector. The OpenPIC driver already has the support
    for vectors in hardware.
  Approved by: re (blanket)
  [marcel, 2007-08-11, 15 files, -1098/+495]
* Re-enable external interrupts for faults, traps and syscalls.
  Approved by: re (blanket)
  [marcel, 2007-08-08, 2 files, -18/+16]
* Eliminate <machine/interruptvar.h> as it has only a single prototype.
  In the future that prototype will not be needed at all anyway, but for
  now it's moved to intr_machdep.h.
  Approved by: re (blanket)
  [marcel, 2007-08-07, 5 files, -39/+5]
* Remove redundant prototype.
  Approved by: re (blanket)
  [marcel, 2007-08-07, 2 files, -4/+0]
* Add prototype for trap().
  Approved by: re (blanket)
  [marcel, 2007-08-07, 1 file, -0/+7]
* Fix backward compatibility of the "old" (i.e. FreeBSD6) lseek syscall.
  It was broken when a new lseek syscall was introduced. The problem is
  that we need to swap the 32-bit td_retval values for the __syscall
  indirect syscall when the actual syscall has a 32-bit return value.
  Hence, we need to exclude lseek(2). And this means the "old" lseek(2)
  as well -- which we didn't.
  Based on a patch from: grehan@
  Approved by: re (rwatson)
  [marcel, 2007-07-31, 2 files, -4/+4]
* Cast the arguments to atomic_*_ptr() when mapping it to atomic_*_32().
  This is a minimal fix.
  Approved by: re (kensmith)
  [marcel, 2007-07-10, 1 file, -3/+8]
* Reimplement bus_dmamap_load with bus_dmamap_load_buffer.
  Previously it didn't honor the parent dma tag's restrictions, such that
  an invalid dma segment could be passed to a device. The driver for the
  device may panic in a sanity check routine for the dma segment or may
  produce unexpected results. I have no idea how it could ever have
  worked before.
  Reviewed by: grehan
  Tested by: gad
  Approved by: re (hrs)
  [yongari, 2007-06-22, 1 file, -68/+32]
* Honor maxsegsz of less than a page size in a DMA tag. Previously it
  used to return PAGE_SIZE without respect to the restrictions of a DMA
  tag. This affected all of the busdma load functions that use
  _bus_dmamap_load_buffer() as their back-end.
  Reviewed by: scottl (long ago)
  Approved by: re (hrs)
  [yongari, 2007-06-22, 1 file, -0/+2]
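  The fix amounts to clamping each segment to the tag's maxsegsz rather
  than assuming PAGE_SIZE; a fragment-level sketch (dmat, vaddr and buflen
  stand for the load loop's locals and are assumptions, not the literal
  busdma_machdep.c code):

      /* Inside the per-buffer load loop (illustrative). */
      bus_size_t sgsize;

      sgsize = PAGE_SIZE - ((vm_offset_t)vaddr & PAGE_MASK);
      if (sgsize > dmat->maxsegsz)    /* honor the tag's restriction */
              sgsize = dmat->maxsegsz;
      if (sgsize > buflen)
              sgsize = buflen;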
* Enable the new physical memory allocator.
  This allocator uses a binary buddy system with a twist. First and
  foremost, this allocator is required to support the implementation of
  superpages. As a side effect, it enables a more robust implementation
  of contigmalloc(9). Moreover, this reimplementation of contigmalloc(9)
  eliminates the acquisition of Giant by contigmalloc(..., M_NOWAIT,
  ...).
  The twist is that this allocator tries to reduce the number of TLB
  misses incurred by accesses through a direct map to small, UMA-managed
  objects and page table pages. Roughly speaking, the physical pages that
  are allocated for such purposes are clustered together in the physical
  address space. The performance benefits vary. In the most extreme case,
  a uniprocessor kernel running on an Opteron, I measured an 18%
  reduction in system time during a buildworld.
  This allocator does not implement page coloring. The reason is that
  superpages have much the same effect. The contiguous physical memory
  allocation necessary for a superpage is inherently colored.
  Finally, the one caveat is that this allocator does not effectively
  support prezeroed pages. I hope this is temporary. On i386, this is a
  slight pessimization. However, on amd64, the beneficial effects of the
  direct-map optimization outweigh the ill effects. I speculate that this
  is true in general of machines with a direct map.
  Approved by: re
  [alc, 2007-06-16, 1 file, -0/+18]
* Enable SCTP by default for GENERIC kernels in order to give it more
  exposure. The current state of the SCTP implementation is considered to
  be ready for 32-bit platforms, but still needs some work/testing on
  64-bit platforms.
  Approved by: re (kensmith)
  Discussed with: rrs
  [delphij, 2007-06-14, 1 file, -0/+1]
* Enable GEOM_PART_MBR by default. On ia64 this replaces GEOM_MBR.
  [marcel, 2007-06-13, 1 file, -0/+1]
* Add kdb_cpu_sync_icache(), intended to synchronize instruction caches
  with data caches after writing to memory. This typically is required to
  make breakpoints work on ia64 and powerpc. For those architectures the
  function is implemented.
  [marcel, 2007-06-09, 7 files, -19/+22]
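  A sketch of the intended use when a debugger patches kernel text, for
  example when planting a breakpoint (the exact prototype is assumed
  here):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <machine/kdb.h>

      /* Illustrative: patch the text, then make sure the instruction cache
       * cannot serve stale bytes for the patched range. */
      static void
      set_breakpoint(unsigned char *addr, const unsigned char *bkpt, size_t len)
      {
              memcpy(addr, bkpt, len);
              kdb_cpu_sync_icache(addr, len);
      }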
* Enable AUDIT by default in the GENERIC kernel, allowing security event
  auditing to be turned on without a kernel recompile, just an rc.conf
  option.
  Approved by: re (kensmith)
  Obtained from: TrustedBSD Project
  [rwatson, 2007-06-08, 1 file, -0/+1]
* Sync with other platforms: add kluge to use contigmalloc when the
  alignment is larger than the size and print a diagnostic when we didn't
  satisfy the alignment.
  [marcel, 2007-06-08, 1 file, -6/+18]
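  In outline, malloc(9) only guarantees alignment up to the allocation
  size, so bus_dmamem_alloc() falls back to contigmalloc(9) when the tag
  asks for more; a sketch of the decision (the dmat/vaddr/mflags context
  is assumed, not the literal code):

      /* Illustrative fragment of the bus_dmamem_alloc() decision. */
      if (dmat->maxsize <= PAGE_SIZE && dmat->alignment <= dmat->maxsize) {
              *vaddr = malloc(dmat->maxsize, M_DEVBUF, mflags);
      } else {
              /* Kluge: contigmalloc() can honor large alignments, at the
               * cost of possibly failing on fragmented physical memory. */
              *vaddr = contigmalloc(dmat->maxsize, M_DEVBUF, mflags, 0ul,
                  dmat->lowaddr, dmat->alignment ? dmat->alignment : 1ul,
                  dmat->boundary);
      }
      if (*vaddr == NULL)
              return (ENOMEM);
      if (((uintptr_t)*vaddr & (dmat->alignment - 1)) != 0)
              printf("%s: failed to align memory properly\n", __func__);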
* Fix the compile. Band-aid until it is worked out how to use the context
  switch api on ppc.
  [grehan, 2007-06-06, 2 files, -2/+2]
* - Change comments and asserts to reflect the removal of the global
    scheduler lock.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
    each)
  [jeff, 2007-06-04, 2 files, -4/+4]
* Rework the PCPU_* (MD) interface:
  - Rename PCPU_LAZY_INC into PCPU_INC
  - Add the PCPU_ADD interface which just does an add on the pcpu member
    given a specific value.
  Note that for most architectures PCPU_INC and PCPU_ADD are not safe.
  This is a point that needs some discussions/work in the next days.
  Reviewed by: alc, bde
  Approved by: jeff (mentor)
  [attilio, 2007-06-04, 3 files, -5/+6]
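  From a consumer's point of view the rename looks roughly like this (the
  counter fields are examples):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/pcpu.h>

      static void
      count_events(void)
      {
              PCPU_INC(cnt.v_trap);           /* formerly PCPU_LAZY_INC() */
              PCPU_ADD(cnt.v_soft, 2);        /* new: add an arbitrary value */
      }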
* Revert VMCNT_* operations introduction.
  Probably, a general approach is not the better solution here, so we
  should solve the sched_lock protection problems separately.
  Requested by: alc
  Approved by: jeff (mentor)
  [attilio, 2007-05-31, 2 files, -4/+4]
* In some particular cases (like in pccard and pccbb), the real device
  handler is wrapped in a couple of functions - a filter wrapper and an
  ithread wrapper. In this case (and just in this case), the filter
  wrapper could ask the system to schedule the ithread and mask the
  interrupt source if the wrapped handler is composed of just an ithread
  handler: modify the "old" interrupt code to make it support this
  situation, while the "new" interrupt code is already ok.
  Discussed with: jhb
  [piso, 2007-05-31, 1 file, -2/+11]
* Eliminate some unused definitions that came from NetBSD.
  [alc, 2007-05-28, 1 file, -2/+0]
* Don't initialize the decrementer before initclocks() is called. Use
  cpu_initclocks() for that as it assures that relevant locks have been
  initialized.
  [marcel, 2007-05-27, 4 files, -24/+14]
* Eliminate an unused definition.
  [alc, 2007-05-27, 1 file, -1/+0]
* Allow FreeBSD's native ELF image activators to execute shared libraries
  the same way it was enabled for Linux binaries in the linuxulator. This
  allows binaries built with -pie. Many ports auto-detect -fPIE support
  in GCC 4.2 and build binaries FreeBSD was unable to run.
  [kan, 2007-05-22, 1 file, -2/+2]
* - define and use VMCNT_{GET,SET,ADD,SUB,PTR} macros for manipulating
    vmcnts. This can be used to abstract away pcpu details but also
    changes to use atomics for all counters now. This means sched lock is
    no longer responsible for protecting counts in the switch routines.
  Contributed by: Attilio Rao <attilio@FreeBSD.org>
  [jeff, 2007-05-18, 2 files, -4/+4]