path: root/sys/sun4v/sun4v
Commit message  [author, date, files changed, lines -/+]
* Modularize the Open Firmware client interface to allow run-time switching  [nwhitehorn, 2008-12-20, 4 files, -20/+19]
  of OFW access semantics, in order to allow future support for real-mode OF access and flattened device trees. OF client interface modules are implemented using KOBJ, in a similar way to the PPC PMAP modules. Because we need Open Firmware to be available before mutexes can be used on sparc64, changes are also included to allow KOBJ to be used very early in the boot process by only using the mutex once we know it has been initialized.
  Reviewed by: marius, grehan
* - In GCC 4.2 __builtin_frame_address() was fixed to include the  [marius, 2008-10-27, 2 files, -25/+13]
  V9 stack bias so we no longer need to add it in db_backtrace() and stack_capture() respectively. This also reverts r182018, which kludged around the resulting unaligned access.
  - Sync the sun4v versions of db_trace.c and stack_machdep.c with the sparc64 ones and fix some style bugs.
  MFC after: 3 days
* Collect N identical (or near identical) mkdumpheader() implementations into  [peter, 2008-10-01, 1 file, -22/+1]
  one, as threatened in the comment. Textdump magic can be passed in.
* MFsparc64: r177642  [marius, 2008-09-02, 1 file, -9/+0]
  Remove sysbeep() from the non-beeping archs.
* Resurrect clock.c from r164371.  [marius, 2008-09-02, 1 file, -0/+67]
* Integrate the new MPSAFE TTY layer to the FreeBSD operating system.  [ed, 2008-08-20, 1 file, -113/+49]
  The last half year I've been working on a replacement TTY layer for the FreeBSD kernel. The new TTY layer was designed to improve the following:
  - Improved driver model: The old TTY layer has a driver model that is not abstract enough to make it friendly to use. A good example is the output path, where the device drivers directly access the output buffers. This means that an in-kernel PPP implementation must always convert network buffers into TTY buffers. If a PPP implementation were built on top of the new TTY layer (it still needs a hooks layer, though), it would allow the PPP implementation to hand data directly to the TTY driver.
  - Improved hotplugging: With the old TTY layer, it isn't entirely safe to destroy TTYs from the system. This implementation has a two-step destruction design, where the driver first abandons the TTY. After all threads have left the TTY, the TTY layer calls a routine in the driver which can be used to free resources (unit numbers, etc). The pts(4) driver also implements this feature, which means posix_openpt() will now return PTYs that are created on the fly.
  - Improved performance: One of the major improvements is the per-TTY mutex, which is expected to improve scalability when compared to the old Giant locking. Another change is the unbuffered copying to userspace, which is used both on TTY device nodes and PTY masters.
  Upgrading should be quite straightforward. Unlike previous versions, existing kernel configuration files do not need to be changed, except when they reference device drivers that are listed in UPDATING.
  Obtained from: //depot/projects/mpsafetty/...
  Approved by: philip (ex-mentor)
  Discussed: on the lists, at BSDCan, at the DevSummit
  Sponsored by: Snow B.V., the Netherlands
  dcons(4) fixed by: kan
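  For illustration only (not part of the commit), a minimal userland sketch of the on-the-fly pts(4) allocation mentioned above, using only the standard posix_openpt(3)/grantpt(3)/unlockpt(3)/ptsname(3) calls:

      #include <err.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <stdlib.h>

      int
      main(void)
      {
              int mfd;

              /* Ask for a new pseudo-terminal master; pts(4) creates it on demand. */
              mfd = posix_openpt(O_RDWR | O_NOCTTY);
              if (mfd == -1)
                      err(1, "posix_openpt");
              /* Make the slave side usable and print its name (e.g. /dev/pts/0). */
              if (grantpt(mfd) == -1 || unlockpt(mfd) == -1)
                      err(1, "grantpt/unlockpt");
              printf("slave device: %s\n", ptsname(mfd));
              return (0);
      }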
* Commit step 1 of the vimage project, (network stack)  [bz, 2008-08-17, 1 file, -1/+2]
  virtualization work done by Marko Zec (zec@). This is the first in a series of commits over the course of the next few weeks.
  Mark all uses of global variables to be virtualized with a V_ prefix. Use macros to map them back to their global names for now, so this is a NOP change only. We hope to have caught at least 85-90% of what is needed so we do not invalidate a lot of outstanding patches again.
  Obtained from: //depot/projects/vimage-commit2/...
  Reviewed by: brooks, des, ed, mav, julian, jamie, kris, rwatson, zec, ... (various people I forgot, different versions), md5 (with a bit of help)
  Sponsored by: NLnet Foundation, The FreeBSD Foundation
  X-MFC after: never
  V_Commit_Message_Reviewed_By: more people than the patch
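  For illustration only, a minimal sketch of the NOP mapping pattern described above; the variable name is hypothetical and the real macros live in the vimage headers:

      /* A global that will eventually become per-network-stack. */
      extern int      ipforwarding;

      /*
       * Step 1: every use is spelled with the V_ prefix, but the macro
       * still expands to the plain global, so behaviour is unchanged.
       */
      #define V_ipforwarding  ipforwarding

      static int
      ip_can_forward(void)
      {
              return (V_ipforwarding != 0);
      }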
* Retire pmap_addr_hint(). It is no longer used.  [alc, 2008-05-18, 1 file, -6/+0]
* Add a stub for pmap_align_superpage() on machines that don't (yet)  [alc, 2008-05-09, 1 file, -0/+10]
  implement pmap-level support for superpages.
* Expand kdb_alt_break a little, most commonly used with the option  [peter, 2008-05-04, 1 file, -2/+16]
  ALT_BREAK_TO_DEBUGGER. In addition to "Enter ~ ctrl-B" (to enter the debugger), there is now "Enter ~ ctrl-P" (force panic) and "Enter ~ ctrl-R" (request clean reboot, a la ctrl-alt-del on syscons). We've used variations of this at work.
  The force panic sequence is best used with KDB_UNATTENDED for when you just want it to dump and get on with it. The reboot request is a safer way of getting into single user than a power cycle, e.g. when you've hosed the ability to log in (pam, rtld, etc). It gives init the reboot signal, which causes an orderly reboot.
  I've taken my best guess at what the !x86 and non-sio code changes should be. This also makes sio release its spinlock before calling KDB/DDB.
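  For illustration only, a sketch of how a console driver might dispatch the three sequences; the KDB_REQ_* return values and the kdb_panic()/kdb_reboot() helpers are recalled from sys/kdb.h of this era and should be treated as assumptions:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/kdb.h>

      static int alt_break_state;     /* sequence-matching state, one per console */

      /* Feed every received console character through the alt-break matcher. */
      static void
      console_rx_char(int c)
      {
              switch (kdb_alt_break(c, &alt_break_state)) {
              case KDB_REQ_DEBUGGER:          /* Enter ~ ctrl-B */
                      kdb_enter(KDB_WHY_BREAK, "Break sequence on console");
                      break;
              case KDB_REQ_PANIC:             /* Enter ~ ctrl-P */
                      kdb_panic("Panic sequence on console");
                      break;
              case KDB_REQ_REBOOT:            /* Enter ~ ctrl-R */
                      kdb_reboot();
                      break;
              default:                        /* not (yet) a complete sequence */
                      break;
              }
      }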
* Remove a header which is unused for sun4v.  [marius, 2008-05-02, 2 files, -2/+0]
  MFC after: 3 days
* Remove the MD isa_irq_pending() and the underlying PCI-specific  [marius, 2008-04-26, 1 file, -11/+0]
  infrastructure. Its only consumer ever was sio(4), and it has thus been unused on sparc64 since the last traces of sio(4) were removed from the sparc64 configuration files in favor of uart(4) over three years ago. If similar functionality is required again, it should be brought back as an MD intr_pending() which works for all busses by using, for example, interrupt controller hooks.
* - Add an integer argument to idle to indicate how likely we are to wake  [jeff, 2008-04-25, 1 file, -1/+8]
  from idle over the next tick.
  - Add a new MD routine, cpu_wake_idle(), to wake up idle threads that are suspended in cpu-specific states. This function can fail and cause the scheduler to fall back to another mechanism (ipi).
  - Implement support for mwait in cpu_idle() on i386/amd64 machines that support it. mwait is a higher performance way to synchronize cpus as compared to hlt & ipis.
  - Allow selecting the idle routine by name via sysctl machdep.idle. This replaces machdep.cpu_idle_hlt. Only idle routines supported by the current machine are permitted.
  Sponsored by: Nokia
* - Add the interrupt vector number to intr_event_create so MI code can  [jeff, 2008-04-11, 1 file, -2/+2]
  look up hard interrupt events by number. Ignore the irq# for soft intrs.
  - Add support to cpuset for binding hardware interrupts. This has the side effect of binding any ithread associated with the hard interrupt. As per restrictions imposed by MD code we can only bind interrupts to a single cpu presently. Interrupts can be 'unbound' by binding them to all cpus.
  Reviewed by: jhb
  Sponsored by: Nokia
* Add an MI intr_event_handle() routine for the non-INTR_FILTER case. This  [jhb, 2008-04-05, 1 file, -79/+14]
  allows all the INTR_FILTER #ifdef's to be removed from the MD interrupt code.
  - Rename the intr_event 'eoi', 'disable', and 'enable' hooks to 'post_filter', 'pre_ithread', and 'post_ithread' to be less x86-centric. Also, add a comment describing what the MI code expects them to do.
  - On amd64, i386, and powerpc this is effectively a NOP.
  - On arm, don't bother masking the interrupt unless the ithread is scheduled in the non-INTR_FILTER case, to match what INTR_FILTER did. Also, don't bother unmasking the interrupt in the post_filter case if we never masked it. The INTR_FILTER case had been doing this by having arm_unmask_irq for the post_filter (formerly 'eoi') hook.
  - On ia64, stray interrupts are now masked for the non-INTR_FILTER case. They were already masked in the INTR_FILTER case.
  - On sparc64, use a NULL pre_ithread hook and use intr_enable_eoi() for both the 'post_filter' and 'post_ithread' hooks to match what the non-INTR_FILTER code did.
  - On sun4v, retire the ithread wrapper hack by using an appropriate 'post_ithread' hook instead (it's what 'post_ithread'/'enable' was designed to do even in 5.x).
  Glanced at by: piso
  Reviewed by: marius
  Requested by: marius [1], [5]
  Tested on: amd64, i386, arm, sparc64
* Catch up to intr_event_create() prototype change.  [jhb, 2008-03-18, 1 file, -1/+1]
  Pointy hat: jhb
* Add preliminary support for binding interrupts to CPUs:  [jhb, 2008-03-14, 1 file, -1/+1]
  - Add a new intr_event method ie_assign_cpu() that is invoked when the MI code wishes to bind an interrupt source to an individual CPU. The MD code may reject the binding with an error. If an assign_cpu function is not provided, then the kernel assumes the platform does not support binding interrupts to CPUs and fails all requests to do so.
  - Bind ithreads to CPUs on their next execution loop once an interrupt event is bound to a CPU. Only shared ithreads are bound. We currently leave private ithreads for drivers using filters + ithreads in the INTR_FILTER case unbound.
  - A new intr_event_bind() routine is used to bind an interrupt event to a CPU.
  - Implement binding on amd64 and i386 by way of the existing pic_assign_cpu PIC method.
  - For x86, provide an 'intr_bind(IRQ, cpu)' wrapper routine that looks up an interrupt source and binds its interrupt event to the specified CPU. MI code can currently (ab)use this by doing:
        intr_bind(rman_get_start(irq_res), cpu);
    however, I plan to add a truly MI interface (probably a bus_bind_intr(9)) where the implementation in the x86 nexus(4) driver would end up calling intr_bind() internally.
  Requested by: kmacy, gallatin, jeff
  Tested on: {amd64, i386} x {regular, INTR_FILTER}
* - Rather than repeating the same preemption code everywhere, call the  [jeff, 2008-03-10, 1 file, -8/+1]
  scheduler-specific sched_preempt() routine.
* - Replace the old smp cpu topology specification with a new, more flexible  [jeff, 2008-03-02, 1 file, -0/+7]
  tree structure that encodes the level of cache sharing and other properties.
  - Provide several convenience functions for creating one- and two-level cpu trees as well as a default flat topology. The system now always has some topology.
  - On i386 and amd64 create a separate level in the hierarchy for HTT and multi-core cpus. This will allow the scheduler to intelligently load balance non-uniform cores. Presently we don't detect what level of the cache hierarchy is shared at each level in the topology.
  - Add a mechanism for testing common topologies that have more information than the MD code is able to provide via the kern.smp.topology tunable. This should be considered a debugging tool only and not a stable api.
  Sponsored by: Nokia
* Add a wrapper function that bounds-checks writes to the dump device.  [ru, 2008-01-28, 1 file, -6/+6]
* Add an access type parameter to pmap_enter(). It will be used to implement  [alc, 2008-01-03, 1 file, -2/+2]
  superpage promotion.
  Correct a style error in kmem_malloc(): pmap_enter()'s last parameter is a Boolean.
* Add a new 'why' argument to kdb_enter(), and a set of constants to use  [rwatson, 2007-12-25, 3 files, -6/+6]
  for that argument. This will allow DDB to detect the broad category of reason why the debugger has been entered, which it can use for the purposes of deciding which DDB script to run. Assign approximate why values to all current consumers of the kdb_enter() interface.
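  For illustration only, a sketch of a caller classifying its debugger entry; KDB_WHY_WITNESS is one of the constants I recall from sys/kdb.h, so treat the exact name as an assumption:

      #include <sys/param.h>
      #include <sys/kdb.h>

      static void
      lock_order_trouble(const char *msg)
      {
              /*
               * The first argument tells DDB the broad reason for entering,
               * so a DDB script registered for that category can be run.
               */
              kdb_enter(KDB_WHY_WITNESS, msg);
      }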
* Break out stack(9) from ddb(4):  [rwatson, 2007-12-02, 2 files, -23/+86]
  - Introduce per-architecture stack_machdep.c to hold stack_save(9).
  - Introduce per-architecture machine/stack.h to capture any common definitions required between db_trace.c and stack_machdep.c.
  - Add new kernel option "options STACK"; we will build in stack(9) if it is defined, or also if "options DDB" is defined to provide compatibility with existing users of stack(9).
  Add new stack_save_td(9) function, which allows the capture of a stacktrace of another thread rather than the current thread, which the existing stack_save(9) was limited to. It requires that the thread be neither swapped out nor running, which is the responsibility of the consumer to enforce.
  Update stack(9) man page.
  Build tested: amd64, arm, i386, ia64, powerpc, sparc64, sun4v
  Runtime tested: amd64 (rwatson), arm (cognet), i386 (rwatson)
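  For illustration only, a sketch of capturing and printing another thread's stack with the new interface; the signatures follow my reading of the stack(9) manual page of that era and should be treated as assumptions:

      #include <sys/param.h>
      #include <sys/proc.h>
      #include <sys/stack.h>

      /*
       * Record where 'td' currently sits; the caller must ensure the thread
       * is neither running nor swapped out, as the commit message notes.
       */
      static void
      show_thread_stack(struct thread *td)
      {
              struct stack st;

              stack_zero(&st);
              stack_save_td(&st, td);
              stack_print(&st);       /* symbolize and print each frame */
      }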
* Prevent the leakage of wired pages in the following circumstances:  [alc, 2007-11-17, 1 file, -0/+28]
  First, a file is mmap(2)ed and then mlock(2)ed. Later, it is truncated. Under "normal" circumstances, i.e., when the file is not mlock(2)ed, the pages beyond the EOF are unmapped and freed. However, when the file is mlock(2)ed, the pages beyond the EOF are unmapped but not freed because they have a non-zero wire count. This can be a mistake. Specifically, it is a mistake if the sole reason why the pages are wired is because of wired, managed mappings. Previously, unmapping the pages destroyed these wired, managed mappings but did not reduce the pages' wire count. Consequently, when the file was unmapped, the pages were not unwired because the wired mapping had been destroyed. Moreover, when the vm object was finally destroyed, the pages were leaked because they were still wired.
  The fix is to reduce the pages' wire count by the number of wired, managed mappings destroyed. To do this, I introduce a new pmap function, pmap_page_wired_mappings(), that returns the number of managed mappings to the given physical page that are wired, and I use this function in vm_object_page_remove().
  Reviewed by: tegge
  MFC after: 6 weeks
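  For illustration only, a rough sketch of the idea behind the fix; the real bookkeeping in vm_object_page_remove() differs, and the function and field names are recalled from that era's sources, so treat them as assumptions:

      #include <sys/param.h>
      #include <vm/vm.h>
      #include <vm/pmap.h>
      #include <vm/vm_page.h>

      /*
       * Count how many of the page's wirings come from wired, managed
       * mappings, remove the mappings, and then give those wirings back
       * so the page can eventually be freed.
       */
      static void
      drop_mapping_wirings(vm_page_t m)
      {
              int wirings;

              wirings = pmap_page_wired_mappings(m);
              pmap_remove_all(m);
              if (m->wire_count >= wirings)
                      m->wire_count -= wirings;
      }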
* o Rename cpu_thread_setup() to cpu_thread_alloc() to better  [marcel, 2007-11-14, 1 file, -1/+6]
  communicate that it relates to (is called by) thread_alloc().
  o Add cpu_thread_free(), which is called from thread_free() to counteract cpu_thread_alloc().
  i386: Have cpu_thread_free() call cpu_thread_clean() to preserve behaviour.
  ia64: Have cpu_thread_free() call mtx_destroy() for the mutex initialized in cpu_thread_alloc().
  PR: ia64/118024
* Generally we are interested in what thread did something, as  [julian, 2007-11-14, 1 file, -1/+1]
  opposed to what process. Since threads by default have the name of the process unless overwritten with more useful information, just print the thread name instead.
* Fix for the panic("vm_thread_new: kstack allocation failed") and  [kib, 2007-11-05, 2 files, -2/+3]
  silent NULL pointer dereference in the i386 and sparc64 pmap_pinit() when kmem_alloc_nofault() failed to allocate address space. Both functions now return an error instead of panicking or dereferencing NULL.
  As a consequence, vmspace_exec() and vmspace_unshare() return an errno int. A struct vmspace arg was added to vm_forkproc() to avoid dealing with failed allocation when most of the fork1() job is already done.
  The kernel stack for the thread is now set up in thread_alloc(), which itself may return NULL. Also, allocation of the first process thread is performed in fork1() to properly deal with stack allocation failure. proc_linkup() is separated into proc_linkup(), called from fork1(), and proc_linkup0(), which is used to set up the kernel process (formerly known as the swapper).
  In collaboration with: Peter Holm
  Reviewed by: jhb
* Rename the kthread_xxx (e.g. kthread_create()) calls  [julian, 2007-10-20, 1 file, -1/+1]
  to kproc_xxx, as they actually make whole processes. This makes way for us to add REAL kthread_create() and friends that actually make threads. It turns out that most of these calls end up being moved back to the thread version when it's added, but we need to make this cosmetic change first.
  I'd LOVE to do this rename in 7.0 so that we can eventually MFC the new kthread_xxx() calls.
* Make the PCI code aware of PCI domains (aka PCI segments) so we can  [marius, 2007-09-30, 1 file, -0/+5]
  support machines having multiple independently numbered PCI domains and don't support reenumeration without ambiguity amongst the devices as seen by the OS and represented by PCI location strings.
  This includes introducing a function pci_find_dbsf(9), which works like pci_find_bsf(9) but additionally takes a domain number argument, and limiting pci_find_bsf(9) to only search devices in domain 0 (the only domain in single-domain systems). Bge(4) and ofw_pcibus(4) are changed to use pci_find_dbsf(9) instead of pci_find_bsf(9) in order to no longer report false positives when searching for siblings and dupe devices in the same domain, respectively.
  Along with this change, the sole host-PCI bridge driver converted to actually make use of PCI domain support is uninorth(4); the others continue to use domain 0 only for now and need to be converted as appropriate later on.
  Note that this means that the format of the location strings as used by pciconf(8) has been changed and that consumers of <sys/pciio.h> potentially need to be recompiled.
  Suggested by: jhb
  Reviewed by: grehan, jhb, marcel
  Approved by: re (kensmith), jhb (PCI maintainer hat)
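  For illustration only, a sketch of the new domain-aware lookup; the pci_find_dbsf() signature follows the pci(9) manual page as I recall it, so treat it as an assumption:

      #include <sys/param.h>
      #include <sys/bus.h>
      #include <dev/pci/pcivar.h>

      /*
       * Find the device at bus 3, slot 0, function 0 in PCI domain 1.
       * pci_find_bsf(3, 0, 0) can no longer reach it, since it now only
       * searches domain 0.
       */
      static device_t
      find_example_device(void)
      {
              return (pci_find_dbsf(1, 3, 0, 0));
      }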
* It has been observed on the mailing lists that the different categories  [alc, 2007-09-15, 1 file, -4/+4]
  of pages don't sum to anywhere near the total number of pages on amd64. This is for the most part because uma_small_alloc() pages have never been counted as wired pages, like their kmem_malloc() brethren. They should be. This change fixes that.
  It is no longer necessary for the page queues lock to be held to free pages allocated by uma_small_alloc(). I removed the acquisition and release of the page queues lock from uma_small_free() on amd64 and ia64 weeks ago. This patch updates the other architectures that have uma_small_alloc() and uma_small_free().
  Approved by: re (kensmith)
* Fix warning - add missing #include  [peter, 2007-07-06, 1 file, -0/+1]
  Submitted by: mjacob
  Approved by: re (rwatson)
* - Restore the machine independence of sys/dev/ofw/openfirm.{c,h} by  [marius, 2007-06-16, 2 files, -4/+30]
  moving OF_set_mmfsa_traptable() (SUNW,set-trap-table with the two arguments used here is specific to sun4v) to MD code.
  - In sys/dev/ofw/openfirm.h remove prototypes for unimplemented functions and unused Solaris compatibility macros.
* Enable the new physical memory allocator.  [alc, 2007-06-16, 1 file, -2/+3]
  This allocator uses a binary buddy system with a twist. First and foremost, this allocator is required to support the implementation of superpages. As a side effect, it enables a more robust implementation of contigmalloc(9). Moreover, this reimplementation of contigmalloc(9) eliminates the acquisition of Giant by contigmalloc(..., M_NOWAIT, ...).
  The twist is that this allocator tries to reduce the number of TLB misses incurred by accesses through a direct map to small, UMA-managed objects and page table pages. Roughly speaking, the physical pages that are allocated for such purposes are clustered together in the physical address space. The performance benefits vary. In the most extreme case, a uniprocessor kernel running on an Opteron, I measured an 18% reduction in system time during a buildworld.
  This allocator does not implement page coloring. The reason is that superpages have much the same effect. The contiguous physical memory allocation necessary for a superpage is inherently colored.
  Finally, the one caveat is that this allocator does not effectively support prezeroed pages. I hope this is temporary. On i386, this is a slight pessimization. However, on amd64, the beneficial effects of the direct-map optimization outweigh the ill effects. I speculate that this is true in general of machines with a direct map.
  Approved by: re
* - Change comments and asserts to reflect the removal of the global  [jeff, 2007-06-04, 2 files, -3/+2]
  scheduler lock.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
* Commit 10/14 of sched_lock decomposition.  [jeff, 2007-06-04, 1 file, -10/+4]
  - Use sched_throw() rather than replicating the same cpu_throw() code for each architecture. This also allows the scheduler to use any locking it may want to.
  - Use the thread_lock() rather than sched_lock when preempting.
  - The scheduler lock is not required to synchronize release_aps.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
* Rework the PCPU_* (MD) interface:  [attilio, 2007-06-04, 1 file, -2/+2]
  - Rename PCPU_LAZY_INC into PCPU_INC.
  - Add the PCPU_ADD interface, which just does an add on the pcpu member given a specific value.
  Note that for most architectures PCPU_INC and PCPU_ADD are not safe. This is a point that needs some discussions/work in the next days.
  Reviewed by: alc, bde
  Approved by: jeff (mentor)
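  For illustration only, a sketch of the renamed accessors on a hypothetical per-CPU counter member; as the commit notes, whether the update is interrupt-safe depends on the architecture:

      #include <sys/param.h>
      #include <sys/pcpu.h>

      /*
       * Assume struct pcpu has gained a counter member pc_example_events
       * (hypothetical, for illustration only).
       */
      static void
      count_events(int n)
      {
              PCPU_INC(example_events);       /* formerly PCPU_LAZY_INC() */
              PCPU_ADD(example_events, n);    /* new: add an arbitrary amount */
      }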
* Revert VMCNT_* operations introduction.  [attilio, 2007-05-31, 4 files, -7/+7]
  Probably, a general approach is not the best solution here, so we should solve the sched_lock protection problems separately.
  Requested by: alc
  Approved by: jeff (mentor)
* In some particular cases (like in pccard and pccbb), the real device  [piso, 2007-05-31, 1 file, -2/+11]
  handler is wrapped in a couple of functions - a filter wrapper and an ithread wrapper. In this case (and just in this case), the filter wrapper could ask the system to schedule the ithread and mask the interrupt source if the wrapped handler is composed of just an ithread handler: modify the "old" interrupt code to make it support this situation, while the "new" interrupt code is already ok.
  Discussed with: jhb
* Honor maxsegsz of less than a page size in a DMA tag. Previously it  [yongari, 2007-05-29, 1 file, -0/+2]
  used to return PAGE_SIZE without respect to the restrictions of a DMA tag. This affected all of the busdma load functions that use _bus_dmamap_load_buffer() as their back-end.
  Reviewed by: scottl
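  For illustration only, a sketch of a tag whose small maxsegsz now actually constrains the segments produced by the load functions; the argument order and helper names follow bus_dma(9) as I recall it, so treat the details as assumptions:

      #include <sys/param.h>
      #include <sys/bus.h>
      #include <machine/bus.h>

      /*
       * Segments may be at most 512 bytes and may not cross a 2KB boundary;
       * before the fix, loads built on _bus_dmamap_load_buffer() could hand
       * back PAGE_SIZE-sized segments regardless of maxsegsz.
       */
      static int
      create_small_seg_tag(device_t dev, bus_dma_tag_t *tagp)
      {
              return (bus_dma_tag_create(
                  bus_get_dma_tag(dev),       /* parent */
                  1, 2048,                    /* alignment, boundary */
                  BUS_SPACE_MAXADDR_32BIT,    /* lowaddr */
                  BUS_SPACE_MAXADDR,          /* highaddr */
                  NULL, NULL,                 /* filter, filterarg */
                  4096, 8, 512,               /* maxsize, nsegments, maxsegsz */
                  0, NULL, NULL,              /* flags, lockfunc, lockarg */
                  tagp));
      }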
* Remove unnecessary curcpu reference in setting mmfsa.  [kmacy, 2007-05-25, 1 file, -3/+1]
* move trap table initialization for cpu0 into sparc64_init  [kmacy, 2007-05-25, 2 files, -17/+36]
* Add some early diagnostics under bootverbose  [kmacy, 2007-05-23, 2 files, -1/+28]
  bootverbose is not getting set early enough, so hardcode it for the moment.
* restore interrupts to working order after INTR_THREAD changes  [kmacy, 2007-05-22, 1 file, -19/+21]
  - ithread_wrapper was being treated as a wrapper for fast interrupts when in fact it was intended for ithread interrupts
* - rename VMCNT_DEC to VMCNT_SUB to reflect the count argument.  [jeff, 2007-05-20, 3 files, -3/+3]
  Suggested by: julian@
  Contributed by: attilio@
* Delete the unused/not really used sparc64 (as in sun4u) cache.h,  [marius, 2007-05-20, 5 files, -10/+0]
  iommureg.h (which had already begun to bitrot) and iommuvar.h from the sun4v source and adjust some of the source which is shared between sparc64 and sun4v as appropriate.
* Remove superfluous inclusion of machine/ver.h.  [marius, 2007-05-20, 4 files, -4/+0]
* Make previous revision compile.  [marius, 2007-05-20, 1 file, -1/+1]
* - define and use VMCNT_{GET,SET,ADD,SUB,PTR} macros for manipulating  [jeff, 2007-05-18, 4 files, -7/+7]
  vmcnts. This can be used to abstract away pcpu details but also changes to use atomics for all counters now. This means sched lock is no longer responsible for protecting counts in the switch routines.
  Contributed by: Attilio Rao <attilio@FreeBSD.org>
* o break newbus api: add a new argument of type driver_filter_t to  [piso, 2007-02-23, 4 files, -19/+21]
  bus_setup_intr()
  o add an int return code to all fast handlers
  o retire INTR_FAST/IH_FAST
  For more info: http://docs.freebsd.org/cgi/getmsg.cgi?fetch=465712+0+current/freebsd-current
  Reviewed by: many
  Approved by: re@
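  For illustration only, a sketch of a driver using the widened bus_setup_intr() with both a filter and an ithread handler; the argument order and the FILTER_* return values follow the post-change bus_setup_intr(9) as I recall it, and all foo_* names are hypothetical:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/bus.h>
      #include <sys/rman.h>

      struct foo_softc {
              struct resource *irq_res;
              void            *intrhand;
      };

      /* Primary interrupt context: check/silence the hardware, never sleep. */
      static int
      foo_filter(void *arg)
      {
              struct foo_softc *sc = arg;

              (void)sc;       /* a real driver would read its status register here */
              return (FILTER_SCHEDULE_THREAD);        /* defer to foo_ithread() */
      }

      /* Ithread context: the slow work, sleeping allowed. */
      static void
      foo_ithread(void *arg)
      {
              struct foo_softc *sc = arg;

              (void)sc;       /* drain completions, restart the hardware, ... */
      }

      static int
      foo_setup_intr(device_t dev, struct foo_softc *sc)
      {
              return (bus_setup_intr(dev, sc->irq_res,
                  INTR_TYPE_MISC | INTR_MPSAFE,
                  foo_filter, foo_ithread, sc, &sc->intrhand));
      }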
* Add support for IPI_PREEMPT in order to enable use of the ULE scheduler  [kmacy, 2007-02-02, 2 files, -1/+17]