path: root/sys/ia64
Commit message  [Author  Age  Files  Lines]
* - Remove the eintrcnt/eintrnames usage and introduce the concept of  [attilio  2011-07-18  1  -2/+5]
|   sintrcnt/sintrnames, which are symbols containing the size of the 2
|   tables.
|   - For amd64/i386, remove the storage of intr* stuff from assembly files.
|
|   This area can be widely improved by applying the same to other
|   architectures and likely finding a unified approach among them, moving
|   the whole code to be MI. More work in this area is expected to happen
|   fairly soon.
|
|   No MFC is planned for this patch.
|
|   Tested by:    pluknet
|   Reviewed by:  jhb
|   Approved by:  re (kib)
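A minimal sketch of the size symbols described above, next to the existing tables (declarations in the shape FreeBSD exposes them; exact header placement differs):

    /* Interrupt counter tables and their sizes in bytes. */
    extern u_long intrcnt[];    /* counters */
    extern char intrnames[];    /* names, fixed-width strings */
    extern size_t sintrcnt;     /* size of intrcnt[] in bytes */
    extern size_t sintrnames;   /* size of intrnames[] in bytes */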
* Enable NEW_PCIB by default on ia64.  [jhb  2011-07-18  1  -0/+2]
|   Approved by:  re (kib), marcel
* Implement bus_adjust_resource() for the ia64 nexus driver.  [jhb  2011-07-18  1  -17/+34]
|   Reviewed by:  marcel
|   Approved by:  re (kib)
* Don't assume pmap_mapdev() gets called only for memory mapped I/O  [marcel  2011-07-16  1  -3/+29]
|   addresses (i.e. uncacheable). ACPI in particular uses pmap_mapdev()
|   rather excessively (number of calls) just to get a valid KVA. To that
|   end, have pmap_mapdev():
|   1. cache the last result so that we don't waste time on multiple
|      consecutive invocations with the same PA/SZ;
|   2. find the memory descriptor that covers the PA and return NULL if
|      none was found or when the PA is for a common DRAM address;
|   3. use either a region 6 or region 7 KVA, in accordance with the
|      memory attribute.
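A condensed sketch of the three steps above, assuming the efi_md_find() helper from a later commit in this log; the type/attribute tests are simplifications, not the committed code:

    void *
    pmap_mapdev_sketch(vm_paddr_t pa, vm_size_t sz)
    {
            static vm_paddr_t last_pa;
            static vm_size_t last_sz;
            static void *last_va;
            struct efi_md *md;

            /* 1. Reuse the cached answer for repeated PA/SZ lookups. */
            if (last_va != NULL && pa == last_pa && sz == last_sz)
                    return (last_va);

            /* 2. Reject addresses without a descriptor, or plain DRAM. */
            md = efi_md_find(pa);
            if (md == NULL || md->md_type == EFI_MD_TYPE_FREE)
                    return (NULL);

            /* 3. Region 7 (cacheable) for WB memory, region 6 (UC) otherwise. */
            last_va = (md->md_attr & EFI_MD_ATTR_WB) ?
                (void *)IA64_PHYS_TO_RR7(pa) : (void *)IA64_PHYS_TO_RR6(pa);
            last_pa = pa;
            last_sz = sz;
            return (last_va);
    }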
* Don't send EOI to the CPU before we handled the interrupt. This could  [marcel  2011-07-16  2  -4/+7]
|   potentially trigger multiple pending interrupts for level-sensitive
|   interrupts. However, the event timer interrupt does need EOI before
|   being handled to avoid missing clock events.
|
|   These conflicting requirements are handled by having the XIV handler
|   inform the dispatch code whether or not it sent EOI to the CPU. If
|   not, the dispatch code will do it. This allows handlers to send EOI
|   before doing potentially long-running activities, while still having
|   a sensible default behaviour.
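A sketch of that dispatch convention, assuming a nonzero handler return value means "EOI already written" (the table name ia64_xivs is illustrative; ia64_set_eoi()/ia64_srlz_d() are the real wrappers):

    static void
    ia64_dispatch_xiv_sketch(struct thread *td, u_int xiv, struct trapframe *tf)
    {
            struct ia64_xiv *x = &ia64_xivs[xiv];   /* assumed table name */

            /* Handler may write EOI itself before long-running work... */
            if (x->xiv_handler(td, xiv, tf) == 0) {
                    /* ...otherwise the dispatcher provides the default. */
                    ia64_set_eoi(0);
                    ia64_srlz_d();
            }
    }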
* Add a few more helper functions for working with memory descriptors:  [marcel  2011-07-16  2  -4/+54]
|   o efi_md_find() - returns the md that covers the given address
|   o efi_md_last() - returns the last md in the list
|   o efi_md_prev() - returns the md that precedes the given md
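A sketch of efi_md_find() as described, iterating with the pre-existing efi_md_first()/efi_md_next() helpers (loop shape assumed, not the committed code):

    struct efi_md *
    efi_md_find_sketch(vm_paddr_t pa)
    {
            struct efi_md *md;

            /* Walk the EFI memory map for the descriptor covering 'pa'. */
            for (md = efi_md_first(); md != NULL; md = efi_md_next(md)) {
                    if (pa >= md->md_phys &&
                        pa < md->md_phys + md->md_pages * EFI_PAGE_SIZE)
                            return (md);
            }
            return (NULL);
    }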
* Implement basic support for memory attributes. At this time we only  [marcel  2011-07-08  3  -27/+121]
|   distinguish between UC and WB memory, so that we can map the page to
|   either a region 6 address (for UC) or a region 7 address (for WB).
|
|   This change is only now possible because previously we would map
|   regions 6 and 7 with 256MB translations and, on top of that, had the
|   kernel mapped in region 7 using a wired translation. The introduction
|   of the PBVM moved the kernel into its own region, freed up region 7,
|   and allowed us to revert to standard page-sized translations.
|
|   This commit introduces pmap_page_to_va(), which respects the attribute.
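A sketch of pmap_page_to_va() as described (the md.memattr field name is an assumption about the ia64 md_page layout):

    static __inline vm_offset_t
    pmap_page_to_va_sketch(vm_page_t m)
    {
            vm_paddr_t pa = VM_PAGE_TO_PHYS(m);

            /* UC pages get a region 6 KVA, WB pages a region 7 KVA. */
            return ((m->md.memattr == VM_MEMATTR_UNCACHEABLE) ?
                IA64_PHYS_TO_RR6(pa) : IA64_PHYS_TO_RR7(pa));
    }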
* Disable PREEMPTION for now. See also PR ia64/147501.  [marcel  2011-07-04  1  -1/+1]
|
* With retirement of cpumask_t and usage of cpuset_t for representing a  [attilio  2011-07-04  1  -9/+7]
|   mask of CPUs, pc_other_cpus and pc_cpumask become highly inefficient.
|
|   Remove them and replace their usage with custom pc_cpuid magic (as,
|   at the moment, pc_cpumask can easily be represented by (1 << pc_cpuid)
|   and pc_other_cpus by (all_cpus & ~(1 << pc_cpuid))), as sketched below.
|
|   This change is not targeted for MFC because of struct pcpu members
|   removal and dependency on cpumask_t retirement.
|
|   MD review by:   marcel, marius, alc
|   Tested by:      pluknet
|   MD testing by:  marcel, marius, gonzo, andreast
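The replacement idiom, in the pre-cpuset_t int-mask notation the message itself uses (cpumask_t/all_cpus here are the legacy types this series retires):

    /* Reconstruct the old per-CPU masks from pc_cpuid when needed. */
    static __inline cpumask_t
    pc_other_cpus_sketch(u_int cpuid)
    {
            return (all_cpus & ~((cpumask_t)1 << cpuid));
    }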
* When iterating over a paging queue, explicitly check for PG_MARKER, instead  [alc  2011-07-02  1  -1/+1]
|   of relying on zeroed memory being interpreted as an empty PV list.
|
|   Reviewed by:  kib
* Change the management of nested faults by switching to physical  [marcel  2011-06-30  2  -133/+122]
|   addressing while reading or writing the trap frame. It's not possible
|   to guarantee that the one translation cache entry that we depend on
|   is not going to get purged by the CPU. We already know that global
|   shootdowns (ptc.g and/or ptc.ga) can (and will) cause multiple TC
|   entries to get purged, and we initially tried to handle that by
|   serializing kernel entry with these operations. However, we need to
|   serialize kernel exit as well.
|
|   But even if we can serialize, it appears that CPU threads within a
|   core can affect each other's TC entries beyond the global shootdown.
|   This would mean serializing any and all translation cache updates by
|   the threads in a core with the kernel entry and exit of any thread in
|   that core. This is just too painful and complicated.
|
|   Since we already properly coded for the 2 nested faults that we can
|   get, all we need to do is use those to obtain the physical address of
|   the trap frame, switch to physical mode and in that way eliminate any
|   further faults. The trap frame is already aligned to 1KB boundaries
|   to make sure we don't cross the page boundary, so this is safe to do.
|
|   We still need to serialize ptc.g or ptc.ga across CPUs, because the
|   platform can only have 1 such operation outstanding at the same time.
|   We can now use a regular (spin) lock for this.
|
|   Also, it has been observed that we can get nested TLB faults for
|   region 7 virtual addresses. This was unexpected. For now, we enhance
|   the nested TLB fault handler to deal with those as well, but it needs
|   to be understood.
* Add a new option, OBJPR_NOTMAPPED, to vm_object_page_remove(). Passing this  [alc  2011-06-29  1  -2/+2]
|   option to vm_object_page_remove() asserts that the specified range of
|   pages is not mapped, or more precisely that none of these pages have
|   any managed mappings. Thus, vm_object_page_remove() need not call
|   pmap_remove_all() on the pages.
|
|   This change not only saves time by eliminating pointless calls to
|   pmap_remove_all(), but it also eliminates an inconsistency in the use
|   of pmap_remove_all() versus related functions, like
|   pmap_remove_write(). It eliminates harmless but pointless calls to
|   pmap_remove_all() that were being performed on PG_UNMANAGED pages.
|
|   Update all of the existing assertions on pmap_remove_all() to reflect
|   this change.
|
|   Reviewed by:  kib
* Oops. The sec field of struct bintime is *not* a 32-bit type.  [marcel  2011-06-25  1  -1/+1]
|   It's time_t, which is 64 bits on ia64.
* Define the minimum fractional period in terms of hz. We know hz is  [marcel  2011-06-25  1  -2/+2]
|   a magnitude smaller than itc_freq. A minimum period of 10*hz is
|   sufficient precision. As a side-effect, the number of clocks per
|   second, when the machine is idle, dropped by more than 50%.
|
|   Be anal and define the maximum period to be at least 4G seconds. With
|   a 64-bit counter and an ITC frequency that's expected to always be
|   less than 4GHz, it takes longer than that to wrap around.
* Replace the original copyright notice with my own. Everything in  [marcel  2011-06-25  1  -30/+22]
|   this file was written by me and has no bearing on the initial or
|   original version.
* Update copyright.  [marcel  2011-06-25  1  -1/+1]
|
* Switch to the event timers infrastructure. This includes:  [marcel  2011-06-25  7  -106/+136]
|   o Setting td_intr_frame to the XIV's trap frame, because it's
|     referenced by the ET event handler.
|   o Signalling EOI to the CPU before calling the registered XIV
|     handlers. This prevents lost ITC interrupts, which cause starvation
|     in one-shot mode.
|   o Adding support for IPI_HARDCLOCK with corresponding per-CPU
|     counters.
|   o Having the APs call cpu_initclocks() so as to limit the scattering
|     of clock-related initialization. cpu_initclocks() calls the
|     <self>_bsp() or <self>_ap() version accordingly.
|   o Uncommenting the ET clock handling in cpu_idle().
|   o Updating the DDB 'show pcpu' output for the new MD fields.
|   o An entirely rewritten ia64_ih_clock() (sketched after this entry).
|     Note that we don't create as many clock XIVs as we have CPUs, as is
|     done on PowerPC. It doesn't scale: we can only have 240 XIVs and we
|     can have more CPUs than that. There's a single intrcnt index for
|     the cumulative clock ticks, and we keep per-CPU counts in the PCPU
|     stats structure.
|   o Registering the ITC by hooking SI_SUB_CONFIGURE (2nd order).
|
|   Open issues:
|   o Clock interrupts can still be lost. Some tweaking is still
|     necessary.
|
|   Thanks to: mav@ for his support, feedback and explanations.
|
|   ET stats while committing:
|     eris% sysctl machdep.cpu | grep nclks
|     machdep.cpu.0.nclks: 24007
|     machdep.cpu.1.nclks: 22895
|     machdep.cpu.2.nclks: 13523
|     machdep.cpu.3.nclks: 9342
|     machdep.cpu.4.nclks: 9103
|     machdep.cpu.5.nclks: 9298
|     machdep.cpu.6.nclks: 10039
|     machdep.cpu.7.nclks: 9479
|     eris% vmstat -i | grep clock
|     clock    108599    50
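A hedged sketch of the ia64_ih_clock() shape described above; the et/intrcnt field names are illustrative, and the real handler also programs the next one-shot deadline:

    static u_int
    ia64_ih_clock_sketch(struct thread *td, u_int xiv, struct trapframe *tf)
    {
            struct eventtimer *et = &ia64_clock_et;  /* assumed name */

            td->td_intr_frame = tf;         /* the ET handler needs it */
            PCPU_INC(md.stats.pcs_nclks);   /* per-CPU tick count */
            intrcnt[ia64_clock_idx]++;      /* single cumulative slot */
            if (et->et_active)
                    et->et_event_cb(et, et->et_arg);
            return (0);
    }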
* Unblock the outgoing thread after we performed pmap_switch() to  [marcel  2011-06-23  1  -2/+2]
|   switch the region registers. pmap_switch() returns the pmap for which
|   the region registers are currently programmed, and that pmap needs to
|   be re-programmed on the CPU on which the outgoing thread is next
|   switched in.
|
|   This change does not noticeably change anything or fix known bugs,
|   but does give me a warm fuzzy feeling by being more correct.
* Use a non-standard page size that is supported.  [alc  2011-06-21  1  -1/+1]
|
* Improve on style(9).  [marcel  2011-06-17  1  -94/+73]
|
* Properly serialize the global shootdown with the instruction  [marcel  2011-06-17  2  -2/+15]
|   stream of the local processor. Also explicitly invalidate the ALAT.
|   This is done on the other CPUs in the coherence domain by virtue of
|   the ptc.ga instruction, but does not apply to the local CPU.
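A sketch of the sequence in raw inline assembly (FreeBSD wraps these as ia64_ptc_ga()/ia64_srlz_i()/ia64_invala(); the call site in pmap differs):

    static __inline void
    ptc_ga_sketch(uint64_t va, uint64_t log2size)
    {
            /* Global purge; bits 7:2 of the 2nd operand carry the page size. */
            __asm __volatile("ptc.ga %0,%1;;" ::
                "r"(va), "r"(log2size << 2) : "memory");
            __asm __volatile("srlz.i;;");   /* serialize local insn stream */
            __asm __volatile("invala;;");   /* local ALAT not covered by ptc.ga */
    }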
* Add the model number for the Montvale processor (marketed as Itanium 2 9100).  [marcel  2011-06-11  1  -0/+3]
|   At this time we're missing just one: Tukwila (Itanium 2 9300).
* MFC  [attilio  2011-06-07  1  -0/+2]
|\
| * Call set_cputicker() to have the time counter use the ITC register.  [marcel  2011-06-07  1  -0/+2]
| |   Note that the ITC frequency is fixed.
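The call plausibly looks like this (set_cputicker() is the real KPI; the frequency expression is an assumption, and the final 0 declares the tick rate invariant):

    /* Use the per-CPU interval time counter as the cpu ticker. */
    set_cputicker(ia64_get_itc, (uint64_t)itc_freq, 0);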
* | MFC  [attilio  2011-06-06  3  -18/+47]
|\ \
| |/
| * Improve cpu_idle():  [marcel  2011-06-06  3  -18/+47]
| |   o cpu_idle_hook is expected to be called with interrupts disabled
| |     and re-enables interrupts on return.
| |   o Sync with x86: don't idle when the CPU has runnable tasks.
| |   o Have callers of ia64_call_pal_static() disable and re-enable
| |     interrupts themselves.
| |   o Add, but compile out, support for idle mode. This will be enabled
| |     at some later time, after proper testing.
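A sketch of the cpu_idle() contract above, assuming the hook re-enables interrupts itself (the PAL halt path is omitted):

    void
    cpu_idle_sketch(int busy)
    {
            register_t is;

            is = intr_disable();
            if (sched_runnable()) {
                    /* Don't idle when there is work to do. */
                    intr_restore(is);
                    return;
            }
            if (cpu_idle_hook != NULL)
                    (*cpu_idle_hook)();     /* entered w/ interrupts off, */
            else                            /* returns with them on      */
                    intr_restore(is);
    }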
| * On multi-core, multi-threaded PPC systems, it is important that the threads  [nwhitehorn  2011-05-31  3  -6/+6]
| |   be brought up in the order they are enumerated in the device tree
| |   (in particular, that thread 0 on each core be brought up first). The
| |   SLIST through which we loop to start the CPUs has all of its entries
| |   added with SLIST_INSERT_HEAD(), which means it is in reverse order
| |   of enumeration, and so AP startup would always fail in such
| |   situations (causing a machine check or RTAS failure). Fix this by
| |   changing the SLIST into an STAILQ and inserting new CPUs at the end.
| |
| |   Reviewed by:  jhb
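The core of the fix, in outline (cpuhead/pc_allcpu are the real pcpu(9) names):

    void
    pcpu_enqueue_sketch(struct pcpu *pcpu)
    {
            /* Append, so cpuhead preserves device-tree enumeration order
             * (thread 0 of each core first). */
            STAILQ_INSERT_TAIL(&cpuhead, pcpu, pc_allcpu);
    }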
* | MFC  [attilio  2011-05-31  3  -6/+6]
| |
* | MFC  [attilio  2011-05-14  4  -23/+25]
|\ \
| |/
| * Prefer switching the memory stack from user to kernel *before* switching  [marcel  2011-05-14  1  -3/+4]
| |   the register stack. While the ordering doesn't matter, it creates an
| |   invariant not previously there: the memory stack pointer will always
| |   be larger than the register stack pointer. With this invariant in
| |   place, it's easier to add instrumentation code that detects a stack
| |   overflow, because in such a scenario the memory and register stack
| |   pointers have crossed each other.
| |
| |   Aside: basic kernel operation needs about half the stack size (~16K)
| |   at most. We have plenty of head room on the kernel stack...
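A hypothetical instrumentation check the invariant enables (trap-frame field names from ia64's struct trapframe; the assertion itself is not in the commit):

    static __inline void
    kstack_cross_check_sketch(struct trapframe *tf)
    {
            /* Memory stack grows down, RSE backing store grows up; if sp
             * is no longer above bspstore, one of them overflowed. */
            KASSERT(tf->tf_special.sp > tf->tf_special.bspstore,
                ("kernel memory and register stacks crossed"));
    }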
| * Sharpening the saw:  [marcel  2011-05-14  1  -8/+12]
| |   o Clobber the register that holds the restart token immediately
| |     after crossing the restart point. This prevents false positives
| |     (i.e. a nested exception we don't know can happen being treated
| |     as one we know, by virtue of a lingering restart token).
| |   o Now that the bootstrap kernel stack is free, switch onto it and
| |     call trap() for nested traps that we don't know about. In trap()
| |     we panic(), so that we can analyze the condition.
| * Be pedantic: mark the pcpu pointer (= register r13) itself as volatile.  [marcel  2011-05-14  1  -1/+1]
| |
| * Turn ia64_srlz_d() and ia64_srlz_i() into defines so that the code is  [marcel  2011-05-14  1  -11/+8]
| |   still correct when inlining is disabled.
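The defines, as described (the double semicolon ends an instruction group; the exact spelling in ia64_cpu.h may differ slightly):

    #define ia64_srlz_d()   __asm __volatile("srlz.d")
    #define ia64_srlz_i()   __asm __volatile("srlz.i;;")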
| * Move the ZERO_REGION_SIZE to a machine-dependent file, as on many  [mdf  2011-05-13  1  -0/+2]
| |   architectures (i386, for example) the virtual memory space may be
| |   constrained enough that 2MB is a large chunk. Use 64K for arches
| |   other than amd64 and ia64, with special handling for sparc64 due to
| |   differing hardware.
| |
| |   Also commit the comment changes to kmem_init_zero_region() that I
| |   missed due to not saving the file. (Darn the unfamiliar development
| |   environment.)
| |
| |   Arch maintainers, please feel free to adjust ZERO_REGION_SIZE as you
| |   see fit.
| |
| |   Requested by:  alc
| |   MFC after:     1 week
| |   MFC with:      r221853
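What the MD knob looks like, per the sizes named above (a single #if is an illustrative condensation; the real definitions live in each arch's vmparam.h):

    #if defined(__amd64__) || defined(__ia64__)
    #define ZERO_REGION_SIZE        (2 * 1024 * 1024)       /* 2MB */
    #else
    #define ZERO_REGION_SIZE        (64 * 1024)             /* 64KB */
    #endif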
* | MFC  [attilio  2011-05-13  1  -0/+2]
| |
* | Fix remaining bits that were mistakenly left unconverted.  [attilio  2011-05-13  1  -3/+4]
| |   Reported by:  sbruno
* | MFC  [attilio  2011-05-07  1  -0/+22]
|\ \
| |/
| * In pmap_kextract(), return the physical address for PBVM virtual  [marcel  2011-05-07  1  -0/+22]
| |   addresses as well (incl. the PBVM page table).
| * Retire isa_setup_intr() and isa_teardown_intr() and use the generic bus  [jhb  2011-05-06  1  -22/+0]
| |   versions instead. They were never needed, as bus_generic_setup_intr()
| |   and bus_generic_teardown_intr() had been changed to pass the original
| |   child device up in r42734, but the ISA bus was not converted to
| |   new-bus until r45720.
* | MFC  [attilio  2011-05-06  1  -22/+0]
| |
* | Commit the support for removing cpumask_t and replacing it directly with  [attilio  2011-05-05  3  -9/+10]
|/
|   cpuset_t objects.
|
|   That is going to offer the underlying support for a simple bump of
|   MAXCPU and then support for a number of CPUs > 32 (the limit today).
|
|   Right now, cpumask_t is an int, 32 bits on all our supported
|   architectures. cpuset_t, on the other side, is implemented as an
|   array of longs, and is easily extendible by definition.
|
|   The architectures touched by this commit are the following:
|   - amd64
|   - i386
|   - pc98
|   - arm
|   - ia64
|   - XEN
|   while the others are still missing. Userland is believed to be fully
|   converted with the changes contained here.
|
|   Some technical notes:
|   - This commit may be considered an ABI nop for all the architectures
|     different from amd64 and ia64 (and sparc64 in the future).
|   - Per-CPU members which are now converted to cpuset_t need to be
|     accessed avoiding migration, because the size of cpuset_t should be
|     considered unknown.
|   - The size of cpuset_t objects differs between kernel and userland
|     (this is primarily done in order to leave some more space in
|     userland to cope with KBI extensions). If you need to access kernel
|     cpuset_t from userland, please refer to the example in this patch
|     on how to do that correctly (kgdb may be a good source, for
|     example).
|   - Support for other architectures is going to be added soon.
|   - Only MAXCPU for amd64 is bumped now.
|
|   The patch has been tested by sbruno and Nicholas Esborn on Opteron
|   4 x 12 pack CPUs. More testing on big SMP is expected to come soon.
|   pluknet tested the patch with his 8-ways on both amd64 and i386.
|
|   Tested by:    pluknet, sbruno, gianni, Nicholas Esborn
|   Reviewed by:  jeff, jhb, sbruno
* Don't use the whole region 5 for KVA, because the CPU may not implement all  [marcel  2011-05-02  1  -1/+2]
|   of the 61 bits available within the region for virtual addressing.
|   Since there's no good way for us to map out the gap in the virtual
|   address space, limit KVA to the architectural minimum of implemented
|   address bits. This still gives us 1 petabyte of KVA, so no worries.
* Stop linking against a direct-mapped virtual address and instead  [marcel  2011-04-30  12  -304/+520]
|   use the PBVM. This eliminates the implied hardcoding of the physical
|   address at which the kernel needs to be loaded. Using the PBVM makes
|   it possible to load the kernel irrespective of the physical memory
|   organization, and allows us to replicate kernel text on NUMA machines.
|
|   While here, reduce the direct-mapped page size to the kernel's page
|   size so that we can support memory attributes better.
* Change rman_manage_region() to actually honor the rm_start and rm_end  [jhb  2011-04-29  1  -1/+1]
|   constraints on the rman and reject attempts to manage a region that
|   is out of range.
|   - Fix various places that set rm_end incorrectly (to ~0 or ~0u
|     instead of ~0ul).
|   - To preserve existing behavior, change rman_init() to set rm_start
|     and rm_end to allow managing the full range (0 to ~0ul) if they are
|     not set by the caller when rman_init() is called.
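The added check plausibly reduces to a bounds test at the top of rman_manage_region() (a sketch, not the committed diff):

    int
    rman_manage_region_sketch(struct rman *rm, u_long start, u_long end)
    {
            /* Reject regions outside the rman's configured range. */
            if (start < rm->rm_start || end > rm->rm_end || start > end)
                    return (EINVAL);
            /* ... original insertion logic follows ... */
            return (0);
    }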
* Add watchdog patting during the (shutdown time) disk syncing and  [attilio  2011-04-28  1  -0/+8]
|   disk dumping. With the option SW_WATCHDOG on, these operations are
|   doomed to make the watchdog fire if they take too long.
|
|   I implemented the stubs this way because I really want the
|   wdog_kern_* KPI to not be dependent on SW_WATCHDOG being on (really,
|   the option only enables watchdog activation in hardclock) and also to
|   avoid calling them when not necessary (avoiding involuntary watchdog
|   activations).
|
|   Sponsored by:    Sandvine Incorporated
|   Discussed with:  emaste, des
|   MFC after:       2 weeks
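The patting presumably reduces to periodic calls like this inside the sync/dump loops (wdog_kern_pat() and WD_LASTVAL are the real watchdog(9) names):

    #include <sys/watchdog.h>

    /* Re-arm the watchdog with its last programmed timeout while a
     * long-running shutdown-time loop makes progress. */
    wdog_kern_pat(WD_LASTVAL);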
* This patch changes head so that the default NFS client is now the new  [rmacklem  2011-04-27  1  -2/+2]
|   NFS client (which I guess is no longer experimental). The fstype
|   "newnfs" is now "nfs", and the regular/old NFS client is now fstype
|   "oldnfs". Although mounts via fstype "nfs" will usually work without
|   userland changes, an updated mount_nfs(8) binary is needed for
|   kernels built with "options NFSCL" but not "options NFSCLIENT".
|   Updated mount_nfs(8) and mount(8) binaries are needed to do mounts
|   for fstype "oldnfs".
|
|   The GENERIC kernel configs have been changed to use options NFSCL
|   and NFSD (the new client and server) instead of NFSCLIENT and
|   NFSSERVER. For kernels being used on diskless NFS root systems,
|   "options NFSCL" must be in the kernel config.
|
|   Discussed on freebsd-fs@.
* Remove prototypes of non-existent functions.  [marcel  2011-04-25  1  -5/+0]
|
* Switch the GENERIC kernels for all architectures to the new CAM-based ATA  [mav  2011-04-24  1  -10/+9]
|   stack. It means that all legacy ATA drivers are disabled and replaced
|   by respective CAM drivers. If you are using ATA device names in
|   /etc/fstab or other places, make sure to update them respectively
|   (adX -> adaY, acdX -> cdY, afdX -> daY, astX -> saY, where the 'Y's
|   are the sequential numbers for each type in order of detection,
|   unless configured otherwise with tunables; see cam(4)).
|
|   ataraid(4) functionality is now supported by the RAID GEOM class. To
|   use it you can load the geom_raid kernel module and use the graid(8)
|   tool for management. Instead of /dev/arX device names, use
|   /dev/raid/rX.
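For example, an /etc/fstab root entry would change along these lines (the device numbers here are hypothetical; on a given machine they depend on detection order):

    # before (legacy ata(4) name)
    /dev/ad4s1a    /    ufs    rw    1    1
    # after (CAM-based name)
    /dev/ada0s1a   /    ufs    rw    1    1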
* Use the new arch_loadaddr I/F to align ELF objects to PBVM page  [marcel  2011-04-03  1  -1/+5]
|   boundaries. For good measure, align all other objects to cache line
|   boundaries.
|
|   Use the new arch_loadseg I/F to keep track of kernel text and data so
|   that we can wire as much of it as possible. It is the responsibility
|   of the kernel to link critical (read: IVT-related) code and data at
|   the front of the respective segment, so that it's covered by TRs
|   before the kernel has a chance to add more translations.
|
|   Use a better way of determining whether we're loading a legacy kernel
|   or not. We can't check for the presence of the PBVM page table,
|   because we may have unloaded that kernel and loaded an older (legacy)
|   kernel after that. Simply use the latest load address for it.
* Add support for executing the FreeBSD 1/i386 a.out binaries on amd64.  [kib  2011-04-01  2  -0/+20]
|   In particular:
|   - implement compat shims for old stat(2) variants and
|     ogetdirentries(2);
|   - implement delivery of signals with ancient stack frame layout and
|     corresponding sigreturn(2);
|   - implement old getpagesize(2);
|   - provide a user-mode trampoline and LDT call gate for lcall $7,$0;
|   - port the a.out image activator and connect it to the build as a
|     module on amd64.
|
|   The changes are hidden under COMPAT_43.
|
|   MFC after:  1 month