path: root/include/asm-ia64
Commit message | Author | Age | Files | Lines
* [PATCH] Configurable NODES_SHIFT | Yasunori Goto | 2006-04-11 | 1 | -20/+0

    Current implementations define NODES_SHIFT in include/asm-xxx/numnodes.h for each arch. Its definition is sometimes configurable. Indeed, ia64 defines 5 NODES_SHIFT values in the current git tree. But it looks a bit messy.

    SGI-SN2 (ia64) systems require 1024 nodes, and the number of nodes already has been changeable by config. Suitable node numbers may change in the future even on other architectures. So, I made the number of nodes configurable.

    This patch set defines just a default value for each arch which needs multiple nodes, except ia64. But it is easy to change to configurable if necessary.

    On ia64 the number of nodes can already be configured in the generic ia64 and SN2 configs. But NODES_SHIFT is defined for DIG64 and HP's machines too. So, I changed it so that all platforms can be configured via CONFIG_NODES_SHIFT. It would be simpler.

    See also: http://marc.theaimsgroup.com/?l=linux-kernel&m=114358010523896&w=2

    Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com> Cc: Hirokazu Takata <takata@linux-m32r.org> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Andi Kleen <ak@muc.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Richard Henderson <rth@twiddle.net> Cc: Kyle McMartin <kyle@mcmartin.ca> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Jack Steiner <steiner@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
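For orientation, the end result of a Kconfig-driven shift is roughly the following shape (an illustrative sketch of include/linux/numa.h after the change, not copied from the patch):

    /* Sketch: how CONFIG_NODES_SHIFT feeds the node-count limit */
    #ifdef CONFIG_NODES_SHIFT
    #define NODES_SHIFT     CONFIG_NODES_SHIFT
    #else
    #define NODES_SHIFT     0
    #endif

    #define MAX_NUMNODES    (1 << NODES_SHIFT)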
* [IA64] Avoid "u64 foo : 32;" for gcc3 vs. gcc4 compatibility | Tony Luck | 2006-03-31 | 1 | -3/+3

    gcc3 thinks that a 32-bit field of a u64 type is itself a u64, so should be printed with "%ld". gcc4 thinks it needs just "%d". Make both versions happy by avoiding this construct.

    Signed-off-by: Tony Luck <tony.luck@intel.com>
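A hypothetical illustration of the construct being avoided; the struct and field names below are invented, not the ones touched by the commit:

    #include <linux/types.h>

    /* Ambiguous: gcc3 promotes the 32-bit bitfield to u64 ("%ld"),
     * gcc4 treats it as unsigned int ("%d"), so printk formats differ. */
    struct example_info {
            u64 index : 32;
            u64 count : 32;
    };

    /* Avoids the ambiguity: plain 32-bit members format the same everywhere */
    struct example_info_fixed {
            u32 index;
            u32 count;
    };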
* [IA64] Export cpu cache info by sysfs | Zhang, Yanmin | 2006-03-30 | 1 | -0/+28

    The patch exports 8 attributes of cpu cache info under /sys/devices/system/cpu/cpuX/cache/indexX:
        1) level
        2) type
        3) coherency_line_size
        4) ways_of_associativity
        5) size
        6) shared_cpu_map
        7) attributes
        8) number_of_sets: number_of_sets = size / ways_of_associativity / coherency_line_size.

    Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 | Linus Torvalds | 2006-03-30 | 1 | -0/+4

    * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
        [IA64] ioremap() should prefer WB over UC
        [IA64] Add __mca_table to the DISCARD list in gate.lds
        [IA64] Move __mca_table out of the __init section
        [IA64] simplify some condition checks in iosapic_check_gsi_range
        [IA64] correct some messages and fixes some minor things
        [IA64-SGI] fix for-loop in sn_hwperf_geoid_to_cnode()
        [IA64-SGI] sn_hwperf use of num_online_cpus()
        [IA64] optimize flush_tlb_range on large numa box
        [IA64] lazy_mmu_prot_update needs to be aware of huge pages
| * [IA64] Add __mca_table to the DISCARD list in gate.lds | Jes Sorensen | 2006-03-30 | 1 | -0/+4

    Add __mca_table to the DISCARD list for the gate.lds linker script to avoid broken linker references when linking the final vmlinux file. Also add a comment to include/asm-ia64/asmmacros.h to avoid anyone else hitting this problem in the future.

    Credits to James Bottomley <James.Bottomley@SteelEye.com> for spotting the DISCARD list in gate.lds.S

    Signed-off-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* | [PATCH] Introduce sys_splice() system call | Jens Axboe | 2006-03-30 | 1 | -1/+2

    This adds support for the sys_splice system call. Using a pipe as a transport, it can connect to files or sockets (the latter as output only).

    From the splice.c comments:

        "splice": joining two ropes together by interweaving their strands.

        This is the "extended pipe" functionality, where a pipe is used as an arbitrary in-memory buffer. Think of a pipe as a small kernel buffer that you can use to transfer data from one end to the other.

        The traditional unix read/write is extended with a "splice()" operation that transfers data buffers to or from a pipe buffer.

    Named by Larry McVoy, original implementation from Linus, extended by Jens to support splicing to files and fixing the initial implementation bugs.

    Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
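As a rough user-space sketch of the "pipe as transport" idea (this assumes the glibc wrapper that appeared after the syscall landed; error handling is minimal):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    /* Copy a file to stdout by splicing file -> pipe -> stdout */
    int copy_to_stdout(const char *path)
    {
            int fd = open(path, O_RDONLY);
            int pipefd[2];
            ssize_t n;

            if (fd < 0 || pipe(pipefd) < 0)
                    return -1;
            while ((n = splice(fd, NULL, pipefd[1], NULL, 4096, 0)) > 0)
                    splice(pipefd[0], NULL, STDOUT_FILENO, NULL, n, 0);
            close(fd);
            return n < 0 ? -1 : 0;
    }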
* | [PATCH] Notifier chain update: API changes | Alan Stern | 2006-03-27 | 1 | -2/+2

    The kernel's implementation of notifier chains is unsafe. There is no protection against entries being added to or removed from a chain while the chain is in use. The issues were discussed in this thread: http://marc.theaimsgroup.com/?l=linux-kernel&m=113018709002036&w=2

    We noticed that notifier chains in the kernel fall into two basic usage classes: "Blocking" chains are always called from a process context and the callout routines are allowed to sleep; "Atomic" chains can be called from an atomic context and the callout routines are not allowed to sleep.

    We decided to codify this distinction and make it part of the API. Therefore this set of patches introduces three new, parallel APIs: one for blocking notifiers, one for atomic notifiers, and one for "raw" notifiers (which is really just the old API under a new name). New kinds of data structures are used for the heads of the chains, and new routines are defined for registration, unregistration, and calling a chain. The three APIs are explained in include/linux/notifier.h and their implementation is in kernel/sys.c.

    With atomic and blocking chains, the implementation guarantees that the chain links will not be corrupted and that chain callers will not get messed up by entries being added or removed. For raw chains the implementation provides no guarantees at all; users of this API must provide their own protections. (The idea was that situations may come up where the assumptions of the atomic and blocking APIs are not appropriate, so it should be possible for users to handle these things in their own way.)

    There are some limitations, which should not be too hard to live with. For atomic/blocking chains, registration and unregistration must always be done in a process context since the chain is protected by a mutex/rwsem. Also, a callout routine for a non-raw chain must not try to register or unregister entries on its own chain. (This did happen in a couple of places and the code had to be changed to avoid it.)

    Since atomic chains may be called from within an NMI handler, they cannot use spinlocks for synchronization. Instead we use RCU. The overhead falls almost entirely in the unregister routine, which is okay since unregistration is much less frequent than calling a chain.

    Here is the list of chains that we adjusted and their classifications. None of them use the raw API, so for the moment it is only a placeholder.

    ATOMIC CHAINS
        arch/i386/kernel/traps.c:                   i386die_chain
        arch/ia64/kernel/traps.c:                   ia64die_chain
        arch/powerpc/kernel/traps.c:                powerpc_die_chain
        arch/sparc64/kernel/traps.c:                sparc64die_chain
        arch/x86_64/kernel/traps.c:                 die_chain
        drivers/char/ipmi/ipmi_si_intf.c:           xaction_notifier_list
        kernel/panic.c:                             panic_notifier_list
        kernel/profile.c:                           task_free_notifier
        net/bluetooth/hci_core.c:                   hci_notifier
        net/ipv4/netfilter/ip_conntrack_core.c:     ip_conntrack_chain
        net/ipv4/netfilter/ip_conntrack_core.c:     ip_conntrack_expect_chain
        net/ipv6/addrconf.c:                        inet6addr_chain
        net/netfilter/nf_conntrack_core.c:          nf_conntrack_chain
        net/netfilter/nf_conntrack_core.c:          nf_conntrack_expect_chain
        net/netlink/af_netlink.c:                   netlink_chain

    BLOCKING CHAINS
        arch/powerpc/platforms/pseries/reconfig.c:  pSeries_reconfig_chain
        arch/s390/kernel/process.c:                 idle_chain
        arch/x86_64/kernel/process.c:               idle_notifier
        drivers/base/memory.c:                      memory_chain
        drivers/cpufreq/cpufreq.c:                  cpufreq_policy_notifier_list
        drivers/cpufreq/cpufreq.c:                  cpufreq_transition_notifier_list
        drivers/macintosh/adb.c:                    adb_client_list
        drivers/macintosh/via-pmu.c:                sleep_notifier_list
        drivers/macintosh/via-pmu68k.c:             sleep_notifier_list
        drivers/macintosh/windfarm_core.c:          wf_client_list
        drivers/usb/core/notify.c:                  usb_notifier_list
        drivers/video/fbmem.c:                      fb_notifier_list
        kernel/cpu.c:                               cpu_chain
        kernel/module.c:                            module_notify_list
        kernel/profile.c:                           munmap_notifier
        kernel/profile.c:                           task_exit_notifier
        kernel/sys.c:                               reboot_notifier_list
        net/core/dev.c:                             netdev_chain
        net/decnet/dn_dev.c:                        dnaddr_chain
        net/ipv4/devinet.c:                         inetaddr_chain

    It's possible that some of these classifications are wrong. If they are, please let us know or submit a patch to fix them. Note that any chain that gets called very frequently should be atomic, because the rwsem read-locking used for blocking chains is very likely to incur cache misses on SMP systems. (However, if the chain's callout routines may sleep then the chain cannot be atomic.)

    The patch set was written by Alan Stern and Chandra Seetharaman, incorporating material written by Keith Owens and suggestions from Paul McKenney and Andrew Morton.

    [jes@sgi.com: restructure the notifier chain initialization macros]

    Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com> Signed-off-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
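A minimal sketch of the blocking variant of the API described above; the chain, callback, and function names are invented for illustration:

    #include <linux/notifier.h>

    /* Head for a chain whose callouts may sleep (process context only) */
    static BLOCKING_NOTIFIER_HEAD(example_chain);

    static int example_event(struct notifier_block *nb,
                             unsigned long action, void *data)
    {
            /* react to 'action'; must not (un)register on its own chain */
            return NOTIFY_OK;
    }

    static struct notifier_block example_nb = {
            .notifier_call = example_event,
    };

    static int example_register(void)
    {
            /* registration must happen in process context */
            return blocking_notifier_chain_register(&example_chain, &example_nb);
    }

    static void example_fire(void)
    {
            blocking_notifier_call_chain(&example_chain, 0, NULL);
    }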
* | [PATCH] ia64: add ptr_to_compat() | Ingo Molnar | 2006-03-27 | 1 | -0/+6

    Add ptr_to_compat() to ia64 - needed by the robust-futex code.

    Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
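For reference, ptr_to_compat() and its inverse compat_ptr() convert between native user pointers and 32-bit compat representations; a hedged sketch of typical use in a compat path (the helper names are invented):

    #include <linux/compat.h>

    /* Store a native __user pointer into a 32-bit slot for a compat task */
    static void store_user_ptr(void __user *uptr, compat_uptr_t *slot)
    {
            *slot = ptr_to_compat(uptr);      /* native -> 32-bit handle */
    }

    /* Recover the native __user pointer from the compat value */
    static void __user *load_user_ptr(compat_uptr_t cptr)
    {
            return compat_ptr(cptr);          /* 32-bit handle -> native */
    }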
* | [PATCH] unify pfn_to_page: ia64 pfn_to_page | KAMEZAWA Hiroyuki | 2006-03-27 | 1 | -5/+13

    ia64 has the special config option CONFIG_VIRTUAL_MEM_MAP. (Is CONFIG_DISCONTIGMEM=y with CONFIG_VIRTUAL_MEM_MAP unset a bug?)

    Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] bitops: ia64: use generic bitops | Akinobu Mita | 2006-03-26 | 1 | -50/+17

    - remove generic_fls64()
    - remove find_{next,first}{,_zero}_bit()
    - remove ext2_{set,clear,test,find_first_zero,find_next_zero}_bit()
    - remove minix_{test,set,test_and_clear,test,find_first_zero}_bit()
    - remove sched_find_first_bit()

    Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] bitops: use non atomic operations for minix_*_bit() and ext2_*_bit() | Akinobu Mita | 2006-03-26 | 1 | -5/+5

    Bitmap functions for the minix filesystem and the ext2 filesystem, except ext2_set_bit_atomic() and ext2_clear_bit_atomic(), do not require the atomic guarantees. But these are defined by using atomic bit operations on several architectures (cris, frv, h8300, ia64, m32r, m68k, m68knommu, mips, s390, sh, sh64, sparc, sparc64, v850, and xtensa). This patch switches to non atomic bit operations.

    Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
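To illustrate the distinction in a generic sketch (not code from the patch): the double-underscore bitops skip the atomic read-modify-write sequence and are only safe when the caller already serializes access, e.g. under the filesystem's own lock.

    #include <linux/bitops.h>

    static unsigned long example_bitmap[4];   /* illustrative bitmap */

    static void mark_block_used(int nr)
    {
            __set_bit(nr, example_bitmap);    /* non-atomic: caller must serialize */
    }

    static void mark_block_used_atomic(int nr)
    {
            set_bit(nr, example_bitmap);      /* atomic read-modify-write */
    }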
* [PATCH] EFI: keep physical table addresses in efi structure | Bjorn Helgaas | 2006-03-26 | 1 | -1/+1

    Almost all users of the table addresses from the EFI system table want physical addresses. So rather than doing the pa->va->pa conversion, just keep physical addresses in struct efi.

    This fixes a DMI bug: the efi structure contained the physical SMBIOS address on x86 but the virtual address on ia64, so dmi_scan_machine() used ioremap() on a virtual address on ia64.

    This is essentially the same as an earlier patch by Matt Tolentino: http://marc.theaimsgroup.com/?l=linux-kernel&m=112130292316281&w=2 except that this changes all table addresses, not just ACPI addresses. Matt's original patch was backed out because it caused MCAs on HP sx1000 systems. That problem is resolved by the ioremap() attribute checking added for ia64.

    Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Cc: Matt Domsch <Matt_Domsch@dell.com> Cc: "Tolentino, Matthew E" <matthew.e.tolentino@intel.com> Cc: "Brown, Len" <len.brown@intel.com> Cc: Andi Kleen <ak@muc.de> Acked-by: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] ia64: ioremap: check EFI for valid memory attributes | Bjorn Helgaas | 2006-03-26 | 1 | -13/+2

    Check the EFI memory map so we can use the correct memory attributes for ioremap(). Previously, we always used uncacheable access, which blows up on some machines for regular system memory.

    Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Cc: Matt Domsch <Matt_Domsch@dell.com> Cc: "Tolentino, Matthew E" <matthew.e.tolentino@intel.com> Cc: "Brown, Len" <len.brown@intel.com> Cc: Andi Kleen <ak@muc.de> Acked-by: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] EFI, /dev/mem: simplify efi_mem_attribute_range() | Bjorn Helgaas | 2006-03-26 | 1 | -2/+2

    Pass the size, not a pointer to the size, to efi_mem_attribute_range(). This function validates memory regions for the /dev/mem read/write/mmap paths. The pointer allows arches to reduce the size of the range, but I think that's unnecessary complexity. Simplifying it will let me use efi_mem_attribute_range() to improve the ia64 ioremap() implementation.

    Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Cc: Matt Domsch <Matt_Domsch@dell.com> Cc: "Tolentino, Matthew E" <matthew.e.tolentino@intel.com> Cc: "Brown, Len" <len.brown@intel.com> Cc: Andi Kleen <ak@muc.de> Acked-by: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] ia64: use i386 dmi_scan.c | Matt Domsch | 2006-03-26 | 2 | -0/+11

    Enable DMI table parsing on ia64.

    Andi Kleen has a patch in his x86_64 tree which enables the use of i386 dmi_scan.c on x86_64. dmi_scan.c functions are being used by the drivers/char/ipmi/ipmi_si_intf.c driver for autodetecting the ports or memory spaces where the IPMI controllers may be found.

    This patch adds equivalent changes for ia64 as to what is in the x86_64 tree. In addition, I reworked the DMI detection, such that on EFI-capable systems, it uses the efi.smbios pointer to find the table, rather than brute-force searching from 0xF0000. On non-EFI systems, it continues the brute-force search.

    My test system, an Intel S870BN4 'Tiger4', aka Dell PowerEdge 7250, with latest BIOS, does not list the IPMI controller in the ACPI namespace, nor does it have an ACPI SPMI table. Also note, currently shipping Dell x8xx EM64T servers don't have these either, so DMI is the only method for obtaining the address of the IPMI controller.

    Signed-off-by: Matt Domsch <Matt_Domsch@dell.com> Acked-by: "Luck, Tony" <tony.luck@intel.com> Cc: Andi Kleen <ak@muc.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 | Linus Torvalds | 2006-03-25 | 13 | -25/+47

    * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
        [IA64] New IA64 core/thread detection patch
        [IA64] Increase max node count on SN platforms
        [IA64] Increase max node count on SN platforms
        [IA64] Increase max node count on SN platforms
        [IA64] Increase max node count on SN platforms
        [IA64] Tollhouse HP: IA64 arch changes
        [IA64] cleanup dig_irq_init
        [IA64] MCA recovery: kernel context recovery table
        IA64: Use early_parm to handle mvec_name and nomca
        [IA64] move patchlist and machvec into init section
        [IA64] add init declaration - nolwsys
        [IA64] add init declaration - gate page functions
        [IA64] add init declaration to memory initialization functions
        [IA64] add init declaration to cpu initialization functions
        [IA64] add __init declaration to mca functions
        [IA64] Ignore disabled Local SAPIC Affinity Structure in SRAT
        [IA64] sn_check_intr: use ia64_get_irr()
        [IA64] fix ia64 is_hugepage_only_range
| * [IA64] New IA64 core/thread detection patch | Fenghua Yu | 2006-03-24 | 1 | -2/+1

    IPF SDM 2.2 changes the definition of PAL_LOGICAL_TO_PHYSICAL to add proc_number=-1 to get core/thread mapping info on the running processor. Based on this change, we should update the existing core/thread detection in the IA64 kernel correspondingly.

    The attached patch implements this change. It simplifies the detection code and eliminates a potential race condition. It also runs a bit faster and has better scalability, especially when the number of cores and threads in one package grows.

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] Increase max node count on SN platforms | Jack Steiner | 2006-03-24 | 1 | -1/+1

    Node numbers are kept in the cpu_to_node_map, which is currently defined as u8. Change to u16 to accommodate larger node numbers.

    Signed-off-by: Jack Steiner <steiner@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] Increase max node count on SN platforms | Jack Steiner | 2006-03-24 | 1 | -0/+4

    Add support in IA64 acpi for platforms that support more than 256 nodes. Currently, ACPI is limited to 256 nodes because the proximity domain number is 8-bits.

    Long term, we expect to use ACPI3.0 to support >256 nodes. This patch is an interim solution that works with platforms that pass the high order bits of the proximity domain in "reserved" fields of the ACPI tables. This code is enabled ONLY on SN platforms.

    Signed-off-by: Jack Steiner <steiner@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] Increase max node count on SN platforms | Jack Steiner | 2006-03-24 | 1 | -4/+9

    Add a configuration option to allow the maximum number of nodes to be configurable for GENERIC or SN kernels.

    Signed-off-by: Jack Steiner <steiner@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] Tollhouse HP: IA64 arch changes | Prarit Bhargava | 2006-03-24 | 5 | -14/+20

    arch/ia64/sn and include/asm-ia64/sn changes required to support Tollhouse system PCI hotplug; fixes the ia64_sn_sysctl_ioboard_get call, and introduces the PRF_HOTPLUG_SUPPORT feature bit.

    Signed-off-by: Prarit Bhargava <prarit@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] cleanup dig_irq_init | Chen, Kenneth W | 2006-03-24 | 1 | -2/+0

    dig_irq_init is equivalent to machvec_noop; no need to define another empty function.

    Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] MCA recovery: kernel context recovery table | Russ Anderson | 2006-03-24 | 1 | -0/+11

    Memory errors encountered by user applications may surface when the CPU is running in kernel context. The current code will not attempt recovery if the MCA surfaces in kernel context (privilege mode 0). This patch adds a check for cases where the user initiated the load that surfaces in kernel interrupt code.

    An example is a user process launching a load from memory where the data in memory had bad ECC. Before the bad data gets to the CPU register, an interrupt comes in. The code jumps to the IVT interrupt entry point and begins execution in kernel context. The process of saving the user registers (SAVE_REST) causes the bad data to be loaded into a CPU register, triggering the MCA. The MCA surfaces in kernel context, even though the load was initiated from user context.

    As suggested by David and Tony, this patch uses an exception-table-like approach, putting the tagged recovery addresses in a searchable table. One difference from the exception table is that MCAs do not surface in precise places (such as with a TLB miss), so instead of tagging specific instructions, address ranges are registered. A single macro is used to do the tagging, with the input parameter being the label of the starting address and the macro being the ending address. This limits clutter in the code.

    This patch only tags one spot, the interrupt ivt entry. Testing showed that spot to be a "heavy hitter" with MCAs surfacing while saving user registers. Other spots can be added as needed by adding a single macro.

    Signed-off-by: Russ Anderson (rja@sgi.com) Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] add init declaration to cpu initialization functions | Chen, Kenneth W | 2006-03-22 | 1 | -1/+0

    Add init declaration to cpu initialization functions.

    Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] fix ia64 is_hugepage_only_range | Chen, Kenneth W | 2006-03-22 | 1 | -1/+1

    Fix the is_hugepage_only_range() definition to be "overlaps" instead of "within architectural restricted hugetlb address range". Simplify the ia64 specific code that used to use is_hugepage_only_range() to just check which region the address is in.

    Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* | [PATCH] POLLRDHUP/EPOLLRDHUP handling for half-closed devices notifications | Davide Libenzi | 2006-03-25 | 1 | -0/+1

    Implement the half-closed devices notification, by adding a new POLLRDHUP (and its alias EPOLLRDHUP) bit to the existing poll/select sets. Since the existing POLLHUP handling, which does not correctly report half-closed devices, could not safely be changed, this implementation leaves the current POLLHUP reporting unchanged and simply adds a new bit that is set in the few places where it makes sense.

    The same thing was discussed and conceptually agreed quite some time ago: http://lkml.org/lkml/2003/7/12/116

    Since this new event bit is added to the existing Linux poll infrastructure, even the existing poll/select system calls will be able to use it. As far as the existing POLLHUP handling goes, the patch leaves it as is.

    The pollrdhup-2.6.16.rc5-0.10.diff defines the POLLRDHUP for all the existing archs and sets the bit in the six relevant files. The other attached diff is the simple change required to sys/epoll.h to add the EPOLLRDHUP definition.

    There is "a stupid program" to test POLLRDHUP delivery here: http://www.xmailserver.org/pollrdhup-test.c It tests poll(2), but since the delivery is the same, epoll(2) will work equally.

    Signed-off-by: Davide Libenzi <davidel@xmailserver.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Michael Kerrisk <mtk-manpages@gmx.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
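A small user-space sketch of watching for the peer's half-close with the new bit (this assumes a libc that exposes POLLRDHUP under _GNU_SOURCE; the function name is invented):

    #define _GNU_SOURCE
    #include <poll.h>

    /* Returns 1 if the peer shut down its write side, 0 otherwise */
    int peer_half_closed(int sock_fd)
    {
            struct pollfd pfd = { .fd = sock_fd, .events = POLLIN | POLLRDHUP };

            if (poll(&pfd, 1, 0) <= 0)
                    return 0;
            return (pfd.revents & POLLRDHUP) ? 1 : 0;
    }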
* | [PATCH] abstract type/size specification for assembly | Jan Beulich | 2006-03-24 | 1 | -0/+8

    Provide abstraction for generating type and size information of assembly routines and data, while permitting architectures to override these defaults.

    Signed-off-by: Jan Beulich <jbeulich@novell.com> Cc: "Russell King" <rmk@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: "Andi Kleen" <ak@suse.de> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* | [PATCH] s/;;/;/g | Alexey Dobriyan | 2006-03-24 | 1 | -1/+1

    Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* | [PATCH] kill include/linux/platform.h, default_idle() cleanup | Adrian Bunk | 2006-03-24 | 1 | -0/+2

    include/linux/platform.h contained nothing that was actually used except the default_idle() prototype, and is therefore removed by this patch.

    This patch does the following with the platform specific default_idle() functions on different architectures:
    - remove the unused function: parisc, sparc64
    - make the needlessly global function static: arm, h8300, m68k, m68knommu, s390, v850, x86_64
    - add a prototype in asm/system.h: cris, i386, ia64

    Signed-off-by: Adrian Bunk <bunk@stusta.de> Acked-by: Patrick Mochel <mochel@digitalimplant.org> Acked-by: Kyle McMartin <kyle@parisc-linux.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* | [PATCH] atomic: add_unless cmpxchg optimise | Nick Piggin | 2006-03-23 | 1 | -1/+7

    Without branch hints, the very unlikely chance of the loop repeating due to cmpxchg failure is unrolled with gcc-4 that I have tested.

    Improve this for architectures with a native cas/cmpxchg. llsc archs should try to implement this natively.

    Signed-off-by: Nick Piggin <npiggin@suse.de> Cc: Andi Kleen <ak@muc.de> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Roman Zippel <zippel@linux-m68k.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
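For orientation, the cmpxchg-based pattern that the branch hints apply to looks roughly like the following (a sketch of the common shape, not the exact ia64 diff):

    #include <linux/compiler.h>     /* likely()/unlikely() */
    #include <asm/atomic.h>         /* atomic_t, atomic_read(), atomic_cmpxchg() */

    /* Add 'a' to 'v' unless it currently equals 'u'; returns non-zero if added */
    static inline int atomic_add_unless_sketch(atomic_t *v, int a, int u)
    {
            int c, old;

            c = atomic_read(v);
            for (;;) {
                    if (unlikely(c == u))           /* hint: almost never bail out */
                            break;
                    old = atomic_cmpxchg(v, c, c + a);
                    if (likely(old == c))           /* hint: cmpxchg rarely fails */
                            break;
                    c = old;                        /* raced; retry with fresh value */
            }
            return c != u;
    }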
* | [PATCH] Move read_mostly definition to asm/cache.h | Kyle McMartin | 2006-03-23 | 1 | -0/+2

    Seems like needless clutter having a bunch of #if defined(CONFIG_$ARCH) in include/linux/cache.h. Move the per architecture section definition to asm/cache.h, and keep the if-not-defined dummy case in linux/cache.h to catch architectures which don't implement the section.

    Verified that symbols still go in .data.read_mostly on parisc, and the compile doesn't break.

    Signed-off-by: Kyle McMartin <kyle@parisc-linux.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
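As a reminder of what the annotation does (an illustrative sketch; the variable is invented): on architectures that define it, __read_mostly places rarely written data in its own section so it does not share cache lines with frequently written data.

    #include <linux/cache.h>

    /* Typical per-arch definition, roughly:
     *   #define __read_mostly __attribute__((__section__(".data.read_mostly")))
     */

    static int example_tunable __read_mostly = 1;   /* set once at init, read on hot paths */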
* [PATCH] hugepage: is_aligned_hugepage_range() cleanup | David Gibson | 2006-03-22 | 1 | -0/+1

    Quite a long time back, prepare_hugepage_range() replaced is_aligned_hugepage_range() as the callback from mm/mmap.c to arch code to verify if an address range is suitable for a hugepage mapping.

    is_aligned_hugepage_range() stuck around, but only to implement prepare_hugepage_range() on archs which didn't implement their own. Most archs (everything except ia64 and powerpc) used the same implementation of is_aligned_hugepage_range(). On powerpc, which implements its own prepare_hugepage_range(), the custom version was never used.

    In addition, "is_aligned_hugepage_range()" was a bad name, because it suggests it returns true iff the given range is a good hugepage range, whereas in fact it returns 0-or-error (so the sense is reversed).

    This patch cleans up by abolishing is_aligned_hugepage_range(). Instead prepare_hugepage_range() is defined directly. Most archs use the default version, which simply checks the given region is aligned to the size of a hugepage. ia64 and powerpc define custom versions. The ia64 one simply checks that the range is in the correct address space region in addition to being suitably aligned. The powerpc version (just as previously) checks for suitable addresses, and if necessary performs low-level MMU frobbing to set up new areas for use by hugepages.

    No libhugetlbfs testsuite regressions on ppc64 (POWER5 LPAR).

    Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: William Lee Irwin III <wli@holomorphy.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
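The default version referred to above amounts to an alignment check returning 0-or-error; a hedged sketch of its shape (not the exact merged code):

    #include <linux/errno.h>
    #include <asm/page.h>       /* HPAGE_MASK on hugepage-capable archs */

    /* Sketch of the generic check: reject ranges not aligned to a hugepage */
    static int prepare_hugepage_range_sketch(unsigned long addr, unsigned long len)
    {
            if (len & ~HPAGE_MASK)
                    return -EINVAL;
            if (addr & ~HPAGE_MASK)
                    return -EINVAL;
            return 0;           /* 0-or-error, as the commit message notes */
    }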
* [PATCH] hugepage: Move hugetlb_free_pgd_range() prototype to hugetlb.h | David Gibson | 2006-03-22 | 1 | -3/+0

    The optional hugepage callback, hugetlb_free_pgd_range(), is presently implemented non-trivially only on ia64 (but I plan to add one for powerpc shortly). It has its own prototype for the function in asm-ia64/pgtable.h. However, since the function is called from generic code, it makes sense for its prototype to be in the generic hugetlb.h header file, as the prototypes of the other arch callbacks already are (prepare_hugepage_range(), set_huge_pte_at(), etc.). This patch makes it so.

    Signed-off-by: David Gibson <dwg@au1.ibm.com> Cc: William Lee Irwin III <wli@holomorphy.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] hugepage: Fix hugepage logic in free_pgtables() | David Gibson | 2006-03-22 | 1 | -0/+1

    free_pgtables() has special logic to call hugetlb_free_pgd_range() instead of the normal free_pgd_range() on hugepage VMAs. However, the test it uses to do so is incorrect: it calls is_hugepage_only_range on a hugepage sized range at the start of the vma. is_hugepage_only_range() will return true if the given range has any intersection with a hugepage address region, and in this case the given region need not be hugepage aligned. So, for example, this test can return true if called on, say, a 4k VMA immediately preceding a (nicely aligned) hugepage VMA.

    At present we get away with this because the powerpc version of hugetlb_free_pgd_range() is just a call to free_pgd_range(). On ia64 (the only other arch with a non-trivial is_hugepage_only_range()) we get away with it for a different reason; the hugepage area is not contiguous with the rest of the user address space, and VMAs are not permitted in between, so the test can't return a false positive there.

    Nonetheless this should be fixed. We do that in the patch below by replacing the is_hugepage_only_range() test with an explicit test of the VMA using is_vm_hugetlb_page().

    This in turn changes behaviour for platforms where is_hugepage_only_range() returns false always (everything except powerpc and ia64). We address this by ensuring that hugetlb_free_pgd_range() is defined to be identical to free_pgd_range() (instead of a no-op) on everything except ia64. Even so, it will prevent some otherwise possible coalescing of calls down to free_pgd_range(). Since this only happens for hugepage VMAs, removing this small optimization seems unlikely to cause any trouble.

    This patch causes no regressions on the libhugetlbfs testsuite - ppc64 POWER5 (8-way), ppc64 G5 (2-way) and i386 Pentium M (UP).

    Signed-off-by: David Gibson <dwg@au1.ibm.com> Cc: William Lee Irwin III <wli@holomorphy.com> Acked-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] Enable mprotect on huge pages | Zhang, Yanmin | 2006-03-22 | 1 | -1/+1

    2.6.16-rc3 uses hugetlb on-demand paging, but it doesn't support hugetlb mprotect.

    From: David Gibson <david@gibson.dropbear.id.au>

    Remove a test from the mprotect() path which checks that the mprotect()ed range on a hugepage VMA is hugepage aligned (yes, really, the sense of is_aligned_hugepage_range() is the opposite of what you'd guess :-/).

    In fact, we don't need this test. If the given addresses match the beginning/end of a hugepage VMA they must already be suitably aligned. If they don't, then mprotect_fixup() will attempt to split the VMA. The very first test in split_vma() will check for a badly aligned address on a hugepage VMA and return -EINVAL if necessary.

    From: "Chen, Kenneth W" <kenneth.w.chen@intel.com>

    On i386 and x86-64, the pte flag _PAGE_PSE collides with _PAGE_PROTNONE. The identity of a hugetlb pte is lost when changing page protection via mprotect. A page fault that occurs later will trigger a bug check in huge_pte_alloc().

    The fix is to always make the new pte a hugetlb pte and also to clean up legacy code where _PAGE_PRESENT was forced on in the pre-faulting days.

    Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com> Cc: David Gibson <david@gibson.dropbear.id.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: William Lee Irwin III <wli@holomorphy.com> Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com> Cc: Andi Kleen <ak@muc.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* Pull sn2-reduce-kmalloc-wrap into release branch | Tony Luck | 2006-03-21 | 1 | -22/+0
|\
| * [IA64-SGI] SN2-XP reduce kmalloc wrapper inlining | Jes Sorensen | 2006-02-27 | 1 | -22/+0

    Take advantage of kzalloc() as well as reduce the size of code generated for the error returns in xpc_setup_infrastructure().

    Signed-off-by: Jes Sorensen <jes@sgi.com> Acked-by: Dean Nelson <dcn@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* | Pull icc-cleanup into release branch | Tony Luck | 2006-03-21 | 2 | -168/+22
|\ \
| * | [IA64-SGI] - Eliminate SN pio_phys_xxx macros. Move to assembly | Jack Steiner | 2006-02-07 | 1 | -51/+5

    Rewrite the SN pio_phys_xxx macros in assembly language. This avoids issues with the Intel icc compiler. Function call overhead is not an issue - the functions reference PIOs and take 100's of nsec to complete. In addition, the functions should likely be in assembly language anyway - they reference memory using physical addressing mode. One function executes with psr.ic disabled.

    Signed-off-by: Jack Steiner <steiner@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
| * | [IA64] use icc defined constant | Chen, Kenneth W | 2006-02-07 | 1 | -10/+10

    Use icc defined constant instead of magic number.

    Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
| * | [IA64] add __builtin_trap definition for icc build | Chen, Kenneth W | 2006-02-07 | 1 | -0/+2

    Map the __builtin_trap function to the break 0 instruction.

    Signed-off-by: HJ Lu <hongjiu.lu@intel.com> Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
| * | [IA64] clean up asm/intel_intrin.h | Chen, Kenneth W | 2006-02-07 | 1 | -106/+3

    Include the intrinsic header file from the icc compiler. Remove the duplicate definitions from the kernel source.

    Signed-off-by: HJ Lu <hongjiu.lu@intel.com> Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
| * | [IA64] map ia64_hint definition to intel compiler intrinsic | Chen, Kenneth W | 2006-02-07 | 1 | -1/+2

    Map ia64_hint() to the internal Intel compiler intrinsic.

    Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* | | Pull sn2-mmio-writes into release branch | Tony Luck | 2006-03-21 | 5 | -2/+26

    Hand-fixed conflicts: include/asm-ia64/machvec_sn2.h

    Signed-off-by: Tony Luck <tony.luck@intel.com>
| * | | [IA64] hooks to wait for mmio writes to drain when migrating processes | Brent Casavant | 2006-01-26 | 5 | -2/+26

    On SN2, MMIO writes which are issued from separate processors are not guaranteed to arrive in any particular order at the IO hardware. When performing such writes from the kernel this is not a problem, as a kernel thread will not migrate to another CPU during execution, and mmiowb() calls can guarantee write ordering when control of the IO resource is allowed to move between threads.

    However, when MMIO writes can be performed from user space (e.g. DRM) there are no such guarantees and mechanisms, as the process may context-switch at any time, and may migrate to a different CPU as part of the switch. For such programs/hardware to operate correctly, it is required that the MMIO writes from the old CPU be accepted by the IO hardware before subsequent writes from the new CPU can be issued.

    The following patch implements this behavior on SN2 by waiting for a Shub register to indicate that these writes have been accepted. This is placed in the context switch-in path, and only performs the wait when the newly scheduled task changes CPUs.

    Signed-off-by: Prarit Bhargava <prarit@sgi.com> Signed-off-by: Brent Casavant <bcasavan@sgi.com>
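For the in-kernel case mentioned above, the documented mmiowb() idiom looks roughly like this (a generic driver-side sketch; the lock, register offset, and function names are invented):

    #include <linux/spinlock.h>
    #include <asm/io.h>

    static DEFINE_SPINLOCK(example_hw_lock);

    void example_poke_device(void __iomem *regs, u32 val)
    {
            spin_lock(&example_hw_lock);
            writel(val, regs);      /* posted MMIO write */
            mmiowb();               /* ensure the write reaches the device before the
                                     * lock is dropped and another CPU starts writing */
            spin_unlock(&example_hw_lock);
    }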
* | | | Pull altix-ce1.0-asic into release branch | Tony Luck | 2006-03-21 | 2 | -2/+42
|\ \ \ \
| * | | | [IA64-SGI] driver bugfixes and hardware workarounds for CE1.0 asic | Mark Maule | 2006-01-26 | 2 | -2/+42

    Various bugfixes and hardware bug workarounds necessary for the rev 1.0 version of the Altix TIO CE asic.

    Signed-off-by: Mark Maule <maule@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* | | | Pull delete-sigdelayed into release branch | Tony Luck | 2006-03-21 | 2 | -12/+1
|\ \ \ \
| * | | | [IA64] Delete MCA/INIT sigdelayed code | Keith Owens | 2006-01-26 | 2 | -12/+1

    The only user of the MCA/INIT sigdelayed code (SGI's I/O probing) has moved from the kernel into SAL. Delete the MCA/INIT sigdelayed code.

    Signed-off-by: Keith Owens <kaos@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* | | | Pull ia64-mutex-primitives into release branch | Tony Luck | 2006-03-21 | 1 | -5/+88
|\ \ \ \