path: root/lib
Commit message (Author, Date; Files changed, Lines -removed/+added)
* radix-tree: fix small lockless radix-tree bug (Nick Piggin, 2008-06-12; 1 file, -58/+62)

  We shrink a radix tree when its root node has only one child, in the
  leftmost slot. The child becomes the new root node. To perform this
  operation in a manner compatible with concurrent lockless lookups, we
  atomically switch the root pointer from the parent to its child.

  However, a concurrent lockless lookup may now have loaded a pointer to
  the parent (and is presently deciding what to do next). For this reason,
  we also have to keep the parent node in a valid state after shrinking the
  tree, until the next RCU grace period; otherwise this lookup with the
  parent pointer may not do the right thing. Notably, we need to keep the
  child in the leftmost slot there in case that is requested by the lookup.

  This is all pretty standard RCU stuff. It is worth repeating because, in
  my eagerness to obey the radix tree node constructor scheme, I had broken
  it by zeroing the radix tree node before the grace period. What could
  happen is that a lookup can load the parent pointer, then decide it wants
  to follow the leftmost child slot, only to find the slot contained NULL
  because the concurrent shrinker had zeroed the parent node before waiting
  for a grace period. The lookup would return a false negative as a result.

  Fix it by doing that clearing in the RCU callback. I would normally want
  to rip out the constructor entirely, but radix tree nodes are one of
  those places where a constructor makes sense (only a few cachelines will
  be touched soon after allocation).

  This was never actually found in any lockless pagecache testing or by the
  test harness, but by seeing the odd problem with my scalable vmap
  rewrite. I have not tickled the test harness into reproducing it yet, but
  I'll keep working at it. Fortunately, it is not a problem anywhere
  lockless pagecache is used in mainline kernels (pagecache probe is not a
  guarantee, and brd does not have concurrent lookups and deletes).

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
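  A minimal sketch of the fix, not the verbatim mainline hunk: at shrink
  time only the new root is published, and the parent is cleared in the
  RCU callback, after any lookup that loaded the old pointer can no longer
  be running. The exact bookkeeping fields are simplified here.

  ```c
  static void radix_tree_node_rcu_free(struct rcu_head *head)
  {
  	struct radix_tree_node *node =
  		container_of(head, struct radix_tree_node, rcu_head);

  	/*
  	 * Safe to clear only now: no lockless lookup that started
  	 * before the root switch can still be walking this node.
  	 */
  	memset(node->slots, 0, sizeof(node->slots));
  	node->count = 0;

  	kmem_cache_free(radix_tree_node_cachep, node);
  }

  /* In radix_tree_shrink(), conceptually:			*/
  /*	rcu_assign_pointer(root->rnode, child);			*/
  /*	call_rcu(&parent->rcu_head, radix_tree_node_rcu_free);	*/
  ```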
* add an inlined version of iter_div_u64_rem (Jeremy Fitzhardinge, 2008-06-12; 1 file, -14/+1)

  iter_div_u64_rem is used in the x86-64 vdso, which cannot call other
  kernel code. For this case, provide an always-inlined version,
  __iter_div_u64_rem.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* common implementation of iterative div/mod (Jeremy Fitzhardinge, 2008-06-12; 1 file, -0/+23)

  We have a few instances of the open-coded iterative div/mod loop, used
  when we don't expect the dividend to be much bigger than the divisor.
  Unfortunately, modern gcc tends to strength-"reduce" this into a full mod
  operation, which isn't necessarily any faster, and which may not even be
  available to the kernel if gcc implements it as a libgcc call. The
  workaround is to put a dummy asm statement in the loop to prevent gcc
  from performing the transformation.

  This patch creates a single implementation of this loop, and uses it to
  replace the open-coded versions I know about.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Cc: john stultz <johnstul@us.ibm.com>
  Cc: Segher Boessenkool <segher@kernel.crashing.org>
  Cc: Christian Kujau <lists@nerdbynature.de>
  Cc: Robert Hancock <hancockr@shaw.ca>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
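  For reference, the shared helper is essentially the loop below; the
  empty asm with a "+rm" constraint on the dividend is the dummy statement
  that keeps gcc from rewriting the loop as a real division. This is a
  sketch close to what was merged as __iter_div_u64_rem.

  ```c
  #include <linux/types.h>

  static __always_inline u32
  __iter_div_u64_rem(u64 dividend, u32 divisor, u64 *remainder)
  {
  	u32 ret = 0;

  	while (dividend >= divisor) {
  		/*
  		 * The empty asm tells gcc that 'dividend' may change
  		 * here, so the loop cannot be strength-reduced into a
  		 * (possibly libgcc-backed) mod operation.
  		 */
  		asm("" : "+rm"(dividend));

  		dividend -= divisor;
  		ret++;
  	}

  	*remainder = dividend;
  	return ret;
  }
  ```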
* lib: export bitrev16 (Harvey Harrison, 2008-06-06; 1 file, -1/+2)

  Bluetooth will be able to use this.

  Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
  Cc: Marcel Holtmann <marcel@holtmann.org>
  Cc: Dave Young <hidave.darkstar@gmail.com>
  Cc: Akinobu Mita <akinobu.mita@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* lmb: Fix compile warning (Kumar Gala, 2008-05-18; 1 file, -1/+2)

  lib/lmb.c: In function 'lmb_dump_all':
  lib/lmb.c:51: warning: format '%lx' expects type 'long unsigned int',
  but argument 2 has type 'u64'

  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
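  The usual fix for this class of warning (a sketch of the idiom, not
  necessarily the exact hunk merged here) is to print the u64 with %llx
  plus an explicit cast, since u64 is unsigned long on some 64-bit
  architectures and unsigned long long on others:

  ```c
  pr_info("    memory.size = 0x%llx\n",
  	(unsigned long long)lmb.memory.size);
  ```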
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs (Linus Torvalds, 2008-05-14; 1 file, -12/+20)

  * 'for-linus' of ssh://master.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs:
    9p: fix error path during early mount
    9p: make cryptic unknown error from server less scary
    9p: fix flags length in net
    9p: Correct fidpool creation failure in p9_client_create
    9p: use struct mutex instead of struct semaphore
    9p: propagate parse_option changes to client and transports
    fs/9p/v9fs.c (v9fs_parse_options): Handle kstrdup and match_strdup failure.
    9p: Documentation updates
    add match_strlcpy(), use it to make v9fs uname and remotename parsing more robust
* add match_strlcpy(), use it to make v9fs uname and remotename parsing more robust (Markus Armbruster, 2008-05-14; 1 file, -12/+20)

  match_strcpy() is a somewhat creepy function: the caller needs to make
  sure that the destination buffer is big enough, and when he screws up or
  forgets, match_strcpy() happily overruns the buffer.

  There's exactly one customer: v9fs_parse_options(). I believe it
  currently can't overflow its buffer, but that's not exactly obvious. The
  source string is a substring of the mount options. The kernel silently
  truncates those to PAGE_SIZE bytes, including the terminating zero; see
  compat_sys_mount() and do_mount(). The destination buffer is obtained
  from __getname(), which allocates from name_cachep, which is initialized
  by vfs_caches_init() for size PATH_MAX. We're safe as long as
  PATH_MAX <= PAGE_SIZE. PATH_MAX is 4096, and as far as I know the
  smallest PAGE_SIZE is also 4096.

  Here's a patch that makes the code a bit more obviously correct. It
  doesn't depend on PATH_MAX <= PAGE_SIZE.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Cc: Latchesar Ionkov <lucho@ionkov.net>
  Cc: Jim Meyering <meyering@redhat.com>
  Cc: "Randy.Dunlap" <rdunlap@xenotime.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
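  The new helper follows strlcpy() semantics: with a nonzero size it
  always NUL-terminates, and it returns the length the source token would
  need, so callers can detect truncation. A minimal sketch of its shape
  (substring_t, with its from/to pointers, is the existing match_token()
  result type):

  ```c
  #include <linux/parser.h>
  #include <linux/string.h>

  size_t match_strlcpy(char *dest, const substring_t *src, size_t size)
  {
  	size_t ret = src->to - src->from;	/* length of the token */

  	if (size) {
  		size_t len = ret >= size ? size - 1 : ret;

  		memcpy(dest, src->from, len);
  		dest[len] = '\0';		/* always terminated */
  	}
  	return ret;
  }
  ```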
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6 (Linus Torvalds, 2008-05-14; 1 file, -15/+30)

  * git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
    sparc64: Use a TS_RESTORE_SIGMASK
    lmb: Make lmb debugging more useful.
    lmb: Fix inconsistent alignment of size argument.
    sparc: Fix mremap address range validation.
* lmb: Make lmb debugging more useful. (David S. Miller, 2008-05-12; 1 file, -11/+22)

  Having to muck with the build and set DEBUG just to get lmb_dump_all()
  to print things isn't very useful. So use pr_info() and an early boot
  param "lmb=debug", so we can simply ask users to reboot with this option
  when we need some debugging from them.

  Signed-off-by: David S. Miller <davem@davemloft.net>
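  Wiring up such a boot parameter is a one-liner with early_param(); a
  sketch of roughly what this looks like (the lmb_debug flag name is an
  assumption here, as is the exact option matching):

  ```c
  #include <linux/init.h>
  #include <linux/string.h>

  static int lmb_debug;	/* assumed flag consulted by lmb_dump_all() */

  static int __init early_lmb(char *p)
  {
  	if (p && strstr(p, "debug"))
  		lmb_debug = 1;
  	return 0;
  }
  early_param("lmb", early_lmb);
  ```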
* lmb: Fix inconsistent alignment of size argument. (David S. Miller, 2008-05-12; 1 file, -4/+8)

  When allocating, if we will align up the size when making the
  reservation, we should also align the size for the check that the space
  is actually available. The simplest thing is to just align the size up
  from the beginning; then we can use plain 'size' throughout.

  Signed-off-by: David S. Miller <davem@davemloft.net>
* lib: create common ascii hex array (Harvey Harrison, 2008-05-14; 1 file, -2/+5)

  Add a common hex array in hexdump.c so everyone can use it.

  Add a common hi/lo helper to avoid the shifting and masking that is done
  to get the upper and lower nibbles of a byte value.

  Pull the pack_hex_byte helper from kgdb, as it is open-coded in many
  places in the tree that will be consolidated.

  Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
  Acked-by: Paul Mundt <lethal@linux-sh.org>
  Cc: Jason Wessel <jason.wessel@windriver.com>
  Cc: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
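  The shared pieces are small; a sketch of the array and helpers as
  described above (close to what landed in hexdump.c):

  ```c
  const char hex_asc[] = "0123456789abcdef";

  #define hex_asc_lo(x)	hex_asc[((x) & 0x0f)]
  #define hex_asc_hi(x)	hex_asc[((x) & 0xf0) >> 4]

  static inline char *pack_hex_byte(char *buf, u8 byte)
  {
  	*buf++ = hex_asc_hi(byte);	/* upper nibble first */
  	*buf++ = hex_asc_lo(byte);
  	return buf;
  }
  ```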
* cpumask: remove bitmap_scnprintf_len and cpumask_scnprintf_len (Paul Jackson, 2008-05-13; 1 file, -16/+0)

  They aren't used. They were briefly used as part of some other patches to
  provide an alternative format for displaying some /proc and /sys
  cpumasks. They probably should have been removed when those other patches
  were dropped, in favor of a different solution.

  Signed-off-by: Paul Jackson <pj@sgi.com>
  Cc: "Mike Travis" <travis@sgi.com>
  Cc: "Bert Wesarg" <bert.wesarg@googlemail.com>
  Cc: Alexey Dobriyan <adobriyan@gmail.com>
  Cc: WANG Cong <xiyou.wangcong@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* BKL: revert back to the old spinlock implementation (Linus Torvalds, 2008-05-10; 1 file, -39/+81)

  The generic semaphore rewrite had a huge performance regression on AIM7
  (and potentially other BKL-heavy benchmarks) because the generic
  semaphores had been rewritten to be simple to understand and fair. The
  latter, in particular, turns a semaphore-based BKL implementation into a
  mess of scheduling.

  The attempt to fix the performance regression failed miserably (see the
  previous commit 00b41ec2611dc98f87f30753ee00a53db648d662 'Revert
  "semaphore: fix"'), and so for now the simple and sane approach is to
  instead just go back to the old spinlock-based BKL implementation that
  never had any issues like this.

  This patch also has the advantage of being reported to fix the regression
  completely according to Yanmin Zhang, unlike the semaphore hack which
  still left a couple percentage points of regression.

  As a spinlock, the BKL obviously has the potential to be a latency issue,
  but it's not really any different from any other spinlock in that
  respect. We do want to get rid of the BKL asap, but that has been the
  plan for several years. These days, the biggest users are in the tty
  layer (open/release in particular) and Alan holds out some hope:

    "tty release is probably a few months away from getting cured - I'm
     afraid it will almost certainly be the very last user of the BKL in
     tty to get fixed as it depends on everything else being sanely locked."

  so while we're not there yet, we do have a plan of action.

  Tested-by: Yanmin Zhang <yanmin_zhang@linux.intel.com>
  Cc: Ingo Molnar <mingo@elte.hu>
  Cc: Andi Kleen <andi@firstfloor.org>
  Cc: Matthew Wilcox <matthew@wil.cx>
  Cc: Alexander Viro <viro@ftp.linux.org.uk>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'powerpc-next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc (Linus Torvalds, 2008-05-05; 1 file, -1/+1)

  * 'powerpc-next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc:
    [POWERPC] Assign PDE->data before gluing PDE into /proc tree
    [POWERPC] devres: Add devm_ioremap_prot()
    [POWERPC] macintosh: ADB driver: adb_handler_sem semaphore to mutex
    [POWERPC] macintosh: windfarm_smu_sat: semaphore to mutex
    [POWERPC] macintosh: therm_pm72: driver_lock semaphore to mutex
* [POWERPC] devres: Add devm_ioremap_prot() (Emil Medve, 2008-05-05; 1 file, -1/+1)

  We provide an ioremap_flags, so this provides a corresponding
  devm_ioremap_prot. The slight name difference is at Ben Herrenschmidt's
  request, as he plans on changing ioremap_flags to ioremap_prot in the
  future.

  Signed-off-by: Emil Medve <Emilian.Medve@Freescale.com>
  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
  Acked-by: Tejun Heo <htejun@gmail.com>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* kgdb: kconfig fix xconfig/menuconfig element (Jan Engelhardt, 2008-05-05; 1 file, -7/+9)

  Kconfig.kgdb: fix menuconfig element.

  Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
  Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
* idr: fix idr_remove() (Nadia Derbey, 2008-05-01; 1 file, -1/+1)

  The return inside the loop makes us free only a single layer.

  Signed-off-by: Nadia Derbey <Nadia.Derbey@bull.net>
  Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
  Cc: Manfred Spraul <manfred@colorfullife.com>
  Cc: Jim Houston <jim.houston@comcast.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Add a new sysfs_streq() string comparison function (David Brownell, 2008-05-01; 1 file, -0/+27)

  Add a new sysfs_streq() string comparison function, which ignores the
  trailing newlines found in sysfs inputs. By example:

	sysfs_streq("a", "b")   ==> false
	sysfs_streq("a", "a")   ==> true
	sysfs_streq("a", "a\n") ==> true
	sysfs_streq("a\n", "a") ==> true

  This is intended to simplify parsing of sysfs inputs, letting them avoid
  the need to manually strip off newlines from inputs.

  Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
  Acked-by: Greg KH <greg@kroah.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
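  A sketch of the comparison matching the table above: walk both strings
  while they agree, then accept either an exact match or a single trailing
  newline on exactly one side.

  ```c
  #include <linux/types.h>
  #include <linux/string.h>

  bool sysfs_streq(const char *s1, const char *s2)
  {
  	while (*s1 && *s1 == *s2) {
  		s1++;
  		s2++;
  	}

  	if (*s1 == *s2)				/* both ended together  */
  		return true;
  	if (!*s1 && *s2 == '\n' && !s2[1])	/* s2 has trailing '\n' */
  		return true;
  	if (*s1 == '\n' && !s1[1] && !*s2)	/* s1 has trailing '\n' */
  		return true;
  	return false;
  }
  ```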
* rename div64_64 to div64_u64 (Roman Zippel, 2008-05-01; 1 file, -6/+6)

  Rename div64_64 to div64_u64 to make it consistent with the other divide
  functions, so it clearly includes the type of the divide. Move its
  definition to math64.h, as currently no architecture overrides the
  generic implementation. They can still override it of course, but the
  duplicated declarations are avoided.

  Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
  Cc: Avi Kivity <avi@qumranet.com>
  Cc: Russell King <rmk@arm.linux.org.uk>
  Cc: Geert Uytterhoeven <geert@linux-m68k.org>
  Cc: Ralf Baechle <ralf@linux-mips.org>
  Cc: David Howells <dhowells@redhat.com>
  Cc: Jeff Dike <jdike@addtoit.com>
  Cc: Ingo Molnar <mingo@elte.hu>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: Patrick McHardy <kaber@trash.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* introduce explicit signed/unsigned 64bit divide (Roman Zippel, 2008-05-01; 1 file, -2/+21)

  The current do_div doesn't explicitly say that it's unsigned, and the
  signed counterpart is missing, which is e.g. needed when dealing with
  time values.

  This introduces 64bit signed/unsigned divide functions which also attempt
  to clean up the somewhat awkward calling API, which often requires the
  use of temporary variables for the dividend. To avoid the need for
  temporary variables everywhere for the remainder, each divide variant
  also provides a version which doesn't return the remainder.

  Each architecture can now provide optimized versions of these functions;
  otherwise generic fallback implementations will be used. As an example I
  provided an alternative for the current x86 divide, which avoids the asm
  casts; using a union allows gcc to generate better code. It also avoids
  the upper divide in a few more cases, where the result is known (i.e. the
  upper quotient is zero).

  Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
  Cc: john stultz <johnstul@us.ibm.com>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
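  The shape of the resulting API, sketched from the description above
  (matching what ended up in include/linux/math64.h): remainder-returning
  primitives plus thin wrappers that discard the remainder.

  ```c
  #include <linux/types.h>

  /* Remainder-returning variants (generic fallbacks in lib/div64.c,
   * overridable per architecture). */
  extern u64 div_u64_rem(u64 dividend, u32 divisor, u32 *remainder);
  extern s64 div_s64_rem(s64 dividend, s32 divisor, s32 *remainder);
  extern u64 div64_u64(u64 dividend, u64 divisor);

  /* Convenience wrappers that avoid a temporary for the remainder. */
  static inline u64 div_u64(u64 dividend, u32 divisor)
  {
  	u32 remainder;

  	return div_u64_rem(dividend, divisor, &remainder);
  }

  static inline s64 div_s64(s64 dividend, s32 divisor)
  {
  	s32 remainder;

  	return div_s64_rem(dividend, divisor, &remainder);
  }
  ```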
* klist: fix coding style errors in klist.h and klist.c (Greg Kroah-Hartman, 2008-04-30; 1 file, -121/+85)

  Finally clean up the odd spacing in these files.

  Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
* devres: support addresses greater than an unsigned long via dev_ioremap (Kumar Gala, 2008-04-30; 1 file, -2/+2)

  Use a resource_size_t instead of unsigned long, since some arches are
  capable of having ioremap deal with addresses greater than the size of
  an unsigned long.

  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
  Cc: Tejun Heo <htejun@gmail.com>
  Cc: Jeff Garzik <jgarzik@pobox.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
* kobject: do not copy vargs, just pass them around (Kay Sievers, 2008-04-30; 1 file, -20/+8)

  This prevents a few unneeded copies.

  Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
* klist: implement klist_add_{after|before}() (Tejun Heo, 2008-04-30; 1 file, -0/+33)

  Add klist_add_after() and klist_add_before(), which put a new node after
  and before an existing node, respectively. This is useful for callers
  which need to keep a klist ordered. Note that synchronizing between
  simultaneous additions for ordering is the caller's responsibility.

  Signed-off-by: Tejun Heo <htejun@gmail.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
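  The two entry points take the node to insert and the existing position
  node; a declaration-level sketch with an illustrative (hypothetical)
  usage line:

  ```c
  #include <linux/klist.h>

  /* Insert n immediately after / before pos on pos's klist. */
  void klist_add_after(struct klist_node *n, struct klist_node *pos);
  void klist_add_before(struct klist_node *n, struct klist_node *pos);

  /*
   * Example: keep a list ordered by inserting a new node right after a
   * known cursor position; the caller serializes concurrent additions.
   *
   *	klist_add_after(&new_dev->knode, &cursor_dev->knode);
   */
  ```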
* lib: replace remaining __FUNCTION__ occurrences (Harvey Harrison, 2008-04-30; 2 files, -13/+13)

  __FUNCTION__ is gcc specific; use __func__.

  Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* debugobjects: add timer specific object debugging code (Thomas Gleixner, 2008-04-30; 1 file, -0/+8)

  Add calls to the generic object debugging infrastructure and provide
  fixup functions which allow the system to be kept alive when recoverable
  problems have been detected by the object debugging core code.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Acked-by: Ingo Molnar <mingo@elte.hu>
  Cc: Greg KH <greg@kroah.com>
  Cc: Randy Dunlap <randy.dunlap@oracle.com>
  Cc: Kay Sievers <kay.sievers@vrfy.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* infrastructure to debug (dynamic) objects (Thomas Gleixner, 2008-04-30; 3 files, -0/+914)

  We can see an ever-repeating problem pattern with objects of any kind in
  the kernel:

  1) freeing of active objects
  2) reinitialization of active objects

  Both problems can be hard to debug because the crash happens at a point
  where we have no chance to decode the root cause anymore. One problem
  spot is kernel timers, where the detection of the problem often happens
  in interrupt context and usually causes the machine to panic.

  While working on a timer-related bug report I had to hack specialized
  code into the timer subsystem to get a reasonable hint for the root
  cause. This debug hack was fine for temporary use, but far from a
  mergeable solution due to its intrusiveness into the timer code. The
  code further lacked the ability to detect and report the root cause
  instantly and keep the system operational.

  Keeping the system operational is important in order to get hold of the
  debug information without special debugging aids like serial consoles
  and special knowledge of the bug reporter.

  The problems described above are not restricted to timers, but timers
  tend to expose them, usually in a full system crash. Other objects are
  less explosive, but the symptoms caused by such mistakes can be even
  harder to debug.

  Instead of creating specialized debugging code for the timer subsystem,
  a generic infrastructure is created which allows developers to verify
  their code and provides an easy-to-enable debug facility for users in
  case of trouble.

  The debugobjects core code keeps track of operations on static and
  dynamic objects by inserting them into a hashed list and sanity checking
  them on object operations, and it provides additional checks whenever
  kernel memory is freed. The tracked object operations are:

  - initializing an object
  - adding an object to a subsystem list
  - deleting an object from a subsystem list

  Each operation is sanity checked before it is executed, and the
  subsystem-specific code can provide a fixup function which makes it
  possible to prevent the damage of the operation. When the sanity check
  triggers, a warning message and a stack trace are printed.

  The list of operations can be extended if the need arises. For now it's
  limited to the requirements of the first user (timers).

  The core code enqueues the objects into hash buckets. The hash index is
  generated from the address of the object to simplify the lookup for the
  check on kfree/vfree. Each bucket has its own spinlock to avoid
  contention on a global lock.

  The debug code can be compiled in without being active. The runtime
  overhead is minimal and could be optimized by asm alternatives. A kernel
  command line option enables the debugging code.

  Thanks to Ingo Molnar for review, suggestions and cleanup patches.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Cc: Greg KH <greg@kroah.com>
  Cc: Randy Dunlap <randy.dunlap@oracle.com>
  Cc: Kay Sievers <kay.sievers@vrfy.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
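  From a subsystem's point of view, the hooks look roughly as below. This
  is a sketch modeled on the timer integration in the previous commit;
  descriptor and hook names follow the debugobjects API, but the fixup
  body is illustrative only.

  ```c
  #include <linux/debugobjects.h>

  /*
   * Fixup callback: invoked when the core detects that an object is
   * about to be freed while still active. Returning 1 tells the core
   * the situation was repaired and the system can keep running.
   */
  static int my_timer_fixup_free(void *addr, enum debug_obj_state state)
  {
  	if (state == ODEBUG_STATE_ACTIVE) {
  		/* e.g. deactivate the still-pending timer here */
  		return 1;
  	}
  	return 0;
  }

  static struct debug_obj_descr my_timer_debug_descr = {
  	.name		= "timer_list",
  	.fixup_free	= my_timer_fixup_free,
  };

  /* Hooks placed in the subsystem's init/activate/deactivate paths:	*/
  /*	debug_object_init(timer, &my_timer_debug_descr);		*/
  /*	debug_object_activate(timer, &my_timer_debug_descr);		*/
  /*	debug_object_deactivate(timer, &my_timer_debug_descr);		*/
  ```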
* mm: bdi: allow setting a maximum for the bdi dirty limit (Peter Zijlstra, 2008-04-30; 1 file, -6/+32)

  Add "max_ratio" to /sys/class/bdi. This indicates the maximum percentage
  of the global dirty threshold allocated to this bdi.

  [mszeredi@suse.cz]
  - fix parsing in max_ratio_store()
  - export bdi_set_max_ratio() to modules
  - limit bdi_dirty with bdi->max_ratio
  - document new sysfs attribute

  Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* mm: bdi: export BDI attributes in sysfs (Peter Zijlstra, 2008-04-30; 1 file, -0/+1)

  Provide a place in sysfs (/sys/class/bdi) for the backing_dev_info
  object. This allows us to see and set the various BDI-specific
  variables. In particular this properly exposes the read-ahead window for
  all relevant users, and /sys/block/<block>/queue/read_ahead_kb should be
  deprecated.

  With patient help from Kay Sievers and Greg KH.

  [mszeredi@suse.cz]
  - split off NFS and FUSE changes into separate patches
  - document new sysfs attributes under Documentation/ABI
  - do bdi_class_init as a core_initcall, otherwise the "default" BDI
    won't be initialized
  - remove bdi_init_fmt macro, it's not used very much

  [akpm@linux-foundation.org: fix ia64 warning]
  Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Kay Sievers <kay.sievers@vrfy.org>
  Acked-by: Greg KH <greg@kroah.com>
  Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc (Linus Torvalds, 2008-04-29; 1 file, -9/+90)

  * 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc:
    [RAPIDIO] Change RapidIO doorbell source and target ID field to 16-bit
    [RAPIDIO] Add RapidIO connection info print out and re-training for broken connections
    [RAPIDIO] Add serial RapidIO controller support, which includes MPC8548, MPC8641
    [RAPIDIO] Add RapidIO node probing into MPC86xx_HPCN board id table
    [RAPIDIO] Add RapidIO node into MPC8641HPCN dts file
    [RAPIDIO] Auto-probe the RapidIO system size
    [RAPIDIO] Add OF-tree support to RapidIO controller driver
    [RAPIDIO] Add RapidIO multi mport support
    [RAPIDIO] Move include/asm-ppc/rio.h to asm-powerpc
    [RAPIDIO] Add RapidIO option to kernel configuration
    [RAPIDIO] Change RIO function mpc85xx_ to fsl_
    [POWERPC] Provide walk_memory_resource() for powerpc
    [POWERPC] Update lmb data structures for hotplug memory add/remove
    [POWERPC] Hotplug memory remove notifications for powerpc
    [POWERPC] windfarm: Add PowerMac 12,1 support
    [POWERPC] Fix building of pmac32 when CONFIG_NVRAM=m
    [POWERPC] Add IRQSTACKS support on ppc32
    [POWERPC] Use __always_inline for xchg* and cmpxchg*
    [POWERPC] Add fast little-endian switch system call
* [POWERPC] Provide walk_memory_resource() for powerpc (Badari Pulavarty, 2008-04-29; 1 file, -0/+33)

  Provide walk_memory_resource() for 64-bit powerpc. PowerPC maintains
  logical memory region mapping in the lmb.memory structure. Walk through
  these structures and do the callbacks for the contiguous chunks.

  Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
  Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Update lmb data structures for hotplug memory add/remove (Badari Pulavarty, 2008-04-29; 1 file, -9/+57)

  The powerpc kernel maintains information about logical memory blocks in
  the lmb.memory structure, which is initialized and updated at boot time,
  but not when memory is added or removed while the kernel is running.

  This adds a hotplug memory notifier which updates lmb.memory when memory
  is added or removed. This information is useful for the eHEA driver to
  find out the memory layout and holes.

  NOTE: No special locking is needed for lmb_add() and lmb_remove(). Calls
  to these are serialized by the caller (pSeries_reconfig_chain).

  Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
  Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: "David S. Miller" <davem@davemloft.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* bitops: remove "optimizations" (Thomas Gleixner, 2008-04-29; 1 file, -12/+10)

  The mapsize optimizations which were moved from x86 to the generic code
  in commit 64970b68d2b3ed32b964b0b30b1b98518fde388e increased the binary
  size on non-x86 architectures.

  Looking into the real effects of the "optimizations", it turned out that
  they are not used in find_next_bit() and find_next_zero_bit(). The ones
  in find_first_bit() and find_first_zero_bit() are used in a couple of
  places, but none of them is a real hot path.

  Remove the "optimizations" all together and call the library functions
  unconditionally.

  Boot-tested on x86 and compile-tested on every cross compiler I have.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* idr: create idr_layer_cache at boot time (Akinobu Mita, 2008-04-29; 1 file, -6/+4)

  Avoid a possible kmem_cache_create() failure by creating idr_layer_cache
  unconditionally at boot time rather than creating it on demand when
  idr_init() is called the first time. This change also enables us to
  eliminate the check every time idr_init() is called.

  [akpm@linux-foundation.org: rename init_id_cache() to idr_init_cache()]
  [akpm@linux-foundation.org: fix alpha build]
  Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* dma/ia64: update ia64 machvecs, swiotlb.c (Arthur Kepner, 2008-04-29; 1 file, -8/+42)

  Change all ia64 machvecs to use the new dma_*map*_attrs() interfaces.
  Implement the old dma_*map_*() interfaces in terms of the corresponding
  new interfaces. For ia64/sn, make use of one dma attribute,
  DMA_ATTR_WRITE_BARRIER. Introduce swiotlb_*map*_attrs() functions.

  Signed-off-by: Arthur Kepner <akepner@sgi.com>
  Cc: Tony Luck <tony.luck@intel.com>
  Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
  Cc: Jes Sorensen <jes@sgi.com>
  Cc: Randy Dunlap <randy.dunlap@oracle.com>
  Cc: Roland Dreier <rdreier@cisco.com>
  Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
  Cc: David Miller <davem@davemloft.net>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Grant Grundler <grundler@parisc-linux.org>
  Cc: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* isolate ratelimit from printk.c for other use (Dave Young, 2008-04-29; 2 files, -1/+52)

  A triggered WARN_ON in rcupreempt.h left me with a 2GB syslog file. When
  the kernel complains this seriously, we do want the warnings repeated
  (rate-limited, not printed just once), so isolate the ratelimit part of
  printk.c into a standalone file that other callers can use.

  Signed-off-by: Dave Young <hidave.darkstar@gmail.com>
  Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* swiotlb: use iommu_is_span_boundary helper function (FUJITA Tomonori, 2008-04-29; 1 file, -11/+3)

  iommu_is_span_boundary() in lib/iommu-helper.c was exported for PARISC
  IOMMUs (commit 3715863aa142c4f4c5208f5f3e5e9bac06006d2f). SWIOTLB can use
  it instead of the homegrown function.

  Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: H. Peter Anvin <hpa@zytor.com>
  Cc: Tony Luck <tony.luck@intel.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* lib/swiotlb.c: cleanups (Andrew Morton, 2008-04-29; 1 file, -46/+43)

  There's a pointlessly braced block of code in there. Remove the braces
  and save a tabstop.

  Cc: Andi Kleen <ak@suse.de>
  Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Cc: Jan Beulich <jbeulich@novell.com>
  Cc: Tony Luck <tony.luck@intel.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* iomap: fix 64 bits resources on 32 bits (Benjamin Herrenschmidt, 2008-04-29; 1 file, -1/+1)

  Almost all implementations of pci_iomap() in the kernel, including the
  generic lib/iomap.c one, copy the content of a struct resource into
  unsigned longs, which will break on 32-bit platforms with 64-bit
  resources.

  This fixes all definitions of pci_iomap() to use resource_size_t. I also
  "fixed" the 64-bit arches for consistency.

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: <linux-arch@vger.kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* lib/inflate.c: handle failed malloc() (Jim Meyering, 2008-04-29; 1 file, -0/+3)

  lib/inflate.c (inflate_dynamic): Don't deref NULL upon failed malloc.

  Signed-off-by: Jim Meyering <meyering@redhat.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* mempolicy: add bitmap_onto() and bitmap_fold() operations (Paul Jackson, 2008-04-28; 1 file, -0/+158)

  The following adds two more bitmap operators, bitmap_onto() and
  bitmap_fold(), with the usual cpumask and nodemask wrappers.

  The bitmap_onto() operator computes one bitmap relative to another. If
  the n-th bit in the origin mask is set, then the m-th bit of the
  destination mask will be set, where m is the position of the n-th set
  bit in the relative mask.

  The bitmap_fold() operator folds a bitmap into a second that has bit m
  set iff the input bitmap has some bit n set, where m == n mod sz, for
  the specified sz value.

  There are two substantive changes between this patch and its predecessor
  bitmap_relative:

  1) Renamed bitmap_relative() to be bitmap_onto().
  2) Added bitmap_fold().

  The essential motivation for bitmap_onto() is to provide a mechanism for
  converting a cpuset-relative CPU or Node mask to an absolute mask.
  Cpuset-relative masks are written as if the current task were in a
  cpuset whose CPUs or Nodes were just the consecutive ones numbered
  0..N-1, for some N. The bitmap_onto() operator is provided in
  anticipation of adding support for the first such cpuset-relative mask,
  by the mbind() and set_mempolicy() system calls, using a planned flag of
  MPOL_F_RELATIVE_NODES.

  These bitmap operators (and their nodemask wrappers, in particular) will
  be used in code that converts the user-specified cpuset-relative memory
  policy to a specific system-node-numbered policy, given the current
  mems_allowed of the task's cpuset.

  Such cpuset-relative mempolicies will address two deficiencies of the
  existing interface between cpusets and mempolicies:

  1) A task cannot at present reliably establish a cpuset-relative
     mempolicy because there is an essential race condition, in that the
     task's cpuset may be changed in between the time the task can query
     its cpuset placement, and the time the task can issue the applicable
     mbind or set_mempolicy system call.

  2) A task cannot at present establish what cpuset-relative mempolicy it
     would like to have, if it is in a smaller cpuset than it might have
     mempolicy preferences for, because the existing interface only allows
     specifying mempolicies for nodes currently allowed by the cpuset.

  Cpuset-relative mempolicies are useful for tasks that don't distinguish
  particularly between one CPU or Node and another, but only between how
  many of each are allowed, and the proper placement of threads and memory
  pages on the various CPUs and Nodes available.

  The motivation for the added bitmap_fold() can be seen in the following
  example. Let's say an application has specified some mempolicies that
  presume 16 memory nodes, including say a mempolicy that specified
  MPOL_F_RELATIVE_NODES (cpuset-relative) nodes 12-15. Then let's say that
  application is crammed into a cpuset that only has 8 memory nodes, 0-7.
  If one just uses bitmap_onto(), this mempolicy, mapped to that cpuset,
  would ignore the requested relative nodes above 7, leaving it empty of
  nodes. That's not good; better to fold the higher nodes down, so that
  some nodes are included in the resulting mapped mempolicy. In this case,
  the mempolicy nodes 12-15 are taken modulo 8 (the weight of the
  mems_allowed of the confining cpuset), resulting in a mempolicy
  specifying nodes 4-7.

  Signed-off-by: Paul Jackson <pj@sgi.com>
  Signed-off-by: David Rientjes <rientjes@google.com>
  Cc: Christoph Lameter <clameter@sgi.com>
  Cc: Andi Kleen <ak@suse.de>
  Cc: Mel Gorman <mel@csn.ul.ie>
  Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
  Cc: <kosaki.motohiro@jp.fujitsu.com>
  Cc: <ray-lk@madrabbit.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
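  A small worked sketch of the two operators, mirroring the
  16-nodes-into-8 example above (the prototypes are as added to
  lib/bitmap.c; the wrapper function is illustrative only):

  ```c
  #include <linux/bitmap.h>
  #include <linux/bitops.h>

  /* void bitmap_onto(unsigned long *dst, const unsigned long *orig,
   *                  const unsigned long *relmap, int bits);
   * void bitmap_fold(unsigned long *dst, const unsigned long *orig,
   *                  int sz, int bits);
   */
  static void relative_nodes_example(void)
  {
  	DECLARE_BITMAP(orig, 16);	/* cpuset-relative request    */
  	DECLARE_BITMAP(folded, 16);	/* request folded mod 8       */
  	DECLARE_BITMAP(relmap, 16);	/* the cpuset's allowed nodes */
  	DECLARE_BITMAP(dst, 16);
  	int n;

  	bitmap_zero(orig, 16);
  	bitmap_zero(relmap, 16);
  	for (n = 12; n <= 15; n++)
  		__set_bit(n, orig);	/* relative nodes 12-15 */
  	for (n = 0; n <= 7; n++)
  		__set_bit(n, relmap);	/* cpuset has nodes 0-7 */

  	/* Fold 12-15 modulo 8 into 4-7, then map onto the cpuset: */
  	bitmap_fold(folded, orig, 8, 16);
  	bitmap_onto(dst, folded, relmap, 16);	/* dst = nodes 4-7 */
  }
  ```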
* Remove set_migrateflags() (Christoph Lameter, 2008-04-28; 1 file, -5/+4)

  Migrate flags must be set on slab creation, as agreed upon when the
  antifrag logic was reviewed. Otherwise some slabs of a slabcache will
  end up in the unmovable and others in the reclaimable section, depending
  on which flag was active when a new slab page was allocated. This likely
  slid in somehow when antifrag was merged. Remove it.

  The buffer_heads are always allocated with __GFP_RECLAIMABLE because the
  SLAB_RECLAIM_ACCOUNT option is set. The set_migrateflags() never had any
  effect there.

  Radix tree allocations are not directly reclaimable, but they are
  allocated with __GFP_RECLAIMABLE set on each allocation. We now set
  SLAB_RECLAIM_ACCOUNT on radix tree slab creation, making sure that radix
  tree slabs are consistently placed in the reclaimable section. Radix
  tree slabs will also be accounted as such.

  There is then no user left of set_migrateflags(). So remove it.

  Signed-off-by: Christoph Lameter <clameter@sgi.com>
  Cc: Mel Gorman <mel@csn.ul.ie>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* x86, bitops: select the generic bitmap search functions (Alexander van Heukelum, 2008-04-26; 1 file, -0/+6)

  Introduce GENERIC_FIND_FIRST_BIT and GENERIC_FIND_NEXT_BIT in
  lib/Kconfig, defaulting to off. An arch that wants to use the generic
  implementation now only has to use a select statement to include them.

  I added an always-y option (X86_CPU) to arch/x86/Kconfig.cpu and used
  that to select the generic search functions. This way ARCH=um
  SUBARCH=i386 automatically picks up the change too, and
  arch/um/Kconfig.i386 can therefore be simplified a bit. ARCH=um
  SUBARCH=x86_64 does things differently, but still compiles fine. It
  seems that a "def_bool y" always wins over a "def_bool n"?

  Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: generic versions of find_first_(zero_)bit, convert i386 (Alexander van Heukelum, 2008-04-26; 2 files, -0/+59)

  Generic versions of __find_first_bit and __find_first_zero_bit are
  introduced as simplified versions of __find_next_bit and
  __find_next_zero_bit. Their compilation and use are guarded by a new
  config variable GENERIC_FIND_FIRST_BIT.

  The generic versions of find_first_bit and find_first_zero_bit are
  implemented in terms of the newly introduced __find_first_bit and
  __find_first_zero_bit.

  This patch does not remove the i386-specific implementation, but it does
  switch i386 to use the generic functions by setting
  GENERIC_FIND_FIRST_BIT=y for X86_32.

  Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, generic: optimize find_next_(zero_)bit for small constant-size bitmaps (Alexander van Heukelum, 2008-04-26; 1 file, -16/+9)

  This moves an optimization for searching constant-sized small bitmaps
  from x86_64-specific to generic code.

  On an i386 defconfig (the x86#testing one), the size of vmlinux hardly
  changes with this applied. I have observed only four places where this
  optimization avoids a call into find_next_bit: in the functions
  return_unused_surplus_pages, alloc_fresh_huge_page, and
  adjust_pool_surplus, this patch avoids a call for a 1-bit bitmap; in
  __next_cpu a call is avoided for a 32-bit bitmap. That's it.

  On x86_64, 52 locations are optimized with a minimal increase in code
  size.

  Current #testing defconfig (146 x bsf, 27 x find_next_*bit):
     text	   data	    bss	    dec	    hex	filename
  5392637	 846592	 724424	6963653	 6a41c5	vmlinux

  After removing the x86_64-specific optimization for find_next_*bit
  (94 x bsf, 79 x find_next_*bit):
     text	   data	    bss	    dec	    hex	filename
  5392358	 846592	 724424	6963374	 6a40ae	vmlinux

  After this patch, making the optimization generic
  (146 x bsf, 27 x find_next_*bit):
     text	   data	    bss	    dec	    hex	filename
  5392396	 846592	 724424	6963412	 6a40d4	vmlinux

  [ tglx@linutronix.de: build fixes ]

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
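  The generic form of the optimization is an __always_inline wrapper that
  handles the constant-small-size case directly and only falls back to the
  out-of-line search otherwise. A sketch of the pattern (the sentinel bit
  makes __ffs() return 'size' when no real bit is set below it):

  ```c
  #include <linux/bitops.h>

  static __always_inline unsigned long
  find_next_bit(const unsigned long *addr, unsigned long size,
  	      unsigned long offset)
  {
  	/*
  	 * For a compile-time-constant size of at most one word, the
  	 * whole search collapses to a couple of instructions.
  	 */
  	if (__builtin_constant_p(size) && size <= BITS_PER_LONG) {
  		unsigned long value = *addr & (~0UL << offset);

  		/* Sentinel at bit 'size' (when it fits in the word). */
  		if (size < BITS_PER_LONG)
  			value |= 1UL << size;
  		return value ? __ffs(value) : size;
  	}

  	return __find_next_bit(addr, size, offset);	/* out of line */
  }
  ```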
* x86: change x86 to use generic find_next_bit (Alexander van Heukelum, 2008-04-26; 1 file, -0/+2)

  The versions with inline assembly are in fact slower on the machines I
  tested them on (in userspace) (Athlon XP 2800+, p4-like Xeon 2.8GHz, AMD
  Opteron 270). The i386 version needed a fix similar to 06024f21 to avoid
  crashing the benchmark.

  Benchmark using: gcc -fomit-frame-pointer -Os. For each bitmap size
  1...512, for each possible bitmap with one bit set, for each possible
  offset: find the position of the first bit starting at offset. If you
  follow ;). Times include setup of the bitmap and checking of the
  results.

		Athlon		Xeon		Opteron 32/64bit
  x86-specific:	0m3.692s	0m2.820s	0m3.196s / 0m2.480s
  generic:	0m2.622s	0m1.662s	0m2.100s / 0m1.572s

  If the bitmap size is not a multiple of BITS_PER_LONG, and no set
  (cleared) bit is found, find_next_bit (find_next_zero_bit) returns a
  value outside of the range [0, size]. The generic version always returns
  exactly size. The generic version also uses unsigned long everywhere,
  while the x86 versions use a mishmash of int, unsigned (int), long and
  unsigned long.

  Using the generic version does give a slightly bigger kernel, though.
  defconfig:

		   text	   data	    bss	    dec	    hex	filename
  x86-specific:	4738555	 481232	 626688	5846475	 5935cb	vmlinux (32 bit)
  generic:	4738621	 481232	 626688	5846541	 59360d	vmlinux (32 bit)
  x86-specific:	5392395	 846568	 724424	6963387	 6a40bb	vmlinux (64 bit)
  generic:	5392458	 846568	 724424	6963450	 6a40fa	vmlinux (64 bit)

  Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* Add option to enable -Wframe-larger-than= on gcc 4.4 (Andi Kleen, 2008-04-25; 1 file, -0/+11)

  gcc mainline (the upcoming 4.4) added a new -Wframe-larger-than=...
  option to warn at build time about too-large stack frames. Add a config
  option to enable this warning, since it is very useful for the kernel.

  I chose (somewhat arbitrarily) 2048 as the default warning threshold for
  64-bit and 1024 as the default for 32-bit architectures. With some
  research and fixing all the code for smaller values, these defaults
  should probably be lowered.

  With these defaults, allyesconfig builds gain some new warnings, but I
  think that is all code that should just be fixed.

  At some point (when gcc 4.4 is released and widely used) this should
  obsolete make checkstack.

  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
* [LMB]: Fix lmb allocation regression. (David S. Miller, 2008-04-23; 1 file, -1/+1)

  Changeset d9024df02ffe74d723d97d552f86de3b34beb8cc ("[LMB] Restructure
  allocation loops to avoid unsigned underflow") removed the alignment of
  the 'size' argument to call lmb_add_region() done by __lmb_alloc_base().

  In doing so it reintroduced the bug fixed by changeset
  eea89e13a9c61d3928223d2f9bf2295e22e0efb6 ("[LMB]: Fix bug in
  __lmb_alloc_base()."). This puts it back.

  Signed-off-by: David S. Miller <davem@davemloft.net>
* Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc (Linus Torvalds, 2008-04-21; 3 files, -0/+433)

  * 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (202 commits)
    [POWERPC] Fix compile breakage for 64-bit UP configs
    [POWERPC] Define copy_siginfo_from_user32
    [POWERPC] Add compat handler for PTRACE_GETSIGINFO
    [POWERPC] i2c: Fix build breakage introduced by OF helpers
    [POWERPC] Optimize fls64() on 64-bit processors
    [POWERPC] irqtrace support for 64-bit powerpc
    [POWERPC] Stacktrace support for lockdep
    [POWERPC] Move stackframe definitions to common header
    [POWERPC] Fix device-tree locking vs. interrupts
    [POWERPC] Make pci_bus_to_host()'s struct pci_bus * argument const
    [POWERPC] Remove unused __max_memory variable
    [POWERPC] Simplify xics direct/lpar irq_host setup
    [POWERPC] Use pseries_setup_i8259_cascade() in pseries_mpic_init_IRQ()
    [POWERPC] Turn xics_setup_8259_cascade() into a generic pseries_setup_i8259_cascade()
    [POWERPC] Move xics_setup_8259_cascade() into platforms/pseries/setup.c
    [POWERPC] Use asm-generic/bitops/find.h in bitops.h
    [POWERPC] 83xx: mpc8315 - fix USB UTMI Host setup
    [POWERPC] 85xx: Fix the size of qe muram for MPC8568E
    [POWERPC] 86xx: mpc86xx_hpcn - Temporarily accept old dts node identifier.
    [POWERPC] 86xx: mark functions static, other minor cleanups
    ...
* [LMB] Restructure allocation loops to avoid unsigned underflow (Paul Mackerras, 2008-04-15; 1 file, -26/+30)

  There is a potential bug in __lmb_alloc_base where we subtract 'size'
  from the base address of a reserved region without checking whether the
  subtraction could wrap around and produce a very large unsigned value.
  In fact it probably isn't possible to hit the bug in practice, since it
  would only occur in the situation where we can't satisfy the allocation
  request and there is a reserved region starting at 0.

  This fixes the potential bug by breaking out of the loop when we get to
  the point where the base of the reserved region is less than the size
  requested. This also restructures the loop to be a bit easier to follow.

  The same logic got copied into lmb_alloc_nid_unreserved, so this makes a
  similar change there. Here the bug is more likely to be hit because the
  outer loop (in lmb_alloc_nid) goes through the memory regions in
  increasing order rather than decreasing order as __lmb_alloc_base does,
  and we are therefore more likely to hit the case where we are testing
  against a reserved region with a base address of 0.

  Signed-off-by: Paul Mackerras <paulus@samba.org>
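  The restructured loop walks candidate bases downward and bails out
  before the subtraction can wrap. A sketch of the pattern under assumed
  helper names (lmb_align_down(), lmb_overlaps_region(), and the failure
  convention are simplified for illustration):

  ```c
  /*
   * Search downward for a free block of 'size' bytes below 'end',
   * never letting 'res_base - size' underflow past zero.
   */
  static u64 lmb_alloc_range(u64 start, u64 end, u64 size, u64 align)
  {
  	u64 base = lmb_align_down(end - size, align);

  	while (start <= base) {
  		long j = lmb_overlaps_region(&lmb.reserved, base, size);
  		u64 res_base;

  		if (j < 0)
  			return base;	/* no overlap: found a spot */

  		res_base = lmb.reserved.region[j].base;
  		if (res_base < size)
  			break;		/* next step would underflow */

  		base = lmb_align_down(res_base - size, align);
  	}
  	return 0;	/* allocation failed (assumed convention) */
  }
  ```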